article | abstract
---|---|
In many real-world problems, instead of the complete signal we only have some observations of the signal of interest, from which we want to reconstruct the original signal. In the simplest case, which fortunately applies to many practical situations, the observation process can be approximated by a linear operator,
$$ g = Kf, $$
where $f \in \mathbb{R}^N$ is the original signal of interest, $g \in \mathbb{R}^M$ is the vector of observed features (i.e., samples), and $K$ is the (linear) observation operator. We are interested in the case where the number of observations is much smaller than the length of the original signal; that is, $K$ has far fewer rows than columns, $M \ll N$. Furthermore, in practice the observation process is not exact, and typically we can only obtain a distorted version of the observed features. This distortion is usually modeled by an additive error term, so the observation process can be modeled as
$$ g = Kf + e, \qquad (1) $$
in which $e$ represents the (additive) observation error, e.g., noise. The objective is to find the original signal, $f$, from the set of available observations, $g$, while $K$ is also known; that is, to solve the linear inverse problem (1). In order to do so, one might minimize the discrepancy $\|Kf - g\|^2$. However, except for the very special case in which the operator $K$ has a trivial null space, the minimizer is not unique. In order to address this problem, i.e., to regularize the inverse problem, one might make a priori assumptions and impose constraints on the solution, which is usually taken into account by adding a penalization term to the discrepancy. In this letter we are especially interested in the case where we have a sparsity constraint on the solution. To be more precise, suppose there exists an orthonormal basis $(\varphi_\gamma)_{\gamma \in \Gamma}$ in which the signal of interest can be expanded with the expansion coefficients $\langle f, \varphi_\gamma \rangle$, a large number of which are zero or negligibly small. In order to make use of this constraint for solving the inverse problem, define the $\ell_1$-norm $\|f\|_1 = \sum_{\gamma \in \Gamma} |\langle f, \varphi_\gamma \rangle|$. One might then find the minimizer of the following functional as the solution of (1) with the aforementioned sparsity constraint:
$$ \Phi(f) = \|Kf - g\|^2 + \tau\,\|f\|_1, \qquad (2) $$
where $\tau > 0$ is the regularization parameter, which can be chosen based on the application. This is a well-known problem, and different approaches to solving it have been proposed. Here we will not go through the details of the problem, which have been widely studied by other authors; the reader is referred to the literature for a comprehensive discussion. For the sake of consistency, we follow the mathematical notation of Daubechies et al. Furthermore, it is worth mentioning that, besides the $\ell_1$-norm introduced above, some authors have also used the total-variation (TV) norm as the constraint.

Several recovery methods are available in the literature, among which iterative thresholding algorithms form an important class. Iterative thresholding algorithms are, more or less, based on a thresholded version of the Landweber iterations; i.e., the sequence of iterates has the general form
$$ f^{(n)} = T\big( f^{(n-1)} + K^*(g - K f^{(n-1)}) \big), \qquad (3) $$
where $K^*$ denotes the conjugate (adjoint) of $K$ and $T$ is a thresholding operator. Daubechies et al. prove the convergence of the above iterative algorithm to the (unique) minimizer of (2) when $T$ is the soft-thresholding operator (see the definition of soft thresholding in Section [sec: description of algorithm]). Noticeable effort has been put into accelerating the original algorithm.
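As a concrete illustration of the thresholded Landweber iteration (3), the following minimal NumPy sketch applies soft thresholding directly to the signal entries, i.e., the sparsifying basis is taken to be the canonical basis; the random sensing matrix, the fixed threshold, and the stopping rule are illustrative assumptions and not the settings used later in this letter.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding operator: shrink every entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def thresholded_landweber(K, g, lam, n_iter=500):
    """Iterate f <- S_lam( f + K^T (g - K f) ); K is scaled so that ||K|| < 1."""
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        f = soft_threshold(f + K.T @ (g - K @ f), lam)
    return f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, M, k = 256, 64, 8                                 # signal length, observations, sparsity
    K = rng.standard_normal((M, N))
    K /= 1.01 * np.linalg.norm(K, 2)                     # ensure the iteration is non-expansive
    f_true = np.zeros(N)
    f_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    g = K @ f_true + 0.001 * rng.standard_normal(M)      # observations g = K f + e
    f_hat = thresholded_landweber(K, g, lam=0.001)
    print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```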
Several methods have been proposed for accelerating the thresholded Landweber iterations; one such method is based on alternating subspace corrections, and other acceleration schemes have been introduced as well. Although the use of soft thresholding is more common, some authors have also used _hard_ thresholding to address the above inverse problem.

In order to explain the underlying idea of the proposed method, let us begin with the following problem. Papoulis introduced _an iteration method_ for the reconstruction of a band-limited signal from a known segment. Suppose $f(t)$ is a signal of which we only know a small segment, $g(t) = f(t)$ for $|t| \le T$. Also suppose $F(\omega)$ is the Fourier transform of $f(t)$ and $F(\omega) = 0$ for $|\omega| > \sigma$ (band-limitedness). The objective is to reconstruct $f(t)$ from $g(t)$. In order to solve this problem, we begin with $G(\omega)$, the Fourier transform of $g(t)$, and form $F_1(\omega)$ by truncating $G(\omega)$ for $|\omega| > \sigma$; in other words, we change $g(t)$ so that it satisfies the constraint on the original signal (band-limitedness in this case). The inverse transform $f_1(t)$ of $F_1(\omega)$ is then used to form $g_1(t)$, which recovers the known segment of $f(t)$:
$$ g_1(t) = \begin{cases} g(t), & |t| \le T, \\ f_1(t), & |t| > T. \end{cases} $$
$g_1(t)$ is supposed to be a better estimate of the desired signal $f(t)$ than $g(t)$. This estimate can be further improved by repeating the above procedure in an iterative manner. That is, in the $n$-th iteration we form the band-limited function $F_n(\omega)$ from the Fourier transform of $g_{n-1}(t)$, compute its inverse transform $f_n(t)$, and recover the known segment of the original signal:
$$ g_n(t) = \begin{cases} g(t), & |t| \le T, \\ f_n(t), & |t| > T. \end{cases} $$
It can be proved that $f_n(t)$ tends to $f(t)$ as $n \to \infty$. In brief, in each iteration we change the latest estimate of the desired signal, i.e., the output of the previous iteration, so that it satisfies the constraint (band-limitedness in this case). Since this process might affect the entire signal, including the known segment, the known segment is then recovered before further progress. This problem is obviously different from our original problem stated in (1) because, firstly, it concentrates on the special case of recovering a _continuous_ signal from _a known segment_ and, secondly, the constraint on the signal is _band-limitedness_ while the constraint in (2) is _sparsity_. Nevertheless, we will implement the above idea to solve our own problem, as explained below.

Based on the above algorithm, our _iterative_ algorithm involves two main operations in each iteration, namely an operation to maintain the constraint followed by an operation to recover the original observations. Since we are interested in problems with a sparsity constraint, a thresholding operation can maintain this constraint for us, i.e.,
$$ \hat f^{(n)} = S_\lambda\big( f^{(n-1)} \big), \qquad (4) $$
where $f^{(n-1)}$ is the latest estimate of the original signal, obtained in the previous iteration, and $S_\lambda$ is the _soft_-thresholding operator, defined componentwise on the expansion coefficients as
$$ S_\lambda(x) = \begin{cases} x - \lambda, & x \ge \lambda, \\ 0, & |x| < \lambda, \\ x + \lambda, & x \le -\lambda, \end{cases} \qquad (5) $$
where $\lambda > 0$ is the threshold. Analogous to the Papoulis recovery step, the original observations are then recovered by
$$ f^{(n)} = \hat f^{(n)} + K^*\big( g - K \hat f^{(n)} \big). \qquad (6) $$
The sequence of iterates can thus be expressed in the following form:
$$ f^{(n)} = S_\lambda\big( f^{(n-1)} \big) + K^*\big( g - K S_\lambda( f^{(n-1)} ) \big), \qquad (7) $$
with $f^{(0)} = 0$. Although (7) is not exactly a sequence of Landweber iterations, it can still be viewed as a modified version of the thresholded Landweber iterations; note, especially, the analogy between (3) and (7). In this letter we only introduce the algorithm and experimentally evaluate its performance compared to similar state-of-the-art algorithms.
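As a concrete sketch of the proposed two-step iteration (7), the code below specializes to the random-sampling case, in which $K$ simply selects a subset of the signal samples and the recovery step (6) reduces to re-inserting the known sample values, in analogy with the Papoulis-Gerchberg recovery step. The one-level Haar transform, the fixed threshold, and the piecewise-constant test signal are illustrative stand-ins; the experiments reported below use the stationary wavelet transform and Birge-Massart thresholds instead.

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_synthesis(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def recover_from_samples(g, sample_idx, n, lam=0.02, n_iter=300):
    """Alternate (i) soft thresholding of the detail coefficients (sparsity
    constraint) with (ii) re-insertion of the known samples (observation
    recovery), in the spirit of iteration (7) for a sampling operator K."""
    f = np.zeros(n)
    f[sample_idx] = g                        # initialize with the observed samples
    for _ in range(n_iter):
        a, d = haar_analysis(f)
        f = haar_synthesis(a, soft(d, lam))  # maintain the sparsity constraint
        f[sample_idx] = g                    # recover the original observations
    return f

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 512
    t = np.linspace(0.0, 1.0, n)
    signal = np.where(t < 0.3, 1.0, -0.5)    # piecewise-constant test signal
    sample_idx = np.sort(rng.choice(n, size=128, replace=False))
    g = signal[sample_idx]                   # noiseless random samples (e = 0)
    f_hat = recover_from_samples(g, sample_idx, n)
    print("MSE:", np.mean((f_hat - signal) ** 2))
```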
A detailed discussion of the convergence of the iterative algorithm and its relation to the thresholded Landweber iterations is beyond the scope of the current letter and is postponed to future publications; the motivation behind the proposed algorithm was, however, briefly discussed above.

In all the experiments described below, the thresholding operator is applied to stationary wavelet transform (SWT) coefficients, obtained using the db1 (Haar) mother wavelet with one level of decomposition. All thresholds are obtained using the well-known Birge-Massart strategy. The iterative algorithm continues until a convergence criterion is met. For the sake of comparison, the results are compared with those obtained by $\ell_1$-norm minimization and total-variation (TV) norm minimization, which are two well-known state-of-the-art methods of sparse signal recovery. Due to space constraints, the results of the experiments are reported very concisely; more comprehensive results can be found at http://mkayvan.googlepages.com/sparsesignalrecovery.

First, we consider the ideal case of sampling with no distortion, i.e., we assume $e = 0$ in (1). The HeaviSine test signal (Figure [fig: heavisine phantom]), from the well-known Donoho-Johnstone collection of synthetic test signals, is reconstructed from different numbers of randomly selected samples. Table [tbl: mse nonoise 1d heav] shows the mean squared error (MSE) between the reconstructed and the original signal for reconstruction by the proposed method as well as by $\ell_1$-norm and TV-norm minimization. As is evident from the results, reconstruction by the proposed method outperforms the two other methods in almost all cases.

[Table [tbl: mse nonoise 1d heav]: MSE between the reconstructed and the original signal for reconstruction by the proposed method as well as by $\ell_1$-norm and TV-norm minimization, from the observed samples of the original HeaviSine test signal.]

Motivated by the Papoulis-Gerchberg algorithm, a method for the recovery of sparse signals from very limited numbers of observations has been proposed. Iterative thresholding algorithms have been widely used to address this problem, and our algorithm also takes advantage of thresholding to maintain the sparsity constraint in each iteration. The signal is then reconstructed by iteratively applying a constraint-maintaining operation followed by recovery of the known features. The performance of the method was experimentally evaluated and compared to other state-of-the-art methods.

I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. 57, pp. 1413-1457, 2004.
E. J. Candes and M. B. Wakin, "An introduction to compressive sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]," IEEE Signal Processing Magazine, vol. 25, pp. 21-30, 2008.
J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. on Image Processing, vol. 16, no. 12, pp. 2992-3004, 2007.
L. Birge and P. Massart, "From model selection to adaptive estimation," in Research Papers in Probability and Statistics: Festschrift for Lucien Le Cam (D. Pollard, E. Torgersen, and G. Yang, eds.), Springer, New York, 1996.
|
Motivated by the well-known Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for recovery of sparse signals from few observations is proposed. The sequence of iterates turns out to be similar to that of the thresholded Landweber iterations, although not the same. The performance of the proposed algorithm is experimentally evaluated and compared to other state-of-the-art methods.
|
the signals from a process of interest are often contaminated by signals from extraneous processes , which are thought to be statistically independent of the process of interest but are otherwise unknown .this raises the question : can one use the observed signals to determine if two or more independent processes are present , and , if so , can one derive a representation of the evolution of each of them ?in other words , if a system is effectively evolving in a closed box , can one process the signals emanating from the box in order to learn the number and nature of the subsystems within it ?there is a variety of methods for solving this blind source separation ( bss ) problem for the special case in which the signals are linearly related to the underlying independent subsystem states ( , ) .however , some observed signals ( e.g. , from biological or economic systems ) may be nonlinear functions of the underlying system states .computational methods of separating such nonlinear mixtures are limited ( , ) , even though humans seem to do it in an effortless manner .consider an evolving physical system that is being observed by making time - dependent measurements ( where ) , which are coordinates on the system s state space . in conclusion , we describe how to choose measurements that comprise such coordinates .the objective of blind source separation is to determine if the measurement time series is separable ; i.e. , to determine if it can be transformed into another coordinate system , ( called the `` source '' or `` separable '' coordinate system ) , in which the transformed time series describes the evolution of statistically independent subsystems .specifically , we want to know if there is an invertible , possibly nonlinear , `` unmixing '' function , , that transforms the measurement time series into a source time series : , \ ] ] where the components of can be partitioned into statistically independent , possibly multidimensional groups .this paper utilizes a criterion for `` statistical independence '' that differs from the conventional one .specifically , let be the probability density function ( pdf ) in , where .namely , let be the fraction of total time that the location and velocity of are within the volume element at location . in this paper ,the data are defined to be separable if and only if there is an unmixing function that transforms the measurements so that is the product of the density functions of individual components ( or groups of components ) where is a subsystem state variable , comprised of one or more of the components of .this criterion for separability is consistent with our intuition that the statistical distribution of the state and velocity of any independent subsystem should not depend on the particular state and velocity of any other independent subsystem .this criterion for statistical independence should be compared to the conventional criterion , which is formulated in ( i.e. , state space ) instead of ( the space of states and state velocities ) .in particular , let be the pdf , defined so that is the fraction of total time that the trajectory is located within the volume element at location . 
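The factorization criterion in the space of states and state velocities can be checked numerically in a straightforward way. The sketch below is an illustrative aside rather than part of the paper's procedure: it estimates the joint density of (state, velocity) for two candidate one-dimensional subsystems by coarse histogramming, using finite differences for the velocities, and reports a KL-type divergence between the joint density and the product of the per-subsystem densities (small values indicate approximate factorization). The binning, the velocity estimate, and the toy signals are illustrative choices.

```python
import numpy as np

def phase_space_factorization_gap(s, bins=8):
    """Given a time series s of shape (T, 2) for two candidate 1-D subsystems,
    estimate the joint density in (s1, ds1, s2, ds2) and compare it with the
    product of the per-subsystem densities in (s1, ds1) and (s2, ds2)."""
    ds = np.gradient(s, axis=0)                 # finite-difference state velocities
    data = np.column_stack([s[:, 0], ds[:, 0], s[:, 1], ds[:, 1]])
    joint, _ = np.histogramdd(data, bins=bins)
    joint = joint / joint.sum()
    p1 = joint.sum(axis=(2, 3))                 # density of (s1, ds1)
    p2 = joint.sum(axis=(0, 1))                 # density of (s2, ds2)
    prod = p1[:, :, None, None] * p2[None, None, :, :]
    mask = joint > 0
    # KL-type divergence between the joint density and the product of subsystem densities
    return np.sum(joint[mask] * np.log(joint[mask] / np.maximum(prod[mask], 1e-300)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(20000) * 0.01
    # Two (approximately) independent oscillators: the phase-space density should roughly factorize.
    s_indep = np.column_stack([np.sin(t + rng.normal(0, 0.1, t.size)),
                               np.cos(1.7 * t + rng.normal(0, 0.1, t.size))])
    # A nonlinear mixture of the two: factorization should fail.
    s_mixed = np.column_stack([s_indep[:, 0] + 0.5 * s_indep[:, 1] ** 2,
                               s_indep[:, 1] + 0.5 * s_indep[:, 0] ** 2])
    print("independent:", phase_space_factorization_gap(s_indep))
    print("mixed:      ", phase_space_factorization_gap(s_mixed))
```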
in some formulations of the bss problem ,the system is said to be separable if and only if there is an unmixing function that transforms the measurements so that is the product of the density functions of individual components ( or groups of components ) in _ every _ formulation of bss , multiple solutions can be created by applying `` subsystem - wise '' transformations , which transform each subsystem s components among themselves .these solutions are the same as one another , except for differing choices of the coordinate systems used to describe each subsystem . however , the criterion in ( [ state space factorization ] ) is so weak that it suffers from a much worse non - uniqueness problem : namely , solutions can almost always be created by mixing the state variables of _ different _ subsystems of other solutions ( see , , ) .there are at least two reasons why ( [ phase space factorization ] ) is the preferred way of defining `` statistical independence '' : 1 .if a physical system is comprised of two independent subsystems , we normally expect that there is a unique way of identifying the subsystems .as mentioned above , ( [ state space factorization ] ) is too weak to meet this expectation . on the other hand , ( [ phase space factorization ] )is a much stronger constraint than ( [ state space factorization ] ) . specifically , ( [ state space factorization ] ) can be recovered by integrating both sides of ( [ phase space factorization ] ) with respect to velocity .this shows that the solutions of ( [ phase space factorization ] ) are a subset of the solutions of ( [ state space factorization ] ) .therefore , it is certainly possible that ( [ phase space factorization ] ) reformulates the bss problem so that it has a unique solution ( up to subsystem - wise transformations ) , although this is not proved in this paper .2 . for all systems that obey the laws of classical physics and are in thermal equilibrium at temperature ,the pdf in is proportional to the maxwell - boltzmann distribution where is the system s energy and is the boltzmann constant .if the system consists of two non - interacting subsystems , the system s energy is the sum of the subsystem energies where and are subsystem state variables comprised of one or more components of .this demonstrates that , for all classical systems composed of non - interacting subsystems , the system s pdf in is the product of the subsystem pdfs in , as stated in ( [ phase space factorization ] ) .there are several other ways in which the proposed method of nonlinear bss differs from methods in the literature : 1 .as stated above , in this paper the bss problem is reformulated in the joint space of states and state velocities .although there is some earlier work in which bss is performed with the aid of velocity information ( , ) , these papers utilize the _ global _ distribution of measurement velocities ( i.e. , the distribution of velocities at all points in state space ) . in contrast, the method proposed here exploits additional information that is present in the _ local _ distributions of measurement velocities ( i.e. , the velocity distributions in each neighborhood of state space ) .many investigators have attempted to simplify the bss problem by assuming prior knowledge of the nature of the mixing function ; i.e. 
, they have modelled the mixing function .for example , the mixing function has been assumed to have parametric forms that describe post - nonlinear mixtures , linear - quadratic mixtures , and other combinations ( , , ) .in contrast , the present paper proposes a model - independent method that can be used in the presence of any invertible diffeomorphic mixing function .3 . in many other approaches ,nonlinear bss is reduced to the optimization problem of finding the unmixing function that maximizes the independence of the source signals corresponding to the observed mixtures .this usually requires the use of iterative algorithms with attendant issues of convergence and computational cost ( e.g. , , ) .in contrast , the method proposed in this paper is analytic and constructive .specifically , the observed data are used to construct a small collection of mappings , , that must contain an unmixing function , if one exists . to perform bss, it then suffices to determine if any of these functions transforms the measured time series , , into a time series , ] . _ + _ 4 .it is determined if the components of ] , and is its time derivative .alternatively , we can compute a large set of correlations of multiple components of ] are found to be statistically independent in step 4 , it is obvious that the data are separable and is an unmixing function . on the other hand ,if the components of ] , similarly , each curve is a vertical straight line passing through ] . in a similar manner, it can be shown that and are also related by some monotonic function .this means that and are component - wise transformations of and . because such component - wise transformations do not affect separability, it immediately follows that is an unmixing function , as asserted above .this subsection describes how the procedure in subsection [ two - dimensional systems ] can be generalized to perform nonlinear bss of systems having degrees of freedom , where .the overall strategy is to determine if the system can be separated into two ( possibly multidimensional ) independent subsystems .if the data can not be so separated , they are simply inseparable .if such a two - fold separation is possible , the data describing the evolution of each independent subsystem can be examined in order to determine if it can be further separated into two lower - dimensional subsystems .this recursive process can be repeated until each independent subsystem can not be further divided into lower - dimensional parts .for example , for , we can first determine if the system can be separated into a subsystem with one degree of freedom and a subsystem having two degrees of freedom .if such a separation is possible , the data describing the two - dimensional subsystem can then be examined to determine if it can be further subdivided into two one - dimensional subsystems .the five - step procedure for performing nonlinear bss is described below and illustrated in figure [ figure1 ] .+ _ 1 . the local second- and fourth - order correlations of the measurement velocity ( )are computed in small neighborhoods of the measurement space .these correlations are used to compute local vectors ( ) . 
_ + this is done exactly as in step in subsection [ two - dimensional systems ] , except for the fact that : 1 ) each subscript can have any value between and ( instead of and ) ; 2 ) each vector has components ( instead of two components ) .the are used to construct a small set of functions , , each of which is defined to be the union of two functions constructed with fewer components , and ._ + one such mapping is constructed for each way of partitioning the into two groups ( groups and ) , without distinguishing the order of the two groups or the order of vectors within each group .for example , for a three - dimensional system ( ) , three functions must be constructed , each one corresponding to one of the three distinct ways of partitioning three vectors into two groups : , , and .in contrast , for two - dimensional systems , there is only one way to divide the vectors into two groups , and , therefore , only one function , , has to be constructed in order to perform bss , as described in subsection [ two - dimensional systems ] . for each grouping, let and denote the number of vectors in groups and , respectively , and let and denote the collections of values of for the vectors in groups and , respectively .each mapping , , is comprised of the union of the components of an function , , and the components of an function , , which are constructed as described in the next paragraph . for example , for the above - mentioned three - dimensional system , the first mapping to be computed , , has three components , comprised of the single component of and the two components of .the construction of is initiated by picking any point in the coordinate system .we then find an curvilinear subspace , consisting of all points that can be reached by starting at and by moving along all linear combinations of the local vectors in group .this subspace can be described by a function , where the components of ( ) parameterize the subspace by labelling its points in an invertible fashion .formally , can be chosen to be a solution of the differential equations for with the boundary condition , .then , for each value of , we define an curvilinear subspace , consisting of all points that can be reached by starting at and by moving along all linear combinations of the local vectors in group .this subspace can be described by a function , where the components of ( ) parameterize the subspace by labelling its points in an invertible fashion . can be chosen to be a solution of the differential equations for with the boundary condition , .finally , the function is defined so that it is constant on each one of the subspaces . specifically , whenever is in the subspace containing . the function is defined by following an analogous procedure in which the roles of groups and are switched .finally , the union of the components of and the components of is taken to define the mapping , , that corresponds to the chosen grouping of the vectors , , into groups and .the foregoing procedure can be illustrated by considering the construction of from the first grouping of vectors in the three - dimensional case mentioned in the previous paragraph . 
in that case : * describes a curved line that passes through , that is parallel to at each point , and that is parameterized by ; * each function , , describes a curved surface , which intersects that curved line at some value of the parameter and which is parallel to all linear combinations of and at each point ; * along each of these curved surfaces , is equal to the corresponding value of . likewise , for the construction of in the three - dimensional case : * describes a curved surface that passes through , that is parallel to all linear combinations of and at each point , and that is parameterized by the two components of ; * each function describes a curved line , which intersects that surface at a value of the parameter and which is parallel to at each point ; * along each of these curved lines , is equal to the corresponding value of .each mapping , , is used to transform the time series of measurements , , into a time series of transformed measurements , ] , is the union of the components of ] .it is determined if at least one mapping leads to transformed measurements , ] and ] , has a pdf that factorizes as here , denotes ] and that is spanned by the first group of unit vectors .likewise , each , which was used to define , describes a linear subspace that contains ] . in a similar manner, it can be shown that and are also related by some invertible function .because and are the state variables of independent subsystems and because and , respectively , are invertibly related to them , and must be subsystem state variables in some other subsystem coordinate systems .this completes the proof of the assertion at the beginning of the previous paragraph : namely , if the data are separable , at least one way of grouping the local vectors ( e.g. , the grouping corresponding to the above - mentioned blocks ) leads to a mapping , , that describes a pair of statistically independent state variables ( and ) .in this section , the new bss technique is illustrated by using it to disentangle synthetic nonlinear mixtures of two audio waveforms .the audio waveforms consisted of two thirty - second excerpts from audio books , each one read by a different male speaker .the waveform of each speaker , denoted for , was sampled 16,000 times per second with two bytes of depth .the thick gray lines in figure [ figure2 ] show the two speakers waveforms during a short ( 30 ms ) interval .these waveforms were then mixed by the nonlinear functions where .this is one of a variety of nonlinear transformations that were tried with similar results .the synthetic mixture measurements , , were taken to be the variance - normalized , principal components of the sampled waveform mixtures , ] has a factorizable density function ( or factorizable correlation functions ) .if the density function does factorize , the data are patently separable , and ] describe the evolution of the independent subsystems . on the other hand , if the density function does not factorize , the data must be inseparable . in this illustrative example ,the separability of the coordinate system was verified by a more direct method .specifically , figure [ fig_warped_grid ] shows that the isoclines for increasing values of ( or ) nearly coincide with the isoclines for increasing values of ( or ) .this demonstrates that the and coordinate systems differ by component - wise transformations of the form : where and are monotonic functions . 
because the data are separable in the system and because component - wise transformations do not affect separability , the data s pdf must factorize in the coordinate system .therefore , we have accomplished the objectives of bss : namely , by blindly processing the mixture measurements , , we have determined that the system is separable , and we have computed the transformation , , to a separable coordinate system . the transformation , , can be applied to the mixture measurements , , to recover the original unmixed waveforms , up to component - wise transformations .the resulting waveforms , ] , are depicted by the thin black lines in figure [ figure2 ] , which also shows the trajectory of the unmixed waveforms in the coordinate system .notice that the two trajectories , ] along the positive axis .when each of the recovered waveforms , ] , was played as an audio file , it sounded like a completely intelligible recording of one of the speakers . in each case , the other speaker was not heard , except for a faint `` buzzing '' sound in the background .therefore , the component - wise transformations ( e.g. , the above - mentioned `` stretching '' ) , which related the recovered waveforms to the original unmixed waveforms , did not noticeably reduce intelligibility .this paper describes how to determine the separability of time - dependent measurements of a system , ; namely , it shows how to determine if there is a linear or nonlinear function ( an unmixing function ) that transforms the data into a collection of signals from statistically independent subsystems .first , the measurement time series is shown to endow state space with a local structure , consisting of vectors at each point .if the data are separable , each of these vectors is directed along a subspace traversed by varying the state variable of one subsystem , while all other subsystems are kept constant .because of this property , these vectors can be used to derive a small number of mappings , , which must include an unmixing function , if one exists . in other words ,the data are separable if and only if one of the describes a separable coordinate system .therefore , separability can be determined by testing the separability of the data , after they have been transformed by each of these mappings . 1 .the original problem of looking for an unmixing function , , among an _ infinite set of functions _ was reduced to the simpler problem of constructing a _ small number of mappings _ , , and then determining if one of them transforms the data into separable form . 2 .the bss method described in this paper is model - independent in the sense that it can be used to separate data that were mixed by any invertible diffeomorphic mixing function .in contrast , most other approaches to nonlinear bss are model - dependent because they assume that the mixing function has a specific parametric form ( ,, ) .notice that the proposed method is analytic and constructive in the sense that the candidate unmixing functions are constructed directly from the data , by locally manipulating them with linear algebraic techniques .in contrast , many other approaches search for an unmixing function by utilizing more complex techniques , involving neural networks or iterative computations .theoretically , the proposed method can be applied to measurements described by any diffeomorphic mixing function . 
however, more data will have to be analyzed in order to handle mixing functions with more pronounced nonlinearities .this is because rapidly varying mixing functions may cause the local vectors ( ) to vary rapidly in the measurement coordinate system , making it necessary to compute those vectors in numerous small neighborhoods .5 . more data will also be required to apply this method to systems with many degrees of freedom . in experiments ,thirty seconds of data ( 500,000 samples ) were used to recover two audio waveforms from measurements of two nonlinear mixtures . in other experiments , approximately six minutes of data ( 6,000,000 samples ) were used to cleanly recover the waveforms of four sound sources ( two speakers and two piano performances ) from four signal mixtures . as expected , blind separation for the 4d state space did require more data , but it was not a prohibitive amount . 6 . the proposed method does not require unusual computational resources . in any event ,the most computationally expensive tasks are the binning of the measurement data and the computation of the local vectors , , in each bin . if necessary, these calculations can be parallelized across multiple cpus .this paper shows how to perform nonlinear bss for the case in which the mixture measurements are invertibly related to the state variables of the underlying system .invertibility can almost be guaranteed by observing the system with a sufficiently large number of independent sensors : specifically , by utilizing at least independent sensors , where is the dimension of the system s state space . in this case, the sensors output lies in an subspace embedded within a space of at least dimensions .dimensional reduction techniques ( e.g. , ) can be used to find the subspace coordinates corresponding to the sensor outputs . because an embedding theorem asserts that this subspace is very unlikely to self - intersect , the coordinates on this subspace are almost certainly invertibly related to the system s state space , as desired .separability is an intrinsic or coordinate - system - independent property of data ; i.e. , if it is true ( or false ) in one coordinate system , it is true ( or false ) in all coordinate systems . the local vectors ( )also represent a kind of intrinsic structure on state space , and , as mentioned previously , these contain some information about separability , which is available in all coordinate systems .these vectors `` mark '' state space and are analogous to directional arrows , which mark a physical surface and which can be used as navigational aids , no matter what coordinate system is being used .many other vectors can be derived from the local velocity distributions of a time series .however , most of them will not have the special property of the : namely , the property of being aligned with the directions traversed by the system when just one subsystem is varied and all others are held constant . for example , the would not have this critical property if the definition of ( see ( [ m definition 1 ] ) and ( [ m definition 2 ] ) ) was changed by replacing with higher order correlations ( e.g. , ) .l. t. duarte and c. jutten , `` design of smart ion - selective electrode arrays based on source separation through nonlinear independent component analysis , '' _ oil & gas science and technology _ , vol .293 - 306 , 2014 .f. merrikh - bayat , m. babaie - zadeh , and c. 
jutten , `` linear - quadratic blind source separating structure for removing show - through in scanned documents , '' _ internat .journal on document analysis and recognition _ , vol .319 - 333 , 2011 .b. ehsandoust , m. babaie - zadeh , and c. jutten , `` blind source separation in nonlinear mixture for colored sources using signal derivatives '' , in _ latent variable analysis and signal separation , e. vincent et al , ( eds . ) _ , lncs 9237 , springer , pp .193 - 200 , 2015 .s. lagrange , l. jaulin , v. vigneron , c. jutten , `` analytic solution of the blind source separation problem using derivatives , '' in _ independent component analysis and blind signal separation _ , lncs , vol .3195 , c. g. puntonet and a. g. prieto ( eds ) .heidelberg : springer , 2004 , pp .81 - 88 .d. n. levin , nonlinear blind source separation using sensor - independent signal representations , _ proceedings of itise 2016 : international work - conference on time series analysis , june 27 - 29 , 2016 , granada , spain _ , pp .84 - 95 .d. n. levin , `` model - independent method of nonlinear blind source separation , '' _ proceedings of lca - ica 2017 : international conference on latent variable analysis and signal separation , february 21 - 23 , 2017 , grenoble , france_.
|
Consider a time series of measurements of the state of an evolving system, where the measured state has two or more components. This paper shows how to perform nonlinear blind source separation; i.e., how to determine if these signals are equal to linear or nonlinear mixtures of the state variables of two or more statistically independent subsystems. First, the local distributions of measurement velocities are processed in order to derive vectors at each point in the measurement space. If the data are separable, each of these vectors must be directed along a subspace that is traversed by varying the state variable of one subsystem while all other subsystems are kept constant. Because of this property, these vectors can be used to construct a small set of mappings, which must contain the "unmixing" function, if it exists. Therefore, nonlinear blind source separation can be performed by examining the separability of the data after they have been transformed by each of these mappings. The method is analytic, constructive, and model-independent. It is illustrated by blindly recovering the separate utterances of two speakers from nonlinear combinations of their audio waveforms. Keywords: source separation, nonlinear signal processing, invariants, sensor, analytic, model-independent.
|
[ s : one ] the importance of accounting for statistical errors is well established in astronomical analysis : a measurement is of little value without an estimate of its credible range .various strategies have been developed to compute uncertainties resulting from the convolution of photon count data with _instrument calibration products _ such as effective area curves , energy redistribution matrices , and point spread functions .a major component of these analyses is good knowledge of the instrument characteristics , described by the instrument calibration data . without the transformation from measurement signals to physically interesting units afforded by the instrument calibration, the observational results can not be understood in a meaningful way . however , even though it is well known that the measurements of the instrument s properties ( e.g. , quantum efficiency of a ccd detector , point spread function of a telescope , etc . ) have associated measurement uncertainties , the calibration of instruments is often taken on faith , with only nominal estimates used in data analysis , even when it is recognized that these uncertainties can cause large systematic errors in the inferred model parameters . in many subfields( exceptions include : e.g. gravitational wave astrophysics , virgo collaboration 2010 , ligo collaboration 2010 and references therein ; cmb analyses , mather et al . 1999 ,rosset et al . 2010 , jarosik et al .2011 , and references therein ; and extra - solar planet / planetary disk work , e.g. butler et al .1996 , maness et al .2011 , and references therein ) , instrument calibration uncertainty is often ignored entirely , or in some cases , it is assumed that the calibration error is uniform across an energy band or an image area .this can lead to erroneous interpretation of the data .calibration products are derived by comparing data from well - defined sources obtained in strictly controlled conditions with predictions , either in the lab or using a particularly well - understood astrophysical source .parametrized models are fit to these data to derive best - fit parameters that are then used to derive the relevant calibration products .the errors on these best - fit values carry information on how accurately the calibration is known and could be used to account for calibration uncertainty in model fitting .unfortunately , however , the errors on the fitted values are routinely discarded . even beyond the errors in these fitted values , calibration products are subject to uncertainty stemming from differences between the idealized calibration experiments and the myriad of complex settings in which the products are used . 
suspected systematic uncertainty can not be fully understood until suitable data are acquired or cross - instrument comparisons are made ( david et al .prospectively , this source of uncertainty is difficult to quantify but is encompassed to a certain extent in the experience of the calibration scientists .different mechanisms have been proposed to quantify this type of uncertainty , ranging from adopting ad hoc distributions such as truncated gaussian ( drake et al .2006 ) to uniform deviations over a specified range .as long as it can be characterized even loosely , statistical theory provides a mechanism by which this information can be included to better estimate the errors in the final analysis .users and instrument builders agree that incorporating calibration uncertainty is important ( see davis 2001 ; drake et al .2006 ; grimm et al .for example , drake et al . (2006 ) demonstrated that error bars on spectral model parameters are underestimated by as much as a factor of 5 ( see their figure 5 ) for high counts data when calibration uncertainty is ignored ( counts for typical ccd resolution spectra ) .such underestimations can lead to incorrect interpretations of the analysis results . despite this , calibration uncertainties are rarely incorporated because only a few ad hoc techniques exist and no robust principled method is available . in short , there is no common language or standard procedure to account for calibration uncertainty . historically , at the international congress of radiology and electricity held in brussels in september 1910 , mme .curie was asked to prepare the first standard based on high energy photon emission ( x-/-ray ) : 21.99 milligrams of pure radium chloride in a sealed glass tube , equivalent to 1.67x10 curies of radioactive radium ( e.g. , brown 1997 pg 9ff and references therein ) .the problem then became : how to measure other samples , in reference to this standard ?although the sample preparation was done by very accurate chemistry techniques , the tricky part was designing and building the instrument to quantify the high - energy photon emission . at the next international committee meeting ( 1912 ,paris ) calibrating the standard was done by specialized electroscopes balancing the ` ionization current ' from two sources .this instrument was deemed to have an uncertainty of one part in 400 ( rutherford and chadwick 1911 ) .the original paper also describes a method for calibrating the detector .although these measurements were quite carefully done , and complex for their time , the result was a single value ( the intensity ) and had a single number quantifying its error ( ; rutherford and chadwick 1911 ) . in this case , the effect of this original unavoidable measurement error on one s final measurement of a source intensity ( in curies ) is straightforward to propagate , such as by the delta - method .nowadays , meetings about absolute standards and measuring instruments are much more complex , incorporating multiple kinds of measurements for a single standard ( e.g. codata ; mohr , taylor , and newell 2008 ) . as well , in the general literature, one finds increasingly complex methods dealing with e.g. 
multivariate data and calibration ( sundberg 1999 , osbourne 1991 ) , and even methods for ` traceability ' back to known standards ( cox and harris 2006 ) .these approaches formulate their complexities in terms of cross - correlations of parameters .this methodology has also been successfully used in modern astrophysics , such as in combining optical observations of supernovae for cosmological purposes ( e.g. kim and miquel 2006 ) .initially , j. drake and other co - authors did try formulating the dependencies and anticorrelations of the final calibration product uncertainties in terms of correlation coefficients .however , after considerable exploration , they found this approach unable to capture the complexities of spacecraft calibration , especially at high energies .first , each part of a modern instrument such as the chandra observatory is measured at multiple energies and multiple positions , as well as calibrating the whole system on the ground .second , interestingly , the instrument is modeled by a complex physics - based computer code .the original calibration measurements are not used directly , but are benchmarks for the physical systems modeled therein .high energy astrophysics brings a third difficulty : the previous papers assumed a gauss - normal distribution for the calibration - product uncertainties ; this certainly does not hold for most real instruments in the high energy regime . hence , expanding beyond drake et al .( 2006 ) , in this paper , we describe how to ` short - circuit ' tracing back to the original calibration uncertainties by using the entire instrument - modeling code as part of statistical computing techniques .we see this in the context of the movement towards `` uncertainty quantification '' ( uq ) of large computer codes ( see , e.g. , christie et al .2005 ) .until recently , the best available general strategy in high - energy astrophysics was to compute the root - mean - square of the measurement errors and the calibration errors and then to fit the source model using the resulting error sum ( see bevington and robinson 1992 ) .unfortunately , the use and interpretation of the standard deviation relies on gaussian errors , that the calibration errors are uncorrelated , and that the uncertainty on the calibration products can be uniquely translated to an uncertainty in each bin in data space .none of these assumptions are warranted .furthermore , this method , equivalent to artificially inflating the statistical uncertainty on the data , will lead to biased fits , error bars without proper coverage , and incorrect estimates of goodness of fit .individual groups have also tried various instrument - specific methods .these range from bootstrapping ( simpson and mayer - hasselwander 1986 ) to raising and lowering response `` wings '' by hand ( forrest 1988 , forrest vestrand and mcconnell 1997 ) , and in one case , analytical marginalization over a particular kind of instrumental uncertainty ( bridle et al .2002 ) . in general and in important cross - instrument comparisons , however , all but the crudest methods ( e.g. , multiplying each instrument s total effective area by a fitted `` uncertainty factor '' as in hanlon et al .1995 , schmelz et al .2009 ) are very difficult to handle .methods for handling systematic errors exist in other fields such as particle physics ( heinrich and lyons 2007 and references therein ) and observational cosmology ( bridle et al .2002 ) . 
in their review of systematic errors , heinrich and lyons ( 2007 ) advocate parameterizing the systematics into statistical models and marginalizing over the nuisance parameters of the systematics .they described various statistical strategies to incorporate systematic errors which range from simple brute force fitting to fully bayesian hierarchical modeling .unfortunately these analytical methods rely on gaussian model assumption that are inappropriate for high energy astrophysics and are also highly case specific .accounting for calibration uncertainty is further complicated by complex and large scale correlation in the calibration products .the value of the calibration product at one point can depend strongly on far away values and even data collected using a different instrument .for example , the _ chandra _ low energy transmission grating spectrometer ( letgs ) + high resolution camera - spectroscopic readout ( hrc - s ) effective area is calibrated using the power - law source pks 2155 - 304 . because the high - order contributions to the spectrum can not be disentangled, the index of the power - law depends strongly on an analysis of the same source with data obtained contemporaneously with the high energy transmission grating spectrometer ( hetgs ) + acis - s .thus , changes in the hetgs+acis - s effective area will affect the longer - wavelength letgs+hrc - s effective area .the complex correlations can result in a diverse set of plausible effective area curves .the choice among these curves can strongly affect the final best fit in day - to - day analyses .the nominally better strategy of folding the calibration uncertainty through to the final statistical errors on fitted model parameters is unfortunately unfeasible : the complex correlations make it difficult to quantify the affect on the final analysis of uncertainty in the calibration product .drake et al . ( 2006 ) proposed a strategy that accounts for these correlations by generating synthetic datasets from a nominal effective area and then fitting a model separately using each of a number of instance of a simulated effective area and then estimating the effect of the calibration error via the variance in the resulting fitted model parameters .this procedure can be implemented using standard software packages such as _ xspec _ ( arnaud 1996 ) and _ sherpa _ ( freeman et al .2001 , refsdal et al .2009 ) and demonstrates the importance of including calibration errors in data analysis . however , in practice there are some difficulties in implementing it with real data where the true parameters are not known _a priori_. the ad hoc nature of the bootstrapping - type procedure means its statistical properties are not well understood , requiring the sampling distributions to be calibrated on a case - by - case basis .that is , the procedure requires verification whenever different models are considered or different parts of the parameter space are explored .the large number of fits required also imposes a heavy computational cost .most importantly , it requires numerous simulated calibration products that must be supplied to end users either directly through a comprehensive database or through instrument specific software for generating them . in general , both these strategies impose a heavy burden on calibration or analysis software maintainers . 
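The procedure of Drake et al. (2006) sketched above lends itself to a compact implementation: the same observed spectrum is refitted once for each simulated effective area curve, and the scatter of the fitted parameters estimates the contribution of calibration uncertainty. In the sketch below, `fit_spectrum` is a hypothetical stand-in for a real fitting engine such as Sherpa or XSPEC (here a crude Poisson maximum-likelihood power-law fit), and the toy effective area curve and its perturbations are invented for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def fit_spectrum(counts, energies, arf, exposure):
    """Stand-in fitting engine: maximum-likelihood fit of a power law
    (log-normalization and photon index) folded through the effective area."""
    def neg_loglike(theta):
        log_norm, gamma = theta
        model = exposure * arf * np.exp(log_norm) * energies ** (-gamma)
        return np.sum(model - counts * np.log(np.maximum(model, 1e-12)))
    res = minimize(neg_loglike, x0=[np.log(0.1), 1.0], method="Nelder-Mead")
    log_norm, gamma = res.x
    return np.exp(log_norm), gamma

def calibration_scatter(counts, energies, arf_sample, exposure):
    """Refit the same data once per simulated effective area curve and report
    the mean and spread of the fitted parameters."""
    fits = np.array([fit_spectrum(counts, energies, arf, exposure) for arf in arf_sample])
    return fits.mean(axis=0), fits.std(axis=0, ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    energies = np.linspace(0.5, 7.0, 200)                        # keV grid (illustrative)
    arf_nominal = 400.0 * np.exp(-0.5 * (energies - 1.5) ** 2)   # toy effective area [cm^2]
    exposure, norm_true, gamma_true = 5e3, 0.01, 2.0
    counts = rng.poisson(exposure * arf_nominal * norm_true * energies ** (-gamma_true))
    # Toy "calibration sample": the nominal curve times smooth random distortions.
    arf_sample = [arf_nominal * (1.0 + 0.05 * rng.standard_normal()
                                 * np.sin(np.pi * energies / 7.0 + rng.uniform(0.0, np.pi)))
                  for _ in range(30)]
    mean_fit, cal_scatter = calibration_scatter(counts, energies, arf_sample, exposure)
    print("mean fit (norm, gamma):", mean_fit)
    print("parameter scatter from calibration uncertainty:", cal_scatter)
```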
the primary objective of this article is to propose well - defined and general methods to incorporate complex calibration uncertainty into spectral analysis in a manner that can be replicated in general practice without precise calibration expertise .although we develop a general framework for incorporating calibration uncertainty , we limit our detailed discussion to accounting for uncertainty in the effective area for _ chandra_/acis - s in spectral analysis . we propose a bayesian framework , where knowledge of calibration uncertainties is quantified through a prior probability . in this way ,information quantified by calibration scientists can be incorporated into a coherent statistical analysis .operationally , this involves fitting a highly - structured statistical model that does not assume the calibration products are known fixed quantities , but rather incorporates their uncertainty through a prior distribution .we describe two statistical strategies below for incorporating this uncertainty into the final fit .multiple imputation fits the model several times using standard fitting routines , but with a different value of the calibration product used in each fit .alternatively , using an iterative markov chain monte carlo ( mcmc ) sampler allows us to incorporate calibration uncertainty directly into the fitting routine by updating the calibration products at each iteration . in either case, we advocate updating the calibration products based solely on information provided by calibration scientists and not on the data being analyzed ( i.e. , not updating products given the data being analyzed ; see also discussion about computational feasibility in [ sec : disc : fullbayes ] ) .this strategy leads to simplified computation and reliance on the expertise of the calibration scientists rather than on the idiosyncratic features of the data .we adopt the strategy of drake et al .( 2006 ) to quantify calibration uncertainty using an ensemble of simulated calibration products , that we call the _ calibration sample_. we use principal component analysis ( pca ) to simplify this representation .a glossary of the terms and symbols that we use is given in table [ tab : glossary ] .in [ s : cs ] we describe the calibration sample and illustrate the importance of properly accounting for calibration uncertainty in spectral analysis .our basic methodology is outlined in [ s : meth ] , where we describe how the calibration sampler can be used to generate the replicates necessary for multiple imputation or can be incorporated into an mcmc fitting algorithm .we also show how pca can provide a concise summary of the complex correlations of the calibration uncertainty .specific algorithms and strategies for implementing this general framework for spectral analysis appear in [ s : alg ] .our proposed methods are illustrated with a simulation study and an analysis of 15 radio loud quasars ( siemiginowska et al .2008 ) in [ s : ex ] . in [ sec : disc ] we discuss future directions and a general framework for handling calibration uncertainties from astrophysical observations with similar form as our yx - ray examples .we summarize the work in [ sec : summ ] .to coherently and conveniently incorporate calibration uncertainty into spectral fitting , we follow the suggestion of drake et al .( 2006 ) to represent it using a randomly generated set of calibration products that we call the _ calibration sample_. 
in this section we begin by describing this calibration sample , and how it can be used to represent the inherent systematic uncertainty .the methods that we discuss in this and the following sections are quite general and in principle can be applied to account for systematic uncertainty in any calibration product . for clarity ,we illustrate their application to instrument effective areas .we begin with a simple model of telescope response that assumes position and time invariance . in particular , suppose the response of a detector to an incident photon spectrum , where represents the detector channel at which a photon of energy is recorded , represents the parameters of the source model , and , , and are the effective area , point spread function , and energy redistribution matrix of the detector , respectively .we aim to develop methods to estimate and compute error bars that properly account for uncertainty in .of course and are also subject to uncertainty and in [ sec : disc : gen ] we discuss extensions of the methods described here to handle more general sources of calibration uncertainty . as an illustration , we consider observations obtained using the spectroscopic array of the _ chandra _ axaf ccd imaging spectrometer detector ( acis - s ) .according to drake et al .( 2006 ) , it is possible to generate a calibration sample of effective area curves for this instrument by explicitly including uncertainties in each of its subsystems ( uv / ion shield transmittance , ccd quantum efficiency , and the telescope mirror reflectivity ) .the result is a set of simulations of the effective area curves .these encompass the range of its uncertainty , with more of the simulated curves similar to its most likely value , and fewer curves that represent possible but less likely values . in principle , some may be more likely than others , in which case weights that indicate the relative likelihood are required . in this article, we assume that all of the simulations in the set are equally likely , that is the simulations are representative of calibration uncertainty .the set of simulations is the _ calibration sample _ and denoted , where is one of the simulated effective area curves .the complicated structure in the uncertainty for the true effective area is illustrated in figure [ fig : arf ] using the calibration sample of size generated by drake et al .a selection of six of the from are plotted as colored dashed lines and compared with the default effective area , that is plotted as a solid black line .the second panel plots the differences , for the same selection .the light gray area represents the full range of and the dark gray area represents intervals that contain 68.3% of the at each energy .the complexity of the uncertainty of is evident .we use the calibration sample illustrated in figure [ fig : arf ] as the representative example throughout this article .we discuss here the effect of the uncertainty represented by the calibration sample on fitted spectral parameters and their error bars .we employ simulated spectra representing a broad range in parameter values . 
in particular, we simulated four data sets of an absorbed power - law source with three parameters ( power - law index , absorption column density , and normalization ) using the fakeit routine in xspecv12 .the data sets were all simulated without background contamination using the xspec model wabs*powerlaw , nominal default effective area from the calibration sample of drake et al .( 2006 ) , and a default rmf for acis - s .the power law parameter ( ) , column density ( ) , and nominal counts for the four simulations ( see also table [ t : sim ] ) were simulation 1 : : : , , and counts ; simulation 2 : : : , , and counts ; simulation 3 : : : , , and counts ; and simulation 4 : : : , , and counts respectively . to illustrate the effect of calibration uncertainty , we selected the 15 curves in with the largest maximum values and the 15 curves with the smallest maximum values . in some sense, these are the 30 most extreme effective area curves in .they are plotted as in the first panel of figure [ fig : arf_shift ] , along with a horizontal line at zero that represents the default ( ) .we used the bayesian method of van dyk et al .( 2001 ) to fit simulation 1 and simulation 2 each 31 times , using each of the 31 curves of plotted in figure [ fig : arf_shift ] .the resulting marginal and joint posterior distributions for and appear in rows 2 - 4 of figure [ fig : arf_shift ] ; the contours plotted in the third row correspond to a posterior probability of 95% for each fit .were constructed by peeling ( green 1980 ) the original monte carlo sample .this involves removing the most extreme sampled values which are defined as the vertices of the smallest convex set containing the sample ( i.e. , the convex hull ) .this is repeated until only 95% of the sample remains .the final hull is plotted as the contour .this is a reasonable approximation because the posterior distributions appear roughly convex . ]the figure clearly shows that the effect of calibration uncertainty swamps the ordinary statistical error . the scientist who assumes that the true effective area is known to be dramatically underestimate the error bars , and may miss the correct region entirely . as a second illustration we fit simulation 1 and simulation 3 each 31 times , using the same as in figure [ fig : arf_shift ] and with , again using the method of van dyk et al .the resulting posterior distributions of and are plotted in figure [ fig : e4e5_shifts ] .comparing the two columns of the figure , the relative contribution of calibration uncertainty to the total error bars appears to grow with counts .for this reason , accounting for calibration uncertainty is especially important with rich high - count spectra .in fact , in our simulations there appears to be a limiting value where the statistical errors are negligible and the total error bars are due entirely to calibration uncertainty .the total error bars do not go below this limiting value regardless of how many counts are observed .we must emphasize , however , that we are assuming that the observed counts are uninformative as to which of the calibration products in the calibration sample are more or less likely .if we were not to make this assumption , however , and if a data set were so large that we were able to exclude a large portion of the calibration sample as inconsistent with the data , the remaining calibration uncertainty would be reduced and its effect would be mitigated . 
in this case , the default effective area and effective area curves similar to the default could potentially be found inconsistent with the data , and thus the fitted model parameters could be different from what we would get if we simply relied on the default curve . in this article , however , we assume either that the data set is not large enough to be informative for the calibration products or that we do not wish to base instrumental calibration on the idiosyncrasies of a particular data set . both figures [ fig : arf_shift ] and [ fig : e4e5_shifts ] suggest that while the fitted values depend on the choice of , the statistical errors for the parameters given any fixed are more - or - less constant .the systematic errors due to calibration uncertainty shift the fitted value but do not affect its variance . of course , in practice we do not know and must marginalize over it , so the total error bars are larger than any of the error bars that are computed given a particular fixed . how to coherently compute error bars that account for calibration uncertainty is our next topic .in this section , we outline how the calibration sample can be used in principled statistical analyses and describe how the complex calibration sample can be summarized in a concise and complete manner using pca . in a standard astronomical data analysis problem , as represented by equation [ eq : sim_arf ] , it is assumed that and that is estimated using , where is the observed counts and is an objective function used for probabilistic estimation and calculation of error bars .typical choices of are the bayesian posterior distribution , the likelihood function , the cash statistic , or a statistic .we use the notation because we generally take a bayesian perspective , with representing a probability distribution and the notation `` '' referring to conditioning , e.g. , is to be read as `` the probability of _ given _ that is true . '' when is unknown , it becomes a nuisance parameter in the model , and the appropriate objective function becomes .using bayesian notation , where the primary source of information for is not the observed counts , , but the large datasets and physical calculations used by calibration scientists , which we denote here by . generally speaking , we expect the information for to come from rather than , at least given , and we expect the information for to come from rather than .this can be expressed mathematically by two conditional independence assumptions : 1 . , and 2 . .we make these conditional independence assumptions , and implicitly condition on , throughout this article . in this case , we can rewrite the above equation as which effectively replaces the posterior distribution with the prior distribution .finally , we can focus attention on by marginalizing out , that is , the objective function is simply the average of the objective functions used in the standard analysis , but with replaced by each of the .thus , the marginalization in equation [ eq : marginal ] does not necessarily involve estimating nor specifying parametric prior or posterior distributions for .when this marginalization is properly computed , systematic errors from calibration uncertainty are rigorously combined with statistical errors without need for gaussian quadrature . of course , when is large , as in the calibration sample of drake et al .( 2006 ) , evaluating and optimizing equation [ eq : newobjective ] would be a computationally expensive task .
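a minimal sketch of the marginalization in equation [ eq : marginal ] : the objective for the source parameters is the average , over the calibration sample , of the per - curve objectives . the toy poisson log - likelihood , the power - law model , and all array names below are illustrative assumptions ; the averaging is carried out on the probability scale via a log - sum - exp for numerical stability .

```python
import numpy as np

rng = np.random.default_rng(1)
n_e, L = 50, 200
energies = np.linspace(0.3, 8.0, n_e)
cal_sample = 1.0 + 0.1 * rng.standard_normal((L, n_e))           # toy effective-area curves
counts = rng.poisson(np.clip(40.0 * energies ** -1.5 * cal_sample[0], 0, None))

def loglike(theta, counts, arf):
    """toy poisson log-likelihood for a power-law spectrum folded through one curve."""
    norm, gamma = theta
    mu = np.clip(norm * energies ** -gamma * arf, 1e-12, None)
    return np.sum(counts * np.log(mu) - mu)

def marginal_objective(theta, counts, cal_sample):
    """average of the per-curve objectives over the calibration sample (eq. [eq:marginal])."""
    ll = np.array([loglike(theta, counts, arf) for arf in cal_sample])
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))   # log of the mean likelihood

print(marginal_objective((40.0, 1.5), counts, cal_sample))
```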
in this section we outline two strategies that aim to significantly simplify the necessary computation .the first is a general purpose but approximate strategy that can be used with any standard model fitting technique and the second is a simple adaptation that can be employed when monte carlo is used in bayesian model fitting .details and illustrations of both methods appear in [ s : alg ] .the first strategy takes advantage of a well - established statistical technique known as _ multiple imputation _ that is designed to handle missing data ( rubin 1987 , schafer 1997 ) .multiple imputation relies on the availability of a number of monte carlo replications of the missing data .the replications are called the _ imputations _ and are designed to represent the statistical uncertainty regarding the unobserved values of the missing data .although the calibration products are not missing data _ per se _ , the calibration sample provides exactly what is needed for us to apply the method of multiple imputation : a monte carlo sample that represents the uncertainty in an unobserved quantity . with the calibration sample in hand , it is straightforward to apply multiple imputation .a subset of of size is randomly selected and called the multiple imputations or the _ multiple imputation sample_. the standard data analysis method is then applied times , once with each of the imputations of the calibration products .this produces sets of parameter estimates along with their estimated variance - covariance matrices , which we denote and , respectively , for . in the simplest form of the method of multiple imputation , we assume that each follows a multivariate normal distribution with mean .the final fitted values and error bars are computed using a set of simple moment calculations known as the _ multiple imputation combining rules _ ( e.g. , harel and zhou 2005 ) .the parameter estimate is computed simply as the average of the individual fitted values , to compute the error bars , we must combine two sources of uncertainty : the statistical uncertainty that would arise even if the calibration product were known with certainty and the systematic uncertainty stemming from uncertainty in the calibration product .each of the standard analyses is computed as if the calibration product were known and therefore each is an estimate of the statistical uncertainty .our estimate of the statistical uncertainty is simply the average of these individual estimates , the systematic uncertainty , on the other hand , is estimated by looking at how changing the calibration product in each of the analyses affects the fitted parameter .thus , the systematic uncertainty is estimated as the variance of the fitted values , finally , the two components of variance are combined for the total uncertainty , where the term accounts for the small number of imputations .if is small relative to the dimension of , will be unstable , and more sophisticated estimates should be used ( e.g. , li et al .
1991 ) .here we focus on univariate summaries and error bars which depend only on one element of and the corresponding diagonal element of .when computing the error bars for one of the univariate fitted parameters in , say component of , it is generally recommended that the number of sigma used be inflated to adjust for the typically small value of .that is , rather than using one- and two - sigma for 68.3% and 95.4% intervals as is appropriate for the normal distribution , a distribution should be used , requiring a larger number of sigma to obtain 68.3% and 95.4% intervals . in the univariate case ,the _ degrees of freedom _ of the distribution determine the degree of inflation and can be estimated by where and are the diagonal terms of and .the method of multiple imputation is based on a number of assumptions .first , it is designed to give approximate error bars on that include the effects of the imputed quantity , but if a full posterior distribution on is desired , then a more detailed bayesian calculation must be performed ( see below ). it will provide an approximately valid answer in general when the imputation model is compatible with the estimation procedure , i.e. , when is the posterior mode from essentially the same distribution as is used for the imputation ( meng 1994 ) .furthermore , the computed standard deviations can be identified with 68% credible intervals only when the posterior distributions are multi - variate normal .additionally , when is small , the coverage must be adjusted using the -distribution ( equation [ eq : mi - df ] ) .multiple imputation offers a simple general strategy for accounting for calibration uncertainty using standard analysis methods .because this method is only approximate , however , our preferred solution is a monte carlo method that is robust , reliable , and fast . in principle, monte carlo methods can handle any level of complexity present in both the astrophysical models and in the calibration uncertainty .monte carlo can be used to construct powerful methods that are able to explore interesting regions in high - dimensional parameter spaces and , for instance , determine best - fit values of model parameters along with their error bars . in this context, it is used as a fitting engine , similar to levenberg - marquardt , powell , simplex , and other minimization algorithms .one of its main advantages is that it is highly flexible and can be applied to a wide variety of problems .a single run is sufficient to describe the variations in the model parameters that arise due to both statistical and systematic errors , which therefore leads to reduced computational costs .. 
] consider a monte carlo sample obtained by sampling the model parameters given the data , , and the calibration product , , where is the iteration number and are the values of the parameters at iteration .the set of parameter values thus obtained is used to estimate the best - fit values and the error bars .when calibration uncertainty is included , we can no longer condition on as a known value of the calibration product .instead we add a new step that updates according to the calibration uncertainties .in particular , is updated using the same iterative algorithm as above , with an additional step at each iteration that updates .suppose at iteration , is the realization of the calibration product .then the new algorithm consists of the following two steps : under the conditional independence assumptions of section [ s : meth : stat : marg ] , we can simplify this sampler by replacing with in the first step : this independence assumption gives us the freedom not to estimate the posterior distribution and simplifies the structure of the algorithm .it effectively separates the complex problem of model fitting in the presence of calibration uncertainties into two simpler problems : ( i ) fitting a model with known calibration and ( ii ) the quantification of calibration uncertainties independent of the current data .the methods that we have proposed so far require storage of a large number of replicates of .since calibration products can be observation specific , this requires a massive increase in the size of calibration databases .this concern is magnified when we consider uncertainties in the energy redistribution matrix , , and point spread function , , and combining multiple observations , each with their own calibration products .although in principle this could be addressed by developing software that generates the calibration sample on the fly , we propose a more realistic and immediate solution that involves statistical compression of .compression of this sort takes advantage of the fact that many of the replicates in differ very little from each other , and in principle we can reduce the sample 's dimensionality from thousands to only a few with little loss of information .here we describe how principal component analysis ( pca ) can accomplish this for the _ chandra_/acis - s calibration sample generated by drake et al .( 2006 ) and illustrated in figure [ fig : arf ] .pca is a commonly applied linear technique for dimensionality reduction and data compression ( jolliffe 2002 , anderson 2003 , ramsay and silverman 2005 , bishop 2007 ) .mathematically , pca is defined as an orthogonal linear transformation of a set of variables such that the first transformed variable defines the linear function of the data with the greatest variance , the second transformed variable defines the linear function _ orthogonal to the first _ with the greatest variance , and so on .pca aims to describe variability and is generally computed on data with mean zero . in practice , the mean of the data is subtracted off before the pca and added back after the analysis .computation of the orthogonal linear transformation is accomplished with the singular value decomposition of a data matrix with each variable having mean zero .this generates a set of eigenvectors that correspond to the orthogonal transformed variables , along with their eigenvalues that indicate the proportion of the variance associated with each eigenvector .the eigenvectors with the largest eigenvalues are known as the _ principal components_.
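a minimal numerical sketch of this decomposition , assuming the calibration sample is stored as a matrix with one simulated curve per row ; the toy sample and all names are illustrative .

```python
import numpy as np

# toy calibration sample: L simulated effective-area curves on n_e energy bins
rng = np.random.default_rng(2)
L, n_e = 1000, 300
cal_sample = 1.0 + 0.05 * rng.standard_normal((L, n_e)).cumsum(axis=1) / np.sqrt(n_e)

# centre the sample and take its singular value decomposition
a_bar = cal_sample.mean(axis=0)
u, s, vt = np.linalg.svd(cal_sample - a_bar, full_matrices=False)

# eigenvalues of the sample covariance and the cumulative fraction of the
# variance captured by the leading components
eigvals = s ** 2 / (L - 1)
frac = np.cumsum(eigvals) / eigvals.sum()
m = int(np.searchsorted(frac, 0.95)) + 1     # components needed for 95% of the variance
components = vt[:m]                          # the m leading principal components
print(m, frac[m - 1])
```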
by selecting a small number of the largest principal components , pca allows us to effectively summarize the variability of a large data set with a handful of orthogonal eigenvectors and their corresponding eigenvalues .our aim is to effectively compress using pca . using the singular value decomposition of a matrix with rows equal to the with , we compute the eigenvectors and corresponding eigenvalues , ordered such that the fraction of the variance of in the direction of is in practice , this gives us the option of using a smaller number of components , in the reconstruction , that is sufficient to account for a certain fraction of the total variance . a large amount of compression can be achieved because very few components are needed to compute the effective area to high precision . for example , in the case of acis effective areas , 8 - 10 components ( out of 1000 ) can account for 95% of the variance , and components can account for 99% of the variance .note that this approximation is valid only when considered over the full energy range ; small localized variations in that contribute little to the total variance , even if they may play a significant role in specific analyses ( the depth of the c - edge , for example ) , may not be accounted for . with the pca representation of in hand , we wish to generate replicates of that mimic . in doing so , however , we must account for the fact that calibration products typically vary from observation to observation to reflect deterioration of the telescope over time and other factors that vary among the observations .however , even though the magnitudes of the calibration products may change , the underlying uncertainties are less variant and are comparable across different regions of the detector at different times . we thus suppose that the differences among the calibration samples can be represented by simply changing the default calibration product , at least in many cases . that is , we assume that the distributions in the calibration samples differ only in their ( loosely defined ) average and that differences in their variances can be ignored . under this assumption , we can easily generate calibration replicates based on the first principal components as where is the observation - specific effective area that would currently be created by users , is the nominal default effective area from calibration , , , and are independent standard normal random variables .in addition to the first principal components , this representation aims to improve the replicates by including the residual sum of the remaining components .equation [ eq : pca0 ] shows how we account for .if were equal to , equation [ eq : pca0 ] would reduce to the standard pca representation . to account for the observation - specific effective area , we add the offset . equation [ eq : pca ] rearranges the terms to express as the sum of calibration quantities that we propose to provide in place of .in particular , using equation [ eq : pca ] , we can generate any number of monte carlo replicates from , using only , , , and .
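continuing the sketch above , replicates in the spirit of equations [ eq : pca0 ] and [ eq : pca ] can be drawn from the stored summaries alone . the treatment of the residual term below ( a single standard normal deviate times the pointwise rms of the unexplained variability ) and all variable names are assumptions for illustration ; the authors' exact weighting and stored quantities may differ .

```python
import numpy as np

rng = np.random.default_rng(3)
L, n_e = 1000, 300
cal_sample = 1.0 + 0.05 * rng.standard_normal((L, n_e)).cumsum(axis=1) / np.sqrt(n_e)

# pca summaries of the toy sample (as in the previous sketch)
a_bar = cal_sample.mean(axis=0)
u, s, vt = np.linalg.svd(cal_sample - a_bar, full_matrices=False)
eigvals = s ** 2 / (L - 1)
m = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1
components, scales = vt[:m], np.sqrt(eigvals[:m])
resid = (cal_sample - a_bar) - (cal_sample - a_bar) @ components.T @ components
resid_scale = resid.std(axis=0)            # pointwise rms of the unexplained variability

def draw_replicate(a0_obs, a0_cal, rng):
    """one effective-area replicate built from the stored pca summaries:
    observation-specific default + (sample mean - calibration default)
    + m components scaled by independent N(0,1) deviates + a residual term."""
    e = rng.standard_normal(m)
    xi = rng.standard_normal()
    return a0_obs + (a_bar - a0_cal) + (e * scales) @ components + xi * resid_scale

a0_cal = cal_sample[0]                     # stand-in for the calibration default
a0_obs = 1.02 * a0_cal                     # stand-in for an observation-specific default
replicates = np.array([draw_replicate(a0_obs, a0_cal, rng) for _ in range(1000)])
```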
in this way we need only provide instrument - specific and not observation - specific values of , and .figure [ fig : pca ] illustrates the use of pca compression on the calibration sample generated by drake et al .( 2006 ) and illustrated in figure [ fig : arf ] .we generated 1000 replicate effective areas using equation [ eq : pca ] with and .the dashed and dotted lines in the upper left panel respectively superimpose the full range and 68.3% intervals of these replicates on the corresponding intervals for the original calibration sample , plotted in light and dark grey . in this case , using captures 96% of the variation in , as computed with equation [ eq : pcafrac ] .the remaining three panels give cross sections at 1.0 , 1.5 , and 2.5 kev .the distributions of the 1000 replicates generated using equation [ eq : pca ] appear as solid lines , and those of the original calibration sample as gray regions .the figure shows that pca replicates generated with are quite similar to the original calibration sample .although the pca representation cannot be perfect ( e.g. , it does not fully represent uncertainty overall or in certain energy regions ) , it is much better than not accounting for uncertainty at all .in this section we describe specific algorithms that incorporate calibration uncertainty into standard data analysis routines . in [ s : alg : mi ] we show how multiple imputation can be used with popular scripted languages like _ heasarc_/xspec and _ ciao_/sherpa for spectral fitting , and in [ s : alg : mc ] we describe some minor changes that can be made to sophisticated markov chain monte carlo samplers to include the calibration sample . in both sections we begin with cumbersome but precise algorithms and then show how approximations can be made to simplify the implementation .our recommended algorithms appear in [ s : alg : mi : pca ] and [ s : alg : prag : pca ] . in [ s : ex ] we demonstrate that these approximations have a negligible effect on the final fitted values and error bars .multiple imputation is an easy - to - implement method that relies heavily on standard fitting routines .an algorithm for accounting for calibration uncertainty using multiple imputation is described by : step 1 : : : for , repeat the following : + step 1a : ; ; randomly sample from .step 1b : ; ; fit the spectral model ( e.g. , using _ sherpa _ ) in the usual way , but with the effective area set to . step 1c : ; ; record the fitted values of the parameters as . step 1d : ; ; compute the variance - covariance matrix of the fitted values and record the matrix as .( in _ sherpa _ this can be done using the covariance function . ) step 2 : : : use equation [ eq : mi - est ] to compute the fitted value , of .step 3 : : : use equations [ eq : mi - win]-[eq : mi - total ] to compute the variance - covariance matrix , , of .the square root of the diagonal terms of are the error bars of individual parameters .
step 4 : : : use equation [ eq : mi - df ] to compute the degrees of freedom for each component of , which are used to properly calibrate the error bars computed in step 3 .asymptotically , error bars correspond to equal - tail intervals under the gaussian distribution .when the number of imputations is small , error bars should be used instead , where , a number , can be looked up in any standard -distribution table using `` df '' equal to the degrees of freedom computed in step 4 ; see [ s : ex : mi ] for an illustration .if the correlations among the fitted parameters are not needed , the error bars of the individual fitted parameters can be computed one at a time using equations [ eq : mi - win]-[eq : mi - total ] with and replaced by the fitted value of the individual parameter and the square of its error bars , both computed using . using the pca approximation results in a simple change to the algorithm in [ s : alg : mi : full ] : step 1a is replaced by ( see equation [ eq : pca ] ) : step 1a : : : set , where are independent standard normal random variables .the choice between this algorithm and the one described in section [ s : alg : mi : full ] should be determined by the availability of a sample of size from a ( in which case the algorithm in section [ s : alg : mi : full ] should be used ) or of the pca summaries of a required for the algorithm in this section . in [ s : meth : stat : mc ] we considered simple monte carlo methods that simulate directly from the posterior distribution , . more generally , markov chain monte carlo ( mcmc ) methods can be used to fit much more complicated models .( good introductory references to mcmc can be found in gelman 2003 and gregory 2005 . ) a markov chain is an ordered sequence of parameter values such that any particular value in the sequence depends on the history of the sequence only through its immediate predecessor . in this way mcmc samplers produce dependent draws from by simulating from a distribution that depends on the previous value of in the markov chain , .that is , is designed to be simple to sample from , while the full may be quite complex .the price of this , however , is that the may not be statistically independent of the ; and in fact may have appreciable correlation with ( that is , an autocorrelation of length ) .the distribution is derived using methods such as the metropolis - hastings algorithm and/or the gibbs sampler that ensure that the resulting markov chain converges properly to .van dyk et al . ( 2001 ) show how gibbs sampling can be used to derive in high - energy spectral analysis .their method has recently been generalized in a _ sherpa _ module called pyblocxs ( bayesian low count x - ray spectral analysis in python , to be released ) that relies more heavily on metropolis - hastings than on gibbs sampling and can accommodate a larger class of spectral models . ] in this section we show how pyblocxs can be modified to account for calibration uncertainty .for clarity we use the notation to indicate a single iteration of pyblocxs run with the effective area set to .in [ s : meth : stat : mc ] we describe how a monte carlo sampler can be constructed to account for calibration uncertainty under the assumption that the observed counts carry little information as to the choice of effective area curve .
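the combining rules used in steps 2 - 4 can be written down in a few lines . the sketch below treats a single scalar parameter and follows the standard multiple - imputation formulas ( rubin 1987 ) for the combined estimate , the within - and between - imputation variances , the total variance , and the degrees of freedom ; the variable names and the toy usage are illustrative assumptions .

```python
import numpy as np

def combine_imputations(estimates, variances):
    """multiple-imputation combining rules for one scalar parameter
    (cf. equations [eq:mi-est]-[eq:mi-total] and [eq:mi-df]).

    estimates : (M,) fitted values, one per effective-area imputation
    variances : (M,) squared statistical error bars from the individual fits
    """
    m = len(estimates)
    est = np.mean(estimates)                    # combined fitted value
    w = np.mean(variances)                      # within-imputation (statistical) variance
    b = np.var(estimates, ddof=1)               # between-imputation (systematic) variance
    t = w + (1.0 + 1.0 / m) * b                 # total variance; (1 + 1/M) corrects for small M
    df = (m - 1) * (1.0 + w / ((1.0 + 1.0 / m) * b)) ** 2   # rubin's degrees of freedom
    return est, np.sqrt(t), df

# toy usage with M = 20 hypothetical fits of the power-law index
rng = np.random.default_rng(4)
gammas = 2.0 + 0.05 * rng.standard_normal(20)   # per-imputation fitted values
sigma2 = np.full(20, 0.03 ** 2)                 # per-imputation statistical variances
print(combine_imputations(gammas, sigma2))
```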
in particular , we must iteratively update and by sampling them as described in equations [ eq : mc1 ] and [ eq : mc2 ] .sampling from can be accomplished by simply selecting an effective area curve at random from .updating is more complicated , however , because we are using mcmc . we cannot directly sample from as stipulated by equation [ eq : mc2 ] .the pyblocxs update of depends on the previous iterate , .thus , we must iterate step 2 of the fully bayesian sampler several times before it converges and delivers an uncorrelated draw from . in this way , we iterate step 2 in the following sampler until the dependence on is negligible . to simplify notation , we display iteration rather than iteration ; notice that after repetitions , step 2 returns . in practice we expect a relatively small value of ( or fewer ) will be sufficient , see [ s : ex : pbayes ] .the mcmc step for a given is as follows : step 1 : : : sample .step 2 : : : for , : : sample . once the mcmc sampler run is completed , the ` best - fit ' and confidence bounds for each parameter are typically determined from the mean and widths of the histograms constructed from the traces of ; or mean and widths of the contours ( for multiple parameters ) , as in figures [ fig : arf_shift ] and [ fig : comp_e5_1to4 ] ; see park et al . ( 2008 ) for discussion . using the pca approximation results in a simple change to the algorithm in [ s : alg : prag ] : step 1 is replaced by step 1 : : : set , where are independent standard normal random variables .because of the advantages in storage that this method confers , and the negligible effect that the approximation has on the result ( see [ s : ex : comp ] ) , this is our recommended method when using mcmc to account for calibration uncertainty with data sets with ordinary counts .in this section we investigate optimal values of the tuning parameters needed by the algorithms and compare the performance of the algorithms with simulated and with real data . throughout , we use the absorbed power law simulations described in table [ t : sim ] to illustrate our methods and algorithms .the eight simulations represent a design with the three factors being ( 1 ) data simulated with and with an extreme effective area curve from , ( 2 ) and nominal counts , and ( 3 ) two power law spectral models .these simulations include the four described in [ s : cs : ex ] .we investigate the number of imputations required in multiple imputation studies in [ s : ex : mi ] , and the number of subiterations required in mcmc runs in [ s : ex : pbayes ] .we compare the results from the different algorithms ( multiple imputation with sampling and with pca , and pyblocxs with sampling and pca ) in detail in [ s : ex : comp ] , and apply them to a set of quasar spectra in [ s : ex : quasar ] .when using multiple imputation , we must decide how many imputations are required to adequately represent the variability in .although in the social sciences as few as 3 - 10 imputations are sometimes recommended ( e.g. , schafer 1997 ) , larger numbers more accurately represent uncertainty . to investigate this we fit spectra from simulation 1 and simulation 2 using , with different values of , the number of imputations .
for each value of , we generate effective area curves , , using equation [ eq : pca ] , fit the simulated spectrum times , once with each , derive the error bars , and combine the fits using the multiple imputation combining rules in equations [ eq : mi - est]-[eq : mi - total ] .this gives us a single total error bar for each parameter .we repeat this process 200 times for each value of to investigate the variability of the computed error bar for each value of .the result appears in the first two rows of figure [ fig : mi ] . for small values of the error bars are often too small or too large . with more than about 20 imputations , however , the multiple imputation error bars are quite accurate . even with , however , the error bars computed with multiple imputation are more representative of the actual uncertainty than when we fix the effective area at , which is represented by in figure [ fig : mi ] . generally speaking , is usually adequate , but to is better if computational time is not an issue . note that the size of the calibration sample is generally much larger than this , and it is therefore a fair sample to use in the bayesian sampling techniques described in [ s : alg : mc ] . when is relatively small , the computed error bars may severely underestimate the uncertainty , and must be corrected for the degrees of freedom in the imputations ( see equation [ eq : mi - df ] ) . to illustrate this , we compute the nominal coverage of the standard interval for each of the mi analyses described in the previous paragraph . when is large , such intervals are expected to contain the true parameter value 68.3% of the time , the probability that a gaussian random variable is within one standard deviation of its mean . with small , however , the coverage decreases because of the extra uncertainty in the error bars .the bottom two rows of figure [ fig : mi ] illustrate the importance of adjusting for the degrees of freedom , especially when using relatively small values of .the plots give the range of nominal coverage rates for one - sigma error bars . for large the coverage approaches , but for small it can be as low as 50 - 60% .this can be corrected by computing the degrees of freedom using equation [ eq : mi - df ] and using instead of , as described in [ s : alg : mi : full ] . as noted in [ s : alg : prag ] , in order to obtain a sample from the as in equation [ eq : mc2 ] , we must iterate pyblocxs times to eliminate the dependence of . to investigate how large must be , we run pyblocxs on the spectra from simulation 1 and simulation 5 of table [ t : sim ] , which were generated using the `` default '' and an `` extreme '' effective area curve .since simulation 5 was generated using the `` extreme '' effective area curve , it is the `` extreme '' curve that is actually `` correct '' and the `` default '' curve that is `` extreme '' . when running pyblocxs with the `` default '' effective area curve , we initiated the chain at the posterior mean of the parameters given the `` extreme '' curve , and vice versa .this ensures that we are using a relatively extreme starting value and will not underestimate how large must be to generate an essentially independent draw .
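the autocorrelation diagnostic discussed next is easy to compute from a stored chain . the sketch below gives the sample autocorrelation at lags 0 through max_lag for a one - dimensional chain ; the ar(1 ) toy chain is there only to show the expected output and is unrelated to the actual pyblocxs runs .

```python
import numpy as np

def autocorr(chain, max_lag=30):
    """sample autocorrelation of a 1-d mcmc chain at lags 0..max_lag."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# toy usage: an ar(1) chain with lag-1 correlation 0.6, so the acf should
# decay roughly as 0.6**k and be near zero well before k = 30
rng = np.random.default_rng(5)
chain = np.zeros(5000)
for t in range(1, chain.size):
    chain[t] = 0.6 * chain[t - 1] + rng.standard_normal()
print(autocorr(chain)[:5])
```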
the resulting autocorrelation and time series plots for appear in figure [ fig : acf ] .the autocorrelation plots report the correlation of and for each value of plotted on the horizontal axis .the plots show that for the autocorrelations are essentially zero for both spectra , and we can consider and to be essentially independent .similarly , the time series plots show that there is no effect of the starting value past the tenth iteration .similar plots for and the normalization parameter ( not included ) are essentially identical .thus , in all subsequent computations we set in the pragmatic bayesian samplers .generally speaking , the user should construct autocorrelation plots to determine how large must be in a particular setting .when we iterate step 2 in the pragmatic bayesian method , we are more concerned with the mixing of the chain once it has reached its stationary distribution than with convergence of the chain to its stationary distribution .this is because convergence to the stationary distribution will be assessed using the final chain of in the regular way , i.e. , using multiple chains ( gelman & rubin 1992 , van dyk et al .2001 ) . even after the stationary distribution has been reached , we need to obtain a value of in step 2 that is essentially independent of the previous draw , given .thus , we focus on the autocorrelation of the chain for fixed .this said , if the posterior of is highly dependent on , and and are extreme within the calibration sample , then the conditional posterior distribution of given and may be quite different and we may need to allow to converge to its new conditional posterior distribution .the time series plots in figure 6 investigate this possibility when extreme values of are used .luckily , the effect of these extreme starting values still burns off in just a few iterations , as is evident in figure [ fig : acf ] .we discuss two classes of algorithms in [ s : alg ] to account for calibration uncertainty in spectral analysis : multiple imputation , and a pragmatic bayesian mcmc sampler . for each , we consider two methods of exploring the calibration product sample space : first by directly sampling from the set of effective areas , and second by simulating an effective area from a compressed principal component representation . here , we evaluate the effectiveness of each of the four resulting algorithms , and show that they all produce comparable results and are a significant improvement over not including the calibration uncertainty in the analysis .we fit each of the eight simulated data sets described in table [ t : sim ] using each of the four algorithms .the first four simulations are identical to those described in [ s : cs : ex ] .analyses carried out using multiple imputation all used imputations . for analyses using the pca approximation to , we used .for pragmatic bayesian methods , we used inner iterations .
figure [ fig : comp_e5_1to4 ] gives the resulting estimated marginal posterior distributions for for each of the eight simulations and each of the four fitting algorithms , along with the results when the effective area is fixed at .parameter traces ( also known as time series ) are also shown for all the simulations for the two mcmc algorithms ( see [ s : alg : mc ] ) .although the fitted values differ somewhat ( see simulations 1 , 2 , 3 , and 6 ) among the four algorithms that account for calibration uncertainty , the differences are very small relative to the errors and overall the four methods are in strong agreement .when we do not account for calibration uncertainty , however , the error bars can be much smaller and in some cases the nominal 68% intervals do not cover the true value of the parameter ( see simulations 1 , 2 , 5 , and 6 , corresponding to larger nominal counts ) . when we do account for calibration uncertainty , only in simulation 6 did the 68% intervals not contain the true value , and in this case the 95% intervals ( not depicted ) do contain the true value .results for are similar but omitted from figure [ fig : comp_e5_1to4 ] to save space .an advantage of using mcmc is that it maps out the posterior distribution ( under the conditional independence assumptions of section [ s : meth : stat : marg ] ) rather than making a gaussian approximation to the posterior distribution .notice the non - gaussian features in the posterior distributions plotted for simulations 1 , 3 , 5 , and 7 ( corresponding to the harder spectral model ) .here we illustrate our methods with a realistic case , using x - ray spectra available for a small sample of radio loud quasars observed with the _ chandra _ x - ray observatory in 2002 ( siemiginowska et al . 2008 ) .we performed the standard data analysis including source extraction and calibration with ciao software ( _ chandra _ interactive analysis of observations ) .the x - ray emission in radio loud quasars originates in the close vicinity of a supermassive black hole and could be due to an accretion disk or a relativistic jet .it is well described by a compton scattering process and the x - ray spectrum can be modeled by an absorbed power law : where is the absorption cross - section , and the three model parameters are : the normalization at 1 kev , ; the photon index of the power law , ; and the absorption column , . the number of counts in the x - ray spectra varied between 8 and 5500 . after excluding two datasets ( obsid 3099 , which had 8 counts , and obsid 836 , which is better described by a thermal spectrum ) , we reanalyzed the remaining 15 sources to include calibration uncertainty . in fitting each source , we included a background spectrum extracted from the same observation over a large annulus surrounding the source region .we adopted a complex background model ( a combination of a polynomial and 4 gaussians ) that was first fit to the blank - sky data provided by the _ chandra _ x - ray center to fix its shape .while fitting the models to the source and background spectra , we allowed only the normalization of the background model to be free .this is an appropriate approach for very small background counts in the chandra spectra of point sources .we used this background model for all spectra ( except for two obsids , 3101 and 3106 , that had short 5 ksec exposure times and a small number of counts , for which the background was ignored ) .
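for reference , a standard form of the absorbed power law consistent with the parameters listed above ( and with the xspec wabs*powerlaw model used for the simulations ) is

$$ f(E) \;=\; N_{E}\, E^{-\Gamma}\, e^{-N_{\mathrm H}\,\sigma(E)} \;, $$

where $f(E)$ is the photon flux density , $N_{E}$ is the normalization at 1 kev , $\Gamma$ is the photon index , $N_{\mathrm H}$ is the absorption column , and $\sigma(E)$ is the absorption cross - section ; the authors' exact notation may differ .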
the original analysis ( siemiginowska et al .2008 ) did not take into account calibration errors , and as we show below the statistical errors are significantly smaller than the calibration errors for sources with a large number of counts .we fit each spectrum accounting for uncertainty in the effective area in two ways : 1 . with the multiple imputation method in [ s : alg : mi : pca ] using sherpa for the individual fits , and 2 . with the pragmatic bayesian algorithm in [ s : alg : prag : pca ] using pyblocxs for mcmc sampling . both of these fits use the pca approximation with 14 observation - specific default effective area curves , in equation [ eq : pca ] with .we use multiple imputations and subiterations in the pragmatic bayesian sampler . to illustrate the effect of accounting for calibration uncertainty , we compared the first fit with the sherpa fit that fixes the effective area at , and the second fit with the pyblocxs fit that also fixes the effective area at .the results appear in figure [ fig : quasar ] , which compares the error bars computed with ( ) and without ( ) accounting for calibration uncertainty .the left panel uses _ sherpa _ and computes the total error using multiple imputation , and the right panel uses pyblocxs and computes the total error using the pragmatic bayesian method .the plots demonstrate the importance of properly accounting for calibration uncertainty in high - count , high - quality observations .the systematic error becomes prominent with high counts because the statistical error is small , and deviates from , asymptotically approaching a value of .this asymptotic value represents the limiting accuracy of any observation carried out with this instrument , regardless of source strength or exposure duration . for the absorbed power law model applied here , the systematic uncertainty on becomes comparable to the statistical error for spectra with counts , with the largest correction seen in obsid 866 , which had counts .in the previous sections , we have worked through a specific example ( the chandra effective area ) in some detail .now , in this section , we present two more complete generalizations .the first is the case ignored previously , when the data have something interesting to say about the calibration uncertainties . in the second , we explain how to generalize the algorithms we worked through earlier to the full range of instrument responses , including energy redistribution matrices and point spread functions . to avoid the assumption that the observed counts carry little information as to the choice of effective area curve , we can employ a fully bayesian approach that bases inference on the full posterior distribution . to do this via mcmc , we must construct a markov chain with stationary distribution , which can be accomplished by iterating a two - step gibbs sampler , for . * a fully bayesian sampler * step 1 : : : sample .step 2 : : : sample .notice that unlike in the pragmatic bayesian approach in [ s : alg : mc ] , step 1 of this sampler requires to be updated given the current data . unfortunately , sampling is computationally quite challenging .the difficulty arises because the fitted value of can depend strongly on .that is , calibration uncertainty can have a large effect on the fitted model ; see drake et al .( 2006 ) and [ s : cs : ex ] .
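to make the two steps and their cost concrete , here is a sketch over a discrete , equally weighted calibration sample ; ` loglike ` stands in for the user's spectral log - likelihood and ` mcmc_step ` for a single parameter update ( e.g. , one pyblocxs iteration ) , both of which are assumptions for illustration . step 1 draws a curve with probability proportional to the likelihood of the data under each replicate at the current parameter value , which is exactly the expensive sweep discussed here ; dropping those weights ( i.e. , choosing a curve uniformly at random ) recovers step 1 of the pragmatic bayesian sampler .

```python
import numpy as np

def sample_area_given_theta(theta, counts, cal_sample, loglike, rng):
    """step 1: draw curve j with probability proportional to p(counts | theta, A_j).
    the sweep over the whole calibration sample is what makes this step costly."""
    ll = np.array([loglike(theta, counts, arf) for arf in cal_sample])
    w = np.exp(ll - ll.max())              # stabilised weights
    w /= w.sum()
    return cal_sample[rng.choice(len(cal_sample), p=w)]

def fully_bayesian_sampler(counts, cal_sample, loglike, mcmc_step, theta0,
                           n_iter=1000, rng=None):
    """iterate the two-step gibbs sampler: update the area given theta and the
    data, then update theta given the drawn area."""
    rng = rng or np.random.default_rng()
    theta, draws = np.asarray(theta0, dtype=float), []
    for _ in range(n_iter):
        area = sample_area_given_theta(theta, counts, cal_sample, loglike, rng)
        theta = mcmc_step(theta, counts, area)
        draws.append(theta.copy())
    return np.array(draws)
```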
from a statistical point of view , this means that given , and can be highly dependent and can depend strongly on .thus a large proportion of the replicates in may have negligible probability under , and it can be difficult to find those that have appreciable probability without doing an exhaustive search .the computational challenges of a fully bayesian approach are part of the motivation behind our recommendation of the pragmatic bayesian method . despite the computational challenges , there is good reason to pursue a fully bayesian sampler .insofar as the data are informative as to which replicates in are more or less likely , the dependence between and can help us to eliminate possible values of along with replicates in , thereby reducing the total error bars for .work to tackle the computational challenges of the fully bayesian approach is ongoing .in general , the response of a detector to incident photons arriving at time can be written as where and are the measured photon location and energy ( or the detector channel ) , while and are the true photon sky location and energy ; the source physical model describes the energy spectrum , morphology ( point , extended , diffuse , etc . ) , and variability with parameters ; and are the expected counts in detector channel space .calibration is carried out using well known instances of to determine the quantities . it is important to note that all of the quantities in equation [ eq : resp ] have uncertainties associated with them .our goal is to provide a fast , reliable , and robust strategy to incorporate the jittering patterns in all of the calibration products and to draw proper inference , best fits and error bars , reflecting calibration uncertainty . in principle , a calibration sample that represents uncertainty , together with the statistical methods for incorporating it described in [ s : meth ] and [ s : alg ] , can be applied directly to calibration uncertainty for any of the calibration products .the use of pca to summarize the calibration sample , however , may not be robust enough for higher dimensional and more complex calibration products .more sophisticated image analysis techniques or hierarchically applied pca may be more appropriate .our basic strategy , however , of providing instrument - specific summaries of the variability in the calibration uncertainty and observation - specific measures of the mean ( or default ) calibration product , is quite general .thus , in this section , we focus on the generalization of equation [ eq : pca0 ] and begin by rephrasing the equation as here the mean is the mean of the calibration sample , the offset is the shift that we impose on the center of the distribution of the calibration uncertainty to account for observation - specific differences , the explained variability is the portion of the variability that we summarize in a parametric and/or systematic way ( e.g.
, using pca ) , and the residual variability is the portion of the variability left unexplained by the systematic summary .these four terms correspond to the four terms in equation [ eq : pca0 ] .the formulation in equation [ eq : gencompgen ] removes the necessity of depending solely on pca to summarize variance in the calibration sample , and allows us to use a variety of methods to generate the simulated calibration products .for example , we can even include such loosely stated measures of uncertainty as `` the effective area is uncertain by x% at wavelength y '' .this formulation is not limited to describing effective areas alone , but can also be used to encompass the calibration uncertainty in response matrices and point spread functions . the precise method by which the variance terms are generated may vary widely , but in all foreseeable cases they can be described as in equation [ eq : gencompgen ] , with an offset term and a random variance component added to the mean calibration product , and with an optional residual component . the calibration sample simulated in this way forms an informative prior that could be used like in equation [ eq : mc1 ] .some potential methods of describing the variance terms are : 1 .when a large calibration sample is available , the random component is simply the full set of calibration products in the sample .when using monte carlo for model fitting , as in [ s : meth : stat].3 , a random index is chosen at each iteration and the calibration product corresponding to that index is used for that iteration .this process preserves the weights of the initial calibration sample . in this scenario the residual component is identically zero . 2 .if the calibration uncertainty is characterized by a multiplicative polynomial term in the source model , the explained variance component in equation [ eq : gencompgen ] can be obtained by sampling the parameters of the polynomial from a gaussian distribution , using their best - fit values and the estimated errors .these simulated calibration products can then be used to modify the nominal products inside each iteration .thus , the offset and residual terms are zero , and only the polynomial parameter best - fit values and errors need to be stored . 3 . if a calibration product is newly identified , it may be systematically off by a fixed but unknown amount over a small passband , and users can specify their own estimate of calibration uncertainty as a randomized additive constant term over the relevant range .this is essentially equivalent to using a correction with a first - order polynomial .the stored quantities are the average offset , the bounds over which the offset can range , and a pointer specifying whether to generate uniform or gaussian deviates over that range .we have developed a method to handle in a practical way the effect of uncertainties in instrument response on astrophysical modeling , with specific application to the _ chandra_/acis instrument effective area .our goal has been to obtain realistic error bars on astrophysical source model parameters that include both statistical and systematic errors .
for this purpose , we have developed a general and comprehensive strategy to describe and store calibration uncertainty and to incorporate it into data analysis .starting from the full , precise , but cumbersome objective function of the parameters , data , and instrument uncertainties , we adopt a bayesian posterior - probability framework and simplify it in a few key places to make the problem tractable .this work holds practical promise for a generalized treatment of instrumental uncertainties in not just spectra but also imaging , or any kind of higher - dimensional analyses ; and not just x - rays , but across wavelengths and even to particle detectors .our scheme treats the possible variations in calibration as an informative prior distribution while estimating the posterior probability distributions of the source model parameters .thus , the effects of calibration uncertainty are automatically included in the result of a single fit .this is different from a usual sensitivity study in that we provide an actual uncertainty estimate .our analysis shows that the systematic error contribution is more significant in high - count spectra than when there are few counts ; therefore , including calibration uncertainty in a spectral fitting strategy is highly recommended for high quality data .we adopt the calibration uncertainty variations , in particular the effective area variations for the _ chandra_/acis - s detector , described by drake et al .( 2006 ) , as an exemplar case . using the effective area sample simulated by them , we 1 . show that variations in effective areas lead to large variations in fitted parameter values ; 2 . demonstrate that systematic errors are relatively more important for high counts , when statistical errors are small ; 3 . describe how the calibration sample can be effectively compressed and summarized by a small number of components from a principal components analysis ; 4 . outline two separate algorithms with which to incorporate systematic uncertainties within spectral analysis : ( i ) an approximate but quick method based on the multiple imputation combining rule that carries out spectral fits for different instances of the effective area and merges the mean of the variances with the variance of the means , and ( ii ) a pragmatic bayesian method that incorporates sampling of the effective areas from a prior distribution within an mcmc iteration scheme ; 5 . detail two methods of sampling : directly from the calibration sample , and via a pca decomposition ; 6 . show that representative samples of are needed to obtain relatively reliable estimates of uncertainty ; 7 . apply the method to a real dataset of a sample of quasars and show that known systematic uncertainties require that , e.g. , the power - law index cannot be determined with an accuracy better than ; and 8 . discuss future directions of our work , both in relaxing the constraint of not allowing the calibration sample to be affected by the data , and in generalizing the technique to other sources of calibration uncertainty .this work was supported by nasa aisrp grant nng06gf17g ( hl , ac , vlk ) , and cxc nasa contract nas8 - 39073 ( vlk , as , jjd , pr ) , nsf grants dms 04 - 06085 and dms 09 - 07522 ( dvd , ac , sm , tp ) , and nsf grants dms-0405953 and dms-0907185 ( xlm ) .we acknowledge useful discussions with herman marshall , alex blocker , jonathan mcdowell , and arnold rots .
aguirre , et al . , 2011 , apjs , 192 , 4
anderson , t.w . , 2003 , _ an introduction to multivariate statistical analysis _ , 3rd ed . , john wiley & sons , ny
arnaud , k.a . , 1996 , astronomical data analysis software and systems v , 101 , 17
bevington , p.r . , and robinson , d.k . , 1992 , _ data reduction and error analysis for the physical sciences _ , 2nd ed . , mcgraw - hill
bishop , c. , 2007 , _ pattern recognition and machine learning _ , 1st ed . , springer , ny
bridle , s.l . , et al . , 2002 , mnras , 335(4) , 1193
brown , a. , 1997 , _ the neutron and the bomb : a biography of sir james chadwick _ , oxford university press
butler , r.p . , marcy , g.w . , williams , e. , mccarthy , c. , dosanjh , p. , & vogt , s.s . , 1996 , pasp , 108 , 500
casella , g. , and berger , r.l . , 2001 , _ statistical inference _ , 2nd ed . , duxbury press , ca
christie , m.a . , et al . , 2005 , _ los alamos science _ , 29 , 6
conley , et al . , 2011 , apjs , 192 , 1 , 1
cox , m.g . , and harris , p.m. , 2006 , meas . sci . technol . , 17 , 533
david , l. , et al . , 2007 , chandra calibration workshop , # 2007.23
davis , j.e . , 2001 , apj , 548 , 1010
drake , j.j . , et al . , 2006
ferraty , f. , and vieu , p. , 2006 , _ nonparametric functional data analysis : theory and practice _ , 1st ed . , springer , ny
forrest , d.j . , et al . , 1997 , technical report , new hampshire univ . , durham , nh
forrest , d.j . , 1988 , baas , 20 , p. 740
freeman , p. , et al . , 2001 , proc . spie , 4477 , 76
gelman , a. , carlin , j.b . , stern , h.s . , and rubin , d.b . , 2003 , _ bayesian data analysis _ , 2nd edition , chapman & hall / crc texts in statistical science
gelman , a. , and rubin , d.b . , 1992 , statistical sci . , 7 , 457
green , p.j . , 1980 , _ interpreting multivariate data _ , 3 - 19 , chichester : wiley , p. 241
gregory , p.c . , 2005 , _ bayesian logical data analysis for the physical sciences _ , in _ x - ray astronomy handbook _ , cambridge university press
grimm , h .- j . , et al . , 2009
hanlon , l.o . , et al . , 1995 , ap&ss , 231 , 157
harel , o. , and zhou , x. h. a. , 2005 , statistics in medicine , 26 , 3057
heinrich , j. , and lyons , l. , 2007 , ann . nucl . part . , 57 , 145
heydorn , k. , and anglov , t. , 2002 , accred . qual . , 7 , 153
jarosik , n. , et al . , 2011 , apjs , 192 , 14
jolliffe , i. , 2002 , _ principal component analysis _ , 2nd ed . , springer , ny
kashyap , v.l . , et al . , 2008 , proc . spie , 7016 , 7016p.1
kim , a.g . , and miquel , r. , 2006 , astroparticle physics , 24 , 45
li , k .- h . , et al . , 1991 , statistica sinica , 1 , 65
ligo collaboration , 2010 , nuclear instruments and methods in physics research a , 624 , 223
mandel , k.s . , wood - vasey , w.m . , friedman , a.s . , and kirshner , r.p . , 2009 , apj , 704 , 629
maness , et al . , 2011 , apj , 707 , 1098
marshall , h. , 2006 , iachec , lake arrowhead , ca
mather , j.c . , fixsen , d.j . , shafer , r.a . , mosier , c. , and wilkinson , d.t . , 1999 , apj , 512 , 511
meng , x .- l . , 1994 , statistical science , 9 , 538
mohr , p.j . , taylor , b.n . , and newell , d.b . , 2008 , j. phys . chem . ref . data , 37 , 3 , 1187
osbourne , 1991 , international statistical review , 59 , 3 , 309
park , t. , et al . , 2008 , apj , 688 , 807
ramsay , j. , and silverman , b.w . , 2005 , _ functional data analysis _ , 2nd ed . , springer , ny
refsdal , b. , et al . , 2009 , proc . of the 8th python in science conference ( scipy 2009 ) , g. varoquaux , s. van der walt , j. millman ( eds . ) , pp . 51 - 57
rosset , c. , et al . , 2010 , a&a , 520 , 13
rubin , d.b . , 1987 , _ multiple imputation for nonresponse in surveys _ , j. wiley & sons , ny
rutherford , e.s . , and chadwick , j. , 1911 , proc . london , 24 , 141
schafer , j.l . , 1997 , _ analysis of incomplete multivariate data _ , chapman & hall , new york
schmelz , j.t . , 2009 , apj , 704 , 863
siemiginowska , a. , et al . , 2008 , apj , 684 , 811
simpson , g. , and mayer - hasselwander , h. , 1986 , a&a , 162 , 340
sundberg , r. , 1999 , scandinavian journal of statistics , 26 , 161
taris , et al . , 2011 , a&a , 526 , a25
thoms , maraston , & johansson , 2010 , accepted for publication in mnras
van dyk , d. , et al . , 2001 , apj , 548 , 224
virgo collaboration , 2011 , classical and quantum gravity , 28 , 2 , 5005

figure [ fig : arf ] : the calibration sample of simulated effective area curves compared with the default curve , plotted as a solid black curve ; the bottom panel shows the differences from the default in order to magnify their structure . the curves form a complex tangle that appears to defy any systematic pattern , which motivates the principal component summary .
figure [ fig : arf_shift ] : the 15 curves with the largest maximum ( blue ) and the 15 with the smallest maximum ( red ) , each with the default subtracted off , with a solid horizontal line at zero representing the default . the two columns of the six lower panels correspond to simulations 1 and 2 and show , for each of the 31 effective area curves , the marginal posterior distributions of the two parameters and the 95.4% contour of their joint posterior ; solid vertical lines mark the parameter values used to generate the simulations . the effect of the choice of effective area curve on the posterior distributions is striking .
figure [ fig : e4e5_shifts ] : posterior distributions of the two parameters when fitting simulation 3 ( column 1 ) and simulation 1 ( column 2 ) with 30 effective area curves randomly selected from the calibration sample ; the distributions plotted with solid lines use the default curve . the statistical errors are smaller for the larger data set , so that calibration errors are relatively more important .
figure [ fig : pca ] : the grey regions give , for each energy bin , intervals containing 100% and 68.3% of the calibration sample , and the dashed and dotted lines outline the corresponding intervals for 1000 pca replicates sampled using equation [ eq : pca0 ] ; the correspondence is quite good , especially for the 68.3% intervals . the remaining panels compare histograms of the calibration sample ( grey ) and of the pca replicates ( solid lines ) in three energy bins marked in the first panel .
figure [ fig : mi ] : the effect of varying the number of imputations on fits to simulation 1 ( left column ) and simulation 2 ( right column ) . for each value , effective area curves are generated with equation [ eq : pca ] , the spectrum is fit once per curve with _ sherpa _ , and the fits are combined with the multiple imputation combining rules ; the process is repeated 200 times to show the variability of the combined error bar . the average computed errors for the power - law index ( top row ) and the absorption column density ( second row ) are grossly underestimated when only the default effective area is used , and roughly 20 imputations are typically sufficient . the bottom two rows show the coverage of the derived error bars , which is small for few imputations because the degrees of freedom are small ( see equation [ eq : mi - df ] ) and asymptotically approaches the gaussian coverage for many imputations .
figure [ fig : acf ] : autocorrelation functions for four cases in which a spectrum simulated with one effective area curve ( simulation 1 with the `` default '' curve , top row ; simulation 5 with an `` extreme '' curve , bottom row ) is fit with either the same curve ( diagonal panels ) or the other curve ( off - diagonal panels ) . successive stored draws are essentially uncorrelated regardless of whether the correct effective area curve was used , which justifies the choice of sub - iterations in the pragmatic bayesian samplers .
time series plots : parameter traces for the same cases as the autocorrelation plots , demonstrating that misspecified calibration files have no effect on the convergence of the solutions ; the inset shows the last 50 iterations and makes the need for the sub - iterations apparent in the slow changes of the parameter values .
figure [ fig : comp_e5_1to4 ] : comparison of the methods applied to simulated spectra 1 - 4 of table [ t : sim ] , which were generated using the default effective area ; the `` true '' power - law index is shown as a vertical dashed line . posterior probability densities are computed using pragmatic bayes with pca ( black solid ) , pragmatic bayes with sampling from the calibration sample ( red dashed ) , multiple imputation with pca ( green dotted ) , multiple imputation with samples from the calibration sample ( brown dot - dashed ) , and the combined posteriors from individual runs using the full sample ( purple dash - dotted ) ; results for the column density parameter are similar . the density curves are obtained from smoothed histograms of mcmc traces from pyblocxs for the bayesian cases and are gaussians with the appropriate mean and variance from xspecv12 fits for the multiple imputation cases . horizontal bars give the 68% equal - tail intervals , with the most probable photon index marked , and additionally for the case where a fixed effective area was used to obtain only the statistical error ; in all cases , fitting with the default effective area alone underestimates the true uncertainty in the fitted parameter .
companion figure for simulated spectra 5 - 8 of table [ t : sim ] : these spectra were generated using an extreme instance of an effective area from the calibration sample , and the fits that use only one effective area are done with the default effective area .
incorporating the calibration uncertainties results in intervals for the parameter which does not contain the true value ., title="fig:",width=307 ] .these are spectra which are generated using an extreme instance of an effective area from .the fits when only one effective area is used are done with the default effective area .note that in many cases , not incorporating the calibration uncertainties results in intervals for the parameter which does not contain the true value ., title="fig:",width=307 ] + .these are spectra which are generated using an extreme instance of an effective area from .the fits when only one effective area is used are done with the default effective area .note that in many cases , not incorporating the calibration uncertainties results in intervals for the parameter which does not contain the true value ., title="fig:",width=307 ] .these are spectra which are generated using an extreme instance of an effective area from .the fits when only one effective area is used are done with the default effective area .note that in many cases , not incorporating the calibration uncertainties results in intervals for the parameter which does not contain the true value ., title="fig:",width=307 ] for each of the 8 simulations .all the simulations are shown on the same plot , rescaled ( to depict the fractional deviation from the mean , inflated by a factor of 3 ) and offset ( by an integer corresponding to the number assigned to the simulation ) for clarity .the traces for both the mcmc+pca ( pragmatic bayesian algorithm using pca to generate new effective areas ; solid black lines ) and mcmc+sample ( pragmatic bayesian algorithm with sampling from ; dotted red lines ) are shown , with the latter overlaid on the former . the last 50 iterations are shown zoomed out in the absissa for clarity , and shows each transformed as filled circles , connected by thin lines of the corresponding style and color . note that all iterations are shown , but in the calculations of the posterior probability distributions , only every iteration , where , is used ( see figure [ fig : acf ] ) . , width=652 ] ) are shown .the abscissae represent the statistical uncertainty as derived by adopting a fixed , nominal effective area , and fit with absorbed power - law models using _ciao_/sherpa ( stronger sources tend to have smaller error bars ) .they are compared with the total error , derived using ( a ) the multiple imputation combining rule ( [ s : alg : mi].2 ) with _ciao_/sherpa ( ) , and ( b ) the pragmatic bayesian method with pca ( [ s : alg : mc].4 ) , with pyblocxs .( similar results are obtained when using the pragmatic bayesian method for the full sample of effective areas . )the different symbols correspond to the analysis carried out for different observations .the dotted line represents equality , where the total error is identical to the statistical error .the systematic error can not be ignored when the statistical error is small , and represents the limiting accuracy of a measurement . 
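the pca - based replication of effective areas referenced in these captions ( equations [ eq : pca0 ] and [ eq : pca ] ) amounts to drawing curves of the form `` mean offset plus a sum of scaled principal components with standard - normal coefficients '' . the sketch below is only an illustration of that kind of sampler , consistent with the notation table that follows but not taken from the paper's code ; all function names are ours .

import numpy as np

def pca_summarize(calib_curves, n_components=8):
    # calib_curves : array of shape (n_curves, n_energy_bins), the calibration sample
    A = np.asarray(calib_curves, dtype=float)
    mean_curve = A.mean(axis=0)                  # plays the role of the default curve plus mean offset
    U, s, Vt = np.linalg.svd(A - mean_curve, full_matrices=False)
    scale = s / np.sqrt(A.shape[0])              # per-component amplitude (r_j in the notation below)
    return mean_curve, scale[:n_components], Vt[:n_components]

def draw_replicate(mean_curve, scale, components, rng):
    # one replicate: mean + sum_j e_j * r_j * v_j , with e_j ~ N(0, 1)
    e = rng.standard_normal(len(scale))
    return mean_curve + e @ (scale[:, None] * components)

# usage sketch: summarize the calibration sample once, then draw replicate curves as needed
# rng = np.random.default_rng(1)
# mean_curve, scale, comps = pca_summarize(calibration_sample)
# arf_replicate = draw_replicate(mean_curve, scale, comps, rng)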
,title="fig:",width=307 ] ) are shown .the abscissae represent the statistical uncertainty as derived by adopting a fixed , nominal effective area , and fit with absorbed power - law models using _ciao_/sherpa ( stronger sources tend to have smaller error bars ) .they are compared with the total error , derived using ( a ) the multiple imputation combining rule ( [ s : alg : mi].2 ) with _ciao_/sherpa ( ) , and ( b ) the pragmatic bayesian method with pca ( [ s : alg : mc].4 ) , with pyblocxs .( similar results are obtained when using the pragmatic bayesian method for the full sample of effective areas . )the different symbols correspond to the analysis carried out for different observations .the dotted line represents equality , where the total error is identical to the statistical error .the systematic error can not be ignored when the statistical error is small , and represents the limiting accuracy of a measurement ., title="fig:",width=307 ] c|l & effective area ( arf ) curve + & replicate generated from pca representation of the calibration sample + & the default effective area curve .+ & the observation specific effective area curve .+ & effective area curve in the calibration sample + & a set of effective areas , the calibration sample + & average offset of from + & the between imputation ( or systematic ) variance of .+ & diagonal element of + & energy of incident photon + & energy channel at which the detector registers the incident photon + & random variate generated from the standard normal distribution + & fractional variance of component in the pca representation + & number of inner iterations in pyblocxs , typically + & number of components used in pca analysis , here + & principal component number or index + & the superscript indicates the running index of random draws + & an mcmc kernel + & the mcmc kernel used in pyblocks + & number of replicate effective area curves in calibration sample + & replicate effective area number or index , or principal component number + & imputation number or index + & number of imputations + & response of a detector to incident photons , see equation [ eq : sim_arf ] + & objective function ( posterior distribuiton , likelihood , or perhaps ) + & point spread function ( psf ) + & energy redistribution matrix ( rmf ) + & eigenvalue or pc coefficient of component in the pca representation + & astrophysical source model + & total variance of .+ & eigen- or feature - vector for component in the pca representation + & the within imputation ( or statistical ) variance of .+ & diagonal elements of + & true sky location of photons + & locations of incident photons as registered by detector + & data , typically used here as counts spectra in detector pi bins + & data and physical calculations used by calibration scientists + & model parameter of interest + & estimate of + & estimate of corresponding to imputed effective area + & estimates variance of + & , representing the statistical error on + & , representing the total error on + & a sum of the smaller components , j+1 to l in the pca representation +
|
while considerable advance has been made to account for statistical uncertainties in astronomical analyses , systematic instrumental uncertainties have been generally ignored . this can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty . ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters . accounting for such uncertainties currently requires extensive case - specific simulations if using existing analysis packages . here we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high - energy data . we first present a method based on multiple imputation that can be applied with any fitting method , but is necessarily approximate . we then describe a more exact bayesian approach that works in conjunction with a markov chain monte carlo based fitting . we explore methods for improving computational efficiency , and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files . this method is implemented using recently codified _ chandra _ effective area uncertainties for low - resolution spectral analysis and is verified using both simulated and actual _ chandra _ data . our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties .
|
recently , we have obtained a suitable extension , eq.([fdr ] ) , of the canonical fluctuation - dissipation relation involving the heat capacity and energy fluctuations . this relation considers a _ system - surroundings _ equilibrium situation in which the inverse temperature of a given thermostat exhibits non - vanishing correlated fluctuations with the total energy of the system under study as a consequence of the underlying thermodynamic interaction . clearly , eq.([fdr ] ) differs from the canonical equilibrium situation due to the realistic possibility that the internal thermodynamical state of the thermostat can be affected by the presence of the system under study . this allows us to describe the fluctuating behavior of the system under more general equilibrium situations , rather than the ones associated with the known canonical and microcanonical ensembles . the fluctuation relation ( [ fdr ] ) possesses interesting connections with some challenging problems related to statistical mechanics , such as : ( 1 ) a compatibility with the existence of macrostates exhibiting _ negative heat capacities _ , a thermodynamic anomaly that appears in many physical contexts ( ranging from small nuclear , atomic and molecular clusters to astrophysical systems ) associated with the existence of _ nonextensive properties _ ; ( 2 ) a direct application for the extension of available monte carlo methods based on the consideration of the gibbs canonical ensemble in order to capture the presence of a regime with negative heat capacities and avoid the incidence of the so - called _ super - critical slowing down _ ( a dynamical anomaly associated with the occurrence of discontinuous ( first - order ) phase transitions , which significantly reduces the efficiency of monte carlo methods ) ; ( 3 ) finally , a direct relationship with an _ uncertainty relation _ supporting the existence of some _ complementary character _ between the thermodynamic quantities of energy and temperature , an idea previously postulated by bohr and heisenberg with a long history in the literature . our aim in this work is to present a more complete study of the existing connections of the fluctuation - dissipation relation ( [ fdr ] ) . the core of our analysis is focussed on certain geometric aspects relating the present approach to other geometric formulations of fluctuation theory . such ideas straightforwardly lead to a geometric generalization of the gibbs canonical ensemble describing a special family of equilibrium distributions recently proposed in the literature , which can also be obtained from some known formulations of statistical mechanics , such as jaynes' reinterpretation in terms of information theory , as well as mandelbrot's approach based on inference theory . our main motivation in deriving the fluctuation - dissipation relation ( [ fdr ] ) was to arrive at a suitable extension of the known canonical fluctuation relation that is compatible with the existence of macrostates with negative heat capacities . as discussed in many standard textbooks on statistical mechanics , the latter relation follows as a direct consequence of the consideration of the gibbs canonical ensemble ( [ gibbs ] ) , which constitutes a starting point for many applications of equilibrium statistical mechanics . however , such a relation is only compatible with macrostates having non - negative heat capacities , and hence , all those macrostates with negative heat capacities can not be appropriately described by using this statistical ensemble . in fact , such macrostates are thermodynamically unstable
under this kind of equilibrium situation ( a system submerged in a certain environment ( heat reservoir or bath ) with constant inverse temperature ) . one can easily verify from eq.([fdr ] ) that a macrostate with a negative heat capacity is thermodynamically stable provided that the correlation function considering the existence of correlative effects between the system and its surroundings obeys the following inequality : a simple interpretation ( but not the only one possible ) of the above fluctuating constraint follows from admitting that the thermostat or the surroundings is a finite system with a positive heat capacity . clearly , the existing energetic interchange between these systems imposes the occurrence of thermal fluctuations of the thermostat temperature , correlated with the amount of energy released or absorbed by the system around its equilibrium value . such fluctuations can be rephrased as in eq.([bb ] ) when the condition of thermal equilibrium is considered . by substituting eq.([bb ] ) into the fluctuation - dissipation relation ( [ fdr ] ) , we obtain eq.([cc ] ) ; combining eqs.([cc])-([bp ] ) , it is possible to arrive at the following inequalities . essentially , this last result is the same constraint derived by thirring in order to ensure the thermodynamic stability of macrostates with a negative heat capacity . the study of macrostates with negative heat capacities demands that such macrostates be found in a stable equilibrium situation . as already discussed , such an aim could be implemented by considering an equilibrium situation in which the system is found in thermal contact with a bath having a _ positive and finite heat capacity _ that obeys thirring's constraint ( [ c.thir ] ) . the equilibrium condition associated with the gibbs canonical ensemble ( [ gibbs ] ) is unsuitable here , since the invariability of the gibbs thermostat temperature presupposes a thermostat with an infinite heat capacity , which is incompatible with inequality ( [ c.thir ] ) . the differences between these equilibrium situations are schematically illustrated in fig.[caricature.eps ] . here , the thick solid line represents the typical microcanonical caloric curve of a finite short - range interacting system undergoing a first - order phase transition , which is characterized by the existence of a regime with negative heat capacities ( the branch p - q ) , with the energy per particle as the independent variable . the thin solid lines are respectively the inverse temperature dependencies on the system energy per particle of a gibbs thermostat and of a thermostat having a positive finite heat capacity , together with the corresponding energy distribution functions . the intersection points derived from the condition of thermal equilibrium determine the positions of the energy distribution function maxima and minima . clearly , the thermal contact with a gibbs thermostat ensures the existence of only one intersection point , or equivalently , a unique peak of the canonical energy distribution function for most of the admissible values of the thermostat inverse temperature . the important exception takes place in the inverse temperature interval associated with the anomalous branch p - q . a representative testing example is the q - state potts model , which exhibits a regime with negative heat capacities when the number of spin states is greater than a certain critical value depending on the lattice dimensionality , e.g.
with d=2 . a direct demonstration of the applicability of the extended sw method using the present ideas in order to study the anomalous regime with negative heat capacities in the q - state potts model is shown in fig.[montecarlo.eps ] : the decorrelation time shows a weak power - law dependence on the system size at the critical point of the discontinuous phase transition . generally speaking , the consideration of a finite thermostat in order to capture the anomalous regime with negative heat capacities and to avoid the super - critical slowing down should not depend on the classical or quantum nature of the system under analysis . consequently , one can expect that this idea could be used for enhancing the potentialities of some known quantum monte carlo methods . the fluctuation - dissipation relation ( [ fdr ] ) constitutes a particular case of a very general fluctuation relation , eq.([mother ] ) , involving the inverse temperature difference between the surroundings ( heat reservoir or bath ) and the system . in fact , eq.([fdr ] ) is obtained after substituting a first - order approximation into eq.([mother ] ) . alternatively , one can consider the known _ schwarz inequality _ in order to rewrite the fluctuation relation ( [ mother ] ) as the uncertainty relation ( [ unc ] ) , expressed in terms of the thermal uncertainties of the physical observables involved . clearly , eq.([unc ] ) is a thermo - statistical analog of the quantum - mechanical uncertainty relation between position and momentum , which suggests the existence of a certain complementary character between the thermodynamic quantities of energy and ( inverse ) temperature . it is well - known that the nature of the temperature is radically different from a direct observable quantity such as energy . in fact , it is a thermodynamic quantity whose physical meaning can only be attributed by appealing to the concept of statistical ensemble . in practice , the system temperature is indirectly measured by using the temperature of a second system through the thermal equilibrium condition ; this second system plays the role of a measuring apparatus ( thermometer ) , whose internal temperature dependence on some direct _ thermometric quantity _ ( e.g. electric signal , force , volume , etc . ) is previously known . as expected , such a measuring process unavoidably involves a perturbation on the internal state of the system under analysis . according to the uncertainty relation ( [ unc ] ) , it is impossible to simultaneously reduce the thermal uncertainties of the inverse temperature difference and the system energy to zero : any attempt to reduce the perturbation of the system energy to zero leads to a divergence of the inverse temperature difference uncertainty , and vice versa . consequently , _ it is impossible to simultaneously determine the energy and inverse temperature of a given system using the standard experimental procedures based on thermal equilibrium with a second system_. clearly , we have to admit non - vanishing thermal uncertainties in both quantities during any practical determination of the energy - temperature dependence of a given system , that is , its caloric curve . while such thermal uncertainties are unimportant during the study of large thermodynamic systems , they actually impose a fundamental limitation to the practical utility of thermodynamic concepts such as temperature and heat capacity in systems with few constituents .
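to make the chain of steps in this paragraph easier to follow , it can be written compactly as below ; the precise form of the rigorous identity used here is our reconstruction ( consistent with , but not quoted from , eqs . ( [ mother ] ) , ( [ fdr ] ) and ( [ unc ] ) ) , with the inverse temperature difference between thermostat and system :
\begin{equation}
\left\langle \delta U\,\eta\right\rangle = k_{B},\qquad \eta\equiv\beta_{\omega}-\beta_{s} ,
\end{equation}
which , using the first - order approximation $\eta\simeq\delta\beta_{\omega}+\left(k_{B}\beta^{2}/C\right)\delta U$ , gives
\begin{equation}
C\left(1-\frac{\left\langle \delta\beta_{\omega}\,\delta U\right\rangle }{k_{B}}\right)=\beta^{2}\left\langle \delta U^{2}\right\rangle \geq 0 ,
\end{equation}
so that $C<0$ becomes admissible whenever the system - surroundings correlation $\left\langle \delta\beta_{\omega}\,\delta U\right\rangle$ exceeds $k_{B}$ ; and , by the schwarz inequality ,
\begin{equation}
\left\langle \delta U^{2}\right\rangle \left\langle \eta^{2}\right\rangle \geq\left\langle \delta U\,\eta\right\rangle ^{2}\;\Longrightarrow\;\Delta U\,\Delta\eta\geq k_{B} ,
\end{equation}
which is the energy - temperature uncertainty relation described above .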
in order to avoid any misunderstanding , it must be clarified that one can obtain the energy dependence of the inverse temperature of a given system _ by calculating _ its boltzmann entropy , which is possible regardless of the system size . the limitation associated with the uncertainty relation ( [ unc ] ) refers to the precision of an _ experimental measurement _ of the microcanonical caloric curve of a thermodynamic system . as previously discussed in detail in our first paper on this subject , the rigorous fluctuation relation ( [ mother ] ) is derived from the ansatz ( [ ansatz ] ) for the energy distribution function , in which the state density of the system is multiplied by the probabilistic weight considering the thermodynamic influence of the surroundings ( thermostat ) . such functions are defined on a certain subset of the euclidean real space . it could be said that these subsets constitute two equivalent _ coordinate representations _ of all admissible macrostates of the system , each with its own coordinate label . the coordinate transformation induced by the bijective function is referred to as a _ reparametrization_. since both representations label the same set of macrostates , the elementary probability that the system is found in such conditions , eq.([ansatz ] ) , does not depend on the coordinate representation used for its expression ; the system distribution functions in the two representations are mutually related by the corresponding jacobian transformation rule . let us also introduce the elementary volume counting the number of microstates belonging to the elementary subset . as in the case of the elementary probability , this volume does not depend on the coordinate representation , and hence it obeys an analogous transformation property . consequently , the probabilistic weight considering the surroundings thermodynamic influence behaves as a _ scalar function _ under reparametrizations , i.e. , it is simply composed with the inverse function of the reparametrization . the reparametrization invariance of the probability distribution function also leads to the reparametrization invariance of the expectation value of any physical observable ( scalar function ) , such that one can denote the expectation values without indicating the coordinate representation used for their expression . a remarkable equilibrium situation of conventional thermodynamics and statistical mechanics is the system in energetic isolation , whose probabilistic weight defines the known _ microcanonical ensemble _ .
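for concreteness , the transformation rules alluded to in this paragraph have the following generic form for a regular ( monotonic ) reparametrization ; the symbols $u$ , $\theta=\theta\left(u\right)$ and the subscripts labelling the two representations are ours , since the original notation did not survive :
\begin{gather}
dp=\rho_{u}\left(u\right)du=\rho_{\theta}\left(\theta\right)d\theta ,\qquad
\rho_{\theta}\left(\theta\right)=\rho_{u}\left(u\right)\left[\frac{\partial\theta\left(u\right)}{\partial u}\right]^{-1},\\
\Omega_{\theta}\left(\theta\right)=\Omega_{u}\left(u\right)\left[\frac{\partial\theta\left(u\right)}{\partial u}\right]^{-1},\qquad
\omega_{\theta}\left(\theta\right)=\omega_{u}\left(u\left(\theta\right)\right) ,
\end{gather}
i.e. , the distribution function and the state density pick up an inverse jacobian factor , while the probabilistic weight of the surroundings behaves as a scalar .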
the microcanonical probabilistic weight possesses the notable feature that its mathematical form does not depend on the representation , a property that is straightforwardly derived from the identity relating the elementary subsets of the two representations and the transformation rule ( [ trans.omega ] ) . let us define the thermostat inverse temperature in the new representation in analogy with eq.([beta.w ] ) ; as a consequence of the scalar character of the probabilistic weight , it obeys a jacobian transformation rule involving the thermostat ( effective ) inverse temperature expressed in eq.([beta.w ] ) . boltzmann's entropy of the system in this representation can be defined from the coarse - grained number of microstates , with a suitable constant that makes the argument of the logarithm dimensionless . the above coarse - grained definition of boltzmann's entropy is not properly a scalar function , as is the probabilistic weight of eq.([trans.weight ] ) ; in fact , it obeys the transformation rule ( [ trans.b.e ] ) . as already pointed out by ruppeiner ( see subsection ii.b of his work ) , the density distribution function derived from einstein's postulate , eq.([ein ] ) , obeys different mathematical forms under different coordinate representations if one assumes that the entropy is a _ state function _ whose value does not depend on the representation ( scalar function ) . a simple analysis allows us to verify that the two sides of eq.([ein ] ) transform differently : its right - hand side behaves as \[ c\exp\left[\frac{s\left(x\right)}{k_{b}}\right]\left|\frac{\partial x\left(y\right)}{\partial y}\right|dy \neq c'\exp\left[\frac{s\left(y\right)}{k_{b}}\right]dy . \] this fact not only constitutes an important defect for developing a _ riemannian formulation _ of fluctuation theory , but it also presupposes some inconsistencies with the thermodynamic arguments behind einstein's postulate for the fluctuation formula of eq.([ein ] ) . in this work , we shall assume the entropy modification ( [ trans.b.e ] ) associated with reparametrizations and analyze its direct consequences . clearly , such an alternative definition allows us to preserve the functional dependence of the fluctuation formula ( [ ein ] ) in any coordinate representation . it requires that the entropy is no longer a state function with a scalar character , as is usually assumed in other geometric formulations of fluctuation theory .
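the entropy transformation rule ( [ trans.b.e ] ) discussed here is then of the form below ( again written in our notation , and up to additive constants fixed by the coarse - graining scale ) :
\begin{equation}
S_{\theta}\left(\theta\right)=S_{u}\left(u\left(\theta\right)\right)+k_{B}\ln\left[\frac{\partial\theta\left(u\right)}{\partial u}\right]^{-1}
\quad\Longrightarrow\quad
\exp\left[\frac{S_{\theta}\left(\theta\right)}{k_{B}}\right]d\theta=\exp\left[\frac{S_{u}\left(u\right)}{k_{B}}\right]du ,
\end{equation}
which is precisely what restores the form invariance of einstein's formula , at the price of the entropy no longer being a scalar state function .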
under the above assumptions , the system inverse temperature in the new representation is given by the derivative of boltzmann's entropy ( [ be.g ] ) with respect to the new coordinate , and it obeys a transformation rule containing the inverse jacobian factor $\left[\frac{\partial \theta}{\partial u}\right]^{-1}$ . as expected , the inverse temperature difference in this representation can be expressed in full analogy with the energy representation . by only admitting regular reparametrizations obeying suitable constraints at every point , one can easily show the validity of the boundary conditions by starting from eq.([bu1 ] ) and eq.([bu2 ] ) . as already shown in the previous subsection , definition ( [ inv.temp.dif.phi ] ) and the boundary conditions ( [ gen.b1 ] ) and ( [ gen.b2 ] ) lead to extensions of the rigorous identities ( [ eq.cond ] ) , ( [ aux.fu ] ) and ( [ gen.identity ] ) , as well as to the generalized fluctuation theorems and , finally , to the thermodynamic uncertainty relation ( [ unc.2 ] ) . thus , the consideration of coordinate changes makes it possible to extend the results already derived by using the energy representation . although the thermodynamic identities ( [ eq.cond ] , [ aux.fu ] ) and ( [ eq.cond.2 ] , [ aux.fu.2 ] ) , the fluctuation theorems ( [ mother ] , [ comple.flu ] ) and ( [ mother.2 ] , [ comple.flu.2 ] ) , as well as the uncertainty relations ( [ unc ] ) and ( [ unc.2 ] ) are closely related , _ they represent different thermodynamic relations characterizing the same equilibrium situation_. it could be said that all of these mutually related identities account for the existence of a special kind of internal symmetry , which shall be hereafter referred to as _ reparametrization duality_. the invariance under reparametrizations ( coordinate transformations or _ diffeomorphisms _ ) is the same kind of symmetry considered by einstein's theory of gravitation . however , there exist radical differences between this latter physical theory and the geometric statistical formalism developed in this work . ( 1 ) while the gravitation theory is defined in terms of _ local quantities _ , the rigorous thermodynamic identities obtained here are expressed in terms of _ statistical expectation values _ defined over the entire subset representing all admissible system macrostates in the present equilibrium situation ; that is , this is a _ non - local theory _ , similar to quantum mechanics . ( 2 ) furthermore , einstein's theory refers to the same physical laws in different representations , while the above thermodynamic identities consider a family of _ different fluctuation relations _ exhibiting the same mathematical appearance under different coordinate representations of a given equilibrium situation . this is why we refer to it as reparametrization duality instead of reparametrization symmetry . in the next subsection , we shall arrive at a local formulation of the present approach with a riemannian - like structure closely related to other geometric approaches of fluctuation theory existing in the literature . we shall see , however , that such a development presupposes the consideration of certain unexpected approximations . let us assume that the systems under consideration are large enough to deal with the thermodynamic fluctuations by using a _ gaussian approximation_. an essential assumption considered here is that the system undergoes small thermal fluctuations close to its equilibrium point , which is determined by _ the most likely macrostate_.
a problem encountered is that _ the most likely macrostate actually depends on the coordinate representation used for describing the system behavior _ , which is a direct consequence of the non - scalar character of the system entropy . in order to show this fact , let us consider the transformation rule ( [ trans.eta ] ) of the inverse temperature difference . the stationary conditions associated with the most likely macrostate in each representation are given by the vanishing of the corresponding inverse temperature difference . according to eq.([trans.eta ] ) , the vanishing of this quantity in one representation does not correspond to its vanishing in the other , and vice versa , a result showing that the most likely macrostate depends on the coordinate representation . this last result contrasts with the general validity of the thermal equilibrium condition in terms of statistical expectation values , eq.([eq.cond.2 ] ) . it clearly indicates that the method generally used for deriving such a condition in terms of the most likely macrostate is just a suitable approximation . nevertheless , it could be easily noticed that the modification involved during the reparametrization change is just a second - order effect . the transformation rule ( [ trans.eta ] ) can be combined with eq.([eq.cond ] ) and eq.([eq.cond.2 ] ) in order to obtain eq.([res.likely ] ) , written with a suitable shorthand notation . eq.([res.likely ] ) indicates that the second additive term on the right - hand side of the transformation rule ( [ trans.eta ] ) is just a small correction , which can be disregarded in most practical applications . therefore , one can admit the approximate relation ( [ eq.app ] ) , in which the relevant function is evaluated at the most likely macrostate . basically , the approximation assumed in eq.([eq.app ] ) is equivalent to considering boltzmann's entropy ( [ be.g ] ) as a scalar function , and hence , an approximate transformation rule of the system inverse temperature follows . in general , the gaussian approximation allows us to treat the fluctuations of an arbitrary function of the energy ; in particular , it allows us to introduce the corresponding transformation rule for such fluctuations . moreover , by starting from eq.([eq.app ] ) , we obtain a relation which reduces to a simpler form after considering the thermal equilibrium condition . using these latter transformation rules , one can obtain the transformation rules of several fluctuation relations : exactly , eqs.([cont.tens])-([scal.tens ] ) correspond to the transformation rules of contravariant second - rank tensors , covariant second - rank tensors and scalar functions in a differential geometric theory , respectively . in order to provide a _ riemannian structure _ to the present geometrical approach , we must introduce an appropriate _ metric_.
such a role could be carried out by the _ global curvature _ evaluated at the most likely macrostate , which allows for the conversion between the fluctuations of the conjugated thermodynamic quantities ( covariant and contravariant vectors ) within the gaussian approximation . the global curvature obeys the transformation rule ( [ gen.glob.curv.trans ] ) , which reduces to the transformation rule of a covariant tensor after considering the thermal equilibrium condition and dismissing small contributions associated with the non - scalar character of boltzmann's entropy ( the two terms proportional to boltzmann's constant ) . clearly , the global curvature can only be considered as a second - rank covariant tensor under the above approximations , since the general transformation rule ( [ gen.glob.curv.trans ] ) does not correspond to this kind of geometric object . interestingly , such a function appears in the complementary fluctuation relation ( [ comple.flu.2 ] ) , which establishes the non - negative character of its expectation value in any coordinate representation . as already commented , this rigorous fluctuation relation satisfies , as a whole , the reparametrization duality , which is not the case for the global curvature considered as an individual entity . by using the global curvature , one can easily obtain other fluctuation relations and rewrite the distribution function in this gaussian approximation accordingly . let us denote the thermostat temperature in the new representation and formally introduce the heat capacity of this representation , which allows us to obtain a geometric extension of the fluctuation - dissipation relation ( [ fdr ] ) , eq.([fdr2 ] ) , after combining the gaussian approximation with definition ( [ inv.temp.dif.phi ] ) and the fluctuation relation ( [ mother.2 ] ) . a relevant case among the admissible equilibrium situations considered by the above fluctuation - dissipation relation is the one obeying the constraint of a constant thermostat inverse temperature in this representation ; this is just the analogous version of the gibbs canonical ensemble in the new representation , with a constant parameter playing the role of the canonical inverse temperature . by rewriting this particular distribution function in the energy representation ( with the corresponding exponential weight multiplying $\omega_{u}\left(u\right)du$ ) , one arrives at the same expression found for the so - called _ generalized canonical ensemble _ recently proposed in the literature . let us now analyze its general mathematical properties . as usual , the partition function derived from the normalization condition allows us to obtain the _ generalized planck thermodynamic potential _ , which provides two relevant statistical expectation values . these last results can be combined in order to obtain the canonical version of the fluctuation - dissipation relation ( [ fdr2 ] ) , with the canonical heat capacity defined in the usual way . clearly , this theorem states that the thermodynamically stable macrostates are those with a nonnegative heat capacity . let us now rewrite planck's thermodynamic potential in the new representation as an integral with measure $\frac{d\theta}{\delta\epsilon_{\phi}}$ and develop a gaussian approximation ( a second - order power expansion ) around the local maxima , which are derived from the stationary and stability conditions . by admitting the existence of only one maximum , this approximation yields an explicit estimate . clearly , the additive logarithmic term in the gaussian estimation of the planck thermodynamic potential constitutes a small correction in the case of sufficiently large systems .
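gathering the pieces of this subsection , the generalized ( geometric ) canonical ensemble and its planck - like potential take the following schematic form ; this is our reconstruction , and the sign and normalization conventions are assumptions :
\begin{gather}
dp\left(u|\eta\right)=\frac{1}{Z\left(\eta\right)}\exp\left[-\eta\,\theta\left(u\right)\right]\Omega_{u}\left(u\right)du ,\qquad
Z\left(\eta\right)=\int\exp\left[-\eta\,\theta\left(u\right)\right]\Omega_{u}\left(u\right)du ,\\
P\left(\eta\right)=-\ln Z\left(\eta\right),\qquad
\left\langle \theta\right\rangle =\frac{\partial P\left(\eta\right)}{\partial\eta},\qquad
\left\langle \delta\theta^{2}\right\rangle =-\frac{\partial^{2}P\left(\eta\right)}{\partial\eta^{2}}=-\frac{\partial\left\langle \theta\right\rangle }{\partial\eta} ,
\end{gather}
and in the gaussian approximation $P\left(\eta\right)\simeq\eta\,\bar{\theta}-S_{\theta}\left(\bar{\theta}\right)/k_{B}$ up to a small additive logarithmic correction , which is the legendre transformation discussed next .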
by dismissing this small contribution , one finds that planck's thermodynamic potential is approximately given by the known _ legendre transformation _ . the stationary condition is merely the condition of thermal equilibrium associated with this representation , while the stability condition is simply the requirement of non - negativity of the microcanonical heat capacity in this representation . eqs.([planck])-([c.may.z ] ) correspond to many well - known dual expressions previously obtained within the gibbs canonical ensemble ( [ gibbs ] ) . obviously , these two ensembles are intimately related . by considering the scalar character of the probabilistic weight , the thermostat inverse temperature in the energy representation can be read off directly . this latter result clarifies that the generalized canonical ensemble ( [ gen.can ] ) corresponds to a special kind of equilibrium situation with a variable ( fluctuating ) inverse temperature , one of the admissible situations accounted for by the fluctuation - dissipation relation ( [ fdr ] ) , that is , a situation with non - vanishing system - surroundings correlative effects . by considering the transformation rule ( [ trans.kap ] ) for the microcanonical curvature , one can find that the stability requirement in the new representation can be combined with the existence of macrostates with negative heat capacities in the energy representation through an appropriate selection of the reparametrization ( the rule in eq.([trans.kap ] ) takes into account the modification of the system entropy during a reparametrization and the consequent correction of the most likely macrostate ) . this fact is more evident when working in the energy representation , where the stability condition reads as follows : by considering the usual definitions of the heat capacities of the system and the thermostat , as well as by using the thermal equilibrium condition , one arrives at an expression which leads to thirring's stability condition ( [ c.thir ] ) for macrostates with negative heat capacity . as with the gibbs canonical ensemble ( [ gibbs ] ) , the present geometric extension ( [ gen.can ] ) becomes equivalent to the microcanonical ensemble with increasing system size , an equivalence that can be ensured even for macrostates with negative heat capacities through an appropriate selection of the reparametrization . this remarkable property makes this ensemble a very attractive thermo - statistical framework , since , besides exhibiting many notable properties of the usual gibbs canonical ensemble , it also provides a better treatment of the phenomenon of ensemble inequivalence associated with the presence of negative heat capacities , as already discussed in previous works . in particular , this statistical ensemble constitutes a suitable framework for extending monte carlo methods , as discussed in subsection [ monte ] . it is possible to realize that the generalized gibbs canonical ensemble ( [ gen.can ] ) can also be derived from jaynes' reinterpretation of statistical mechanics in terms of the information theory of shannon , e.g. , by considering the maximization of the known statistical ( extensive ) information entropy under the normalization condition and a nonlinear energy - like constraint . such a derivation was developed by toral . the interested reader can refer to that work for more details .
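as a concrete ( and deliberately generic ) illustration of how a canonical monte carlo method is extended by this ensemble , the sketch below replaces the usual acceptance weight $\exp\left(-\beta\,\delta U\right)$ by $\exp\left[-\eta\,\delta\theta\left(U\right)\right]$ in a plain metropolis sampler for a toy ising chain . this is not the extended swendsen - wang cluster algorithm mentioned earlier , and the particular choice of the function theta(U) below is arbitrary and only for illustration .

import numpy as np

def metropolis_generalized(energy_fn, propose_fn, x0, eta, theta, n_steps, rng):
    # samples configurations x with weight proportional to exp(-eta * theta(U(x)))
    x, U = x0.copy(), energy_fn(x0)
    for _ in range(n_steps):
        x_new, U_new = propose_fn(x, rng)            # symmetric proposal
        d_theta = theta(U_new) - theta(U)
        if d_theta <= 0 or rng.random() < np.exp(-eta * d_theta):
            x, U = x_new, U_new
    return x, U

def ising_energy(s):
    return -float(np.sum(s * np.roll(s, 1)))         # 1d periodic ising chain, J = 1

def single_spin_flip(s, rng):
    s_new = s.copy()
    s_new[rng.integers(s.size)] *= -1
    return s_new, ising_energy(s_new)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=64)
theta = lambda U, a=1e-3: U + 0.5 * a * U * U        # illustrative nonlinear reparametrization
spins, U = metropolis_generalized(ising_energy, single_spin_flip, spins, 0.7, theta, 20000, rng)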
clearly , the bijective character of the reparametrization should ensure that this generalized ensemble exhibits almost the same stationary properties obtained from the application of the gibbs canonical ensemble in sufficiently large systems , where one usually assumes the appropriateness of the gaussian approximation . however , the nonlinear character of the bijective application produces a deformation of the canonical description , which conveniently modifies the system fluctuating behavior and the accessible regions of the subset of all admissible system macrostates . generally speaking , _ statistical inference _ can be described as the problem of deciding how well a set of outcomes , obtained from independent measurements , fits a proposed probability distribution . if the probability distribution is characterized by one or more parameters , this problem is equivalent to inferring the value of the parameter(s ) from the observed measurement outcomes . to make inferences about the parameter , one constructs estimators , i.e. , functions of the outcomes of independent repeated measurements . the value of this function represents the best guess for the parameter . commonly , there exist several criteria imposed on estimators in order to ensure that their values constitute _ good _ estimates of the parameter , such as : * _ unbiasedness _ ; * _ efficiency _ , or minimal statistical dispersion ; * _ sufficiency _ , i.e. , the sample distribution factorizes into the marginal distribution of the estimator and an arbitrary function of the measurements that is independent of the parameter . since any statistical estimator represents a stochastic quantity , it is natural in inference problems that an estimator obeys the unbiasedness ( [ unbiasedness ] ) and efficiency ( [ efficiency ] ) conditions . however , there exists a remarkable theorem of inference theory , the _ cramér - rao inequality _ , which places a lower bound on the statistical dispersion of an arbitrary unbiased estimator in terms of the so - called _ fisher information entropy _ , $\int\left[\frac{\partial\ln\rho\left(x|\theta\right)}{\partial\theta}\right]^{2}\rho\left(x|\theta\right)dx$ . on the other hand , the sufficiency condition ( [ sufficiency ] ) ensures that , given the value of the estimator , the values of the data are distributed independently of the parameter , so that the estimator contains in this way all of the information about the parameter that can be obtained from the data . as with unbiasedness and efficiency , sufficiency is also a natural desirable condition in inference problems . however , a theorem by pitman and koopman states that sufficient estimators only exist for a reduced family of distribution functions , the so - called _ exponential family _ ( [ pk ] ) . mandelbrot was the first investigator to realize the intimate connection between statistical mechanics and inference theory . clearly , the gibbs canonical ensemble ( [ gibbs ] ) constitutes a relevant physical example of a probability distribution function belonging to the exponential family ( [ pk ] ) . as in the well - known khinchin work in the framework of information theory , mandelbrot proposed a set of axioms in order to justify a direct derivation of the gibbs canonical ensemble in the framework of inference theory . moreover , he also focused on the inference problem for the inverse temperature , which appears as a parameter of the gibbs canonical ensemble ( [ gibbs ] ) , through an unbiased estimator defined for a set of outcomes of the system energy .
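for the gibbs ensemble the content of these definitions becomes very concrete ; a short worked version ( our own , with $\hat{\beta}$ an unbiased estimator of the inverse temperature parameter ) reads :
\begin{gather}
\rho\left(x|\beta\right)=\frac{1}{Z\left(\beta\right)}\exp\left[-\beta H\left(x\right)\right]\;\Longrightarrow\;
\frac{\partial\ln\rho\left(x|\beta\right)}{\partial\beta}=\left\langle H\right\rangle _{\beta}-H\left(x\right),\\
I_{F}\left(\beta\right)=\int\left[\frac{\partial\ln\rho\left(x|\beta\right)}{\partial\beta}\right]^{2}\rho\left(x|\beta\right)dx=\left\langle \delta U^{2}\right\rangle _{\beta},\qquad
\left\langle \left(\hat{\beta}-\beta\right)^{2}\right\rangle \left\langle \delta U^{2}\right\rangle _{\beta}\geq 1 ,
\end{gather}
i.e. , the cramér - rao bound for a single energy measurement reproduces the energy - temperature complementarity discussed in the next paragraph ( for n independent measurements the right - hand side becomes 1/n ) .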
thus , this author provided an interpretation of the _ energy - temperature complementarity _ previously postulated by bohr and heisenberg , eq.([mandel.tur ] ) , a result that follows from the cramér - rao inequality ( [ cramer.rao ] ) after noting that the fisher information entropy ( [ fisher ] ) for the gibbs canonical distribution ( [ gibbs ] ) is simply the canonical expectation value of the energy dispersion . after reading the present discussion , one can point out some critiques of mandelbrot's approach . in regard to his interpretation of energy - temperature complementarity , eq.([mandel.tur ] ) , it is clear that such an uncertainty relation only applies in the framework of the gibbs canonical ensemble ( [ gibbs ] ) . moreover , this inequality accounts for the limits of precision of a statistical estimation of the inverse temperature appearing as a parameter of the canonical ensemble ( [ gibbs ] ) . clearly , this quantity has nothing to do with the system inverse temperature , but rather with the inverse temperature of the gibbs thermostat . this is a common misunderstanding in some contemporary developments of statistical physics , where no distinction is made between these two temperatures , leading in this way to some limitations and inconsistencies . clearly , such difficulties are overcome by the uncertainty relation ( [ unc ] ) associated with the energy - temperature fluctuation - dissipation relation ( [ fdr ] ) . the differences between the gibbs temperature of the canonical ensemble ( [ gibbs ] ) and boltzmann's definition ( [ bb ] ) are irrelevant in the case of the large short - range thermodynamic systems considered in conventional applications of statistical mechanics and thermodynamics , above all in those physical situations where the necessary conditions for the equivalence between canonical and microcanonical descriptions apply . however , the existing differences become critical when one considers the thermodynamical description of long - range interacting systems such as the astrophysical ones , where the presence of macrostates with _ negative heat capacities _ constitutes an important thermodynamic feature that rules their macroscopic behavior and dynamical evolution . as already discussed , such an anomaly can not be described by using the gibbs canonical description ( [ gibbs ] ) . besides , there does not exist in this context an appropriate gibbs thermostat that ensures the existence of a thermal contact ( a boundary interaction ) in the presence of a long - range interacting force such as gravity . the above limitations also extend to other physical contexts such as small or mesoscopic nuclear , molecular and atomic clusters , where the presence of a negative heat capacity is not an unusual feature , while the thermodynamic influence of a gibbs thermostat constitutes a very strong perturbation of their internal thermodynamic state . in this kind of scenario , there does not always exist a clear justification for the direct application of some theoretical developments based on the consideration of the gibbs canonical ensemble , e.g.
: the use of finite - temperature calculations for the study of collisions in high energy physics . interestingly , a collective phenomenon such as the _ nuclear multi - fragmentation _ resulting from collisions of heavy nuclei is simply a first - order phase transition revealing the experimental observation of macrostates with negative heat capacities . clearly , such a realistic phenomenon can not be appropriately described by using the canonical ensemble . remarkably , it is easy to note that the gibbs canonical ensemble ( [ gibbs ] ) is not the only probabilistic distribution function justified in terms of inference theory , as originally presupposed by mandelbrot in his approach . in fact , the whole family of the generalized gibbs canonical ensembles ( [ gen.can ] ) also belongs to the exponential family ( [ pk ] ) , and hence , such distributions also ensure the existence of sufficient estimators obeying uncertainty relations _ à la mandelbrot _ as a consequence of the underlying reparametrization duality discussed in this work . as expected , these relations involve the generalized canonical expectation values derived from the generalized ensemble ( [ gen.can ] ) . we have provided in this work a panoramic overview of direct implications and connections of the energy - temperature fluctuation - dissipation relation ( [ fdr ] ) with different challenging questions of statistical mechanics . as briefly discussed , the main motivation and most direct consequence of this generalized fluctuation relation was the compatibility with macrostates having negative heat capacities in the framework of fluctuation theory . such a feature makes it possible to analyze and apply the necessary conditions for the thermodynamical stability of such anomalous macrostates in order to extend the available monte carlo methods based on the consideration of the gibbs canonical ensemble ( [ gibbs ] ) , a procedure that also allows one to avoid the incidence of the so - called _ super - critical slowing down _ encountered in large - scale simulations . moreover , the fluctuation - dissipation relation constitutes a particular expression of a fluctuation relation leading to the existence of a complementary relationship between the thermodynamic quantities of energy and ( inverse ) temperature ( [ unc ] ) . the consideration of geometric concepts , such as coordinate changes or _ reparametrizations _ , leads to a direct extension of many old and new rigorous results of statistical mechanics in terms of a special kind of internal symmetry that we refer to here as a _ reparametrization duality_. such a basis inspires the introduction of a geometric generalized version of the gibbs canonical ensemble ( [ gen.can ] ) , which has been recently proposed in the literature . this latter probabilistic distribution allows for a better treatment of the phenomenon of ensemble inequivalence and for the consideration of anomalous macrostates with negative heat capacities . at the same time , this family of distribution functions still preserves many notable properties of the gibbs canonical ensemble , including its derivation from jaynes' reinterpretation of statistical mechanics in terms of information theory , as well as from mandelbrot's approach based on inference theory . it is a pleasure to acknowledge partial financial support by fondecyt 3080003 and 1051075 . l.v . also thanks the partial financial support by the project pncb-16/2004 of the cuban national programme of basic sciences .
l. velazquez and s. curilef , j. phys . a : math . theor . * 42 * ( 2009 ) 095006 .
p. h. chavanis , in : _ dynamics and thermodynamics of systems with long range interactions _ , lecture notes in physics , t. dauxois , s. ruffo , e. arimondo , m. wilkens ( eds . ) ( springer , new york , 2002 ) ; e - print ( 2002 ) [ cond - mat/0212223 ] .
|
recently , we have derived a generalization of the known canonical fluctuation relation between heat capacity and energy fluctuations , which can account for the existence of macrostates with negative heat capacities . in this work , we present a panoramic overview of direct implications and connections of this fluctuation theorem with other developments of statistical mechanics , such as the extension of canonical monte carlo methods , the geometric formulations of fluctuation theory and the relevance of a geometric extension of the gibbs canonical ensemble that has been recently proposed in the literature .
pacs numbers : 05.20.gg ; 05.40.-a ; 75.40.-s ; 02.70.tt
|
`` attention '' mechanisms are a critical component of the brain's cognitive performance . such mechanisms enable the brain to process overwhelming visual stimuli with limited capacity by selectively enhancing the information relevant to one's current behaviour . with the massive growth of digital image data due to social media , surveillance cameras and other sources , there is a growing demand for computing platforms to perform cognitive tasks . most of these computing platforms have limited resources in terms of processing power and battery life . hence , researchers have been strongly motivated to design efficient large - scale image recognition methods to enable resource constrained iot ( internet of things ) devices with cognitive intelligence . several brain - inspired computing models including support vector machines ( svm ) , random forests , and adaboost have proven to be very successful for image recognition . however , these classifiers do not scale well with an increasing number of image categories . deep learning networks like convnets have achieved state - of - the - art accuracies , even surpassing human performance on the imagenet dataset . however , they have been criticized for their enormous training cost and computational complexity . similarly , the one - versus - all linear svm , one of the most popular classifiers for large - scale classification , is computationally inefficient as its complexity increases linearly with the number of categories . while these classifiers are modeled to mimic brain - like cognitive abilities , they lack the remarkable energy - efficient processing capability of the brain . the brain carries out enormously diverse and complex information processing to deal with a constantly varying world at a power budget of about 12 - 20 w . seeking to attain the brain's efficiency , we draw inspiration from its underlying processing mechanisms to design a multi - class classification method that is both accurate and computationally efficient . one such mechanism , known as `` saliency based selective attention '' ( fig . 1 , left ) , simplifies complex visual tasks into characteristic features and then selectively activates particular areas of the brain based on the feature information in the input . when presented with new visual images , the brain associates the already learnt features with the visual appearance of the new object types to perform recognition . this enables the brain to learn a host of new information with limited capacity and also speeds up the recognition process . interestingly , we note that there is significant similarity among the underlying characteristic features ( like color or texture ) of images across multiple objects in real world applications . this presents us with an opportunity to build an efficient visual recognition system incorporating inter - class feature similarities and relationships . in this work , we propose a computationally efficient multi - class classification method , the attention tree ( atree ) , which exploits the feature similarity among multiple classes in the dataset to build a hierarchical tree structure composed of binary classifiers . the resultant atree learns a hierarchy of features that transition from general to specific as we go deeper into the tree in a top - down manner . this is similar to state - of - the - art deep learning convolutional networks ( dlns ) , where the convolutional layers exhibit a generic - to - specific transition in the learnt features .
in case of dlns, the entire network is utilized for the recognition of a particular test input .in contrast , the construction of the attention tree incorporates effective and active pruning of the dataset during training of the individual tree nodes resulting in an efficient instance - specific classification path .in addition , as we will see in later sections , our attention model captures both inter and intra class feature similarity to build a tree hierarchy with decision paths of varying lengths even for the same class .this provides substantial benefits in test speed and computational efficiency for large - scale problems while maintaining competitive classification accuracy .1 ( right ) shows a toy example of an atree based on real - world broad semantic categories for different object classes .for example , to recognise a car , it is not sensible to learn all the specific appearance details .instead , first we learn the general vehicle - type features ( wheels , shape etc ) and then learn more discriminative details ( brand symbol ) .thus , we learn a hierarchy of features generalizing over object instances like : wheeled vehicle vehicles .if presented with new motorbike object types , the attention hierarchy now associates this new category of objects to the already learnt `` wheeled vehicle '' features and then learns more discriminative details corresponding to the motorbike types .each node of the atree is then associated with different features based on inter - class relationships .it is evident from fig .1 that the attention tree method bears resemblance to the selective attention mechanism of the brain ( fig . 1 left ) by exploiting feature similarity and the implicit relationships among different visual data to learn a meaningful hierarchy for recognition .while decision tree , ensemble methods and a class of other boosting techniques have been proposed for lowering the testing complexity of machine - learning problems , they suffer from major limitations : a ) in ensemble learning , a set of weak learners are combined into a complex classifier with high accuracy .the number of weak classifiers can be in the order of hundreds to get a reasonable performance for large - scale problems .thus , ensemble methods become computationally expensive for larger datasets .b ) most existing models deviate from the biological attention based visual processing in the human brain and perform one - against - rest classification . in this case , the learning algorithm fails to maintain a general - to - specific feature hierarchy that turns out to be ineffective as well as computationally inefficient . a class of work on one - versus - all " and one - versus one " methods have been explored to convert a multi - class problem into multiple binary classification problems . in such models , classes are not organized in a hierarchical tree .also , these methods do not incorporate class relationships or feature similarities .an extension of these methods include _ error correcting output codes _ that utilize feature sharing to build more generalized and robust classifiers . as discussed earlier , these methods yield good classification accuracy . however , the time complexity is linearly proportional to the number of classes that does not scale well to larger datasets . 
propose different ways to construct a hierarchical classification tree .however , most of these methods rely on a greedy prediction algorithm for class prediction through a single path of the tree .while these algorithms achieve sublinear complexity , the accuracy is typically sacrificed as errors made at higher nodes of the hierarchy can not be corrected later .researchers have also looked at developing efficient and effective feature representations for large - scale classification problems . learn discriminative features using deep convolutional networks to achieve state - of - the - art accuracy .please note that our proposed atree is orthogonal to such models since our method can use various feature respresentations to explore the accuracy vs. efficiency tradeoff .hence , we do not optimize over different features in this work , rather compare the efficiency benefits of our approach with existing hierarchical methods .while our proposed atree model draws inspiration from other tree - based methods such as , we have different focus , design and evaluation strategies .as mentioned , most of these methods use a greedy prediction algorithm to achieve a good tradeoff between efficiency and complexity .the novelty of our work is that we use the recursive adaboost training as a unified and principled optimization procedure to determine data partitioning ( or learning attention hierarchy ) based on feature similarity .this in turn enables the binary svm to construct a maximum - margin hyperplane for optimal decision boundary modeling ( with lower generalization error ) leading to better performance .in addition , organizing the binary classifiers in a hierarchical tree structure on top of the attention hierarchy further reduces complexity .we use a variant of the boosted tree algorithm that combines adaboost with a svm based decision tree to construct the atree . the proposed attention based classification framework resolves the problems associated with standard decision tree methods as discussed above . for a simple two - class ( yes / no ) problem , the training stage for atree consists of two phases : a ) first, we construct the visual feature hierarchy in atree using the adaboost training algorithm recursively , wherein each tree node is a complex classifier that works on an optimal feature at that tree level for partitioning the inputs .the partitioned input data obtained at a particular node are then used to train the left and the right sub - trees .thus , the training data for the subsequent nodes of the tree are continuously pruned during the construction of atree leading to computationally efficient training .the recursive boosting procedure intrinsically embeds clustering in the learning stage with similar feature clusters created in an automatic and hierarchical fashion .b ) with the feature hierarchy and the resultant pruned data space fixed for each node / branch of the tree in the first phase , we train a standard binary svm on the right and the left partitioned subsets of input data at each tree node .we further extend the two - class attention model described above to multi - class problems .we use the minimum entropy measure to select a feature that can be used to categorize the multiple category of objects into two broad classes .then , the training algorithm for a simple two - class atree is used to design the attention hierarchy .again , clusters of multiple classes are automatically formed . 
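The minimum-entropy step that converts the multi-class problem into a two-class one is only described at a high level above. The sketch below shows one plausible reading of it: each candidate (feature, threshold) pair is scored by the standard weighted split entropy, and each original class is then assigned to the side of the threshold holding the majority of its samples. The function names, the grid of candidate thresholds, and the majority-vote assignment rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (0 log 0 treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def min_entropy_binary_split(X, y, n_thresholds=10):
    """Pick a single feature and threshold whose split has minimum weighted
    class entropy, then map each original class to a '+1' or '-1' group.
    X: (n, d) feature matrix; y: integer class labels 0..K-1."""
    classes = np.unique(y)
    best = None
    for f in range(X.shape[1]):
        lo, hi = X[:, f].min(), X[:, f].max()
        for t in np.linspace(lo, hi, n_thresholds + 2)[1:-1]:
            right = X[:, f] > t
            n_r, n_l = right.sum(), (~right).sum()
            if n_r == 0 or n_l == 0:
                continue
            score = (n_r * entropy(np.bincount(y[right]) / n_r) +
                     n_l * entropy(np.bincount(y[~right]) / n_l)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    _, f, t = best
    # each class joins the side of the threshold holding most of its samples
    group = {c: (+1 if np.mean(X[y == c, f] > t) >= 0.5 else -1) for c in classes}
    return f, t, group
```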
at test time, only those branches and nodes depending upon the output of the binary classifier are activated that are relevant to the input .hence , our approach is both time and energy efficient since it involves instance - specific selective activation of nodes .next , we briefly discuss the adaboost learning framework and the shortcomings associated with it .we explain the intuition behind modifying the standard adaboost training procedure which can then be effectively used to construct the feature - based hierarchy or atree .the adaboost algorithm combines a set of simple or weak classifiers ( ) to form the final classifier ( ) given by .the output of the final or strong classifier is .the weak classifiers can be thought of as feature or basis vectors . given a set of training samples, adaboost maintains a probability distribution , ( uniform in the first iteration ) , of the samples .then , adaboost calls a weaklearn algorithm that trains the weak learner or classifier ( ) on the weighted sample in a series of iterations \{}. the distribution is updated in each iteration to minimize the overall error ( ) .finally , adaboost uses a weighted linear combination of the weak learners ( or features ) to obtain the final output .the adaboost and weaklearn algorithm have been explained in detail in . for each sample with weight , the error rate ( ) is given by eqn .1 shows the maximum value of the error . for large - scale problems ,when tends to be complex , saturates at after few iterations and thus the adaboost algorithm fails to reach a global error minima . a possible solution for avoiding this is to design better weak classifiers that can effectively separate the classes .however , this would further increase the computational complexity for computing these classifiers ( or features ) .one of the key principles of adaboost is that `` easy '' samples that are correctly classified by the weak classifiers get low weights while those misclassified ( `` hard '' samples ) get higher weights .the weight distribution captures all the information about selected `` features '' in a given iteration .however , due to the weight update rule and normalization of w in each iteration , the information about previously selected features might be lost .this will result in misclassification of correctly classified ( `` easy '' ) samples from earlier iterations in the present epoch .thus , the algorithm does not maintain a generic - to - specific transition while learning the weak classifiers ( or features ) that proves to be ineffective after a few iterations . 
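For concreteness, the sketch below implements the standard discrete AdaBoost loop that the preceding paragraph summarizes, using depth-1 decision trees from scikit-learn as weak learners. The paper does not specify its weak learners, feature pool, or iteration budget, so everything here beyond the generic weight-update rule should be read as an assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, T=50):
    """Discrete AdaBoost with decision stumps.  y must take values in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # uniform initial distribution over samples
    learners, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = max(np.sum(w * (pred != y)), 1e-12)  # weighted training error
        if err >= 0.5:                             # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        learners.append(stump)
        alphas.append(alpha)
        w *= np.exp(-alpha * y * pred)             # misclassified ("hard") samples get larger weights
        w /= w.sum()                               # renormalise the distribution
    return learners, alphas

def adaboost_score(X, learners, alphas):
    """Real-valued output of the strong classifier; its sign is the predicted label."""
    score = np.zeros(len(X))
    for h, a in zip(learners, alphas):
        score += a * h.predict(X)
    return score
```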
to address this, we build an attention tree of strong classifiers , instead of constructing a single strong classifier from a linear combination of weak learners .the tree utilizes the features learnt from the previous nodes to construct the subsequent nodes .as we traverse down the tree , the classifiers learn more specific features that are useful for classifying the `` hard '' inputs correctly while preserving the feature information learnt at the early nodes for `` easy '' input samples .the central idea of the attention tree algorithm is to use feature - based attention to optimize the search procedure inherent in a vision problem .this model of attention addresses the reduction of number of candidate image subsets and feature subsets that are required for object recognition by selectively tuning the visual processing hierarchy .the theory described here is most closely related to the neuroscience works of that present the neurobiological concepts of primate visual attention .algorithm 1 shows the procedure for training the attention tree for a two - class problem . inphase i _ , a tree is recursively trained .it learns and preserves a hierarchy of features essential for understanding the underlying image representations and for efficient classification . at each node, a classifier is learnt using the adaboost algorithm described in that identifies the most optimum feature to separate the training inputs at a particular node into the corresponding sub - branches .it is shown in that adaboost is essentially approximating a logistic regression . for convenience in notation , we denote the output computed by each classifier at the tree node as * _ phase i : learning visual feature hierarchy _ * * input : * training dataset d= ; , + * output : * atree with feature hierarchy of depth l * initialize * =1 * while * ( ) using adaboost , train a strong classifier on d combining t weak classifiers .calculate training error , .exit adaboost if ( user - defined , =0.48 in our experiments ) . compute the probability distribution * initialize * , = \ { } * for * //_n= # of samples _compute and using eqn 2 and 3 for the strong classifier learnt in step 3 . *if * * then * , assign * elseif * * then * , assign * else * and , assign * end if * * end for * normalize weights in subset and goto step 2 .normalize weights in subset and goto step 2 .//_recursively repeat until is reached _ * end while * - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - * _ phase ii : svm optimization _ * * input : * atree with feature hierarchy of depth l , training samples ( or ) for each node of the atree= + * output : * svm based atree * initialize * =1 * while * ( ) train a binary svm at each tree node using and at that node ignoring all samples with with standard regularized hinge loss minimization . * end while * depending upon the probabilities computed by the classifier node , the training set ( ) is divided as and that are then passed to the sub - branches for training the following nodes of the tree .as the tree expands , only a subset of the input samples are passed to the subsequent nodes .thus , the final nodes or leaves of the tree will consist of input samples belonging to one particular class .please refer to fig . 2 for an overview of the tree structure and input sub - sampling obtained with the attention model .later , in section iv(d ) , we give a detailed explanation about the input sub - sampling and the hierarchical feature learning achieved with our attention model . 
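Because the displayed pseudo-code for Algorithm 1 loses its symbols in this dump, the following Phase-I sketch reconstructs the recursive partitioning in Python, reusing the adaboost_train / adaboost_score helpers sketched earlier. The logistic link that turns the boosted margin into a pseudo-probability follows the "AdaBoost approximates logistic regression" remark above, but the routing rule, the depth/size stopping tests, and the default delta = 0.65 are assumptions.

```python
import numpy as np

class Node:
    def __init__(self, depth):
        self.depth = depth
        self.booster = None          # (learners, alphas) from adaboost_train
        self.svm = None              # fitted later, in Phase II
        self.left = self.right = None
        self.label = None            # set only at leaves

def grow_atree(X, y, depth=0, max_depth=5, delta=0.65, min_size=20):
    """Phase-I sketch: boost at the node, convert margins to pseudo-probabilities,
    and route samples to the right ('+'), the left ('-'), or both ('*') subtrees."""
    node = Node(depth)
    if depth >= max_depth or len(np.unique(y)) < 2 or len(y) < min_size:
        node.label = 1 if (y == 1).sum() >= (y == -1).sum() else -1   # majority leaf
        return node
    learners, alphas = adaboost_train(X, y)
    node.booster = (learners, alphas)
    p = 1.0 / (1.0 + np.exp(-2.0 * adaboost_score(X, learners, alphas)))
    go_right = p > 1.0 - delta       # clearly '+' samples plus the hard band
    go_left = p < delta              # clearly '-' samples plus the hard band
    node.right = grow_atree(X[go_right], y[go_right], depth + 1, max_depth, delta, min_size)
    node.left = grow_atree(X[go_left], y[go_left], depth + 1, max_depth, delta, min_size)
    return node
```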
in _phase ii _ of _ algorithm 1 _ , a binary svm ( with any suitable kernel ) is trained at each node of the tree using the and training sub - samples obtained from _phase i_. the training labels ( + for , - for and * for instances that are passed to both / ) are assigned to each input in the corresponding subsets for training the binary svm .as the training set size decreases ( owing to the input partitioning ) at the successive nodes as we traverse down the tree , the complexity of the problem and hence that of the svm also reduces .this in turn enables better decision boundary modeling with low computational complexity in the subsequent nodes ( svms ) for improved classification performance .adding svms ( _ phase ii _ of algorithm 1 ) at the nodes on top of the learnt feature hierarchy ( _ phase i _ of algorithm 1 ) enables the attention tree model to achieve state - of - the - art accuracies on challenging benchmark databases with significantly lower cost .the threshold value , in _ phase i of algorithm 1 _ , determines the fraction of training samples separated as positive ( + ) and negative ( - ) subsets . if =1 , then all training samples are passed to both branches ( or sub - trees ) of a tree node .the weights for both sub - trees are re - computed based on the node classifier s output . in that case, the tree based adaboost training converges to a standard boosting algorithm wherein the feature hierarchy ( general - to - specific ) is not learnt . for all our experiments discussed in sectionv , we set the value to be . for , easy inputs that can be correctly classified with general features at the top nodes will be unnecessarily passed down to bottom nodes for classification .this will result in computational inefficiency , defeating the purpose of the attention tree .if , then , each training sample is either passed to the right or left sub - tree which leads to a _ constrained _ partition . in this case , the hard or confusing classes will be assigned to one of the sub - trees causing overfitting of data in the subsequent nodes .this will lead to a decline in accuracy .however , the test complexity will be low since the length of the tree will be short leading to a quicker decision at the cost of degraded performance .those samples whose output probability lies in the range $ ] when can be considered as hard or confusing ones . for ,the hard samples are passed to both the left and the right sub - trees for training ( * ) .the hard or confusing inputs / classes are ignored while training the svm at the corresponding node in _phase ii_. this is adopted from the _ relaxed hierarchy _ structure in .this is done to enhance the accuracy of the attention tree .it is understood that the decision boundary becomes progressively non - linear to model the hard or confusing classes in a dataset as we traverse down the atree .the hard or confusing instances are ignored and passed to the bottom nodes that construct better decision boundary models , thereby , decreasing the overall error . in casethe hard classes are not passed to bottom nodes , the svms at the top will construct overfitted models for the complex data instances , thereby , decreasing the accuracy considerably . in sectionv , we vary the threshold to build _ constrained _ and _ relaxed _ hierarchical attention models and analyze the tradeoff between computational efficiency and accuracy for both approaches . 
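A matching Phase-II sketch is given below: it walks the tree grown in Phase I and fits a binary SVM at every internal node on the clearly routed '+' and '-' samples, ignoring the hard '*' band as Algorithm 1 prescribes. The RBF kernel, the reuse of the same delta, and the guard against one-sided partitions are illustrative choices made here, not details taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def fit_node_svms(node, X, delta=0.65):
    """Phase-II sketch: recursively fit an SVM at each internal node on the
    '+' / '-' partition labels produced by that node's booster."""
    if node.booster is None:                       # leaf: nothing to fit
        return
    learners, alphas = node.booster
    p = 1.0 / (1.0 + np.exp(-2.0 * adaboost_score(X, learners, alphas)))
    plus, minus = p > delta, p < 1.0 - delta       # hard samples (in between) are skipped
    keep = plus | minus
    target = np.where(plus[keep], 1, -1)
    if len(np.unique(target)) == 2:                # need both labels to train a binary SVM
        node.svm = SVC(kernel="rbf").fit(X[keep], target)
    fit_node_svms(node.right, X[p > 1.0 - delta], delta)   # same routing as Phase I
    fit_node_svms(node.left, X[p < delta], delta)
```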
to conserve the feature transition in the attention model , we propose a simple method for extending the two - class training model into a multi - class one .traditionally , boosting algorithms use multi - class weak learners to construct a multi - class final strong classifier .however , for large number of classes , constructing reasonably accurate multi - class weak learners turns out to be highly computationally expensive .as seen earlier , we observe a feature similarity across classes that can be used to decompose a multi - class problem into a hierarchy of two - class problems .* input : * training dataset d= , + * output : * svm based atree of depth l + for training a tree of maximum depth l , compute the probability distribution for each feature at value , compute histogram for and for . find optimal and that have minimum entropy .* if * * then * assign * else * assign * end if * * initialize * training set d= ; //_now the multi - class is reduced to a 2-class problem _ call algorithm 1 .algorithm 2 shows the procedure for training a multi - class attention tree .the algorithm first finds the optimum feature across multiple classes that separates the input patterns into 2-classes and then uses the 2-class training procedure ( _ phase i _ of algorithm 1 ) to learn the subsequent classifier nodes of the tree . in our experiments , we observed that the feature chosen for transforming the multi - class to 2-class problem is often the feature selected by algorithm 1 to construct the top node of the atree .intuitively , after the first selection , the features selected at the subsequent nodes help in making a stronger and more accurate decision.thus , similar objects ( with similar features ) of different classes are clustered together in the initial nodes of the hierarchy .as the tree expands , these classes are gradually set apart .the tree is terminated when the algorithm does not find any common feature to partition the inputs ( at the leaves of the tree ) .thus , each leaf of the tree corresponds to a particular class .after the attention hierarchy is learned , _ phase ii _ of algorithm 1 is invoked to train svms at each node ( excluding the leaves ) of the hierarchy . the attention tree composed of svm nodesis then used for testing . those instances ( easy )that can be easily distinguished with general features are identified with svms at the top nodes .the svms at the bottom nodes perform more accurate classification on the hard instances in the dataset .when an input instance is presented at the root node , the branch with higher output probability at the svm node is activated .based on the path activated by the output of svm nodes , the instance then traverses the attention hierarchy until a leaf node where a final decision ( or class assignment ) is made .note that a subset of classes are eliminated at each tree node as the tree is traversed .the attention based hierarchy , thus , scales sub - linearly _o(log(n ) ) _ with respect to the number of classes . 
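At test time the description above amounts to a single root-to-leaf traversal. A minimal sketch, using the Node structure from the earlier Phase-I sketch and assuming each leaf stores the class (or class label) decided there, is:

```python
def atree_predict(node, x):
    """Follow the branch favoured by each node's SVM until a leaf is reached,
    so only O(depth) classifiers are evaluated.  x: 1-D numpy feature vector."""
    while node.label is None:                          # internal node
        side = node.svm.decision_function(x.reshape(1, -1))[0]
        node = node.right if side > 0 else node.left
    return node.label
```

Because leaves can occur at any depth in the imbalanced hierarchy, easy instances exit at shallow leaves while hard ones travel deeper, which is where the test-time savings come from.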
in the current era of `` data deluge '' that presents vision problems with a hefty task of recognizing from hundreds or thousands of classes, the sub - linearly growing attention tree model can be very useful .the training algorithm naturally divides the samples into left and right sub - groups based on the configuration of features .2 shows an example of how the attention tree learns and divides the samples on a synthesized dataset of 3000 points .the dataset consists of inputs belonging to two classes ( denoted as orange and blue ) .the samples that are clustered together can be termed as hard inputs .such samples are passed down the sub - branches of the tree forming the successive nodes .the top node of the tree partitions the inputs into two subsets .this division is intuitive as the right set of orange points are distant from the remaining inputs that are clustered together .the tree then expands on the hard inputs where the two sets are clustered together .if these data points are assumed to be features ( like texture or color components ) corresponding to two image classes , it is clearly seen that the hierarchy formed is coherent with the basic generic - to - specific feature transition theory of the attention model .consider an example of recognizing a red ferrari ( blue points ) from a sample set of vehicle images consisting of motorbikes and cars ( orange points ) .the first intuitive step is to recognize all red vehicles in the sample and then look for a ferrari shaped object from the sub - sample of remaining red images .the attention tree tries to model this intuitive behavior by learning the feature hierarchy . at the first level, the tree uses the red feature to distinguish the non - red vehicles from the red vehicles .as we go down , the tree uses more specific features ( like ferrari shapes or wheel textures ) to perform more accurate classification .our attention model automatically learns this feature hierarchy without any need to pre - specify the feature clusters .the pruning of the input data as we traverse down the attention model reduces the complexity of the original multi - class problem .this in turn enables better decision boundary modelling at the bottom svm nodes of the atree as compared to the top node resulting in improved classification performance .a noteworthy observation here is that the attention model comprises of multiple decision paths of different lengths . in fig .2 , the tree consists of leaf nodes ( for orange data points ) at every level . for a given input , the decision can be reached at an earlier leaf node yielding a more optimal speedup during testing . referring to the ferrari example , all non - red images can be classified at the first level without traversing the whole tree .this imbalanced decision tree structure is what separates our model from other decision tree methods where one has to traverse the entire tree to reach a decision . even within a particular class ,all inputs are not equal .for example , recognizing a person standing against a plain background is much easier ( less time and effort ) than when he / she is in the midst of a crowd . 
ideally , algorithms should spend effort proportional to the difficulty of the inputs irrespective of whether they belong to the same class or not .most existing works focus on optimizing the computational complexity based on inter - class feature variability .in contrast , our imbalanced method captures both inter and intra class feature variability while expanding the attention tree thus yielding more computational benefits .previously , we discussed that the threshold , , serves as a useful control parameter to construct either relaxed or constrained models of the attention tree .3 demonstrates the sample relaxed / constrained hierarchy for a 4-class problem .the instances from class c are the hard inputs in the dataset . in the constrained hierarchy, it is clearly seen that instances from c are forced to the left sub - node . in this case, it is very likely that the svm at the root node will misclassify a test instance from class c due to overfitting .however , the decision path for recognizing class d is short .so , we will observe an improvement in efficiency ( or test speed ) at the cost of accuracy .with relaxed hierarchy , we see that there is an extra svm classifier evaluation required to recognize class d that increases the computational cost. however , the accuracy in this case will be better as the addition of an extra classifier node ( node cd in fig .3(a ) ) minimizes overfitting for complex distribution of data .in addition , the relaxed hierarchy captures the intra - class feature variability for class c which is not seen in the constrained model . in the relaxed model, instances of class c that are relatively easy can be classified at the 2nd level and those that are hard are only passed to the 3rd level for accurate classification . in contrast , with constrained model all instances of class c are passed to the 3rd level for classification . from this sample demonstration , it is clear that can be modulated to control the accuracy and efficiency of the atree .since the learnt tree with svm nodes is used for measuring the complexity and accuracy of the attention model , the kernel - type selection ( non - linear or linear ) plays a key role in determining the overall computational efficiency / performance for a given multi - class problem . in case of svms with linear kernels ,the complexity of each classifier node is same , so the overall test complexity is proportional to the number of classifiers evaluated to reach a decision .however , in case of non - linear kernels , the complexity of each classifier is proportional to the number of its support vectors .so , we use a computational model as devised in to optimize the number of support vectors for maximizing the computational benefits .it is clear that the training algorithm for multi - class attention tree does not always result in a balanced partitioning of classes ( into left and right sub - trees ) at a particular node as observed in fig .3 . given a svm classifier ,let be the number of support vectors of the classifier .we define a cost function ( ) that reflects the average efficiency of the svm ( ) as : where and are the number of classes assigned to positive ( right sub - tree ) and negative ( left sub - tree ) labels respectively . is the fraction of negative classes and similarly is the fraction of positive classes . after the attention hierarchy is learned ( _ phase i _ of _ algorithm1 _ ) , we can estimate and . 
in an ideal case , for instances from class , classes are pruned after evaluating with cost proportional to number of kernel evaluations .so , the average cost for discarding a particular class is .similarly , the average cost for eliminating a class for instances belonging to positive subset ( ) is .given the proportion of positive and negative classes , the average cost for eliminating one class by is given by eqn ., we select the number of support vectors , , that minimizes the overall cost function while yielding competitive accuracy .in this section , we evaluate our proposed framework on two fundamental computer vision tasks : object recognition and scene categorization for the benchmark datasets , caltech-256 and sun .we use the evaluation metrics : classification accuracy and test speed ( or test complexity ) to discuss the benefits of our approach . for classification accuracy ,we use the mean of per - class accuracy that is reported as a standard way for estimating multi - class classification performance . for test speed, we distinguish two cases based on the kernel type selection for the svm classifiers at the atree nodes .the first case corresponds to linear classifiers , where the overall test complexity is proportional to the number of evaluated classifiers .so , for a linear kernel svm , we report the mean of the number of classifier evaluations for all test instances .the second case corresponds to nonlinear kernel svms . as mentioned earlier , the complexity of each classifier is now proportional to the number of support vectors .specifically , let be the number of classifiers to be evaluated , where each classifier has a set of support vectors where m and i denote the classifier and its support vectors respectively . if classifiers are evaluated independently without caching any kernel computations , then , the number of kernel computations for a single test instance is given by .this method proves to be very inefficient when the number of classifiers are large .an efficient approach would be caching the kernel computations from different classifiers and reusing them whenever possible .then , the number of kernel computations reduces to .we use the latter approach to report test speed when non - linear kernels are used .we compare our method to various existing approaches : gao , one - vs - all , one - vs - one , dagsvm , tree - based hierarchy and marszalek .the regularization parameter of svm is chosen by cross validation on the training set . with 256 categories and at least 80 images per class ,this is a standard muti - class object recognition dataset .we randomly sampled 80 images for each class , and used half ( 40 per class ) for training and remaining half for testing . for features, we used the standard spatial histograms of visual words based on dense sift . like , we used the extended gaussian kernel based on distance .however , since linear kernel of histogram based features gives poor accuracy , we used explicit feature transformation from to approximate implicit feature mapping of kernel .the linear svm is applied on the transformed feature .we varied computational parameters for tree ( 2 to 5 levels ) , marszalek ( ) , gao ( .5 to 0.8 with step size of 0.1 } ) and our method atree ( .5 to 0.9 in steps of 0.1 } ) to obtain a tradeoff between accuracy and speed . here , and are the computational parameters defined in and respectively that are varied to achieve the complexity vs. 
accuracy tradeoff .4 shows the results .it is clearly seen that atree performs better ( faster at same accuracy and more accurate at the same relative complexity ( rc ) ) for both linear and non - linear kernels .for instance , in case of linear kernel , atree achieves one of the best accuracy ( .3% ) with around 27% of the complexity of one - vs - all with a _ relaxed hierarchical _ model ( where ) while achieving a speedup of 3.7x .also , for , when the atree is modelled as a _ constrained hierarchy _ , it achieves a higher speed up of 5.5x for .5% accuracy degradation with respect to one - vs - all .however , to achieve a similar 5x speed up other methods : gao , marszalek , tree have to suffer 3.2% , 8% , 10% accuracy degradation .please note that atree achieves consistently better accuracy performance than the best result reported in for both linear and non - linear kernels .now , we evaluate our atree model for scene classification on the sun dataset .the sun dataset captures a full variety of 899 scene categories .we used 397 well - sampled categories as . for each class ,50 images are used for training and the other for test . for image representation , we used spatial hog pyramid with histogram intersection kernel ( non - linear svm ) and transformed spatial histogram oriented gradient ( hog ) pyramid ( explicit feature transformation from to approximate the implicit feature mapping of histogram intersection kernel ) with linear kernel ( linear svm ) . as with caltech-256 , we varied the tradeoff between accuracy and speed for tree ( 2 to 5 levels ) , marszalek ( ) , gao ( .6 to 0.9 with step size of 0.1 } ) and our method atree ( .5 to 0.9 with step size of 0.05 } ) .5 shows the results .the performance improvement for both linear / non - linear kernels is similar and consistent with the results of caltech-256 .for instance , for hog with histogram intersection kernel , our method has a significantly improved accuracy of 24.4% with % complexity compared to one - vs - all ( for .65 implying a _ relaxed hierarchy _ ) .however , marszalek and gao can only reduce the relative complexity to 49% and 64% respectively to attain similar accuracy as one - vs - all .the performance of the atree further improves if is increased and the highest accuracy observed is 25.2% ( at 26% complexity ) that is .7% higher than the best result reported in . as for the test speed , while our method achieves a maximum speed up of 4.8x compared to one - vs - all even with an improved accuracy , other methods never meet this speedup irrespective of the accuracy .in addition , with linear kernel , atree achieves a slightly improved accuracy with respect to one - vs - all and dagsvm while being 7.2x faster .in fact , for a 1.8% decline in accuracy , atree ( with _ constrained hierarchy _ for ) is 19x faster than one - vs - all .however , for gao / marszalek , the accuracy degradation is higher upto 2.5%/6.8% to achieve similar speed up .the above results validate that atree is more effective to reduce the rc while maintaining a competitive accuracy in comparison to other hierarchical tree - based implementations . from fig . 
5 ( b ), it is worth noting that if non - linear kernels are used , a lower depth tree does not necessarily lead to lower computational complexity .when is large for or is closer to 0.5 ( ) , the depth of the tree is low on account of constrained partitioning of inputs into left and right sub - trees .ideally , we should get an accuracy decline with a lower complexity for such cases as the number of classifier evaluations will be less . however , we observe that both accuracy and complexity are worse .the reason is that , although a fewer number of classifier evaluations are required in these cases , each svm involves a large number of support vectors ( since constrained partition forces the svms to perform complex boundary modelling ) which increases the overall complexity .besides performance comparison , we also studied how the complexity of atree changes with the increase in the number of classes .we sampled 100 , 200 and 300 with the original 397 classes from sun dataset , and for each case we learn the model with spatial hog using linear kernel . for fair comparison, we set for our method , for and for to match the same level of accuracy as one - vs - all .as seen in fig .6 , the complexity of our method grows sublinearly as compared to .as discussed earlier , atree model gives rise to an imbalanced tree that can have leaf nodes even at the beginning of the attention hierarchy .thus , we observe that our method grows at a slightly lesser rate than that of .atree builds a feature hierarchy in the label space automatically .7 shows the attention tree formed for a subset of some sampled images from the caltech-256 dataset .we observe that the images that have similar features are clustered together in an initial node and are gradually set apart as the tree is traversed .conforming to the imbalanced attention model , we observe that for certain classes : zebra , car tire , the classification is done at earlier nodes while more confusing classes are passed down .in addition , we also observe intra - class variability for the camel class in which certain instances are evaluated earlier than others . in fig .8 , we present a sampling of the first three levels of the atree constructed for the entire caltech-256 and sun dataset showing how the different classes are assigned \{+,- , * ( algorithm 1 ) } and separated into left and right sub - trees . in the caltech-256 atree hierarchy , we observe that the assignment of classes into sub - nodes in many cases correlates to human vision i.e. images from different classes that are assigned to the same sub - tree look similar to humans .for the sun atree hierarchy , the partitioning of classes in the first two levels correlates with human - defined concepts .e.g. , natural outdoor scenes vs. 
indoor man - made scenes .also , the hierarchy starts partitioning classes with large visual distances and then identifies subtle discrepancies at the bottom nodes which is in coherence with the concepts of visual stimuli decomposition in the human brain .this suggests the biological plausibility and effectiveness of our attention model for image classification .we proposed a novel neuro - inspired visual feature learning to construct an efficient and accurate tree - based classifier : attention tree , for large - scale image classification .our learning algorithm is based on the biological attention mechanism observed in the brain that selects specific features for greater neural representations .the atree uses a principled optimization procedure ( recursive adaboost training ) to extract knowledge about the relationships between object types and integrates that into the visual appearance learning .we evaluated our method on both the caltech-256 and sun datasets and obtained significant improvement in accuracy and efficiency .in fact , atree outperforms the one - vs - all method in accuracy and yields lower computational complexity compared to the state - of - the - art `` tree - based''methods .the proposed framework intrinsically embeds clustering in the learning procedure and identifies both inter and intra class variability .most importantly , our proposed atree learns the hierarchy in a systematic and less greedy way that grows sublinearly with the number of classes and hence proves to be very effective for large - scale classification problems .it is noteworthy to mention that the current atree framework suffers from overfitting when the training dataset is small .the overfitting behaviour is checked by modulating the depth of the atree and also adopting the relaxed hierarchy structure where confusing or `` hard '' inputs are passed to both the right and the left sub - nodes .additionally , tree pruning methods can be used to control overfitting .further research can be done to explore the overfitting problem .this work was supported in part by c - spin , one of the six centers of starnet , a semiconductor research corporation program , sponsored by marco and darpa , by the semiconductor research corporation , the national science foundation , intel corporation and by the national security science and engineering faculty fellowship .j. xiao , j. hays , k. a. ehinger , a. oliva , and a. torralba , `` sun database : large - scale scene recognition from abbey to zoo , '' in _ computer vision and pattern recognition ( cvpr ) , 2010 ieee conference on_.1em plus 0.5em minus 0.4emieee , 2010 , pp . 34853492 .d. geebelen , j. a. suykens , and j. vandewalle , `` reducing the number of support vectors of svm classifiers using the smoothed separable case approximation , '' _ ieee transactions on neural networks and learning systems _ , vol .23 , no . 4 , pp . 682688 , 2012 .k. he , x. zhang , s. ren , and j. sun , `` delving deep into rectifiers : surpassing human - level performance on imagenet classification , '' in _ proceedings of the ieee international conference on computer vision _ , 2015 , pp .10261034 .t. gao and d. koller , `` discriminative learning of relaxed hierarchy for large - scale visual recognition , '' in _ 2011 international conference on computer vision_.1em plus 0.5em minus 0.4emieee , 2011 , pp .20722079 .m. sun , w. huang , and s. 
savarese , `` find the best path : an efficient and accurate classifier for image hierarchies , '' in _ proceedings of the ieee international conference on computer vision _, 2013 , pp . 265272 .j. deng , s. satheesh , a. c. berg , and f. li , `` fast and balanced : efficient label tree learning for large scale object recognition , '' in _ advances in neural information processing systems _, 2011 , pp . 567575 .p. wang , c. shen , n. barnes , and h. zheng , `` fast and robust object detection using asymmetric totally corrective boosting , '' _ ieee transactions on neural networks and learning systems _ , vol .23 , no . 1 , pp . 3346 , 2012 .s. paisitkriangkrai , c. shen , and a. van den hengel , `` a scalable stagewise approach to large - margin multiclass loss - based boosting , '' _ ieee transactions on neural networks and learning systems _ , vol . 25 , no . 5 ,pp . 10021013 , 2014 .g. griffin and p. perona , `` learning and using taxonomies for fast visual categorization , '' in _ computer vision and pattern recognition , 2008 .cvpr 2008 .ieee conference on_.1em plus 0.5em minus 0.4emieee , 2008 , pp . 18 .y. lin , f. lv , s. zhu , m. yang , t. cour , k. yu , l. cao , and t. huang , `` large - scale image classification : fast feature extraction and svm training , '' in _ computer vision and pattern recognition ( cvpr ) , 2011 ieee conference on_.1em plus 0.5em minus 0.4emieee , 2011 , pp .16891696 .n. dalal and b. triggs , `` histograms of oriented gradients for human detection , '' in _ 2005 ieee computer society conference on computer vision and pattern recognition ( cvpr05 ) _ , vol .1.1em plus 0.5em minus 0.4emieee , 2005 , pp .886893 .e. grossmann , `` adatree : boosting a weak classifier into a decision tree , '' in _ computer vision and pattern recognition workshop , 2004 .conference on_.1em plus 0.5em minus 0.4emieee , 2004 , pp .105105 .z. tu , `` probabilistic boosting - tree : learning discriminative models for classification , recognition , and clustering , '' in _ tenth ieee international conference on computer vision ( iccv05 ) volume 1 _ , vol .2.1em plus 0.5em minus 0.4emieee , 2005 , pp .15891596 .x. li , l. wang , and e. sung , `` a study of adaboost with svm based weak learners , '' in _ proceedings .2005 ieee international joint conference on neural networks , 2005 ._ , vol .1.1em plus 0.5em minus 0.4emieee , 2005 , pp .196201 .j. friedman , t. hastie , r. tibshirani _ et al ._ , `` additive logistic regression : a statistical view of boosting ( with discussion and a rejoinder by the authors ) , '' _ the annals of statistics _ , vol . 28 , no . 2 , pp .337407 , 2000 .p. panda , a. sengupta , and k. roy , `` conditional deep learning for energy - efficient and enhanced pattern recognition , '' in _ 2016 design , automation & test in europe conference & exhibition ( date)_.1em plus 0.5em minus 0.4emieee , 2016 , pp . 475480 .
|
one of the key challenges in machine learning is to design a computationally efficient multi-class classifier while maintaining output accuracy and performance. in this paper, we present a tree-based classifier, the attention tree (atree), for large-scale image classification that uses recursive adaboost training to construct a visual attention hierarchy. the proposed attention model is inspired by the biological ``selective tuning mechanism for cortical visual processing''. we exploit the inherent feature similarity across images in a dataset to identify input variability and use a recursive optimization procedure to determine the data partitioning at each node, thereby learning the attention hierarchy. a set of binary classifiers is organized on top of the learnt hierarchy to minimize the overall test-time complexity. the attention model maximizes the margins of the binary classifiers for optimal decision boundary modelling, leading to better performance at minimal complexity. the proposed framework has been evaluated on both the caltech-256 and sun datasets and achieves an accuracy improvement over state-of-the-art tree-based methods at significantly lower computational cost. index terms: visual attention, image classification, feature similarity, attention tree (atree), support vector machine (svm).
|
consider a statistician working on a problem in which a vector of real - valued outcomes is to be observed , and prior to , i.e. without , observing the statistician s uncertainty is exchangeable , in the usual sense of being invariant under permutation of the order in which the outcomes are listed in .this situation has extremely broad real - world applicability , including ( but not limited to ) the analysis of a completely randomized controlled trial , in which participants ideally , similar to elements of a population to which it is desired to generalize inferentially are randomized .each participant is assigned either to a control group that receives the current best treatment , or an experimental group that receives a new treatment whose causal effect on one or more outcomes is of interest .this design , while extremely simple , has proven to be highly useful over the past 90 years , in fields as disparate as agriculture , medicine , and ( in contemporary usage ) testing in data science at massive scale .we use randomized controlled trials as a motivating example below , but we emphasize that they constitute only one of many settings to which the results of this paper apply . focusing just on the experimental group in the randomized controlled trial , the exchangeability inherent in implies via de finetti s theorem that the statistician s state of information may be represented by the hierarchical model [ de - finetti - representation ] y_i f & f & f & ( f ) for , where is a cumulative distribution function ( cdf ) on and is a prior on the space of all such cdfs , i.e. , the infinite - dimensional probability simplex . note that ( [ de - finetti - representation ] ) has uniquely specified the likelihood in a bayesian nonparametric model for , and all that remains is specification of .speaking now more generally ( not just in the context of a randomized controlled trial ) , suppose that the nature of the problem enables the analyst to identify an alternative statistical problem in which & = g(p ) & & & g & g , where is a collection of transformations from one problem to another having the property that , without having seen any data , and are the _ exact same problem_. then the prior under must be the same as the prior under ! furthermore , since this holds for any , the result will be , as long as is endowed with enough structure , that there is one and only one prior , for use in , that respects the inherent invariance of the problem under study .bayes rule then implies that there is one and only one posterior distribution under .when this occurs , we say that the problem admits an _ optimal bayesian analysis_. the logic underlying the above argument has been used to motivate and formalize the notion of noninformative priors for decades . indeed , in the special case where is parametric and is a group of transformations encoding invariance with respect to monotonically - transformed units of measurement , derived the resulting prior distribution .as another example , derived the prior distribution for the mean number of arrivals of a poisson process by using its characterization as a lvy counting process to specify an appropriate transformation group .notably , the resulting prior distribution is _ not _ the jeffreys prior , because the problem s invariance and corresponding transformation group are different .see for additional work on this subject .having studied this line of reasoning , it is natural to ponder its generality . 
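The displayed equation tagged [de-finetti-representation] above has lost its symbols in this dump; based on the sentence that follows it (F a CDF on the reals, pi a prior over the space of all such CDFs), a plausible LaTeX rendering is:

```latex
\begin{align}
  y_i \mid F \;&\overset{\text{iid}}{\sim}\; F , \qquad i = 1, \dots, n, \\
  F \;&\sim\; \pi(F),
\end{align}
```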
in this paperwe show that the argument can be made quite general we prove that the argument s formal notions a. can be generalized to include _ approximately _ invariant priors in an sense , and b. can be extended to infinite - dimensional priors on spaces of functions .we focus on the setting described in ( [ de - finetti - representation ] ) and defer more general situations to future work . in thissetting we derive a number of results , ultimately showing that the dirichlet process prior is an approximately invariant stochastic process for any cdf on .together with de finetti s theorem , this demonstrates that the posterior distribution \del{n , \hat{f}_n } \, , \ ] ] where is the empirical cdf , corresponds in a certain sense to an optimal bayesian analysis see section [ discussion ] for more on this point .not all approaches to noninformative priors are based on group invariance .perhaps the earliest approach can be traced back to , who proposed a principle of indifference under which , if all that is known about a quantity is that ( for some set of possible values ) , then the prior should be uniform on .suppose we take : the fact that (0,1) ] for any monotonic nonlinear requires that the problem under study must uniquely identify the scale on which uniformity should hold for the principle to be valid this was a major reason for the rise of non - bayesian theories of inference in the 19th century . has proposed a notion of noninformative priors that is defined by studying their effect on posterior distributions , and choosing priors that ensure that prior impact is minimized . has proposed the maximum entropy principle , which defines noninformative prior distributions via information - theoretic arguments .all of these notions are different , and applicable to problems where the corresponding notion of noninformativeness arise most naturally .most of the work on noninformative priors has focused on the parametric setting , in which the number of unknown quantities is finite . in contrast , and have derived results on noninformative priors in dirichlet process mixture models .their notion of noninformativeness is completely different from our own , as it is a posteriori , i.e. , it involves examining the behavior of the posterior distribution under the priors studied .this makes their approach largely complementary to ours : in specifying priors , it is helpful to understand both the prior s effect on the posterior and the prior s behavior a priori without considering any data .here we study noninformative prior specification from a _ strictly a priori _ perspective .we do not consider the prior s effect on the posterior distribution .there is no data or discussion of computation .our motivation is a generalization of the following argument by .suppose that in the randomized controlled trial described above , the outcome of interest is binary .by de finetti s theorem , we know that (\theta_1)\ ] ] is the unique likelihood for ( e.g. 
, the treatment group in ) this problem .suppose further that the statistician s state of information about external to the data set is what jaynes calls `` complete initial ignorance '' except for the fact that is such that jaynes argues that this state of information is equivalent to the statistician possessing complete initial ignorance about all possible rescaled and renormalized versions of , namely for all positive .jaynes shows that this leads uniquely to the haldane prior ( _ 1 ) & & & & ( _ 1 , _ 2 ) & , where .combining this result with the unique bernoulli likelihood under exchangeability , in our language jaynes has therefore identified an instance of optimal bayesian analysis . in what follows we ( a ) extend jaynes s argument to the multinomial setting with outcome categories for arbitrary finite and ( b )show how this generalization leads to a unique noninformative prior on .to begin our discussion , we first introduce the notion of an invariant distribution , which describes what we mean by the term noninformative .a density is _ invariant with respect to a transformation group _ if for all ] is the jacobian of the transformation .note that in equation ( [ invariant-0 ] ) , if we were to instead take in the middle and right integrals to be , we would exactly get the classical integration by substitution formula , which under appropriate conditions is always true .we are interested in the inverse problem : given a set of transformations in , does there exist a unique satisfying ( [ invariant-0 ] ) ? in a number of practically - relevant cases , is uniquely specified by the context of the problem being studied .if this leads to a unique prior distribution , and when additionally a unique likelihood also arises , for example via exchangeability , an optimal bayesian analysis is possible , as defined in section [ introduction ] .it is often the case that the prior distributions that result from this line of reasoning are limits of conjugate families , making them easy to work with this occurs in our results below , in which the corresponding posterior distributions are dirichlet .the above definition is intuitive , but not sufficiently general to be applicable to spaces of functions .there are multiple issues : a. in many cases , can not be taken to integrate to 1 , b. probability distributions on spaces of functions do not always admit riemann - integrable densities , c. may be defined via equivalence classes of transformations , leading to singular jacobians , and d. infinite - dimensional measures that are non - normalizable are not well - behaved mathematically . as a result, the above definition needs to be extended to a measure - theoretic setting .we call a transformation group acting on a measure space _ nonsingular _ if for with ] and for any measurable subset we have where is the domain of , is the indicator function of the set , and is the radon - nikodym derivative of with respect to .it can be seen by taking to be absolutely continuous with respect to the lebesgue measure that equation ( [ invariant-1 ] ) is a direct extension of equation ( [ invariant-0 ] ) .we would ultimately like to extend the above definition to the infinite - dimensional setting .doing so directly is challenging , because may be non - normalizable , in which case kolmogorov s consistency theorem and other analytic tools for infinite - dimensional probability measures do not apply . 
herewe sidestep this problem by instead extending the definition of invariance to allow us to define a sequence of _ approximately _ invariant measures , which in our setting can be taken to be probability measures .to do so , two additional definitions are needed .let be a nonsingular transformation group acting on a measure space .we say that a sequence of measures is _-invariant with respect to _ if for any with ] and each measurable subset , the inequality implies that where is the invariant measure under , is a function , implies that , is the domain of , and can be taken to be identical for all . definition [ epsilon - invariant - process ] has been explicitly chosen to formalize the notion of noninformativeness on a space of functions without constructing a non - normalizable infinite - dimensional measure . to complete our assumptions , we need to specify .our definitions constitute a direct generalization of the transformation group used by jaynes to derive the haldane prior for see section [ introduction ] .[ group - functions ] let {g \ ! : s_\infty \goesto s_\infty}\ ] ] be a nonsingular group of measurable functions under composition acting on the infinite - dimensional simplex .[ group - vectors ] for non - negative integer and any vector of non - negative constants , let be a nonsingular group under composition acting on the simplex , where each element represents an equivalence class of the transformations ( [ basic - transformation-1 ] ) . note that is a -dimensional homomorphism of we use this property in our proofs below .it can also readily be seen that for any , the constants are only determined up to proportionality . for each and ] .note first that .note also that because the same transformation is used in defining and .then , note that and hence it suffices to consider the transformation applied to the lebesgue measure .consider an arbitrary hypercube .we have where are 1-dimensional lebesgue measures , for which we have that where ] for some . since is assumed to admit a generalized density , we can rewrite ( [ theorem-10 - 1 ] ) as a riemann integral .in addition , we substitute in the transformation and radon - nikodym derivative , and get this formula needs to hold for all measurable sets , and hence the functions inside the integrals need to be equal pointwise .this yields the functional equation which will be the main subject of further study .this is a multivariate functional equation that at first may appear fearsome , but is in fact solvable via elementary methods . to solve it , recognizing that ( [ functional - equation-1 ] ) must hold for all probability vectors and all vectors of positive constants , we set & = & & & _ i=1^p c_i = 1 , which yields then , by swapping for , ( [ functional - equation-2 ] ) rearranges into since the numerator is not a function of any , and it can easily be checked that all such generalized densities are valid solutions to the original equation . thus ( [ functional - equation-3 ] ) is the functional equation s unique solution and therefore the unique invariant measure under .the same technique used to solve the functional equation in theorem [ epsilon - invariance ] can be used to prove a much stronger result : if the functional equation is true approximately , its solutions will approximate those of the exact equation . in the next resultwe make use of the definition of _ stability _ of a functional equation due to hyers , ulam and rassias see for details .[ stability ] suppose we have then & < , & & & & . 
by repeating the technique from the previous proof, we have which can be rewritten where the last inequality is strict because is a positive integer . letting we get which is the stability result desired .this suffices to prove our result for the dirichlet distribution .[ dirichlet - epsilon - invariance ] is an -invariant measure under for all . by repeating the steps of theorem [ epsilon - invariance ] and combining them with corollary [ stability ], we obtain that is -invariant under if and only if it satisfies & < & & & & . substituting in , and choosing the constant of the generalized density to be the same as for the dirichlet, we get where are the components of probability vector , and this expression simplifies to since for all , the product is upper bounded by and lower bounded by .thus the inequality holds near zero if for all , and since we get that , as , we can choose such that .thus , is -invariant for all .we now extend theorem [ dirichlet - epsilon - invariance ] to get an analogous result for the dirichlet process .[ dp - epsilon - invariance ] is an -invariant process under for all .consider an arbitrary finite - dimensional index with corresponding homomorphism and finite - dimensional measure .it follows from theorem [ dirichlet - epsilon - invariance ] that is -invariant with this inequality depends only on , so it suffices to show that this constant can be bounded by another constant that is not a function of and approaches 0 . is the inverse multivariate beta function , which is a ratio of gamma functions .it is well known that where is the euler - mascheroni constant .therefore , we have as . thus , for each , we can choose a to satisfy the required expressions under all finite - dimensional index sets , and is therefore an -invariant process .we conclude our theoretical investigation with a conjecture : the -invariance of all finite - dimensional distributions with a uniform should suffice for invariance with respect to the original group acting on the infinite - dimensional space .[ conjecture ] a stochastic process is an -invariant process if and only if the measure of its sample paths is an -invariant measure .one approach to attempting a proof would involve appropriately extending kolmogorov s consistency theorem to -finite infinite - dimensional measures . this can be done , but the notions involved are quite technical see for more details .to see how our results may be applied , consider again the randomized controlled trial of section [ introduction ] , and suppose now that the outcome for participant in the experimental group is categorical with levels .under exchangeability , a minor extension of de finetti s theorem for dichotomous outcomes then yields that the likelihood can be expressed as (1 , \v{\theta } ) \ , , \ ] ] in which mn is the multinomial distribution with parameters and .theorem [ epsilon - invariance ] implies that , modulo inherent abuse of notation under improper priors , (0)\ ] ] is the unique prior that obeys the fundamental invariance possessed by the problem namely , invariance with respect to all transformations of probability vectors that preserve normalization .thus we have extended jaynes s result for binomial outcomes to the multinomial setting , yielding another instance of optimal bayesian analysis .generalizing to the setting where is an exchangeable sequence of real - valued outcomes , de finetti s most general representation theorem implies that is the unique likelihood . 
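Before moving on to the real-valued case introduced in the last sentence, it may help to restate the categorical-outcome application in symbols, since the displayed equations are stripped in this dump. The rendering below is a reconstruction under the natural reading that the unique invariant prior is the limit of the Dirichlet family as its parameter goes to zero (reducing to the Haldane prior when there are two categories); the \v{\cdot} vector macro mirrors a fragment that survives in the text, and the rest of the notation is assumed.

```latex
% multinomial likelihood for categorical outcomes with counts n_j:
\begin{equation}
  p(\v{y} \mid \v{\theta}) \;\propto\; \prod_{j=1}^{p} \theta_j^{\,n_j},
\end{equation}
% invariant ("Dirichlet(0)") prior implied by Theorem [epsilon-invariance]:
\begin{equation}
  \pi(\v{\theta}) \;\propto\; \prod_{j=1}^{p} \theta_j^{-1},
\end{equation}
% so that, formally, the posterior is
% $\v{\theta} \mid \v{y} \sim \mathrm{Dirichlet}(n_1, \dots, n_p)$,
% which is proper whenever every category is observed at least once.
```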
if little is known about , and it is therefore approximately invariant under all measurable functions i.e. under , see definition [ group - functions ] the prior given by theorem [ dp - epsilon - invariance ] is (\eps , f_0 ) \,.\ ] ] by the usual conjugate updating in the dirichlet - process setting , the posterior on given with the prior in ( [ dp - eps - f0 ] ) is \del{\eps + n , \frac{\eps}{\eps + n } f_0 + \frac{n}{\eps+n } \hat{f}_n } \ , , \ ] ] in which is the empirical cdf based on . since may be taken as close to zero as one wishes , it is natural to regard \del{n , \hat{f}_n}\ ] ] as an instance of approximately optimal bayesian analysis under all .conjecture [ conjecture ] would strengthen this assertion provided can be rigorously constructed as an infinite - dimensional -finite measure , which is beyond the scope of this work .though the simplicity of this analysis may at first make it seem limited , its appeal comes from its extremely general ability to characterize uncertainty .see , e.g. , for an example of a analysis in two randomized controlled trials in e - commerce , one with sample sizes in the tens of millions .bayesian analysis can not proceed without the specification of a stochastic model prior and sampling distribution relating known quantities to unknown quantities : data to parameters .one of the great challenges of applied statistics is that the model is not necessarily uniquely determined by the context of the problem under study , giving rise to model uncertainty , which if not assessed and correctly propagated can cause badly calibrated and unreliable inference , prediction and decision see , e.g. , .perhaps the simplest way to avoid model uncertainty is to recognize settings in which it does not exist situations where broad and simple mathematical assumptions , rendered true by problem context , lead to unique posterior distributions .our term for this is _ optimal bayesian analysis_. it seems worthwhile ( a ) to catalog situations in which optimal analysis is possible and ( b ) to work to extend the list of such situations theorems [ epsilon - invariance ] and [ dp - epsilon - invariance ] are two contributions to this effort .we are grateful to daniele venturi , yuanran zhu , and catherine brennan for their thoughts on differential equations , which we originally used in a much longer and more complicated proof of the solution of our functional equation .we are additionally grateful to juhee lee for her thoughts on prior specification , and to thanasis kottas for his thoughts on dirichlet processes .membership on this list does not imply agreement with the ideas expressed here , nor are any of these people responsible for any errors that may be present .
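As a concrete supplement to the Dirichlet-process updating discussed in the application above, the following minimal sketch draws from a truncated stick-breaking approximation of the posterior DP(eps + n, (eps F0 + n Fhat_n)/(eps + n)) and illustrates that, as eps tends to zero, posterior draws approach the Bayesian-bootstrap-type limit DP(n, Fhat_n). The function name, the truncation level, the choice of a standard normal F0, and the toy data are illustrative assumptions and not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_posterior_draw(data, eps, f0_sampler, n_atoms=2000):
    """One truncated stick-breaking draw from DP(eps + n, (eps*F0 + n*Fhat_n)/(eps + n))."""
    n = len(data)
    alpha = eps + n
    # stick-breaking weights
    betas = rng.beta(1.0, alpha, size=n_atoms)
    w = betas * np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
    w /= w.sum()  # renormalize the truncated sticks
    # atoms from the posterior base measure: w.p. eps/(eps+n) from F0, else a data point
    from_f0 = rng.random(n_atoms) < eps / alpha
    atoms = np.where(from_f0, f0_sampler(n_atoms), rng.choice(data, size=n_atoms))
    return atoms, w

# toy data; F0 taken (hypothetically) to be standard normal
data = rng.normal(loc=2.0, scale=1.0, size=50)
for eps in (1.0, 1e-3):
    means = []
    for _ in range(200):
        atoms, w = dp_posterior_draw(data, eps, lambda m: rng.normal(size=m))
        means.append(np.sum(w * atoms))
    print(f"eps = {eps:g}: posterior draws of the mean functional average to {np.mean(means):.3f}")
# as eps -> 0 the draws approach those based on DP(n, Fhat_n)
```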
|
in a given problem , the bayesian statistical paradigm requires the specification of a prior distribution that quantifies relevant information , about the unknowns of main interest , external to the data . in cases where little such information is available , the problem under study may possess an invariance under a transformation group that encodes a lack of information , leading to a unique prior . previous successful examples of this idea have included location - scale invariance under linear transformation , multiplicative invariance of the rate at which events in a counting process are observed , and the derivation of the haldane prior for a bernoulli success probability . in this paper we show that this method can be extended in two ways : ( 1 ) to yield families of approximately invariant priors , and ( 2 ) to the infinite - dimensional setting , yielding families of priors on spaces of distribution functions . our results can be used to describe conditions under which a particular dirichlet process posterior arises from an optimal bayesian analysis , in the sense that invariances in the prior and likelihood lead to one and only one posterior distribution . _ keywords : _ bayesian nonparametrics , dirichlet process , functional equation , hyers - ulam - rassias stability , improper prior , invariance , optimal bayesian analysis , transformation group .
|
uncertainty relations date back to the work of weyl and heisenberg who showed that a signal can not be localized simultaneously in both time and frequency .this basic principle was then extended by landau , pollack , slepian and later donoho and stark to the case in which the signals are not restricted to be concentrated on a single interval .the uncertainty principle has deep philosophical interpretations .for example , in the context of quantum mechanics it implies that a particle s position and momentum can not be simultaneously measured . in harmonic analysisit imposes limits on the time - frequency resolution .recently , there has been a surge of research into discrete uncertainty relations in more general finite - dimensional bases .this work has been spurred in part by the relationship between sparse representations and the emerging field of compressed sensing .in particular , several works have shown that discrete uncertainty relations can be used to establish uniqueness of sparse decompositions in different bases representations .furthermore , there is an intimate connection between uncertainty principles and the ability to recover sparse expansions using convex programming .the vast interest in representations in redundant dictionaries stems from the fact that the flexibility offered by such systems can lead to decompositions that are extremely sparse , namely use only a few dictionary elements .however , finding a sparse expansion in practice is in general a difficult combinatorial optimization problem .two fundamental questions at the heart of overcomplete representations are what is the smallest number of dictionary elements needed to represent a given signal , and how can one find the sparsest expansion in a computationally efficient manner . in recent years , several key papers have addressed both of these questions in a discrete setting , in which the signals to be represented are finite - length vectors .the discrete generalized uncertainty principle for pairs of orthonormal bases states that a vector in can not be simultaneously sparse in two orthonormal bases .the number of non - zero representation coefficients is bounded below by the inverse coherence .the coherence is defined as the largest absolute inner product between vectors in each basis .this principle has been used to establish conditions under which a convex optimization program can recover the sparsest possible decomposition in a dictionary consisting of both bases .these results where later generalized in to representations in arbitrary dictionaries and to other efficient reconstruction algorithms .the classical uncertainty principle is concerned with expanding a continuous - time analog signal in the time and frequency domains .however , the generalizations outlined above are mainly focused on the finite - dimensional setting . in this paper ,our goal is to extend these recent ideas and results to the analog domain by first deriving uncertainty relations for more general classes of analog signals and arbitrary analog dictionaries , and then suggesting concrete algorithms to decompose a continuous - time signal into a sparse expansion in an infinite - dimensional dictionary . 
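As a small numerical illustration of the discrete notions invoked above, the sketch below computes the mutual coherence of the spike (identity) and discrete Fourier bases, for which mu = 1/sqrt(N), and verifies that a spike train of period sqrt(N) is sparse in both bases with the product of the two sparsity levels equal to 1/mu^2, i.e. it meets the discrete uncertainty bound. The specific value N = 16 is an arbitrary choice for illustration.

```python
import numpy as np

N = 16                                     # a perfect square so the bound can be met
F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # unitary DFT matrix (columns = Fourier basis)
I = np.eye(N)

# mutual coherence between the spike (identity) and Fourier bases
mu = np.max(np.abs(I.conj().T @ F))
print("coherence mu =", mu, " vs 1/sqrt(N) =", 1 / np.sqrt(N))

# a spike train with period sqrt(N) is sparse in both bases
x = np.zeros(N)
x[::int(np.sqrt(N))] = 1.0
a = x                                      # coefficients in the spike basis
b = F.conj().T @ x                         # coefficients in the Fourier basis
A = np.count_nonzero(a)
B = np.count_nonzero(np.round(np.abs(b), 12))
print("||a||_0 =", A, " ||b||_0 =", B, " product =", A * B, " >= 1/mu^2 =", 1 / mu**2)
```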
in our development , we focus our attention on continuous - time signals that lie in shift - invariant ( si ) subspaces of .such signals can be expressed in terms of linear combinations of shifts of a finite set of generators : \phi_\ell(t - nt),\ ] ] where are the si generators , and ] .therefore , the finite results which provide bounds on the number of non - zero expansion coefficients in pairs of bases decompositions are not immediately relevant here .instead , we characterize analog sparsity as the number of active generators that comprise a given representation , where the generator is said to be active if ,n \in { { \mathbb{z}}} ] in is defined by ^{-j\omega n} ] at frequency , and is periodic . in order to guarantee a unique stable representation of any signal in by a sequence of coefficients ] , and the norm in the middle term is the standard norm . condition ( [ eq : riesz ] ) implies that any has a unique and stable representation in terms of the sequences ] , where the inner product on is defined as in section [ sec : frame ] we consider overcomplete signal expansions in which more than generators are used to represent a signal in . in this case( [ eq : riesz ] ) can be generalized to allow for stable overcomplete decompositions in terms of a frame for .the functions form a frame for the si space if there exist constants and such that for all , where .our main interest is in expansions of a signal in a si subspace of in terms of orthonormal bases for .the generators of form an orthonormal basis form ( or generate ) a basis , we mean that the basis functions are . ] if for all , where if and otherwise . since , ( [ eq : ortht ] ) is equivalent to taking the fourier transform of ( [ eq : orthtt ] ) , the orthonormality condition can be expressed in the fourier domain as given an orthonormal basis for , the unique representation coefficients ] .this can be seen by taking the inner product of in ( [ eq : si ] ) with and using the orthogonality relation ( [ eq : ortht ] ) .evidently , computing the expansion coefficients in an orthonormal decomposition is straightforward .there is also a simple relationship between the energy of and the energy of the coefficient sequence in this case , as incorporated in the following proposition : [ prop : orth ] let generate an orthonormal basis for a si subspace , and let \phi_\ell(t - nt) ]. see appendix [ app : energy ] . in the finite - dimensional setting , sparsity is defined in terms of the number of non - zero expansion coefficients in a given basis . in an analog decomposition of the form ( [ eq : model ] ) , there are in general infinitely many coefficients so that it is not immediately clear how to define the notion of analog sparsity . in our development , analog sparsity is measured by the number of generators needed to represent . in other words , some of the sequences ] and not by the values of the individual elements . in general, the number of zero sequences depends on the choice of basis .suppose we have an alternative representation \psi_\ell(t - nt),\ ] ] where also generate an orthonormal basis for .an interesting question is whether there are limitations on and . in other words ,can we have two representations that are simultaneously sparse so that both and are small ?this question is addressed in the next section and leads to an analog uncertainty principle , similar to ( [ eq : ucd ] ) . 
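The following minimal sketch illustrates the orthonormal-expansion facts above for a toy SI space: two compactly supported generators (indicator functions of the two halves of a period, a choice made purely for numerical convenience and not taken from the text) are used to synthesize a signal from random coefficient sequences; the coefficients are then recovered as inner products with the shifted generators, and the energy identity of the proposition, the signal energy equaling the summed energy of the coefficient sequences, is checked on a discretized time grid.

```python
import numpy as np

T, L, N = 1.0, 2, 8                       # period, number of generators, shifts used
dt = 1e-3
t = (np.arange(int(N * T / dt)) + 0.5) * dt   # midpoint grid avoids boundary ambiguity

def phi(ell, tt):
    """Two hypothetical orthonormal generators: indicators of the halves of [0, T)."""
    lo, hi = ell * T / 2, (ell + 1) * T / 2
    return np.sqrt(2.0 / T) * ((tt >= lo) & (tt < hi))

rng = np.random.default_rng(1)
a = rng.standard_normal((L, N))           # expansion coefficients a_ell[n]

# synthesize x(t) = sum_{ell,n} a_ell[n] phi_ell(t - nT)
x = sum(a[ell, n] * phi(ell, t - n * T) for ell in range(L) for n in range(N))

# recover the coefficients as inner products <phi_ell(t - nT), x(t)>
a_rec = np.array([[np.sum(phi(ell, t - n * T) * x) * dt for n in range(N)]
                  for ell in range(L)])
print("max coefficient error:", np.max(np.abs(a_rec - a)))
print("||x||^2 =", np.sum(np.abs(x) ** 2) * dt,
      " sum_l ||a_l||^2 =", np.sum(np.abs(a) ** 2))
```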
in section [ sec : example ] we prove that the relation we obtain is tight , by constructing an example in which the lower limits are satisfied . as in the discrete settingwe expect to be able to use fewer generators in a si expansion by allowing for an overcomplete dictionary . in particular , if we expand using both sets of orthonormal bases we may be able to reduce the number of sequences in the decomposition beyond what can be achieved using each basis separately .the problem is how to find a sparse representation in the joint dictionary in practice . even in the discretesetting this problem is np - complete .however , results of show that under certain conditions a sparse expansion can be determined by solving a convex optimization problem . herewe have an additional essential complication due to the fact that the problem is defined over an infinite domain so that it has infinitely many variables and infinitely many constraints . in section [ sec : rep ]we show that despite the combinatorial complexity and infinite dimensions of the problem , under certain conditions on the bases functions , we can recover a sparse decomposition by solving a finite - dimensional convex optimization problem .we begin by developing an analog of the discrete uncertainty principle for signals in si subspaces .specifically , we show that the minimal number of sequences required to express in terms of any two orthonormal bases has to satisfy the same inequality ( [ eq : ucd ] ) as in the discrete setting , with an appropriate modification of the coherence measure . [thm : uncertainty ] suppose we have a signal where is a si subspace of .let and denote two orthonormal generators of , so that can be expressed in both bases with coefficient sequences ,b_{\ell}[n] ] . on the other hand, the finite - dimensional coherence can be written as , where is the discrete fourier transform of and is the length of . without loss of generality , we assume that .since and both generate orthonormal bases , we have from proposition [ prop : orth ] that once in terms of and once in terms of : the third equality follows from rewriting the integral over the entire real line as the sum of integrals over intervals of length as in ( [ eq : integrals ] ) in appendix [ app : energy ] , and the second inequality is a result of ( [ eq : mu ] ) .applying the cauchy - schwarz inequality to the integral in ( [ eq : mixed ] ) we have ^{2 } } \nonumber \\ & & \hspace*{-0.2in}\leq \int_0^{\frac{2\pi}{t } } { \left(}\sum_{|\ell|=a } \left| a_{\ell}{{{(e^{j\omega t})}}}\right|{\right)}^2 d\omega \int_0^{\frac{2\pi}{t } } { \left(}\sum_{\ell=1}^b \left| b_{\ell}{{{(e^{j\omega t})}}}\right|{\right)}^2 d\omega .\end{aligned}\ ] ] using the same inequality we can upper bound the sum in ( [ eq : csm ] ) : combining with ( [ eq : csm ] ) , ( [ eq : mixed ] ) and ( [ eq : normph ] ) leads to using the well - known relation between the arithmetic and geometric means completes the proof .an interesting question is how small can be made by appropriately choosing the bases . from theorem [ thm : uncertainty ] the smaller , the stronger the restriction on the sparsity in both decompositions .as we will see in section [ sec : rep ] , such a limitation is helpful in recovering the true sparse coefficients . 
in the finitesetting we have seen that .the next theorem shows that the same bounds hold in the analog case .[ thm : mu ] let and denote two orthonormal generators of a si subspace and let , where is defined by ( [ eq : r ] ) .then we begin by proving the upper bound , which follows immediately from the cauchy - schwarz inequality and the orthonormality of the bases : where the last equality is a result of ( [ eq : orth ] ) .therefore , . to prove the lower bound , notethat since is in for each , we can express it as \psi_r(t - nt)\ ] ] for some coefficients ] .as we show below , any signal in can be expressed in terms of si generators .we would like to choose two orthonormal bases , analogous to the spike - fourier pair in the finite setting , for which the coherence achieves its lower limit of .to this end , we first highlight the essential properties of the finite spike - fourier bases in , and then choose an analog pair with similar characteristics . the basic properties of the spike - fourier pair are illustrated in fig .[ fig : ifd ] . the first element of the spike basis , , is equal to a constant in the discrete fourier domain , as illustrated in the left - hand side of fig .[ fig : ifd ] .the remaining basis vectors are generated by shifts in time , or modulations in frequency , as depicted in the bottom part of the figure .in contrast , the first vector of the fourier basis is sparse in frequency : it is represented by a single frequency component as illustrated in the right - hand side of the figure .the rest of the basis elements are obtained by shifts in frequency .we now construct two orthonormal bases for with minimal coherence by mimicking these properties in the continuous - time fourier domain .since we are considering the class of signals bandlimited to , we only treat this frequency range .as we have seen , the basic element of the spike basis occupies the entire frequency spectrum .therefore , we choose our first analog generator to be constant over the frequency range ] .since we have real generators , we divide this interval into equal sections of length , and choose each to be constant over the corresponding interval , as illustrated in fig .[ fig : ifa ] . more specifically , let \},\ ] ]be the interval .then the analog pair of bases generated by is referred to as the analog spike - fourier pair . in order to complete the analogy with the discrete spike - fourier bases we need to show that both analog sets are orthonormal and generate , and that their coherence is equal to .the latter follows immediately by noting that it is easy to see that replicas of at distance will not overlap .furthermore , these replicas tile the entire frequency axis ; therefore , , and . to show that generate , note that any can be expressed in the form ( [ eq : si ] ) ( or ( [ eq : xeq ] ) ) by choosing for .if is zero on one of the intervals , then will also be zero , leading to the multiband structure studied in .since the intervals on which are non - zero do not overlap , the basis is also orthogonal .finally , orthonormality follows from our choice of scaling .proving that generate an orthonormal basis is a bit more tricky . 
to see that these functions span that from shannon s sampling theorem , any function bandlimited to with can be written as substituting , we can replace the sum over by the double sum over and , resulting in {\operatorname{sinc}}((t-(\ell-1 ) t ' -mt))/t ' ) \nonumber \\ & = & \sqrt{\frac{t}{n}}\sum_{\ell=1}^n \sum_{n \in { { \mathbb{z } } } } a_{\ell}[n]\phi_{\ell}(t - nt),\end{aligned}\ ] ] with =x((\ell-1)t ' + nt) ] , the frequency spacing between the lpfs is set to , as depicted in the right - hand side of fig .[ fig : uce ] . this signal can be represented in frequency by basis functions , with , and .it therefore remains to be shown that can also be expanded in time using signals .since is bandlimited to , \phi_\ell(t - nt),\ ] ] where ={{\langle\phi_\ell(t - nt),x(t)\rangle}} ] is a real sequence , . therefore we consider on the interval ] be a dictionary consisting of two orthonormal bases with coherence . if a vector has a sparse decomposition in such that and then this representation is unique , namely there can not be another with and .furthermore , if then the unique sparse representation can be found by solving the optimization problem ( [ eq : l1d ] ) .as detailed in , the proof of proposition [ prop : unique ] follows from the generalized discrete uncertainty principle .another useful result on dictionaries with low coherence is that every set of columns are linearly independent ( * ? ? ?* theorem 6 ) .this result can be stated in terms of the kruskal - rank of , which is the maximal number such that every set of columns of is linearly independent .* theorem 6 ) [ prop : kr ] let ] the vector at point- whose elements are ] such that \ell(t - nt),\ ] ] and ] is equal for all if and only if its norm \|_2=(\sum_n |\gamma_{\ell}^2[n]|)^{1/2} ] is equal to where \|_2 ] of ( [ eq : decoma ] ) satisfy where is the coherence defined by ( [ eq : mu ] ) , then this representation is unique .the second , more difficult question , is how to find a unique sparse representation when it exists .we may attempt to develop a solution by replacing the norm in ( [ eq : loa ] ) by an norm , as in the finite - dimensional case .this leads to the convex program \ell(t - nt).\ ] ] however , in practice , it is not clear how to solve ( [ eq : l1a ] ) since it is defined over an infinite set of variables ] .taking the inner products on both sides of ( [ eq : decoma ] ) with respect to leads to & = & \sum_{\ell=1}^{2n } \sum_{n \in { { \mathbb{z } } } } \gamma_\ell[n]{{\langle\phi_{r}(t - mt),d_\ell(t - nt)\rangle } } \nonumber \\ & = & \sum_{\ell=1}^{2n } \sum_{n \in { { \mathbb{z } } } } \gamma_\ell[n]a_{r\ell}[m - n],\end{aligned}\ ] ] where ={{\langle\phi_{r}(t - nt),d_\ell(t)\rangle}} ] satisfying the constraints in ( [ eq : loa ] ) we can alternatively seek the smallest number of functions that satisfy ( [ eq : constcf ] ) . to simplify ( [ eq : constcf ] ) we use the definition ( [ eq : dl ] ) of .since and the fourier transform of is equal to , ( [ eq : constcf ] ) can be written as denoting by the vectors with elements respectively , we can express ( [ eq : constcf2 ] ) as { \mbox{\boldmath{}}}{{{(e^{j\omega})}}},\ ] ] where is the sampled cross correlation matrix ,\ ] ] with defined by ( [ eq : r ] ) .our sparse recovery problem ( [ eq : loa ] ) is therefore equivalent to { \mbox{\boldmath{}}}{{{(e^{j\omega})}}}. \end{array}\ ] ] problem ( [ eq : l1f ] ) resembles the multiple measurement vector ( mmv ) problem , in which the goal is to jointly decompose vectors in a dictionary . 
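To illustrate the finite-dimensional l1 recovery result quoted above, the sketch below builds a dictionary from two real orthonormal bases with coherence 1/sqrt(N) (the identity and a normalized Hadamard basis, used here instead of the complex Fourier basis so that the l1 problem becomes a real linear program), generates a decomposition sparse enough to satisfy the recovery threshold, and solves basis pursuit with scipy's linear-programming routine. The dimension and the placement of the nonzeros are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.optimize import linprog

N = 16
Phi = np.eye(N)                        # "spike" basis
Psi = hadamard(N) / np.sqrt(N)         # real orthonormal basis with coherence 1/sqrt(N)
D = np.hstack([Phi, Psi])              # overcomplete dictionary of two orthonormal bases
mu = np.max(np.abs(Phi.T @ Psi))
print("coherence:", mu, " l1 recovery threshold (sqrt(2)-0.5)/mu =", (np.sqrt(2) - 0.5) / mu)

# a decomposition sparse enough for unique l1 recovery (3 nonzeros here)
gamma_true = np.zeros(2 * N)
gamma_true[[2, 7, N + 5]] = [1.0, -2.0, 1.5]
y = D @ gamma_true

# basis pursuit: min ||gamma||_1 s.t. D gamma = y, posed as an LP with gamma = u - v, u, v >= 0
c = np.ones(4 * N)
A_eq = np.hstack([D, -D])
res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
u, v = res.x[:2 * N], res.x[2 * N:]
gamma_hat = u - v
print("recovery error:", np.max(np.abs(gamma_hat - gamma_true)))
```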
in the next section we review the mmv model and a recently developed generalization to the case in which it is desirable to jointly decompose infinitely many vectors in terms of a given dictionary .this extension is referred to as the infinite measurement model ( imv ) . in section [ sec : auc ] we show how these ideas can be used to solve ( [ eq : l1f ] ) .as we will show , the ability to sparsely decompose a set of signals in the imv and mmv settings depends on the properties of the corresponding dictionary . in our formulation ( [ eq : l1f ] ) , the dictionary is given by .\ ] ] the next proposition establishes some properties of that will be used in section [ sec : auc ] in order to solve ( [ eq : l1f ] ) .[ prop : formm ] let denote two orthonormal bases for a si space .let denote the cross - correlation matrix defined by ( [ eq : m2 ] ) , and let , be the analog and discrete coherence measures defined by ( [ eq : mu ] ) , ( [ eq : mud ] ) .then , for each : 1 . is a unitary matrix ; 2 . .see appendix [ app : formm ] .the basic results of on expansions in dictionaries consisting of two orthonormal bases can be generalized to the mmv problem in which we would like to jointly decompose vectors in a dictionary .denoting by the matrix with columns , our goal is to seek a matrix with columns such that and has as few non - zero rows as possible . in this model , not only is each representation vector sparse , but in addition the vectors share a joint sparsity pattern .the results in establish that under the same conditions as proposition [ prop : unique ] , the unique can be found by solving an extension of the program : here is a vector whose element is equal to where is the row of , and the norm is an arbitrary vector norm .when is equal to a single vector , for any choice of norm and ( [ eq : l1mmv ] ) reduces to the standard optimization problem ( [ eq : l1d ] ) .[ prop : mmv ] let be an matrix with columns that have a joint sparse representation in the dictionary ] .the -sparse imv model assumes that the vectors , which we denote for brevity by , share a joint sparsity pattern , so that the non - zero elements are all supported on a fixed location set of size .this model was first introduced in in the context of blind sampling of multiband signals , and later analyzed in more detail in .a major difficulty with the imv model is that it is not clear in practice how to determine the entire solution set since there are infinitely many equations to solve .thus , using an optimization , or a greedy approach , are not immediately relevant here .in it was shown that ( [ eq : imv ] ) can be converted to a finite mmv without loosing any information by a set of operations that are grouped under a block refereed to as the continuous - to - finite ( ctf ) block .the essential idea is to first recover the support of , namely the non - zero location set , by solving a finite mmv .we then reconstruct from the data and the knowledge of the support , which we denote by .the reason for this separation is that once is known , the linear relation of ( [ eq : imv ] ) becomes invertible when the coherence is low enough . to see this ,let denote the matrix containing the subset of the columns of whose indices belong to .the system of ( [ eq : imv ] ) can then be written as where the superscript is the vector that consists of the entries of in the locations . 
since is -sparse , .in addition , from proposition [ prop : kr ] it follows that if then every columns of are linearly independent .therefore consists of linearly independent columns implying that , where is the moore - penrose pseudo - inverse of .multiplying ( [ yaxs ] ) by on the left gives the elements in not supported on are all zero .therefore ( [ reconstruct1 ] ) allows for exact recovery of once the finite set is correctly identified . in order to determine by solving a finite - dimensional problem we exploit the fact that is finite , since is of length , has dimension at most .in addition , it is shown in that if there exists a solution set with sparsity , and the matrix has kruskal rank , then every finite collection of vectors spanning the subspace contains sufficient information to recover exactly .therefore , to find all we need is to construct a matrix whose range space is equal to .we are then guaranteed that the linear system has a unique -sparse solution whose row support is equal .this result allows to avoid the infinite structure of ( [ eq : imv ] ) and to concentrate on finding the finite set by solving the single mmv system of ( [ vau ] ) .the solution can be determined using an relaxation of the form ( [ eq : l1mmv ] ) with replacing , as long as the conditions of proposition [ prop : mmv ] hold , namely the coherence is small enough with respect to the sparsity . in practice , a matrix with column span equal to can be constructed by first forming the matrix , assuming that the integral exists .every satisfying will then have a column span equal to .in particular , the columns of can be chosen as the eigenvectors of multiplied by the square - root of the corresponding eigenvalues .we summarize the steps enabling a finite - dimensional solution to the imv problem in the following theorem .[ thkey ] consider the system of equations ( [ eq : imv ] ) where ] is any set of sequences for which {\ell r}z_r{{{(e^{j\omega})}}} ] where the columns of form a basis for the span of .as we have seen , a basis can be determined in frequency by first forming the correlation matrix alternatively , we can find a basis in time by creating {{{\bf c}}}^h[n].\ ] ] the basis can then be chosen as the eigenvectors corresponding to nonzero eigenvalues of or , which we denote by . to find we consider the convex program { { { \bf u}}}.\ ] ] let denote the rows in that are not identically zero and let ]. then ( { { { \bf d}}}_s^h{{{\bf d}}}_s)^{-1}{{{\bf d}}}_s^h{{{\bf c}}}{{{(e^{j\omega})}}},\ ] ] where ] and . we summarize our results on analog sparse decompositions in the following theorem .[ thm : as ] let and denote two orthonormal generators of a si subspace of with coherence .let be a signal in and suppose there exists sequences ,b_{\ell}[n] ] and \gamma\gamma ] where the component of is the fourier transform at frequency of ={{\langle\phi_{\ell}(t - nt),x(t)\rangle}} ] .then the non - zero sequences,b_{\ell}[n],\ell \in s ] .however , the theorem also holds when ] in ( [ eq : asmmvt ] ) should be replaced by the matrix ] . specifically , we assume that there exists a finite number such that the support set of is equal .in other words , the joint support of any vectors is equal to the support of the entire set . 
under this assumption, the support recovery problem reduces to an mmv model and can therefore be solved efficiently using mmv techniques .specifically , we select a set of frequencies , and seek the matrix with columns that is the solution to { \mbox{\boldmath{}}}_i,\quad 1 \leq i \leq m. \end{array}\ ] ] if we choose as the norm , then ( [ eq : mmvtw ] ) is equivalent to separate problems , each of the form { \mbox{\boldmath{}}},\ ] ] were and is a unitary matrix ( see proposition [ prop : formm ] ) . from proposition [ prop : unique ] , the correct sparsity pattern will be recovered if is low enough , which due to proposition [ prop : formm ] can be guaranteed by upper bounding . in some cases, even one frequency may be sufficient in order to determine the correct sparsity pattern ; this happens when the support of is equal to the support of the entire set of sequences . in practice , we can solve for an increasing number of frequencies , with the hope of recovering the entire support in a finite number of steps .although we can always construct a set of signals whose joint support can not be detected in a finite number of steps , this class of signals is small . therefore , if the sequences are generated at random , then with high probability choosing a finite number of frequencies will be sufficient to recover the entire support set .until now we discussed the case of a dictionary comprised of two orthonormal bases . the theory we developedcan easily be extended to treat the case of an arbitrary dictionary comprised of sequences that form a frame ( [ eq : frame ] ) for .these results follow from combining the approach of the previous section with the corresponding statements in the discrete setting developed in .specifically , suppose we would like to decompose a vector in terms of a dictionary with columns using as few vectors as possible .this corresponds to solving since ( [ eq : loai ] ) has combinatorial complexity , we would like to replace it with a computationally efficient algorithm . if has low coherence , where in this case the coherence is defined by then we can determine the sparsest solution by solving the problem the coherence of a dictionary measures the similarity between its elements and is equal to only if the dictionary consists of orthonormal vectors .a general lower bound on the coherence of a matrix of size is ^{1/2} ] , which was introduced in section [ sec : example ] .as we have seen , this space can be generated by the functions with .suppose now that we define the functions where and .using similar reasoning as that used to establish the basis properties of the generators ( [ eq : psit ] ) , it is easy to see that constitute an orthonormal basis for the space of signals bandlimited to ] such that \ell(t - nt),\ ] ] and is minimized . to derive an infinite - dimensional alternative to ( [ eq : l1a2 ] )let generate a basis for .then is uniquely determined by the sampling sequences ={{\langleh_{\ell}(t - nt),x(t)\rangle}}=r_{\ell}(nt),\ ] ] where is the convolution . 
therefore , satisfies ( [ eq : xd ] ) only if =\sum_{\ell=1}^{m } \sum_{n \in { { \mathbb{z } } } } \gamma_\ell[n]a_{r\ell}[n],\ ] ] where ={{\langleh_{r}(t - nt),d_\ell(t)\rangle}} ] by noting that from proposition [ prop : l1da ] the columns of corresponding to are linearly independent .therefore , if ( [ eq : rsc2 ] ) is not satisfied , but instead is rich , so that the support of every set of vectors ( for different frequencies ) is equal to the span of the entire set , then we can still convert the problem into an mmv . to do this ,we choose frequency values and seek the set of vectors with the sparsest joint support that satisfy once the support is determined , we can find the non - zero sequences ] .we may alternatively view our algorithm as a method to reconstruct from these samples assuming the knowledge that has a sparse decomposition in the given dictionary .thus , our results can also be interpreted as a reconstruction method from a given set of samples , and in that sense complements the results of .in this paper , we extended the recent line of work on generalized uncertainty principles to the analog domain , by considering sparse representations in si bases .we showed that there is a fundamental limit on the ability to sparsely represent an analog signal in an infinite - dimensional si space in two orthonormal bases .the sparsity bound is similar to that obtained in the finite - dimensional discrete setting : in both cases the joint sparsity is limited by the inverse coherence of the bases . however , while in the finite setting , the coherence is defined as the maximal absolute inner product between elements from each basis , in the analog problem the coherence is the maximal absolute value of the sampled cross - spectrum between the signals . as in the finite domain , we can show that the proposed uncertainty relation is tight by providing a concrete example in which it is achieved .our example mimics the finite setting by considering the class of bandlimited signals as the signal space .this leads to a fourier representation that is defined over a finite , albeit continuous , interval . within this spacewe can achieve the uncertainty limit by considering a bandlimited train of lpfs .this choice of signal resembles the spike train which is known to achieve the uncertainty principle in the discrete setting .finally , we treated the problem of sparsely representing an analog signal in an overcomplete dictionary .building upon the uncertainty principle and recent works in the area of compressed sensing for analog signals , we showed that under certain conditions on the fourier domain representation of the dictionary , the sparsest representation can be found by solving a finite - dimensional convex optimization problem .the fact that sparse decompositions can be found by solving a convex optimization problem has been established in many previous works in compressed sensing in the finite setting .the additional twist here is that even though the problem has infinite dimensions , it can be solved exactly by a finite - dimensional program in many interesting cases . 
in this paperwe have focused on analog signals in si spaces .a very interesting further line of research is to extend these ideas and notions to a larger class of analog signals , leading to a broader notion of analog sparsity and analog compressed sensing .the author would like to thank prof .arie feuer for carefully reading a draft of the manuscript and providing many constructive comments .to prove the proposition , note that where the last equality follows from ( [ eq : xeq ] ) . to simplify ( [ eq : norm ] ) we rewrite the integral over the entire real line , as the sum of integrals over intervals of length : for all .substituting into ( [ eq : norm ] ) and using the fact that is -periodic , we obtain where we used ( [ eq : orth ] ) .to prove the proposition , we first note that since is in for each , we can express it as \psi_r(t - nt)\ ] ] for some coefficients ] denotes the row of .the second equality in ( [ eq : or ] ) follows from the orthonormality of , and the last equality is a result of ( [ eq : ar ] ) .since , it follows from ( [ eq : or ] ) that the matrix is unitary for all .
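The continuous-to-finite reduction described earlier can be mimicked in a purely finite toy setting, which may help clarify the flow of the key theorem: many jointly sparse measurement vectors stand in for the infinite collection, a matrix Q formed from their outer products is factored to obtain a frame V for their span, a finite MMV problem on V yields the common support, and the individual vectors are then recovered with a pseudo-inverse. For brevity the MMV step below uses a greedy simultaneous orthogonal matching pursuit instead of the mixed-norm convex program discussed in the text, and all dimensions and the random dictionary are illustrative assumptions, so this is a sketch of the idea rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, n_vec = 10, 24, 3, 500        # measurements, dictionary size, sparsity, # of vectors
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)         # column-normalized dictionary

S_true = np.sort(rng.choice(n, size=k, replace=False))   # common (joint) support
X = np.zeros((n, n_vec))
X[S_true] = rng.standard_normal((k, n_vec))
Y = A @ X                              # stand-in for the infinite collection of vectors

# --- CTF-style reduction: a finite frame for the span of the measurement vectors ---
Q = Y @ Y.T
w_eig, U = np.linalg.eigh(Q)
keep = w_eig > 1e-8 * w_eig.max()
V = U[:, keep] * np.sqrt(w_eig[keep])  # Q = V V^T

# --- solve the finite MMV V = A U0 for a row-sparse U0 (greedy SOMP for brevity) ---
R, S = V.copy(), []
for _ in range(k):
    j = int(np.argmax(np.linalg.norm(A.T @ R, axis=1)))
    S.append(j)
    As = A[:, S]
    R = V - As @ np.linalg.pinv(As) @ V
print("true support:", S_true, " recovered:", np.sort(S))

# --- with the support known, each vector is recovered by a pseudo-inverse ---
X_hat = np.zeros_like(X)
X_hat[np.sort(S)] = np.linalg.pinv(A[:, np.sort(S)]) @ Y
print("max reconstruction error:", np.max(np.abs(X_hat - X)))
```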
|
the past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing . most of this work has focused on the finite - dimensional setting in which the goal is to decompose a finite - length vector into a given finite dictionary . underlying many of these results is the conceptual notion of an uncertainty principle : a signal can not be sparsely represented in two different bases . here , we extend these ideas and results to the analog , infinite - dimensional setting by considering signals that lie in a finitely - generated shift - invariant ( si ) space . this class of signals is rich enough to include many interesting special cases such as multiband signals and splines . by adapting the notion of coherence defined for finite dictionaries to infinite si representations , we develop an uncertainty principle similar in spirit to its finite counterpart . we demonstrate tightness of our bound by considering a bandlimited lowpass train that achieves the uncertainty principle . building upon these results and similar work in the finite setting , we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem . the distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints , under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite - dimensional problem .
|
over the last decade network analysis has proven to be an invaluable tool to advance our understanding of complex dynamical systems in diverse scientific fields including the neurosciences . specific aspects of functional brain networks with nodes that are usually associated with sensors capturing the dynamics of different brain regions and with links representing interactions between pairs of brain regions were reported to differ between epilepsy patients and healthy controls , which supports the concept of an epileptic network . moreover , epileptic networks during generalized and focal seizures ( including status epilepticus ) were shown to possess topologies that differ from those during the seizure - free interval . most of the aforementioned studies investigated network - specific characteristics such as the average shortest path length or the clustering coefficient . network theory , however , also provides concepts and tools to assess various aspects of the importance ( e.g. centralities ) of a node in a network , but so far there are only a few studies that investigated node - specific characteristics of epileptic networks , and these studies investigated the dynamics of functional brain networks during seizures only . some of these studies reported highest centrality values for the ( clinically defined ) epileptic focus , which would support the notion of a crucial network node that facilitates seizure activity . we here report preliminary findings obtained from a time - resolved analysis of node importance in functional brain networks derived from long - term , multi - channel , intracranial electroencephalographic ( ieeg ) recordings from an epilepsy patient . investigating various centrality aspects , we provide first evidence that the epileptic focus is not consistently the most important node ( i.e.
, with highest centrality ) , but node importance may drastically vary over time .we analyzed ieeg data from a patient who underwent presurgical evaluation of drug - resistant epilepsy of left mesial - temporal origin and who is completely seizure free after selective amygdalohippocampectomy .the patient had signed informed consent that the clinical data might be used and published for research purposes .the study protocol had previously been approved by the local ethics committee .ieeg was recorded from channels ( chronically implanted intrahippocampal depth and subdural grid and strip electrodes ) and the total recording time amounted to about 1.7 days , during which three seizures were observed .ieeg data were sampled at using a analog - to - digital converter , filtered within a frequency band of , and referenced against the average of two recording contacts outside the focal region .following previous studies we associated each recording site with a network node and defined functional network links between any pair of nodes and of their anatomical connectivity using the mean phase coherence as a measure for signal interdependencies .we used a sliding window approach with non - overlapping windows of data points ( duration : ) each to estimate in a time - resolved fashion , employing the hilbert transform to extract the phases from the windowed ieeg .the elements of the interdependence matrix * i*then read : in order to derive an adjacency matrix * a*from * i*(i.e , an undirected , weighted functional network ) and to account for the case that the centrality metrics could reflect trivial properties of the weight collection we sort in ascending order and denote with the position of in this order ( rank ) .we then consider , , and .this approach leads to a weight collection with entries being uniformly distributed in the interval $ ] .the importance of a network node may be assessed via centrality metrics .degree , closeness , and betweenness centrality are frequently used for network analyses , and for these metrics generalizations to weighted networks have been proposed ( see ref . for an overview ) .if a node is adjacent to many other nodes , it possesses a high degree centrality . when investigating weighted networks , however , the number of neighboring nodes is not a sensible measure and one may consider _ strength centrality _ of node instead assessing node importance in weighted networks via closeness and betweenness centrality requires the definition of shortest paths .this can be achieved by assuming the `` length '' of a link to vary inversely with its weight .the _ closeness centrality _ of node is defined as where denotes the length of the shortest path from node to node .the _ betweenness centrality _ of node is the fraction of shortest paths running through that node . here, denotes the number of shortest paths between nodes and running through node , and is the total number of shortest paths between nodes and .we used the algorithm proposed by brandes to estimate the aforementioned centralities .[ img : centr ] illustrates the centrality metrics , , and the nodes of an exemplary network .in figs . 
[ img : dc_example ] , [ img : cc_example ] , and [ img : bc_example ] we show the temporal evolutions of , for three selected nodes from the exemplary epileptic brain networks investigated here .we chose one node from within the epileptic focus ( upper plots of figures ) , another node from the immediate surrounding of the epileptic focus ( middle plots of figures ) , and a third one which was associated with a recording site far off the epileptic focus ( lower plots in figures ) .all centrality metrics exhibited large fluctuations over time , both on shorter and longer time scales .the temporal evolutions of quite similar , while differently from the two other metrics .the similarity between to be expected , at least to some degree ( see the discussion in ref . ) , since they characterize the role of a node as a starting or end point of a path . on the other hand , a node s share of all paths between pairs of nodes that utilize that node . for this patient, we could not observe any clear cut changes of the centrality metrics prior to seizures that would indicate a preictal state .moreover , none of the metrics exhibited features in their temporal evolutions that would constantly indicate the network nodes associated with the epileptic focus ( or its immediate neighborhood ) as important nodes .rather , their importance may drastically vary over time . to demonstrate that our exemplary results hold for all nodes of the epileptic brain networks investigated here, we show , in fig .[ img : boxplots ] , findings obtained from an exploratory data analysis .the main statistical characteristics of centralities of each node ( maximum and minimum value , the median , and the quartiles estimated from the respective temporal evolutions ) indicated that neither the epileptic focus nor its immediate surrounding can be considered as important , and that the different centrality metrics ranked different nodes as most important .we have investigated various aspects of centrality of individual nodes in epileptic brain networks derived from long - term , multi - channel ieeg recordings from an epilepsy patient . utilizing different centrality metrics , we observed nodes far from the clinically defined epileptic focus and its immediate surrounding to be the most important ones .although our findings must , at present , be regarded as preliminary , they are nevertheless in stark contrast to previous studies that reported highest node centralities for the epileptic focus only .it remains to be investigated whether the different findings can be attributed to the dynamics of different epileptic brains or to , e.g. , differences in network inference .one also needs to take into account that there are a number of potentially confounding variables whose impact on estimates of different centrality metrics is still poorly understood .this work was supported by the deutsche forschungsgemeinschaft ( grant no . le660/4 - 2 ) .d. koschtzki , k. lehmann , l. peeters , s. richter , d. tenfelde - podehl and o. zlotowski , centrality indices , in _ network analysis _ , eds . u. brandes and t. erlebach , lecture notes in computer science , vol .3418 ( springer , berlin , heidelberg , 2005 ) pp . 1661 .
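A minimal sketch of the analysis pipeline described above (mean phase coherence from Hilbert-transform phases, rank-normalized edge weights, and strength, closeness, and betweenness centrality with link lengths inversely related to weights) is given below, using synthetic multichannel data in place of the iEEG recordings. The surrogate signals, the sampling rate, and the exact rank normalization are assumptions for illustration and are not meant to reproduce the patient data or the paper's parameter choices.

```python
import numpy as np
from scipy.signal import hilbert
import networkx as nx

rng = np.random.default_rng(3)
M, n_samp = 6, 4096                        # channels, samples per analysis window
t = np.arange(n_samp) / 200.0              # hypothetical 200 Hz sampling
common = np.sin(2 * np.pi * 10 * t)        # shared rhythm creating interdependence
signals = np.array([0.5 * common + rng.standard_normal(n_samp) for _ in range(M)])

# mean phase coherence from Hilbert-transform phases
phases = np.angle(hilbert(signals, axis=1))
R = np.abs(np.exp(1j * (phases[:, None, :] - phases[None, :, :])).mean(axis=2))

# rank-transform the pairwise weights so they are (approximately) uniform on (0, 1]
iu = np.triu_indices(M, k=1)
ranks = np.argsort(np.argsort(R[iu])) + 1
w = ranks / len(ranks)

G = nx.Graph()
for (i, j), wij in zip(zip(*iu), w):
    G.add_edge(int(i), int(j), weight=wij, distance=1.0 / wij)  # link "length" ~ 1/weight

strength = dict(G.degree(weight="weight"))
closeness = nx.closeness_centrality(G, distance="distance")
betweenness = nx.betweenness_centrality(G, weight="distance")
print("strength   :", strength)
print("closeness  :", closeness)
print("betweenness:", betweenness)
```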
|
there is increasing evidence for specific cortical and subcortical large - scale human epileptic networks to be involved in the generation , spread , and termination of not only primary generalized but also focal onset seizures . the complex dynamics of such networks has been studied with methods of analysis from graph theory . in addition to investigating network - specific characteristics , recent studies aim to determine the functional role of single nodes such as the epileptic focus in epileptic brain networks and their relationship to ictogenesis . utilizing the concept of betweenness centrality to assess the importance of network nodes , previous studies reported the epileptic focus to be of highest importance prior to seizures , which would support the notion of a network hub that facilitates seizure activity . we performed a time - resolved analysis of various aspects of node importance in epileptic brain networks derived from long - term , multi - channel , intracranial electroencephalographic recordings from an epilepsy patient . our preliminary findings indicate that the epileptic focus is not consistently the most important network node , but node importance may drastically vary over time . r. tetzlaff and c. e. elger and k. lehnertz ( 2013 ) , _ recent advances in predicting and preventing epileptic seizures _ , pages 175 - 185 , singapore , world scientific . copyright 2013 by world scientific .
|
quantum biology is swiftly emerging as an interdisciplinary science synthesizing , in novel and unexpected ways , the richness and complexity of biological systems with quantum physics and quantum information science .the intuitive disposition prevailing until recently , namely that quantum coherence phenomena have no relevance to biology , can be simply understood .indeed , quantum coherence has been clearly manifested in carefully prepared and well - isolated quantum systems , the experimental implementation of which was technically demanding .it was thus plausible to assume that all timescales of interest in complex biological matter are orders of magnitude larger than the coherence time of any underlying quantum phenomenon .in other words , decoherence has been assumed to be detrimental merely due to the complexity of biological systems and the vast environments they offer as sinks of quantum information , environments that were understood to be anything but `` carefully prepared '' . in recent years , however , a different physical picture is gradually emerging .biological systems are indeed nothing like single atoms trapped in ultra - high vacuum , allowing the observation of coherences lasting for several seconds or even hours for solid - state nuclear spin systems .but they are also nothing like a classical system where decoherence has obliterated any chance to detect off - diagonal elements of the relevant density matrix .it appears that nature has optimized biological function using both worlds , i.e. taking advantage of quantum coherence amidst an equally essential yet not detrimental decoherence . in this reviewwe focus on a major driving force of quantum biology , namely the quantum dynamics of the radical - pair mechanism ( rpm ) .the rpm involves a multi - spin system of electrons and nuclei embedded in a biomolecule and undergoing a number of physical / chemical processes , like magnetic interactions and coherent spin motion , electron transfer and spin relaxation to name just a few . for an overview of rpm s long scientific historywe refer the reader to a number of reviews of the earlier work on radical - ion pairs and spin chemistry .although the rpm has been known since the 1960s , the rich quantum physical underpinnings of the mechanism have been unraveled only recently . the first paper discussing rpm in the context of modern quantum information theory introduced quantum measurements as a necessary concept to understand the quantum dynamics of rpm , leading among other things to the fundamental spin decoherence process of radical - ion - pair ( rp ) reactions . in a number of papers that followed ,the intricate quantum effects at play in rp reactions were further elucidated .quantum measurement dynamics , phononics , in particular the phonon vacuum , quantum coherence and decoherence , in particular measurement - induced decoherence , coherence distillation , quantum coherence quantifiers , quantum correlations , quantum jumps and the quantum zeno effect , concepts from quantum metrology and the quantum - communications concept of quantum retrodiction were shown to be central for understanding rpm . were it not for these conceptual underpinnings, it would be hard to justify why rpm constitutes a paradigm for quantum biology .it can of course be argued that radical - ion pairs involve spins , and spins are genuine quantum objects , but this argument points to the atomistic , structural aspect of biology .that is , one could also argue that spins , e.g. 
singlet states in the helium atom , determine atomic structure and atomic structure is the basis of biological structure , alas this is not what quantum biology is about .we hope , however , to convince the reader that rpm s position on the map of quantum biology is indeed well - deserved . this review will focus on our recent work concerning the quantum foundations of rpm .this biochemical spin system is an open system and hence the relevant density matrix does not just obey schrdinger s equation , ] is the starting point for all theoretical calculations involving rp reactions , hence the foundational character of this discussion .our specific quest has been to find the exact form of the reaction super - operator .until we entered this field in 2008 , the overwhelming majority of theoretical treatments in spin chemistry have used what we now understand is a phenomenological theory elaborated upon by haberkorn in 1976 .returning to the chronological exposition , the first paper illuminating the quantum - information substratum of rpm led to a resurgent interest of the spin chemistry community on the foundational aspects of the field , and in parallel to a flurry of activity in the quantum science community , even prompting some authors to announce , more or less timely , `` the dawn of quantum biology '' .the main motivation and excitement behind this work has largely been rpm s relation to the avian magnetic compass .it is clear that concepts of quantum measurements , quantum ( de)coherence and entanglement take on a distinct scientific flavor when applied to migrating birds . in this reviewwe will only cover the part of the literature relevant to our specific quest of obtaining the master equation for .we feel that the rest of the literature , especially papers discussing quantum coherence and entanglement in chemical magnetoreception , address a very promising research venue , however they are mostly based on the traditional description of rp dynamics . since the qualitative and quantitative understanding of quantum coherence and entanglement is intimately tied to the master equation for , these discussions could be premature . focusing on the rpm ,we will not cover other , equally exciting venues of quantum biology , like quantum effects in photosynthetic light harvesting and olfaction recently reviewed , coupling quantum light to retinal rod cells , testing for electron spin effects in anesthesia , and studying ion transport in neuronal ion channels , to name a few .the outline of this review is the following . in section 2we briefly review the basic notions of spin chemistry , touching upon the traditional approach to the radical - pair mechanism . in section 3we present our work on the quantum foundations of rpm . in light of this discussion, we revisit the traditional approach in section 4 and explore its weaknesses and limits of validity . in section 5 we study single - molecule quantum trajectories , which allow us to obtain a deeper understanding of both approaches and test their internal consistency . in section 6 we discuss other competing theoretical approaches .we devote section 7 to discussing the nonlinear nature of our master equation , and its relation to foundational concepts of quantum physics .we close with an outlook in section 8 .spin chemistry deals with the effect of electron and nuclear spins on chemical reactions . 
that such an effect is possible in the first place is not too straightforward to understand .indeed , covalent and hydrogen bond energies are on the order of 1000 kj / mole and 10 kj / mole , respectively , while the electron s zeeman energy in a laboratory magnetic field of 1 kg is on the order of 1 j / mole , let alone earth s field .yet even at earth s field , electron and nuclear spins can have quite a tangible effect in this class of chemical reactions .the explanation , to be revisited later , is an intricate combination of spin precession and electron transfer timescales with spin angular momentum conservation .although the first organic free radical , triphenylmethyl , was discovered by gomberg in 1900 , the origin of spin chemistry is set in the 1960s , when anomalously high epr and soon later nmr signals were observed in chemical reactions of organic molecules , termed cidep ( chemically induced dynamic electron polarization ) and cidnp ( chemically induced dynamic nuclear polarization ) , respectively .the radical - pair mechanism was introduced by closs and closs and by kaptein and oosterhoff as a reaction intermediate explaining these observations ( a recent editorial briefly reviews the field s history and early literature ) . since then, spin chemistry has grown into a mature field of experimental and theoretical physical chemistry . in particular, the effects related to cidnp , which we will address in section 5.2 , continue to attract a lot of interest since cidnp has become a versatile tool to study photosynthetic reaction centers . an intriguing result of early spin - chemistry work was the proposal by schulten that avian magnetoreception is based on rp reactions . in this reviewwe will not touch upon applying spin chemistry to magnetoreception , referring the reader to a representative part of an extensive literature .the quantum degrees of freedom of radical - ion pairs are formed by a multi - spin system embedded in a biomolecule , which can be either in the liquid or in the solid phase . in particular ,rps are biomolecular ions created by a charge transfer from a photo - excited d donor - acceptor dyad da , schematically described by the reaction , where the two dots represent the two unpaired electron spins of the two radicals .the excited state d is usually a spin zero state , hence the initial spin state of the two unpaired electrons of the radical - pair is also singlet , denoted by .now , both d and a contain a number of magnetic nuclei which hyperfine - couple to the respective electron .neither singlet - state nor triplet state rps are eigenstates of the resulting hamiltonian , , hence the initial formation of is followed by singlet - triplet ( s - t ) mixing , i.e. a coherent oscillation of the spin state of the electrons , designated by .concomitantly , nuclear spins also precess , and hence the total electron / nuclear spin system undergoes a coherent spin motion driven by hyperfine couplings and the rest of the magnetic interactions to be detailed later .this coherent spin motion has , however , a finite lifetime .charge recombination , i.e. 
charge transfer from a back to d , terminates the reaction and leads to the formation of the neutral reaction products .it is angular momentum conservation at this step that empowers the molecule s spin degrees of freedom and their minuscule energy to determine the reaction s fate : there are two kinds of neutral products , singlet ( the original da molecules ) and triplet , .as it turns out , their relative proportion can be substantially affected by spin interactions entering the mixing hamiltonian . for completenesswe note that the reaction can close through the so - called intersystem crossing , mediated by e.g. spin - orbit coupling . a schematic diagram of the above is shown in fig . [ schematic]a .a and a subsequent charge transfer produces a singlet state radical - pair , which is coherently converted to the triplet radical - pair , , due to intramolecule magnetic interactions .simultaneously , spin - selective charge recombination leads to singlet ( da ) and triplet neutral products ( ) .the latter can intersystem cross into da and close the reaction cycle .( b ) simplified version of ( a ) neglecting the photoexcitation and charge transfer steps. both diagrams could be misleading if taken too literally , since they might suggest that e.g. only singlet radical - pairs recombine to singlet neutral products .this is not the case , since a radical - pair in a coherent s - t superposition can recombine into e.g. a singlet neutral product ( see section 3.7).,width=7 ] omitting the light excitation and initial charge transfer , which commence the reaction in a timescale usually faster than the reaction dynamics , the reaction scheme can be simplified into fig .[ schematic]b . as shown with the two shaded boxes in fig .[ schematic]b , the rp reaction consists of two physically very different processes working simultaneously : ( a ) the unitary dynamics embodied by the magnetic hamiltonian driving a coherent s - t oscillation of the rp spin state , and ( b ) the non - unitary reaction dynamics reducing the rp population in a spin - dependent way . as it will turn out , the latter are the hardest to understand .the density matrix describes the spin state of the rp s two electrons and magnetic nuclei located in d and a. its dimension is , where 4 is the spin multiplicity of the two unpaired electrons , is the spin multiplicity of the nuclear spins , and is the nuclear spin of the -th nucleus , with .the simplest possible rp contains just one spin-1/2 nucleus hyperfine coupled to e.g. the donor s unpaired electron . in this casethe density matrix has dimension .although unrealistic , this simple system exhibits much of the essential physics without the additional computational burden of more nuclear spins and matrices of higher dimension , therefore it is frequently used as a model system .it is angular momentum conservation at the recombination process that forces the decomposition of the rp s spin space into an electron singlet and an electron triplet subspace , defined by the respective projectors and .these are matrices given by and , where and are the spin operators of the donor and acceptor electrons written as -dimensional operators , e.g. the -th component of would be written as , where the first operator in the kronecker product refers to the donor s electron spin , the second to the acceptor s electron spin and the rest to the nuclear spins . 
by denote the regular ( 2 dimensional ) spin-1/2 operators and by the -dimensional unit matrix .the projectors and are complete and orthogonal , i.e. and .the rp s singlet subspace has dimension while the triplet subspace has dimension .the electron - spin multiplicity 1 in the former corresponds to the singlet state = 1(- ) , while the electron - spin multiplicity of 3 in the latter stems from the triplet states the coupled basis describes the two unpaired electron spins .for example , for an rp containing spin-1/2 nuclei , it would be and the projectors would be written as and .there are also two rates to consider , the singlet and triplet recombination rates , and , respectively .these are defined as follows : at we prepare an rp ensemble having no magnetic interactions ( ) in the singlet ( triplet ) electron state ( and any nuclear spin state ) .then its population would decay exponentially at the rate ( ) .in general , during a time interval , the measured singlet and triplet neutral products will be indeed , if all rps were in the singlet or triplet state , the fraction of them recombining in the singlet or triplet channel during would be and , respectively . if they are in the general state described by , then and have to be multiplied by the respective fraction of singlet and triplet rps .radical - ion pairs are usually produced from an electron spin zero neutral precursor , so their initial state is the electron singlet state .the thermal proton polarization at room temperature and at magnetic fields as large as 10 kg is about , which for all practical purposes can be approximated by zero .the rp initial state having zero nuclear polarization and being in the singlet electron spin state is _0=q_s[singlet ] indeed , since , it is .if is the -th component ( ) of the -th nuclear spin , then since are traceless operators . .the different nuclear spin environment of and is the basis of singlet - triplet mixing.,width=7 ] the magnetic interactions included in drive the unitary rp dynamics .the simplest way to understand s - t mixing is by using the classical vector model and considering first an imaginary rp consisting of just two electron spins having larmor frequencies different by . as shown in fig .[ st2e]a , if the initial state is the singlet state , after a time the two spins will have developed a phase difference , their state becoming the triplet . in realitythis model is encountered in the so - called mechanism operating at high magnetic fields , where an actual difference in the g - factor of the two electrons is responsible for their different larmor frequency . in most cases , however , s - t mixing is caused by the different nuclear spin environment coupled to each electron through hyperfine interactions .to illustrate hyperfine - induced s - t mixing , we consider the example of py - dma ( pyrene - dimethylaniline ) , shown in fig .[ st2e]b . herepy is the electron donor and dma the acceptor , i.e. the rp is . in this example , the donor radical has 10 proton spins hyperfine coupled to its unpaired electron , while the acceptor radical has 12 spins , 11 protons and one nitrogen , coupled to its unpaired electron .this difference in nuclear spin environments `` seen '' by the unpaired electrons of and drives s - t mixing . 
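The projector construction and the hyperfine-driven s-t mixing described above can be made concrete for the smallest model system mentioned in the text: one spin-1/2 nucleus coupled isotropically to the donor electron (d = 8). The sketch below is only an illustration, not code from the original work: it uses the standard identities Q_S = 1/4 - S_D.S_A and Q_T = 3/4 + S_D.S_A for the singlet and triplet projectors, prepares the singlet initial state rho_0 = Q_S / Tr(Q_S), and follows the singlet probability Tr(Q_S rho(t)) under H = a S_D.I_N; the coupling value and time grid are invented for the example.

```python
import numpy as np

# single spin-1/2 operators (hbar = 1)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def embed(op, pos, n=3):
    """Embed a 2x2 operator at position pos in an n-spin Kronecker product."""
    ops = [id2] * n
    ops[pos] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# ordering: donor electron (0), acceptor electron (1), one nucleus (2); d = 8
SD = [embed(s, 0) for s in (sx, sy, sz)]
SA = [embed(s, 1) for s in (sx, sy, sz)]
IN = [embed(s, 2) for s in (sx, sy, sz)]

d = 8
SdotS = sum(SD[i] @ SA[i] for i in range(3))
QS = 0.25 * np.eye(d) - SdotS          # singlet projector (rank 2)
QT = 0.75 * np.eye(d) + SdotS          # triplet projector (rank 6)
assert np.allclose(QS + QT, np.eye(d)) and np.allclose(QS @ QT, 0)

# isotropic hyperfine coupling of the donor electron to its nucleus
a_hf = 1.0                              # illustrative coupling strength
H = a_hf * sum(SD[i] @ IN[i] for i in range(3))

rho0 = QS / np.trace(QS).real           # singlet RP, unpolarized nucleus
evals, evecs = np.linalg.eigh(H)

for t in np.linspace(0, 10, 6):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    rho_t = U @ rho0 @ U.conj().T
    print(f"t = {t:5.1f}   <Q_S> = {np.trace(QS @ rho_t).real:.3f}")
# <Q_S> starts at 1 and oscillates below 1: hyperfine-driven S-T mixing,
# present even though the Hamiltonian never touches the acceptor electron.
```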
to come back to the earlier discussion of the very possibility of spin - dependent chemical reactions , note that the diffusion constant of py / dma in typical solvents used in is , while the inter - radical distance where the rp recombination reaction is appreciable is on the order of 10 nm , hence the reaction dynamics take place at a timescale of about 100 ns . the electron spin precession time at a hyperfine magnetic field of 10 g is about 30 ns , short enough for spin motion to influence the reaction yields .we will now visualize a few among many possibilities of s - t mixing , by plotting in fig .[ examples ] the hamiltonian evolution of the rp spin state for four different hamiltonians . .( c ) rp with one nuclear spin in each radical , isotropically coupled with hyperfine couplings and . for the plot .( d ) rp with 4 nuclear spins in one radical and none in the other .the hyperfine couplings are , , and .,width=8 ] we plot the expectation value of given by , where evolves just unitarily , ] .so in we defined p_coh=1\{}c()[pcoh1 ] in retrospect , this definition of the normalization is not satisfactory , because if one is given a density matrix describing an rp ensemble without any reference to , the measure can not be calculated .surely , in most calculations where one propagates the density matrix , the evolution is driven by a known hamiltonian .but in principle it would be desirable to have a measure defined by just providing the density matrix . another way to normalize based solely on derived from the most general pure state given before .we define the maximally s - t coherent pure state as the state having equal amplitudes in all four terms of , .due to normalization it follows that and hence .we can thus define p_coh=43c()[pcoh2 ] a few comments are in order : ( i ) the definition of has the proper ( linear ) scaling with the density matrix elements to qualify for a proper measure of coherence .( ii ) we defined the pure state of maximum coherence as in , where it is stated that in a space of dimension spanned by the vectors , where , the state of maximum coherence is .( iii ) in idealized theoretical scenarios , e.g. when one considers a 2-dimensional radical - pair spanned by just two states , and , the normalization constant 3/4 changes accordingly , i.e. it would become 1/2 .( iv ) when we first introduced the need for an s - t coherence measure we defined in a way that it scales quadratically with the matrix elements of , and this is not an acceptable measure according to .the definition alleviates this problem .( v ) the different normalizations in and produce acceptable numerical differences , but a more thorough study is required to fully understand the normalization of .there is one more step before proceeding with the theory of quantum retrodiction .as mentioned previously , the general rp density matrix can be written as .having defined the coherence measure , we can introduce two new matrices , the maximally incoherent and maximally coherent version of , denoted by and , respectively , and given by while the interpretation is obvious , the interpretation of is more subtle .reminiscent of the concept of entanglement distillation , alludes to the coherence - distilled version of , since . to give a simple example , consider like in section 3.7.1 a mixture of radical - pairs in the state and radical - pairs in the singlet state .it is , so should be the probability that picking an rp out of the ensemble it will have maximum s - t coherence . 
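As we read the normalization discussed above (the maximally coherent pure state reaching c = 3/4, hence p_coh = 4c/3), the coherence measure can be sketched in a few lines for the electron-only case. This is an assumption-laden illustration rather than a transcription of the original definitions; it checks the statement that for a mixture of a fraction x of maximally coherent radical-pairs with singlet-state radical-pairs, p_coh comes out equal to x.

```python
import numpy as np

# electron-only model: basis ordered as |S>, |T+>, |T0>, |T->
def st_coherence(rho):
    """l1-type singlet-triplet coherence: sum of |<S|rho|T_m>| over m."""
    return sum(abs(rho[0, m]) for m in (1, 2, 3))

def p_coh(rho):
    """Normalized measure; 3/4 is the value reached by the maximally
    S-T coherent pure state (equal amplitudes on all four basis states)."""
    return (4.0 / 3.0) * st_coherence(rho)

# maximally coherent pure state and the pure singlet state
psi_max = 0.5 * np.ones(4, dtype=complex)
rho_max = np.outer(psi_max, psi_max.conj())
rho_S = np.zeros((4, 4), dtype=complex)
rho_S[0, 0] = 1.0

for x in (0.0, 0.3, 0.7, 1.0):
    rho = x * rho_max + (1 - x) * rho_S   # mixture discussed in the text
    print(f"fraction x = {x:.1f}   p_coh = {p_coh(rho):.2f}")
# p_coh equals x, i.e. the probability that a radical-pair picked at random
# from the ensemble carries maximal S-T coherence.
```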
in general , using andwe can write any density matrix as . in fig .[ qc ] we show that rp reactions can be viewed as a biochemical realization of quantum communication .indeed , the rp spin state might be considered as the quantum state held by alice , the sender .bob just detects singlet or triplet neutral products . from thesehe might infer the information stored in alice s states , for example the value of the applied magnetic field .quantum retrodiction probabilistically answers the question of what was the pre - recombination state of a radical - pair , the neutral product of which is detected by bob .the relevant formalism is used in to calculate the conditional probabilities that upon detecting a singlet ( triplet ) product , the pre - recombination state of the radical - pair was maximally coherent or maximally incoherent , , and .to arrive at the reaction terms , we need to subtract the estimated pre - recombination state of the recombined radical - pair from the density matrix of the surviving rps .this crucially depends on s - t coherence . for a maximally coherent density matrixit is whereas for a maximally incoherent density matrix it is stated in words , these rules mean that in the extreme of maximum coherence we have to subtract the full pre - recombination state from , whereas in the opposite extreme we subtract either a singlet or a triplet rp .the previous update rules hold for a single molecule . during will detect singlet and triplet products , hence the reaction terms read \delta\rho^{\rm 1s}&=p({\rm incoh}|{\rm s})\delta\rho_{\rm incoh}^{\rm 1s}+p({\rm coh}|{\rm s})\delta\rho_{\rm coh}^{\rm 1s}\nonumber\\ \delta\rho^{\rm 1t}&=p({\rm incoh}|{\rm t})\delta\rho_{\rm incoh}^{\rm 1t}+p({\rm coh}|{\rm t})\delta\rho_{\rm coh}^{\rm 1t}\nonumber\end{aligned}\ ] ] as already mentioned , singlet - triplet dephasing and spin - dependent recombination are two physical processes running simultaneously and independently , while stemming from the same interaction hamiltonian between radical - pair and vibrational reservoir . using and , we thus arrive at the sought after master equation +{\cal d}\llbracket\rho\rrbracket+{\cal r}_{k}\llbracket\rho\rrbracket\label{me}\end{aligned}\ ] ] where is the reaction super - operator the term ] . defining , it follows that -k(r{{\rm q}_{\rm s}}+{{\rm q}_{\rm s}}r-2{{\rm q}_{\rm s}}r{{\rm q}_{\rm s}}) ] , where is haberkorn s reaction operator : _h =- k_sq_sq_s - k_tq_tq_t[rh ] if by hand we force in our reaction operator , we exactly retrieve . in other words ,the s - t dephasing process is latently built into haberkorn s master equation , however the reaction terms skew the state evolution of rps by retrodicting the pre - recombination state always assuming zero s - t coherence . in his 1976 paper haberkorn had qualitatively examined various forms of the master equation , one being the decoherence master equation .however , he correctly dismissed it since it is trace - preserving and hence `` corresponds to spin relaxation without reaction '' .we now understand that s - t dephasing is just one aspect of the dynamics running simultaneously with the other , the rp recombination . 
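The contrast drawn above between the reaction terms and pure s-t dephasing can be checked numerically. The sketch below (electron-only basis, all rates and the s-t mixing frequency invented for illustration) propagates a singlet-born pair (i) with the standard Haberkorn reaction terms, whose trace decays as population recombines, and (ii) with only a projective s-t dephasing term, which is trace-preserving and therefore, as quoted above, "corresponds to spin relaxation without reaction". The dephasing rate is set to (k_S + k_T)/2 purely for definiteness.

```python
import numpy as np

# electron-only basis |S>, |T+>, |T0>, |T->
QS = np.diag([1.0, 0, 0, 0]).astype(complex)
QT = np.eye(4, dtype=complex) - QS
q = 1.0                       # illustrative S-T0 mixing frequency
H = np.zeros((4, 4), dtype=complex)
H[0, 2] = H[2, 0] = q / 2.0   # couples |S> and |T0>

kS, kT = 0.3, 0.1             # illustrative recombination rates
gamma = (kS + kT) / 2.0       # illustrative S-T dephasing rate

def haberkorn(rho):
    """Haberkorn reaction terms: -kS/2 {QS,rho} - kT/2 {QT,rho} (trace-decreasing)."""
    return -0.5 * kS * (QS @ rho + rho @ QS) - 0.5 * kT * (QT @ rho + rho @ QT)

def dephasing(rho):
    """Projective S-T dephasing (Lindblad-type, trace-preserving)."""
    return -gamma * (QS @ rho + rho @ QS - 2 * QS @ rho @ QS)

def evolve(rho, terms, dt=1e-3, steps=4000):
    """Simple Euler integration of drho/dt = -i[H,rho] + terms(rho)."""
    for _ in range(steps):
        drho = -1j * (H @ rho - rho @ H) + terms(rho)
        rho = rho + dt * drho
    return rho

rho0 = QS / np.trace(QS).real      # start in the singlet state
for label, terms in [("haberkorn reaction", haberkorn), ("dephasing only", dephasing)]:
    rho = evolve(rho0.copy(), terms)
    print(f"{label:20s}  tr rho = {np.trace(rho).real:.3f}")
# the reaction operator removes population (tr rho < 1), while pure dephasing
# keeps tr rho = 1 -- "spin relaxation without reaction".
```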
in any case ,haberkorn s 1976 paper is a tribute to the power of simple physical arguments leading to important insights .the traditional master equation has been used in most theoretical considerations in the field of spin chemistry since the late 1960s , adequately accounting for many experiments .so a natural question to ask is : why do we need a new theory when the one at hand , even if not fundamentally sound , appears to be good enough ? towards the answer we first remind the reader of the early era of atomic physics , when atom - light interactions were successfully described by einstein s rate equations , which just considered the transfer of atomic populations between atomic states , the transfer being triggered by ( at the time incoherent ) light sources such as lamps .after the invention of the laser , the field of coherent atomic interactions exploded , since atomic coherences could then be excited and probed .einstein s rate equations were not sufficient to describe experiments , which required the introduction of the atomic density matrix , i.e. the coupled time evolution of atomic populations and coherences .the parameter deciding which approach is the most suitable is the ratio of the rabi frequency of the exciting light to the relaxation rate , which in this case is the spontaneous decay rate . in spin chemistry ,many experiments so far have been apparently dominated by spin relaxation , which among other things , also damps s - t coherence quantified by .if these relaxation phenomena push towards zero much faster than the fundamental decoherence mechanism we have considered , then our master equation quickly converges to the traditional theory .this will be further quantified in the following subsection .we can thus arrive at the conclusion that , notwithstanding fortuitous agreement with experiments brought about by relaxing environments , understanding spin chemistry experiments at the fundamental level is not possible within the traditional theory .moreover , without a fundamental theory it is neither possible to predict the full range of physical effects that can be _ in principle _ observed , nor design new experiments and explore new fronts of the field .for example , understanding the avian compass mechanism at the quantum level , e.g. addressing the fundamental heading error of the compass and its magnetic sensitivity is a promising venue of biochemical quantum metrology , which however can not be conclusively explored without the fundamental theory of rpm .this is because even if radical - pairs participating in the natural avian compass are plagued by relaxation , future biomimetic sensors need not be .finally , there are cases where any agreement with experiments is deluding , i.e. haberkorn s incorrect reaction super - operator forces other system parameters ( like hamiltonian couplings ) to take on incorrect values in order for the theory to match the data .as will be elaborated in section 5.2 , such an example are the spin dynamics in cidnp , which can not be understood within haberkorn s approach , since the lifetimes of the involved rps are too short for relaxation to set in. a general relaxation process can be described by a set of kraus operators , satisfying and transforming the density matrix according to , i.e. the master equation will be augmented with the term . note that is a special case of resulting from the following three kraus operators : , and , with . 
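The Kraus description just mentioned can be illustrated with one natural choice of operators implementing projective s-t dephasing with probability p per step: K1 = sqrt(1-p) 1, K2 = sqrt(p) Q_S, K3 = sqrt(p) Q_T. These particular operators and the value of p are our assumptions for the example, not a quotation of the specific operators referenced above; the sketch checks the completeness relation and shows that the resulting map damps s-t coherences while leaving populations untouched.

```python
import numpy as np

# electron-only basis |S>, |T+>, |T0>, |T->
QS = np.diag([1.0, 0, 0, 0]).astype(complex)
QT = np.eye(4, dtype=complex) - QS

p = 0.2                                   # illustrative dephasing probability per step
K = [np.sqrt(1 - p) * np.eye(4, dtype=complex),
     np.sqrt(p) * QS,
     np.sqrt(p) * QT]

# completeness: sum_i K_i^dagger K_i = 1
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(4))

def kraus_apply(rho):
    """One application of the dephasing channel rho -> sum_i K_i rho K_i^dagger."""
    return sum(k @ rho @ k.conj().T for k in K)

# maximally S-T0 coherent state (|S> + |T0>)/sqrt(2)
psi = np.zeros(4, dtype=complex)
psi[0] = psi[2] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

for step in range(4):
    print(f"step {step}: pop_S = {rho[0,0].real:.3f}   |coh S,T0| = {abs(rho[0,2]):.3f}")
    rho = kraus_apply(rho)
# populations stay fixed while the S-T0 coherence shrinks by (1 - p) each step.
```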
the simplest way to makeour point is to assume that besides the fundamental s - t dephasing process , we have an independent relaxation mechanism of rate , being described by the same kraus operators , and . defining , which takes as input an operator and produces a number .in contrast , the calligraphic takes as input an operator and produces another operator . ]the super - operator , we use to find that obeys the equation = -ic-_cc[drcdt ] , where , and we defined with . taking the trace of , we find that , where .we finally define the `` genuine '' s - t decoherence rate as .this describes the decay of s - t coherence due to all effects other than the changing normalization of . for the considered exampleit is .it is obvious that if , where is the rate at which s - t coherence is generated by the hamiltonian , will quickly ( compared to the reaction time ) decay to zero , and thus .hence the traditional theory will provide a consistent description of the dynamics .the form of haberkorn s reaction operator can be simply phrased as follows : singlet products originate from singlet radical - pairs , and triplet products originate from triplet radical - pairs .this reasoning obliterates the very concepts of quantum superposition and quantum measurement and is unphysical on many grounds , one related to the well understood quantum state reconstruction .suppose we are provided with a number of qubits all prepared in the state , which however is not disclosed to us .suppose then that we perform a measurement of each qubit either in the basis b1= or in the basis b2= .measuring in b1 , the measurement outcomes will be 0 or 1 and we should conclude that the preparation state was or . measuring in b2 , the outcome will always be 0 , and we should conclude that the prepared state was . repeating this many times we should conclude that we are provided with qubits in all three states , and . in other words , based on the dictum of identifying the pre - measurement state with the post - measurement state , we would never be able to use the statistical - interpretation of quantum measurements and reconstruct the original state preparation .we will again consider an rp ensemble initially prepared in the state , not undergoing any s - t mixing ( ) and having .haberkorn s theory predicts that the rp state remains pure at all times , with the ensemble consisting at time of radical - pairs in the state |_t=1(e^-k_st2|s+|t_0)[psi1 ] at it is , which is not entangled .however , it is seen from that _ without any coherent operation _ on the radical - pairs , they spontaneously acquire a non - zero entanglement while _ coherently _ evolving to the maximally entangled and non - reactive triplet state .this is unphysical .the caveat , as will be explained in detail in section 6.2 , is that an irreversible reaction causing population leakage is wrongly understood to coherently operate on the amplitudes of the rp quantum state .in contrast , in our approach an rp in the state is ( i ) either projected to , or ( ii ) projected to , or ( iii ) recombines to a singlet neutral product , or ( iv ) stays put at .the probabilities of ( i ) and ( ii ) are the same ( see section 3.8.1 ) , so together they represent a balanced mixture of maximally entangled states , the mixture carrying zero entanglement . 
event ( iii ) produces net entanglement , but it does so by an incoherent operation , related to measurement - induced entanglement , to be addressed in more detail elsewhere .in section 3.5 we discussed the interpretation of in terms of single - molecule quantum trajectories involving three possible events , ( i ) singlet projection with probability , ( ii ) triplet projection with probability , and ( iii ) hamiltonian evolution with probability .however , the full quantum dynamics include both s - t dephasing and recombination .to account for the latter , we have to augment the quantum trajectories , which are now formed by 5 possible events taking place within : ( e1 ) singlet projection with probability , ( e2 ) triplet projection with probability , ( e3 ) singlet recombination with probability , ( e4 ) triplet recombination with probability , and ( e5 ) hamiltonian evolution with probability .does the average of many quantum trajectories reproduce our master equation ? moreover , which are the quantum trajectories in haberkorn s approach , anddo they reproduce haberkorn s master equation ? the concept of quantum trajectories , well known in quantum optics , was only recently introduced in spin chemistry . in any case ,in we presented a significant constraint to be met when one attempts a quantum - trajectory analysis of haberkorn s master equation .the intuitive understanding in spin chemistry was that rps evolve _ unitarily _ under the action of until the instant they recombine into neutral reaction products .we have shown in that this physical picture _ must _ be advocated if haberkorn s theory is to be consistent in the special case .first of all , what is so special about this case ?the rates and are rp - specific parameters entering the master equation , which obviously must be valid for all radical - pairs , those for which and those for which . in fact the latter are abundant in photosynthetic reaction centers , as will be presented in the following section . moreover , quantum dynamics are non - trivial in this case of asymmetric recombination rates .in contrast , rp quantum dynamics simplify a lot when , where rp population decays exponentially at a rate , _ without the decay affecting the state of the surviving rps_. haberkorn s master equation now becomes : d / dt =- i[h,]-k[drdtk ] defining a new density matrix by , it is ] , i.e. coherences between an exploded and un - exploded bomb .we can now make a point regarding our approach .diagrams ( e ) and ( f ) of fig .[ rpcoh ] have different final states , hence do not interfere , and this is why to calculate in our master equation we add , represented by fig .[ rpcoh]e , with , represented by fig .[ rpcoh]f . .( c ) within 2-order perturbation theory the coherent reactant - product coupling interferes to zero .( d ) leading ( 4 ) order coherent reactant - product coupling .( e ) reactant dephasing , resulting from a virtual transition to the intermediate states and back .( f ) product formation , resulting from a real transition from the reactant to the intermediate state and the decay of the latter to the product state . essentially , ( f ) consists of two separate diagrams materializing consecutively , i.e. 
is here a real and not a virtual state like in ( c - e).,width=8 ] briegel and co - workers considered the description of rp quantum dynamics from the perspective of quantum maps in and more comprehensively in , where the authors attempt a generalized formulation of the problem , at the cost though , of not offering a clear - cut answer to the question an experimentalist , for example one detecting solid - state nmr signals from photosynthetic reaction centers , might rightfully ask : `` is haberkorn s approach adequate to account for the data and if not , which is the master equation one should use ? '' . the answers given in are `` maybe '' and `` it depends '' .we reply `` no '' and eq . .in the authors invent an abstract environment , without specifying where this environment comes from and what are the microscopic physics coupling the rp s spin degrees of freedom to this environment .the authors then go on to provide several master equations given in terms of decay and dephasing amplitudes , which however , the `` user '' is supposed to specify .for example , given the typical scenario of a solid - state cidnp experiment , i.e. an immobilized rp characterized by the two rates and , and in the absence of any technical decoherence , it is not clear in what is the master equation one ought to use .we stress the physical scenario of immobilized radical - pairs , because the authors in elevate molecular re - encounters to a fundamental physical effect , something that we think is more perplexing than illuminating .this is because the recombination rates and are in principle known functions of the distance between the two radicals .so if one has solved the immobilized rp case , one can readily generalize to a liquid state scenario , where the two radicals diffuse and re - encounter , and where and would become functions of time through their dependence on inter - radical separation . to summarize , the considerations of should become more precise in order that ( i ) it is discernible if and where they differ from other approaches , ( ii ) they are amenable to criticism , and ( iii ) they are useful to experimentalists .the nonlinear nature of our master equation is a subtle point requiring a thorougher examination than we can afford here .although we sympathize with the unease of many quantum physicists with nonlinear master equations , we understand the nonlinearity of to be a necessary evil forced upon us by the physics of the problem .we will elucidate this point by a few thought experiments . in all of themwe consider no s - t mixing ( ) and an unreactive triplet state ( ) .suppose that we prepare two rp ensembles , one in the maximally coherent state , or equivalently , and the other in the maximally incoherent mixture of singlet and triplet states , .in section 3.9.1 we presented the counterintuitive result that in the first ensemble we end up with 25% of the original rps locked in the non - reactive triplet state , while in the second we will end up with 50% of the original rps locked in the non - reactive triplet state .both ensembles have the same , but different .hence the reaction depends on . 
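The statement that the two preparations share the same singlet probability and differ only in their s-t coherence is easy to verify directly. The two-level sketch below (spanned by |S> and |T0> only, as in the idealized scenarios mentioned earlier) is purely illustrative.

```python
import numpy as np

# two-level description spanned by |S> and |T0> (basis order: S, T0)
QS = np.array([[1, 0], [0, 0]], dtype=complex)

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_coherent = np.outer(psi, psi.conj())                   # (|S> + |T0>)/sqrt(2)
rho_mixture = 0.5 * np.diag([1.0, 1.0]).astype(complex)    # 50/50 incoherent S/T0 mix

for name, rho in [("maximally coherent", rho_coherent),
                  ("incoherent mixture", rho_mixture)]:
    pS = np.trace(QS @ rho).real          # singlet probability <Q_S>
    coh = abs(rho[0, 1])                  # magnitude of the S-T0 coherence
    print(f"{name:20s}  <Q_S> = {pS:.2f}   |rho_ST0| = {coh:.2f}")
# both preparations give <Q_S> = 0.5, but only the first carries S-T coherence,
# which is why the reaction outcome can differ between them.
```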
that this dependence is nonlinear is seen by considering two different ensembles that will occupy the rest of the discussion .consider the ensemble e prepared in the state and e in the state .the two ensembles just differ in the sign of .the end result is the same for both , that is , at we are left with 25% of the original rps locked in the non - reactive triplet state .it is thus clear that the reaction _depends on the `` absolute value '' of s - t coherence , which is a non - linear operation_. now consider mixing the initial contents of e and e and offering the box e to a good experimenter who can quickly , before any recombination takes place , establish that the density matrix of the system is . indeed , in spin chemistry experiments individual rps are not accessible . at the highest laboratory magnetic fields , the typical ( e.g. for an epr transition ) frequency is on the order of 10 hz , the corresponding wavelength and resolvable volume being 1 mm and 1 mm , respectively .this volume contains a macroscopic number of rps .hence any spin - state measurement on the box e ( assumingly fast and non - destructive ) will be a global measurement on all rps inside the box , and it will be consistent with the maximally incoherent density matrix . what will be the final population of non - reactive triplet rps that will remain in the box ?if we know the specific preparation , we conclude it will be 25% , like it was for e and e individually .if we do nt , like the experimenter who is offered the box e , we conclude it will be 50% , as expected from .not quite . the issue at hand has to do with _ proper _ versus _ improper _ mixtures .the case in which the experimenter is unaware of the specific state preparation is referred to as a _ proper _ mixture .there is maximum s - t coherence at the single molecule level both in e and in e , but due to the mixing the whole box e appears maximally incoherent .in contrast , if there is a genuine decoherence process , like s - t dephasing , leading to the same mixed state , one talks of an _ improper _ mixture . in this casethere is no coherence at the single molecule level , and the box e indeed contains a mixture of singlet and triplet rps .there is an illuminating analogy with young s double slit experiment .suppose that a light source emits perfectly coherent light and we register photons on the observation screen for a time .an interference pattern will be formed .suppose then that we keep registering photons for a consecutive time interval , this time putting a phase shifter in front one of the two slits . if we were to observe the two runs individually , we would see two perfect interference patterns shifted with respect to each other . on the other hand , if we wait to register all photons from the two consecutive runs , and then look at the observation screen , there will be no interference pattern . 
if one is presented with the final picture, one will conclude that the light source was incoherent .this is the analogy of the proper mixture discussed before .in contrast , if one performs this experiment , again with perfectly coherent light , and installs a measurement apparatus after the slits acquiring complete which - path information , the interference pattern will genuinely disappear .this is the analog of the improper mixture discussed above .going back to the bewildered experimenter who is provided the mixture , while all his global measurements will lead him to expect 50% unreacted triplet rps , he will be surprised to finally measure 25% .to our understanding , so far proper and improper mixtures were more of an epistemological curiosity rather than of practical utility .that is , we are unaware of any quantum science experiments able to differentiate proper from improper mixtures .we will close this review with a conjecture , the proof of which we leave as a problem to be addressed in the more abstract context of the foundations of quantum physics . _ leaky quantum systems , in which the leakage depends non - linearly on the quantum - state , can differentiate between proper and improper mixtures_.we hope to have convinced the reader that radical - pair reactions indeed constitute an ideal paradigm for quantum biology , since they require for their understanding the whole conceptual toolset of quantum information science , even touching upon the foundations of quantum physics .the complex quantum dynamics of radical - pair reactions emerge from the interplay of coherent spin dynamics and incoherent electron transfer reactions , rendering this system both open and leaky . to our knowledge , this is rather unusual in quantum science , and conceptually quite challenging , explaining _ in part _ the current lack of consensus in the scientific community on the quantum foundations of rpm .it is true that the last several years have witnessed several exciting discoveries making the case of quantum biology .we feel that the synthesis of the two seemingly disjoint fields that dominated 20 century science , `` quantum '' and `` bio '' , will lead to breakthroughs beyond our current ability to forecast .hence we will be more than content if this review inspires many other researchers to join this tremendously exciting scientific endeavor .we acknowledge support from the european union s seventh framework program fp7-regpot-2012 - 2013 - 1 under grant agreement 316165 .
|
the radical - pair mechanism was introduced in the 1960s to explain anomalously large epr and nmr signals observed in chemical reactions of organic molecules . it has since evolved into the cornerstone of spin chemistry , the study of the effects electron and nuclear spins have on chemical reactions , with the avian magnetic compass mechanism and photosynthetic reaction center dynamics being prominent biophysical manifestations of such effects . in recent years the radical - pair mechanism was shown to be an ideal biological system where the conceptual tools of quantum information science can be fruitfully applied . we here review recent work making the case that the radical - pair mechanism is indeed a major driving force of the emerging field of quantum biology .
|
let me first begin to say that , given the high density of the scientific program , i had to make a drastic selection and i apologize to the speakers , not or badly , mentioned in this summary .this is partly due to the lack of time and partly to my unability to `` digest '' quickly enough , all this new information .i will not touch technical talks because it is not my field .fortunately , missing material can be found in these proceedings , collecting all the write - ups of the presentations .+ from what we heard , it is amazing to realize that spin has some relevance all over the places , in a vast energy range from 100 mev up to several tev and in very many different collision processes , namely , , , , , etc ... it is involved in numerous experiment facilities like , for example , rarf , clas , hermes , hera , compass , belle , rhic , etc .... one notices also that significant advances have been achieved recently in polarized beams and targets , allowing to reach higher precision in the new measurements .new projects are under way , which i will just mention : in fair at gsi , the panda detector has a broad physics program to study qcd with antiprotons , at protvino , u70 is preparing a new polarization program , as well as here in dubna with the nuclotron - m .on the theory side , the terminology used is also very rich since one has currently to decode the following sets of initials , pdf , gpd , tmd , dvcs , dis , sidis , dglap , bfkl , nlo , nnlo , ht , ssa , etc ... + once more , it was clear at this meeting that substantial progress emerge whenever experiment and theory are `` talking to each other '' .i will try to find the right balence between new experimental results and recent theoretical developments , which have most impressed me , but it was a rather difficult exercice .the compass experiment at the cern sps has undertaken a vast experimental program focused on the nucleon spin structure via deep - inelastic scattering ( dis ) of 160 gev polarized muons on polarized nucleons .they have obtained very precise results in two kinematic ranges , q 1 gev and 0.0005 0.02 , as well as 1 100 gev and 0.004 0.7 , for the spin - dependent structure function , by measuring the longitudinal photon - deuteron asymmetry , with a polarized deuteron target .this asymmetry , shown in fig .[ fig:1 ] ( left ) , is compatible with zero over the small range and this indicates a strong cancellation between the polarization of the different sea quarks . for large asymmetry is large and positive , in agreement with earlier data from smc and hermes .they have also discussed the results of a global qcd fit at next - to - leading order ( nlo ) , to the world data on , which , unfortunately , does not lead to a unique determination of the gluon polarization . +another interesting subject is the evaluation of the polarized valence quark distributions .the analysis is based on the asymmetry difference a , for hadrons of opposite charges and it gives direct access to the valence quark helicity distributions , as the fragmentation functions do cancel out .the results , shown in fig . [ fig:1 ] ( right ) , provide information on the contribution of the sea quarks to the nucleon spin .they favour an asymmetric scenario for the sea polarisation , , at a confidence level of two standard deviations , in contrast to the usual symmetric assumption , .however , the statistical errors are still large and do not allow yet a definite conclusion . 
+ on the left the asymmetry for quasi - real photons ( gev ) , as a function of . on the rightthe integral of over the range , as the function of minimum , evaluated at gev ( taken from santos s talk).,title="fig:",width=321 ] on the left the asymmetry for quasi - real photons ( gev ) , as a function of . on the rightthe integral of over the range , as the function of minimum , evaluated at gev ( taken from santos s talk).,title="fig:",width=264 ] the last relevant topic is the gluon polarization , which is essential to clarify the spin structure of the nucleon . since it is impossible to rely on an extraction based on the qcd evolution of the polarized structure functions , compass has chosen to get a direct determination of this quantity , from the measurement of double spin asymmetries in the scattering of polarized muons off a polarized deuteron target .three different channels sensitive to the gluon distribution are being explored : open charm production and high transverse momentum ( high- ) production , in either the quasi - real ( virtuality gev ) photoproduction or the dis ( gev ) regimes .the first method was described by y. bedfer and a preliminary analysis , bearing 2002 - 2004 data , gives : at and . in his presentation k. klimaszewskidiscussed the high- events and reported that the analysis of combined data from years 2002 - 2004 leads to a more precise preliminary result : .the results of compass and from other experiments are shown on fig .[ fig:2 ] and they definitely favor a low value of .comparison of the measurements from various experiments ( taken from bedfer s talk).,width=302 ]the hermes experiment at desy has obtained new results in different area , which were introduced in the talk of s. belostotski . from the analysis of high- hadron production, they got the following estimate , with a theoretical uncertainty of . 
polarized inclusive dis is also used to determine , the quark contribution to the nucleon spin , and under some reasonable assumptions , they reported .flavor separation for the quark helicity distributions has been achieved from semi - inclusive dis data and , in particular , one gets , by means of production , which is a preliminary result .azimuthal asymmetries were measured in the semi - inclusive production of pions and kaons and hermes has collected data with a transversely polarized hydrogen target from 2002 to 2005 .the polarized part of the semi - inclusive cross section , for unpolarized beam ( u ) and a transversely polarized target ( t ) , has contributions from both the collins and sivers mechanisms .these asymmetries provide information on the quark collins and sivers distribution functions .these mechanisms produce a different dependence of the azimuthal asymmetry on the two angles and , so one can use the variation of and to disentangle the two contributions experimentally .the extracted collins and sivers amplitudes for charged pions and kaons , are presented in fig .[ fig:3 ] , as a function of , , and .the average collins amplitude is positive for and negative for .this is expected if the transversity distribution is positive and is negative , like for the helicity distributions .however , the magnitude of the amplitude appears to be as large as the amplitude , which was unexpected .the average sivers amplitude are significantly positive for and and consistent with zero for and .note that the sivers amplitude for is , by a factor , higher in magnitude than the amplitude for .collins ( left ) and sivers ( right ) amplitudes for charged pions and kaons , ( as labelled ) as a function of , , and ( taken from korotkov s talk).,title="fig:",width=245 ] collins ( left ) and sivers ( right ) amplitudes for charged pions and kaons , ( as labelled ) as a function of , , and ( taken from korotkov s talk).,title="fig:",width=245 ] transverse polarization and as function of for the region ( left ) and ( right ) ( taken from veretennikov s talk).,title="fig:",width=245 ] transverse polarization and as function of for the region ( left ) and ( right ) ( taken from veretennikov s talk).,title="fig:",width=245 ] transverse and polarization and spin transfer from longitudinally polarized target have been measured in the hermes experiment . the kinematic variables are and , where is the transverse momentum with respect to the ( lepton ) beam , and are the energy and z - component of the momentum ( the z - axis is along the lepton beam direction ) and , are the energy and momentum of the positron beam . in fig .[ fig:4 ] , the transverse and polarizations are shown versus for two kinematical domains and . the polarization rises linearly with with higher slope for and the polarization is consistent with zero .we had a very instructive talk by m. grosse perdekamp on the analyses of hadronic events in annihilation at kek by the belle collaboration .he presented the data on the azimuthal asymmetries between two hadrons produced in the fragmentation of a quark - antiquark pair , .the analyses demonstrated that the results on the collins fragmentation functions from hermes and belle experiments are perfectly compatible . using these collins functions the first extraction of the transversity distributions and achieved .the rhic spin program at bnl , underway since 2001 , has been presented by g. bunce .it consists of colliding polarized protons to study the spin structure of the proton . 
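Returning to the Collins-Sivers separation described earlier in this section: because the two mechanisms enter with sin(phi+phi_S) and sin(phi-phi_S) modulations respectively, both amplitudes can be extracted from the same data set by fitting the two harmonics. The toy fit below uses entirely invented amplitudes and statistics and is meant only to illustrate the disentangling, not to reproduce the HERMES analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# invented "true" amplitudes, for illustration only
A_collins, A_sivers = 0.02, 0.05

n = 20000
phi = rng.uniform(0, 2 * np.pi, n)      # hadron azimuthal angle
phi_s = rng.uniform(0, 2 * np.pi, n)    # target spin azimuthal angle

# toy transverse-target asymmetry with Collins (sin(phi+phi_s)) and
# Sivers (sin(phi-phi_s)) modulations plus statistical noise
asym = (A_collins * np.sin(phi + phi_s)
        + A_sivers * np.sin(phi - phi_s)
        + rng.normal(0, 0.2, n))

# linear least-squares fit in the two modulations
X = np.column_stack([np.sin(phi + phi_s), np.sin(phi - phi_s)])
coef, *_ = np.linalg.lstsq(X, asym, rcond=None)
print(f"fitted Collins amplitude: {coef[0]:+.4f}  (true {A_collins:+.4f})")
print(f"fitted Sivers  amplitude: {coef[1]:+.4f}  (true {A_sivers:+.4f})")
```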
for 2006they have achieved high luminosity collisions at =200 gev , with 55 to 60% polarization and performed sensitive measurements on the gluon polarization .lower production of or jets is dominated by the gluon - gluon graph , and the double helicity asymmetry at mid - rapidity is essentially quadratic in the gluon polarization . at higher , the quark - gluon graph dominates , and is linear in the gluon polarization .the data for for jet production , obtained by the star collaboration , was presented by j. dunlop .it is displayed in fig .[ fig:5 ] and indicates little or no gluon polarization in the measured region , which corresponds to a gluon momentum fraction of from about 0.02 to 0.3 .[ fig:5 ] shows also left : double helicity asymmetry for inclusive jet production at = 200 gev versus of the jet from the star experiment ( taken from dunlop s talk ) .right : versus for inclusive production of charged pions , at =62 gev , preliminary data from 2006 , from the brahms experiment ( taken from bunce s talk).,title="fig:",width=298 ] left : double helicity asymmetry for inclusive jet production at = 200 gev versus of the jet from the star experiment ( taken from dunlop s talk ) .right : versus for inclusive production of charged pions , at =62 gev , preliminary data from 2006 , from the brahms experiment ( taken from bunce s talk).,title="fig:",width=298 ] preliminary results from the brahms experiment for charged pion transverse spin asymmetries , at =62 gev .the asymmetries at 62 gev are very large , and significantly larger than the asymmetries at 200 gev . at this energy, star has also measured a remarkable asymmetry for production , which increases with , for positive and is consistent with zero for negative . a very exciting direction for the transverse spin program is connecting semi - inclusive dis and rhic results .g. bunce recalled that the final state interaction needed to generate the asymmetry of dis and the corresponding initial state interaction of drell - yan , have different color interactions , giving in general an attractive force for dis and a negative force for drell - yan , resulting in opposite sign transverse spin asymmetries .this unique prediction of gauge theory must be checked and this will be done at rhic .clas data for in several bins of for the proton ( left ) and deuteron , per nucleon ( right ) ( taken from dodge s talk).,width=170 ] the clas collaboration at jefferson lab is pursuing a wide program of measurements with polarized electrons incident on polarized proton and deuteron targets , which was partially covered in the talk of g. dodge .it involves inclusive , semi - inclusive and exclusive inelastic scattering over a wide kinematical range in momentum transfer .the data are consistent with the expectation that the asymmetry should approach 1 as and they find that remains negative up to , consistent with results from hall a using a target .they also studied the onset of quark - hadron duality in spin structure functions .quark - hadron duality refers to the observation that the unpolarized structure function , in the resonance region , averages to the smooth scaling curve for at high . in fig .[ fig:6 ] one displays for the proton and deuteron as a function of for various bins .the high scaling " curve is shown by the hatched area and indicates the range of given by pdf fits . 
at low can see that the data are negative in the region of the (1232 ) resonance , as expected for a spin 3/2 excitation .however , as increases and the loses strength , the resonances do indeed appear to oscillate about the scaling curve .generalized parton distributions ( gpd ) , introduced 10 years ago , is a powerful tool which offers a way to unify two pictures of the nucleon , disconnected so far , on the one hand the pdf s , obtained from dis , and on the other hand the nucleon form factors , obtained from elastic scattering .the gpd s provide a three - dimensional picture of the nucleon and therefore a more detailed information on its partonic structure , designated `` nucleon tomography '' by n. dhose .one hopes to gain some insight on the localization of partons inside the nucleon and to access to their orbital angular momentum , as first suggested by x. d. ji .gpd s can be extracted experimentally through the measurement of hard exclusive reactions , the cleanest one is the deeply virtual compton scattering ( dvcs)(or meson production ) , shown on the left of fig .[ fig:7 ] . in the reaction , the bethe - heitler ( bh ) process on the right of fig .[ fig:7 ] , dominates over dvcs in most of the kinematic region . however , measurable asymmetries in beam spin and beam charge arise from the interference of both processes .the beam spin asymmetry is proportional to the imaginary part of the dvcs amplitude , while the beam charge asymmetry is proportional to the real part of the dvcs amplitude and both asymmetries can be expressed in terms of gpd s .several models are emerging and predictions made from lattice qcd for the first moments of the nucleon gpd s confirm that the transverse size of the nucleon depends significantly on the momentum fraction .the kinematical domain accessible in compass and its availability of positive and negative polarized muons gives it a major opportunity to measure the different configurations of charge and spin of the beam , as explained by n. dhose .finally , let us mention the results presented by a. borissov on exclusive diffractive production of light vector mesons ( and ) on hydrogen and deuterium targets , measured by hermes in the kinematic region gev and gev .data for the and dependences of longitudinal cross sections and spin density matrix elements are in fair agreement with gpd calculations based on the ` handbag factorization ' .this model was presented by s. goloskokov and it seems to work well up to hera energies .several talks were devoted to single spin asymmetries and their connection to the sivers and collins effects , which generate the most sizeable single spin asymmetries ( ssa ) in semi - inclusive deep - inelastic scattering ( sidis ) with transverse target polarization , as already mentioned above . in his talk a. efremovgave our present understanding of these phenomena .within some uncertainties it was found that the sidis data from hermes and compass , on the sivers and collins ssa from different targets , are in agreement with each other and with belle data on azimuthal correlations in -annihilations . at the present stage of the art ,large- predictions for the flavour dependence of the sivers function are compatible with data , and provide useful constraints .the global analysis of hermes , compass and belle data reported by a. 
prokudin is leading to the extraction of favoured and unfavoured collins fragmentation functions and the unknown transversity distributions for and quarks , and .they turn out to be opposite in sign , with smaller than , and both are smaller than their corresponding soffer bound .this is just a first step for extracting transversities , as noticed by m. wakamatsu , who carried out a comparative analysis of the transversities and the longitudinally polarized pdf s .he concluded that a complete understanding of the spin dependent fragmentation mechanism is mandatory for getting more definite knowledge of the transversities and that some independent determination of transversities is highly desirable , for example , through double transverse spin asymmetry in drell - yan processes .o. teryaev recalled that twist - three quark - gluon correlators were proposed long ago to explain non - zero ssa and he presented some arguments to establish a relation between the sivers function and these twist - three matrix elements . as a result , the sivers mechanism may be applied at large momentum transfer .it is also possible to find some connection between sivers function and gpd . in his talk ,a. sidorov studied the impact of the clas and latest compass data on the polarized parton densities and higher twist ( ht ) contributions .it was demonstrated that the inclusion of the low clas data in the nlo qcd analysis of the world dis data improves essentially our knowledge of ht corrections to and does not affect the central values of pdf s .however the large compass data influence mainly the strange quark and gluon polarizations , but practically do not change the ht corrections .the uncertainties in the determination of polarized parton densities is significantly reduced due to both data sets and he concluded that it is impossible to describe the very precise clas data , if the ht corrections are not taken into account .b. ermolaev presented a description of spin structure function at arbitrary and .it is known that the extrapolation of dglap to the very small- involves necessarily the singular fits for the initial parton densities without any theoretical basis . on the contrary , according to b. ermolaev , the resummation of the leading logarithms of is the straightforward and most natural way to describe at small . combining this resummation with the dglap results leads to the expressions for which can be used at large and arbitrary , leaving the initial parton densities non - singular .the talk presented by x. artru contains two parts . in the first one, he recalls that positivity restrains the allowed domains for pairs or triples of spin observables in polarised reactions , some of which having non - trivial shapes .various domain shapes in reactions of the type are displayed and some methods to determine these domains are mentioned . the second part deals with classical and quantum constraints in spin physics , from both discrete symmetries and positivity . finally , a. a. pankov considered the international linear collider ( ilc ) to study four - fermion contact interactions in fermion pair production process and he stressed the role played by the initial state polarization , to increase the potentiality of this future machine to discover new phenomena .+ * acknowledgements * + i would like to thank konrad klimaszewski for a serious technical help to prepare this talk .i am grateful to the organizers of dspin07 , for their invitation to this conference dedicated to l. i. 
lapidus , i had the great privilege to meet several times .my special thanks go also to prof .efremov for providing a full financial support and for making , once more , this meeting so successful .
|
during the five days of this workshop we had forty - five hours of lectures , so a tremendous amount of new information was delivered . i will only be able to highlight some aspects of the numerous interesting topics that were discussed , leaving out many of them . * summary of the workshop dubna - spin 07 jacques soffer *
|
physicists have long been fascinated by the notion of information . since the time of maxwells demon , a number of papers have appeared exploring the energy cost of information as well as the connection to computation e.g. .a few papers have dealt with the role of the observer in thermodynamic systems and the information acquired by the observer e.g. .however none have considered the physics of information acquisition in a truly biological context . in this paper ,i show that a physics - based measure of information is not only relevant for the study of biological systems , but it allows for the derivation of equations characterizing the sensory transduction process .these equations can be used to interpret and to compare with neurophysiological data .this work follows from an original approach that has appeared outside of physics and will be described next .a series of papers have explored the mathematical theory of sensation based on an entropy approach . from this theory ,over 150 years of sensory science can be unified by a entropy measure of uncertainty and a few auxiliary assumptions .this work was later extended to neurophysiology . despite the use of entropy, the exact connection of this approach to physics has not been thoroughly explored .central to the theory is the association of the entropy of a sensory distribution as measured by to a physiological variable measuring sensory magnitude . in the case of neurophysiology, is the firing rate or spike response of a neuron .the relationship between and is given by where is a constant with units of spikes per second .that is , uncertainty is equated to sensory magnitude .the entropy approach has been shown to unify a wide range of disparate sensory phenomena including discrimination and reaction time .the similarity of this approach to statistical thermodynamics is striking and invariably raises the question of whether there is an analogous `` second law '' governing sensory processes . in physics ,there has been considerable interest in characterizing and understanding the thermodynamic arrow of time .parallel to this , scientists have also identified other arrows of time , including a psychological arrow which applies to mental processes ( e.g. ) .this paper is concerned with the psychological / perceptual arrow of time but one that is extended to the neurophysiological level .our discussion begins first however with the thermodynamic arrow of time .shalizi explores the connections between statistical inference , information and thermodynamic entropy to highlight difficulties with the bayesian approach to statistical mechanics .his argument is succinct and germane to this paper and will be repeated here .shalizi considers an ideal observer carrying out statistical inference on a dynamical system with random variable and associated density .he makes three assumptions : ( 1 ) time - reversible dynamics ; ( 2 ) bayesian updating of probabilities ; ( 3 ) equality of information and thermodynamic entropy , ( is boltzmann s constant ) .consider a system with initial distribution . at the next timeinstance ( ) , the system has evolved to where is the time - evolution operator ( e.g. frobenius - perron operator ) . first , by liouville s theorem , reversible dynamics are entropy preserving .hence where the notation denotes the entropy of the distribution .a measurement is performed at yielding . 
by bayes rulewe get where is the likelihood and the posterior distribution given .thus the entropy of the _ posterior distribution _ is given by where the inequality follows from a fundamental theorem of information theory ( e.g. ) .that is , the average entropy decreases monotonically with each measurement .if is decreasing and , this demonstrates a reverse arrow of time in contradiction to the second law .since the derivation is made under very general conditions , shalizi considers the possibility that one ( or more ) of the assumptions are incorrect .assumption one ( of reversible dynamics ) is the least controversial and is the basis of much of modern statistical mechanics .assumption two ( bayesian updating ) provides a logical and systematic way of updating the measurement process .the most likely issue is with the identification of information with thermodynamic entropy .indeed , i argue that while information as used above leads to contradictions in the thermodynamic domain , it provides instead an appropriate description of another domain that of _ sensory information processing_. by making a subtle change from to , shazili s derivation along with three additional assumptions permits the derivation of a set of equations that governs the acquisition of information at the sensory level .thermodynamic entropy is replaced by a new concept which i term _ sensorial or sensory entropy_. from this , a mathematical demonstration of a sensory or perceptual arrow of time follows naturally .this paper is restricted to the problem of _ intensity coding_. however , the methodology is general so that it can be applied to any other type of biological information acquisition as well .intensity coding is the process by which neurons encode information about the sensory stimulus strength . increasing magnitudes of stimuli typically induce higher rates of response ( in terms of action potentials per unit time ) .also , the response of a neuron to a steady signal drops monotonically over time , a process known as adaptation .intensity and neural coding have long been topics of active research interest .a number of studies have probed the general principles underlying sensory and neural response to both simple and complex stimuli e.g. . by contrastthe theory presented in this paper deals only with simple stimuli and works in a more restricted domain .the aim of the approach is not necessarily to make comprehensive neural predictions , but rather to explore the generic process of sensation .the approach also seeks to identify a deep connection between sensory information and statistical physics .the original version of the theory first appeared over 30 years ago and was later extended to the neurophysiological level ( e.g. ) . 
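The step on which the argument above rests, that Bayesian updating can only lower the entropy on average, is easy to verify numerically. The small discrete example below (prior and likelihood invented for illustration) computes the prior entropy and the outcome-averaged posterior entropy.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# made-up discrete example: 5 hypotheses, 3 possible measurement outcomes
prior = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
likelihood = np.array([[0.7, 0.2, 0.1],     # p(y | x) for each hypothesis x
                       [0.4, 0.4, 0.2],
                       [0.2, 0.5, 0.3],
                       [0.1, 0.3, 0.6],
                       [0.1, 0.1, 0.8]])

p_y = prior @ likelihood                    # marginal probability of each outcome
avg_posterior_entropy = 0.0
for y in range(likelihood.shape[1]):
    posterior = prior * likelihood[:, y] / p_y[y]   # Bayes rule
    avg_posterior_entropy += p_y[y] * entropy(posterior)

print(f"prior entropy          : {entropy(prior):.4f} nats")
print(f"mean posterior entropy : {avg_posterior_entropy:.4f} nats")
# the outcome-averaged posterior entropy is never larger than the prior entropy;
# each measurement, on average, reduces the observer's uncertainty.
```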
in both casesthe theory was introduced without a comprehensive tie - in to physics , with assumptions that were less than satisfying .one goal of this paper is to re - develop the theory in the language of physics and to incorporate a number of recent discoveries in statistics and complexity theory .indeed many of the steps crucial to moving this approach forward appeared only after the original works were published .consider an ideal observer ( sensory receptor ) sampling the fluctuating sensory signal magnitude ( microstate ) to estimate the mean intensity ( macrostate ) .a series of measurements ( samples ) are recorded from the signal and stored locally .the entropy is calculated through the uncertainty in the mean .the response is then derived from the entropy , and equated to the firing rate or spike rate response of the associated neuron ( i.e. the primary afferent neuron ) . as increasing numbers of measurementsare taken , the uncertainty or entropy decreases corresponding to a drop in neuronal spike frequency .this , in essence , is the process described by the following derivation .since the problem considered here is different from shalizi s original problem , it is prudent to define what constitutes the _ system_. sensory transduction is the process whereby sensory stimuli are converted to a neural response . as such there are at least two components to the system : one that defines the sensory signal prior to its contact with the receptive field ( _ the physical system _ ) and the other which consists of the receptor , the associated neuron as well as any accessory structures ( _ the sensory system _ ) .equilibrium in one domain does not imply equilibrium in the other .for example , a taste solution placed in the mouth can be in thermodynamic equilibrium such that its thermal and chemical properties do not change with time .however , the gustatory system requires time to adapt to this solution and therefore the neural response to the stimulus may not in fact be in equilibrium .we begin with a modification and adaptation of shalizi s assumptions to the sensory problem .let be the random variable representing the stimulus strength ( i.e. microstate ) of the sensory signal and its associated density .we make the stronger assumption that the physical system is in equilibrium so that the distribution over microstates is time - invariant .finally , is replaced with . while the same function is used in both cases , it is pertinent to remember that thermodynamic is calculated over a distribution in phase space whereas sensorial is calculated over a distribution of stimulus magnitudes . to summarize : * the physical system is in equilibrium . *the observer ( receptor ) updates probabilities in accordance with bayes rule .* spike response of the neuron equals the entropy .that is , from here , we require only three more assumptions . *the observer ( receptor ) operates with finite resolution .resolution error is normally distributed with zero mean and constant variance independent of the signal .* variance of is related to its mean through a power law ( i.e. _ the fluctuation scaling law _ ) . that is with constant . *the asymptotic , equilibrium spike response exhibits constant _ index of dispersion_. 
that is , this ratio is independent of signal mean .the significance of these assumptions will be discussed later .this is all that is needed to derive a full equation of five parameters that is capable of describing the response of a sensory neuron to simple time - varying inputs below physiological saturation levels .the equation can also be compared to experimental data ._ it is important to realize that no detailed assumptions about the underlying physiological mechanism are required in the derivation ! _ the handful of assumptions allows one to derive the asymptotic , near equilibrium behaviour of a sensory neuron which will be shown next . while subjectivity and experience may affect the choice of the prior distribution we are interested in the asymptotic properties of the _ posterior distribution _ as the number of measurements grows large .the posterior distribution after successive measurements is given by where is the likelihood and the normalization constant .when the number of samples is large ( ) , the posterior distribution approaches a normal distribution asymptotically .moreover , if the estimate of the mean of is efficient , where is a normal distribution with variance . is where is the maximum likelihood estimate of and the fisher information .if the estimator is efficient , the variance of the estimate achieves the cramr - rao lower bound with . ] those objecting to the bayesian approach can re - derive this exact result using the central limit theorem .since the processing of measurements occurs with finite resolution ( * a4 * ) , entropy is calculated from the mutual information of both the posterior and the error distributions .taking the entropy of their convolution and subtracting the equivocation gives this is just the shannon - hartley theorem for an additive gaussian channel with signal - to - noise ratio . from the fluctuation scaling law ( * a5 * ) , we introduce where is the signal mean and is a constant of proportionality .noting the constancy of , we introduce a new constant to obtain the input mean consists of both external and internal sources .the external source is the sensory signal itself and any other environmental signals .internal sources may include other signals generated internally which elicit a sensory or neural response including thermal noise , self - generated signals ( e.g. otoacoustic emissions in the ear ) , etc .we model the signal mean as a sum of the two components where is the total magnitude of external sources and the sum of internal sources . the sample size increases with the number of measurements : is a function of time and refers to the _ sampling rate_. through memory and storage considerations , it is reasonable to assume that sampling does not occur _ad infinitum_. sampling is thus a function of the difference between and , the equilibrium or steady - state value of .that is , where is an unknown function with the condition ( sampling stops when ) .near equilibrium , we take a taylor expansion around to obtain since and , is a positive time constant .solutions of are used to calculate from eq .( [ interm ] ) with a choice of .there is one final step required to complete the derivation .given a finite sample size , it is more proper to think of in eq .( [ hfunction ] ) as a function of the _ sample variance _ . as such, a distribution for can be calculated from the sample variance distribution . 
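one consistent reading of eq . ( [ hfunction ] ) and of the sampling rule above is sketched below : the posterior variance is taken as a*mu**z/m ( fluctuation scaling , a5 ) , the resolution - error variance is a constant ( a4 ) , and the sample size relaxes exponentially toward its equilibrium value with time constant tau . the roles assigned to the stripped symbols , and all numerical values , are assumptions for illustration only .

```python
import numpy as np

# assumed constants: fluctuation scaling var = a*mu**z (a5),
# constant resolution-error variance sigma_err2 (a4)
a, z, sigma_err2 = 1.0, 1.5, 0.05
mu = 10.0                                  # signal mean (assumed)

def entropy(m):
    # shannon-hartley form of eq. (hfunction): 0.5*ln(1 + s/n), with the
    # posterior variance a*mu**z/m as "signal" and the resolution error as "noise"
    return 0.5 * np.log(1.0 + (a * mu**z / m) / sigma_err2)

# linearised sampling rule dm/dt = (m_eq - m)/tau, written in closed form
m_eq, tau, m0 = 400.0, 2.0, 50.0
for t in np.linspace(0.0, 10.0, 6):
    m_t = m_eq + (m0 - m_eq) * np.exp(-t / tau)
    print(f"t={t:4.1f}  m={m_t:7.1f}  h={entropy(m_t):.4f}")
```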
in the limit of large , and at equilibrium ( ) , it is straightforward to show that the index of dispersion of is given by please see ( see appendix [ sec : a1 ] ) .this ratio is constant with respect to the signal mean ( see * a6 * ) , and thus that is , the equilibrium sample size must grow as a function of the signal magnitude for the dispersion index to remain constant . ) makes little sense since the units on both sides do not match .however it is easy to see that if we were to introduce a multiplicative constant to correct for the imbalance in units , this constant will be incorporated into . ] summarizing , we have these equations have been shown to give a good description of the neural response to most simple time - varying sensory input ( up to physiological saturation levels ) for many sensory modalities and animal species ( e.g. ) .analytical solutions of these equations to a sample problem are provided in appendix [ sec : a2 ] . the predicted neural response to many common experimental conditions can be found in appendix [ sec : a3 ] . in a senseit can be argued that eqs .( [ gut1])-([gut4 ] ) represent _ universal equations _ of five parameters governing complex biological behaviour .why universal ?one reason is that the assumptions used in the derivation are generic , without reference to specific biological mechanisms .these are assumptions that would pertain to just about any type of sensory measurement system .even a cursory glance at the plethora of available experimental data shows that there is in fact a common level of description across the different sensory modalities ( e.g. ) .for example , despite vast differences in anatomy and signal energy ( e.g. the visual system transduces photonic energy , whereas hearing is based on mechanoreception of vibration ) , _ adaptation _ and _ growth in response _ are both observed universally in all sensory modalities .adaptation is the phenomenon whereby the response to an increase in signal level induces a rapid rise in spike response followed by a slow decay back to equilibrium .the growth in response typically refers to the compressive growth of spike frequency to increasing signal magnitudes .compression is an important property of sensory neurons since sensory signals can range over several orders of magnitude ( e.g. for sound pressure the ratio is approximately :1 ) whereas the dynamic range of a peripheral neuron is at most 1000:1 .the significance of compression be discussed later .both phenomena have clear psychological analogue : adaptation is observed in the habituation to environmental signals like noise or smells ; growth in response is simply a reflection that larger , more intense signals result in higher levels of sensation .such behaviour is observed universally in all sensory modalities . during adaptation ,a sensory neuron is initially in equilibrium in a ` quiet ' environment . a sudden introduction of a steady signal ( ) results in an increase in uncertainty ( * a5 * ) .consequently , the target value of the sample size increases according to eq .( [ meq ] ) .samples are drawn and the uncertainty is reduced via bayes updating . 
from eq .( [ hfunction ] ) we observe that entropy decreases ( ) demonstrating that , _ as a general principle , the introduction of uncertainty to the sensory system results in a monotonic reduction of average entropy in accordance with an arrow of time ._ since perception in many cases mirrors the activity at the periphery , we can extend the arrow from a neurophysiological to a perceptual level to obtain a _ perceptual arrow of time_. that stimuli held fixed fade from the sensorium is a well - known phenomenon .the process of fading is attributed here to a gradual reduction of uncertainty through bayesian sampling and updating .implicit in the derivation above is the role of _ memory_. no matter how sampling is performed , uncertainty can only decrease if there is some form of storage of past measurements or samples .since the equilibrium sample size is itself finite , uncertainty can not be reduced to zero .for the neural response , this means that adaptation can only occur up to a minimum value in spike frequency corresponding to the equilibrium value of .incomplete adaptation is observed universally ( e.g. ) .the interpretation provided here is that a non - zero equilibrium response encodes residual uncertainty about the signal magnitude due to finite memory .put together , the measurement process and the memory form the `` engine '' which drives the perceptual arrow of time .the sensory transduction process obeys a form of le chatelier s principle whereby the system shifts to cancel any change introduced into the system .appendix [ sec : a2 ] illustrates one such example where a sensory signal is added and later removed .the accompanying neurophysiological response ( for both experiment and theory ) shows that any perturbations are minimized by the system .the perceptual arrow of time provides bounds on the time to return to sensory equilibrium , much like how the second law provides a time for return to equilibrium for le chatelier - type problems . at the heart of the approachlies a number of critical assumptions .do they in fact reflect the operation of real sensory systems ?for example , it is commonly believed that the sensory coding of intensity involves the coupling of input signal strength to the response of the primary afferent neuron .that is , the firing rate is a function of signal mean. however assumption * a3 * ( firing rate equals uncertainty ) runs counter to this claim .recall that the differential entropy of a normal distribution is a function only of the variance and not the mean . through , the dependence of firing rate on the meanhow can such a result be justified ? first it should be noted that the entropy approach to sensation is _ not _ the only approach to postulate a link between information or variance and sensory processing .a bayesian theory of surprise was put forth recently as a means of determining human visual attention and gaze behaviour , where surprise is calculated from the relative entropy ; a theory of differential coupling was proposed whereby the internal ( neural ) excitation is coupled to the variance of the signal . 
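as a brief aside on assumption a3 , the point that the differential entropy of a normal distribution is blind to the mean can be verified directly ; the snippet below ( all values arbitrary ) computes h = 0.5*ln(2*pi*e*var) analytically and from samples with very different means .

```python
import numpy as np

def normal_entropy(var):
    # differential entropy of a normal distribution: depends on var only
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

print("entropy at var = 4, for any mean:", normal_entropy(4.0))

# plug-in check from samples with the same variance but different means
rng = np.random.default_rng(1)
for mu in (0.0, 1000.0):
    x = rng.normal(mu, 2.0, 100_000)
    print(f"sample mean = {x.mean():8.2f}   plug-in entropy = {normal_entropy(x.var()):.4f}")
```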
in both cases , arguments were made in support of the idea that the sensory system processes _ change _ rather than the mean itself .there are several well - known experiments that illustrate the connection between variance and sensation .the phenomenon of brightness enhancement ( aka the brcke - bartley effect ) shows that the apparent brightness of a flickering light can change depending on the frequency of flicker .time - average luminance remains constant , however flickering contributes to temporal variations in the signal resulting in changes in apparent brightness .other experiments involving the stabilization of an image on the retina show that prolonged exposure to a fixed image leads to the fading of the visual percept . in each of these cases , we see that the sensory response is coupled to changes in the signal rather than to the mean level of stimulation . however , neither of the experiments probe the exact relationship between _ firing rate _ and variance . instead , consider the following proposal for a new experimental test . light exhibits very different statistical behaviour depending on whether it is in the classical or quantum limit .photon bunching is the phenomenon whereby the statistics of the photon count deviates from a poisson distribution where variance equals mean ( e.g. ) . if a photoreceptor is stimulated with such a signal , the resulting neural response can be recorded to test the dependency of firing rate on signal variance with mean kept constant .and yet it is clear that the neural response is linked to the mean signal magnitude .where does this connection arise ? some recent work has shown that many complex systems exhibit a power - law relationship between mean and variance ( * a5 * ) .the fluctuation scaling law was discovered first in ecology through animal population studies and is known also as taylor s law . a compelling explanation for the origin of taylor s lawwas recently proposed . the family of probability distributions known as the tweedie distributions has a mean - variance power relationship .a convergence theorem has been established showing that any exponential dispersion model exhibiting an asymptotic mean - variance power relationship must have at its basis a tweedie model .the theorem therefore suggests a reason for the ubiquity of the power law in complex systems .the tweedie exponential dispersion models can be categorized according to the value of .tweedie models can be found for all real values of _ except _ for .this has important consequences for the growth of the neural function to be demonstrated next .recall that compression is the neural phenomenon whereby a wide input range is reduced to a more compact output range .this typically would involve a power function relationship with exponent less than one .compression can be observed in the equations governing sensory entropy . for the asymptotic , equilibrium neural response, one can easily derive from the various equations this function exhibits compression if and only if . since is positive , and no such tweedie model exists for , this implies that the only possible range of exponents are for .such tweedie models are known as _compound poisson - gamma models_. 
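the mean - variance power law of the compound poisson - gamma ( tweedie ) family can be reproduced with a few lines of simulation . the sketch below uses the standard mean / dispersion parameterisation of a tweedie model with 1 < p < 2 ; the particular values p = 1.5 and phi = 0.8 are arbitrary illustrative choices . fitting log variance against log mean over a range of means recovers the exponent p , which is the fluctuation scaling law of a5 .

```python
import numpy as np

rng = np.random.default_rng(2)

def tweedie_cpg(mu, p, phi, size):
    """compound poisson-gamma (tweedie, 1<p<2) sampler in the standard
    mean/dispersion parameterisation: e[y] = mu, var[y] = phi*mu**p."""
    lam = mu**(2.0 - p) / (phi * (2.0 - p))        # poisson rate
    alpha = (2.0 - p) / (p - 1.0)                  # gamma shape per jump
    theta = phi * (p - 1.0) * mu**(p - 1.0)        # gamma scale
    n = rng.poisson(lam, size)
    # sum of n gamma(alpha, theta) jumps is gamma(n*alpha, theta); n = 0 gives 0
    y = rng.gamma(np.maximum(n, 1) * alpha, theta)
    return np.where(n > 0, y, 0.0)

p_true, phi = 1.5, 0.8
mus = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
means, variances = [], []
for mu in mus:
    y = tweedie_cpg(mu, p_true, phi, 200_000)
    means.append(y.mean())
    variances.append(y.var())

# slope of log(var) vs log(mean) recovers the power-law exponent p
slope, intercept = np.polyfit(np.log(means), np.log(variances), 1)
print("fitted exponent p ~", slope)
```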
a compound poisson - gamma model can be generated via a sum of gamma - distributed random variables , with the number of summed terms itself poisson distributed .a compound poisson - gamma model appears well - suited to describe the mechanism of interaction between the signal and the receptive field of the sensory organ . in the olfactory system , for example , odourant molecules bind with receptor sites on the cilia in the epithelial layer. the number of receptor sites on each cilium may well be poisson distributed .each receptor site is a cluster and cluster sizes are often modelled by gamma distributions .similar arguments have been made in justification for the use of a compound poisson - gamma model in a wide range of complex systems from population dynamics and genomics to rainfall modelling ( e.g. ) .the last assumption ( * a6 * ) concerns the constancy of the index of dispersion .the distribution of the response in the asymptotic , equilibrium limit is a normal distribution with mean and variance derived in appendix [ sec : a1 ] .the normal distribution of can be proved using laplace s method or the saddle - point approximation . despite a continuous distribution being attributed to ,the neural response is fundamentally a discrete variable ( counts per unit interval of time ) .in fact , the firing rate is derived experimentally from the neural count ( number of spikes ) summed over the time interval . at equilibrium , and the index of dispersion over a finite time window is also referred to as the _ fano factor _( e.g. ) .* a6 * is therefore an assumption about the fano factor of the sensory response .a constant fano factor has been reported widely across different sensory modalities and different neuron types ( e.g. ) ; its empirical veracity is generally accepted . the dispersion index or fano factor acts like a signal - to - noise ratio and its constancyis thought to allow for the decoding of intensity information from the spike train ( e.g. ) .the existence of a perceptual arrow of time has been demonstrated through the identification of a new type of entropy called sensory entropy .sensory entropy is a direct extension of traditional , physics - based entropy or information . as such, the study of sensory processing can be viewed as a continuation of the methods of statistical physics .the equating of entropy to neural response can be seen in direct parallel to as proposed by boltzmann and gibbs in early statistical mechanics .this work was supported by a discovery grant from the natural sciences and engineering research council of canada ( nserc ) .the author is grateful for the helpful discussion of the manuscript with professors kenneth norwich , harel shouval and bent jrgenson .35 natexlab#1#1bibnamefont # 1#1bibfnamefont # 1#1citenamefont # 1#1url # 1`#1`urlprefix[2]#2 [ 2][]#2 , __ ( , ) , ed . , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . ,_ _ ( , ) . , * * , ( ) . ,thesis , ( ) . , * * , ( ) ., , , * * , ( ) . , * * , ( ) . , ( ) . ,_ _ ( , ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . , , , * * , ( ) . ,_ _ ( , ) . , __ , vol .( , ) . , , , * * , ( ) . , * * , ( ) . ,_ _ ( , ) . , * * , ( ) . , * * , ( ) . , * * , ( ) . ,_ _ ( , ) . , * * , ( ) . , * * , ( ) . , in _ _ , edited by ( , ) , pp ., , , * * , ( ) . , * * , ( ) . , in _ _ , edited by , , ( , ) , pp . .in the limit of large , we observe from eq .( [ hfunction ] ) that where has been replaced by the sample variance . 
secondly for large , we have by cochrane s theorem that is , irrespective of the original distribution for , the sample variance has an asymptotic chi - squared distribution with degrees of freedom . from this, the mean and variance of can be calculated using the expectation and variance operators acting on eq .( [ approxh ] ) together with .thus , in the limit of large , where the result for follows from the fact that the variance of a chi - squared variable equals . the dispersion ratio is then at equilibrium , and eq .( [ onebeforemeq ] ) is obtained .the equations governing sensory entropy can be solved for different inputs or experimental configurations , much like how schrdinger s equation can be solved for different potentials and systems .the steps are as follows : ( 1 ) parameterize signal magnitude as a function of time ; ( 2 ) calculate from this the equilibrium sample size ; ( 3 ) solve the differential equation for ; ( 4 ) obtain the response from both and .the value of is continuous across boundaries .here we illustrate the example of adaptation and recovery , where a constant signal is turned on and later turned off . given input signal we assume that the neuron is fully equilibrated ( i.e. fully adapted ) prior to .we divide the solution into three distinctive regions : region i ( ) , ii ( ) and iii ( ) .the sample size is given by where and .continuity ensures that and .substitution of and into eqs .( [ gut1])-([gut4 ] ) gives the response of the neuron for all three regions .the equations show a response typical of what is observed during a neural adaptation experiment ( initial rapid rise followed by slow decay of spike frequency ) . at the cessation of input , a recoveryis observed as the neuron returns to equilibrium ( ) .note that even when there is no external input , spontaneous activity is present ( ) .figure [ fig : fig1 ] shows several commonly observed experimental conditions .each condition represents a parameterization of the signal magnitude as a function of time .the response can be calculated from eqs .( [ gut1])-([gut4 ] ) using an analytical approach illustrated in appendix [ sec : a2 ] or with a numerical solution of the differential equation . in either case, a response typical of those shown in figure [ fig : fig1 ] can be obtained over a wide range of parameter values provided that the choice of parameters conform to an appropriate range of values to be discussed next .a total of five parameters are required to generate a response .the first is the scaling parameter which is positive .it has been suggested that satisfies the equation where and are estimated from the peak and steady - state values of the adaptation curve .equation ( [ magic ] ) has been interpreted as the upper bound in information transmission ( channel capacity ) of a sensory neuron although it is currently no more than a rule of thumb . for , there are no restrictions except that it is positive .the contribution of internal sources is also positive although it is generally much smaller in magnitude than the external source , i.e. . 
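the adaptation - and - recovery example of appendix [ sec : a2 ] can also be reproduced numerically without the closed - form solution . the sketch below integrates the first - order relaxation of the sample size and computes an entropy - like response of the shannon - hartley form ; the power - law growth of the equilibrium sample size ( exponent z/2 ) , the folding of the resolution - error variance into the constant a , and every parameter value are assumptions consistent with the prose but not the paper s eqs . ( [ gut1])-([gut4 ] ) . the output shows the onset overshoot , the slow decay to an elevated plateau , and the recovery to the spontaneous level after the signal is removed .

```python
import numpy as np

# all parameter values and functional choices below are assumptions
k_scale = 10.0      # overall response scaling
a, z    = 1.0, 1.5  # fluctuation scaling; resolution-error variance folded into a
i_int   = 0.2       # internal signal component
tau     = 1.5       # relaxation time constant of the sample size
beta_m  = 5.0       # scale of the equilibrium sample size

def m_eq(i_ext):
    # equilibrium sample size assumed to grow as (signal magnitude)**(z/2)
    return beta_m * (i_ext + i_int)**(z / 2.0)

def response(m, i_ext):
    # entropy-like response of shannon-hartley form
    return k_scale * 0.5 * np.log(1.0 + a * (i_ext + i_int)**z / m)

dt, t_end, t_on, t_off, i_on = 0.01, 30.0, 5.0, 20.0, 4.0
ts = np.arange(0.0, t_end, dt)
m = m_eq(0.0)                       # fully adapted in the 'quiet' state
out = np.empty_like(ts)
for j, t in enumerate(ts):
    i_ext = i_on if t_on <= t < t_off else 0.0
    m += dt * (m_eq(i_ext) - m) / tau        # dm/dt = (m_eq - m)/tau
    out[j] = response(m, i_ext)

print("peak response after signal onset  :", out[ts >= t_on].max())
print("adapted plateau just before offset:", out[ts < t_off][-1])
print("recovered spontaneous level       :", out[-1])
```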
the exponent was discussed earlier and lies in the range .finally , the parameter determines the time constant of adaptation and is greater than zero .so long as these restrictions are obeyed , solutions similar to those shown in figure [ fig : fig1 ] can be obtained .such responses are commonly observed in neurophysiological experiments .next we consider a comparison with real data .the challenge is to find an experiment setup that allows for the determination of five unknown parameters .for example , a typical adaptation curve yields no more than three parameters a single data set will not suffice .however , a suitable combination of two or more experiments involving different conditions can allow for a more robust determination of parameter values .the simultaneous fitting of multiple datasets also provides for a much more stringent test of the theory .an example is shown in figure [ fig : fig2 ] showing data obtained from an adaptation experiment ( constant ; duration is varied ) and intensity - rate experiment ( constant ; is varied ) .data was recorded from the auditory fiber of an anesthetized mongolian gerbil . in the adaptation experiment ,the number of spikes counted in a 960 ms interval was converted to a firing rate and observed as a function of time .an averaged firing rate was obtained over 91 trials .figure [ fig : fig2]a ( jagged line ) shows the response to a 39 db spl tone presented at the characteristic frequency of the fiber ( 2.44 khz ) . in the intensity - rate experiment ,the maximal firing rate during a one millisecond interval is recorded as a function of different sound intensities .figure [ fig : fig2]b shows the intensity - rate response curve ( open circles ) . after 40 db , the response saturates . ) -([gut4 ] ) using a common set of parameters for both figures .( a ) firing rate measured as a function of sound duration for a 39 db tone ( jagged line ) .( b ) peak firing rate measured as a function of sound intensity in decibels ( open circles).,scaledwidth=40.0% ] the theoretical expression used to fit the data was obtained from the solution provided in appendix [ sec : a2 ] . from eq .( [ middle ] ) , the response is \label{adapt}\ ] ] since both experiments were conducted on the same auditory fiber , a common set of five parameters was used ( , , , and hz ) .stimulus magnitude was calculated from rms pressure relative to 20 .an additional fitting parameter was required for figure [ fig : fig2]b since the intensity - rate experiment does not conform to the condition of constant stimulus duration .a value of was introduced to capture the approximate `` average '' recording duration .thus a total of six parameters was used to fit the results of two separate experiments yielding approximately three parameters per curve . using the peak and asymptotic values for the firing rate obtained in figure [ fig : fig2]a, we see that the choice of conforms well with eq .( [ magic ] ) .moreover , the exponent lies in the expected range of .the value of also implies that the equilibrium response eq .( [ feq ] ) grows with exponent , a value suggestive of the power function exponent found for psychophysical scaling laws in loudness vs. sound pressure . despite the compatibility of theory with data, it should be noted that the current approach also has a number of limitations . 
since the theory was derived in the large - sample , near - equilibrium limit , it is expected to perform poorly at points where there are rapid changes in the stimulus . the theory also fails to capture several important characteristics of real sensory neurons . for example , it does not predict a saturation in response due to the refractory period of the neuron , nor does it always capture the complete response to a complex signal .
|
a perceptual arrow of time is demonstrated through the derivation of equations governing the acquisition of sensory information at the neural level . only a small number of mathematical assumptions are required , with no knowledge of the detailed underlying neural mechanism . this work constitutes the first attempt at formalizing a biological basis for the physics of information acquisition , continuing from a series of earlier works detailing an entropy approach to sensory processing .
|
restoring forces play a very fundamental role in the study of vibrations of mechanical systems .if a system is moved from its equilibrium position , a restoring force will tend to bring the system back toward equilibrium . for decades ,if not centuries , springs have been used as the most common example of this type of mechanical system , and have been used extensively to study the nature of restoring forces .in fact , the use of springs to demonstrate the hooke s law is an integral part of every elementary physics lab . however , and despite the fact that many papers have been written on this topic , and several experiments designed to verify that the extension of a spring is , in most cases , directly proportional to the force exerted on it , not much has been written about experiments concerning springs connected in series .perhaps one of the most common reasons why little attention has been paid to this topic is the fact that a mathematical description of the physical behaviour of springs in series can be derived easily .most of the textbooks in fundamental physics rarely discuss the topic of springs in series , and they just leave it as an end of the chapter problem for the student .one question that often arises from spring experiments is , `` if a uniform spring is cut into two or three segments , what is the spring constant of each segment ? ''this paper describes a simple experiment to study the combination of springs in series using only _one _ single spring .the goal is to prove experimentally that hooke s law is satisfied not only by each individual spring of the series , but also by the _ combination _ of springs as a whole . to make the experiment effective and easy to perform ,first we avoid cutting a brand new spring into pieces , which is nothing but a waste of resources and equipment misuse ; second , we avoid combining in series several springs with dissimilar characteristics .this actually would not only introduce additional difficulties in the physical analysis of the problem ( different mass densities of the springs ) , but it would also be a source of random error , since the points at which the springs join do not form coils and the segment elongations might not be recorded with accuracy . moreover , contact forces ( friction ) at these points might affect the position readings , as well .instead , we decide just to use one single spring with paint marks placed on the coils that allow us to divide it into different segments , and consider it as a collection of springs connected in series .then the static hooke s exercise is carried out on the spring to observe how each segment elongates under a suspended mass . 
in the experiment ,two different scenarios are examined : the mass - spring system with an ideal massless spring , and the realistic case of a spring whose mass is comparable to the hanging mass .the graphical representation of force against elongation , used to obtain the spring constant of each individual segment , shows , in excellent agreement with the theoretical predictions , that the inverse of the spring constant of the entire spring equals the addition of the reciprocals of the spring constants of each individual segment .furthermore , the experimental results allow us to verify that the ratio of the spring constant of a segment to the spring constant of the entire spring equals the ratio of the total number of coils of the spring to the number of coils of the segment .the experiment discussed in this article has some educational benefits that may make it attractive for a high school or a first - year college laboratory : it is easy to perform by students , makes use of only one spring for the investigation , helps students to develop measuring skills , encourages students to use computational tools to do linear regression and propagation of error analysis , helps to understand how springs work using the relationship between the spring constant and the number of coils , complements the traditional static hooke s law experiment with the study of combinations of springs in series , and explores the contribution of the spring mass to the total elongation of the spring .when a spring is stretched , it resists deformation with a force proportional to the amount of elongation .if the elongation is not too large , this can be expressed by the approximate relation , where is the restoring force , is the spring constant , and is the elongation ( displacement of the end of the spring from its equilibrium position ) . because most of the springs available today are _ preloaded_ , that is , when in the relaxed position , almost all of the adjacent coils of the helix are in contact , application of only a minimum amount of force ( weight ) is necessary to stretch the spring to a position where all of the coils are separated from each other . at this new position ,the spring response is linear , and hooke s law is satisfied .it is not difficult to show that , when two or more springs are combined in series ( one after another ) , the resulting combination has a spring constant less than any of the component springs .in fact , if ideal springs are connected in sequence , the expression relates the spring constant of the combination with the spring constant of each individual segment . in general , for a cylindrical spring of spring constant having coils , which is divided into smaller segments , having coils , the spring constant of each segment can be written as excluding the effects of the material from which a spring is made , the diameter of the wire and the radius of the coils , this equation expresses the fact that the spring constant is a parameter that depends on the number of coils in a spring , but not on the way in which the coils are wound ( i.e. tightly or loosely ) . 
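the two relations above translate directly into a short numerical check . in the snippet below the spring constant and the coil counts are illustrative numbers only , not the values of the spring used in the experiment ; each segment comes out stiffer than the whole spring by the ratio of coil numbers , and the series combination of the segments returns the original constant .

```python
# illustrative numbers only (not the spring used in the experiment)
k_whole = 5.0            # n/m, spring constant of the full spring
coils = [20, 20, 20]     # coils in each marked segment
n_total = sum(coils)

# k_i = k * n / n_i : fewer coils -> stiffer segment
k_segments = [k_whole * n_total / n_i for n_i in coils]

# series combination: 1/k = sum(1/k_i) recovers the full-spring constant
k_series = 1.0 / sum(1.0 / k_i for k_i in k_segments)

print("segment constants (n/m):", k_segments)
print("series combination     :", k_series)
```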
in an early paper , galloni and kohen showed that , under _ static _ conditions , the elongation sustained by a non - null mass spring is equivalent to assuming that the spring is massless and a fraction of one - half of the spring mass should be added to the hanging mass .that is , if a spring of mass and relaxed length ( neither stretched nor compressed ) is suspended vertically from one end in the earth s gravitational field , the mass per unit length becomes a function of the position , and the spring stretches _ non - uniformly _ to a new length . when a mass is hung from the end of the spring , the total elongation is found to be where is the _ dimensionless elongation factor _ of the element of length between and , and is the acceleration due to gravity .an important number of papers dealing with the static and dynamic effects of the spring mass have been written in the physics education literature .expressions for the spring elongation as a function of the coil and the mass per unit length of the spring have also been derived .we want to show that , with just _ one _ single spring , it is possible to confirm experimentally the validity of equations and .this approach differs from souza s work in that the constants are determined from the same single spring , and there is no need of cutting the spring into pieces ; and from the standard experiment in which more than one spring is required .a soft spring is _ divided _ into three separate segments by placing a paint mark at selected points along its surface ( see ) .these points are chosen by counting a certain number of coils for each individual segment such that the original spring is now composed of three marked springs connected in series , with each segment represented by an index ( with ) , and consisting of coils .an initial mass is suspended from the spring to stretch it into its _ linear _ region , where the equation is satisfied by each segment .once the spring is brought into this region , the traditional static hooke s law experiment is performed for several different suspended masses , ranging from to .the initial positions of the marked points are then used to measure the _ relative _ displacement ( elongation ) of each segment after they are stretched by the additional masses suspended from the spring ( ) . the displacements are determined by the equations where the primed variables represent the new positions of the marked points , are the initial lengths of the spring segments , and , by definition .representative graphs used to determine the spring constant of each segment are shown in figures [ fig3 ] , [ fig4 ] , and [ fig5 ] .as pointed out by some authors , it is important to note that there is a difference in the total mass hanging from each segment of the spring .the reason is that each segment supports not only the mass of the segments below it , but also the mass attached to the end of the spring .for example , if a spring of mass is divided into three _ identical _ segments , and a mass is suspended from the end of it , the total mass hanging from the first segment becomes .similarly , for the second and third segments , the total masses turn out to be and , respectively .however , in a more realistic scenario , the mass of the spring and its effect on the elongation of the segments must be considered , and equation should be incorporated into the calculations . 
therefore , for each individual segment , the elongation should be given by where is the mass of the segment , is its corresponding total hanging mass , and is the segment s spring constant .consequently , for the spring divided into three identical segments ( ) , the total masses hanging from the first , second and third segments are now , and , respectively .this can be explained by the following simple consideration : if a mass is attached to the end of a spring of length and spring constant , for three identical segments with elongations , , and , the total spring elongation is given by & = \int_0^{\frac{l}{3 } } \xi(x)\,\rmd x + \int_{\frac{l}{3}}^{\frac{2l}{3 } } \xi(x)\,\rmd x + \int_{\frac{2l}{3}}^l \xi(x)\,\rmd x \nonumber \\[10pt ] & = \frac{(m + \frac{5}{6 } m_{\mathrm{s}})\,g}{3\,k } + \frac{(m + { \frac{1}{2}}m_{\mathrm{s}})\,g}{3\,k } + \frac{(m + \frac{1}{6 } m_{\mathrm{s}})\,g}{3\,k } \nonumber \\[10pt ] & = \frac{(m + { \frac{1}{2}}m_{\mathrm{s}})\,g}{k}\ , .\label{eq : dl2}\end{aligned}\ ] ] as expected , equation is in agreement with equation , and reveals the contribution of the mass of each individual segment to the total elongation of the spring .it is also observed from this equation that as we know , is the mass of each identical segment , and is the spring constant for each .therefore , the spring stretches non - uniformly under its own weight , but uniformly under the external load , as it was also indicated by sawicky .two particular cases were studied in this experiment .first , we considered a spring - mass system in which the spring mass was small compared with the hanging mass , and so it was ignored . in the second case ,the spring mass was comparable with the hanging mass and included in the calculations .we started with a configuration of three approximately _ identical _ spring segments connected in series ; each segment having coils ( ) when the spring was stretched by different weights , the elongation of the segments increased linearly , as expected from hooke s law . within the experimental error ,each segment experienced the same displacement , as predicted by .an example of experimental data obtained is shown in .simple linear regression was used to determine the slope of each trend line fitting the data points of the force versus displacement graphs .( a ) clearly shows the linear response of the first segment of the spring , with a resulting spring constant of .a similar behaviour was observed for the second and third segments , with spring constants , and , respectively . for the entire spring ,the spring constant was , as shown in ( b ) .the uncertainties in the spring constants were calculated using the _ correlation coefficient _ of the linear regressions , as explained in higbie s paper `` uncertainty in the linear regression slope '' . 
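the slope and its uncertainty can be extracted as described above with a few lines of python . the data below are synthetic ( structured like the tabulated force steps of about 0.049 n , i.e. 5 g increments , but with artificial noise ) , and the uncertainty formula used , delta_k / k = sqrt ( ( 1/r**2 - 1 ) / ( n - 2 ) ) , is one common form of the correlation - coefficient estimate attributed to higbie ; treat both as illustrative rather than as the authors exact procedure .

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic force/elongation data in 0.049 n (5 g) steps, with noise
force = np.arange(0.0, 0.50, 0.049)             # newtons
k_true = 3.4                                    # n/m, assumed
elong = force / k_true + rng.normal(0.0, 5e-4, force.size)   # metres

slope, intercept = np.polyfit(elong, force, 1)  # slope = spring constant

# correlation-coefficient estimate of the slope uncertainty
r = np.corrcoef(elong, force)[0, 1]
delta_k = abs(slope) * np.sqrt((1.0 / r**2 - 1.0) / (force.size - 2))

print(f"k = {slope:.3f} +/- {delta_k:.3f} n/m")
```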
comparing the spring constant of each segment with that for the total spring, we obtained that , and .as predicted by , each segment had a spring constant three times larger than the resulting combination of the segments in series , that is , .the reason why the uncertainty in the spring constant of the entire spring is smaller than the corresponding spring constants of the segments may be explained by the fact that the displacements of the spring as a whole have smaller `` relative errors '' than those of the individual segments .shows that , whereas the displacements of the individual segments are in the same order of magnitude that the uncertainty in the measurement of the elongation ( ) , the displacements of the whole spring are much bigger compared with this uncertainty .we next considered a configuration of two spring segments connected in series with and coils , respectively ( , ) .( a ) shows a graph of force against elongation for the second segment of the spring .we obtained using linear regression .for the first segment and the entire spring , the spring constants were and , respectively , as shown in ( b ) .then , we certainly observed that and .once again , these experimental results proved equation correct ( and ) .we finally considered the same two spring configuration as above , but unlike the previous trial , this time the spring mass ( ) was included in the experimental calculations .figures [ fig5](a)(b ) show results for the two spring segments , including spring masses , connected in series ( , ) . using this method , the spring constant for the whole spring was found to be slightly different from that obtained when the spring was assumed ideal ( massless ) . this difference may be explained by the corrections made to the total mass as given by .the spring constants obtained for the segments were and with for the entire spring .these experimental results were also consistent with equation . the experimental data obtained is shown in .when the experiment was performed by the students , measuring the positions of the paint marks on the spring when it was stretched , perhaps represented the most difficult part of the activity .every time that an extra weight was added to the end of the spring , the starting point of each individual segment changed its position .for the students , keeping track of these new positions was a laborious task .most of the experimental systematic error came from this portion of the activity . 
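the finite - spring - mass bookkeeping of eq . ( [ eq : dl2 ] ) is easy to verify numerically . in the sketch below all masses and the spring constant are made - up illustrative values ; the three effective - mass fractions 5/6 , 1/2 and 1/6 are the ones appearing in the displayed equation above , and the corresponding segment elongations indeed sum to ( m + m_s/2 ) g / k .

```python
g = 9.81           # m/s^2
k = 5.0            # n/m, whole-spring constant (illustrative)
m_hang = 0.050     # kg, hanging mass (illustrative)
m_spring = 0.015   # kg, spring mass (illustrative)

# effective masses loading three identical segments, from eq. (eq:dl2)
fractions = (5.0 / 6.0, 1.0 / 2.0, 1.0 / 6.0)
seg_elong = [(m_hang + f * m_spring) * g / (3.0 * k) for f in fractions]

total = sum(seg_elong)
expected = (m_hang + 0.5 * m_spring) * g / k    # massless spring + m_s/2 rule

print("segment elongations (m):", [round(x, 6) for x in seg_elong])
print("sum of segments        :", round(total, 6))
print("(m + m_s/2) g / k      :", round(expected, 6))
```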
to obtain the elongation of the segments , using equation substantially facilitated the calculation and tabulation of the data for its posterior analysis .the use of computational tools ( spreadsheets ) to do the linear regression , also considerably simplified the calculations .in this work , we studied experimentally the validity of the static hooke s law for a system of springs connected in series using a simple single - spring scheme to represent the combination of springs .we also verified experimentally the fact that the reciprocal of the spring constant of the entire spring equals the addition of the reciprocal of the spring constant of each segment by including well - known corrections ( due to the finite mass of the spring ) to the total hanging mass .our results quantitatively show the validity of hooke s law for combinations of springs in series [ equation ] , as well as the dependence of the spring constant on the number of coils in a spring [ equation ] .the experimental results were in excellent agreement , within the standard error , with those predicted by theory .the experiment is designed to provide several educational benefits to the students , like helping to develop measuring skills , encouraging the use of computational tools to perform linear regression and error propagation analysis , and stimulating the creativity and logical thinking by exploring hooke s law in a combined system of _ springs in series _ simulated by a _ single _ spring . because of it easy setup , this experiment is easy to adopt in any high school or undergraduate physics laboratory , and can be extended to any number of segments within the same spring such that all segments represent a combination of springs in series .the authors gratefully acknowledge the school of mathematical and natural sciences at the university of arkansas - monticello ( # 11 - 2225 - 5-m00 ) and the department of physics at eastern illinois university for providing funding and support for this work .comments on earlier versions of the paper were gratefully received from carol trana .the authors are also indebted to the anonymous referee for the valuable comments and suggestions made .44 & & + & & + & & & & & + 0.000 & 0.000 & 0.000 & 0.000 & & 0.000 + 0.005 & 0.005 & 0.005 & 0.015 & & 0.049 + 0.010 & 0.010 & 0.010 & 0.030 & & 0.098 + 0.015 & 0.014 & 0.015 & 0.044 & & 0.147 + 0.020 & 0.019 & 0.019 & 0.058 & & 0.196 + 0.024 & 0.024 & 0.025 & 0.073 & & 0.245 + 0.029 & 0.029 & 0.029 & 0.087 & & 0.294 + 0.034 & 0.034 & 0.033 & 0.101 & & 0.343 + 0.038 & 0.039 & 0.039 & 0.116 & & 0.392 + 0.043 & 0.044 & 0.043 & 0.130 & & 0.441 + 0.048 & 0.048 & 0.049 & 0.145 & & 0.491 + & & + & & + & & & & + 0.000 & 0.000 & 0.000 & & 0.000 + 0.001 & 0.003 & 0.004 & & 0.010 + 0.002 & 0.005 & 0.007 & & 0.020 + 0.003 & 0.006 & 0.009 & & 0.029 + 0.004 & 0.008 & 0.012 & & 0.039 + 0.005 & 0.010 & 0.015 & & 0.049 + 0.006 & 0.012 & 0.018 & & 0.059 + 0.007 & 0.014 & 0.021 & & 0.069 + 0.008 & 0.016 & 0.024 & & 0.078 + 0.009 & 0.018 & 0.027 & & 0.088 + 0.010 & 0.020 & 0.030 & & 0.098 + schematic of the mass - spring system .an initial mass is suspended from the spring to bring it into its linear region . is the initial length of the spring segment with spring constant ( ) .an additional mass suspended from the spring elongates each segment by a distance . 
] applied force as a function of the displacement for the first spring segment and the total spring .the spring was considered massless and divided into three _identical _ segments ( ) .( a ) the spring constant of the first segment , , was obtained from the slope of the trend line .( b ) a comparison between elongations of the first segment and total spring . here , . the spring constants and were also calculated . it can be observed that , as predicted by equation . ]applied force as a function of the displacement for the first and second spring segments , and the total spring .the spring was considered massless and divided into two _ non - identical _ segments ( ) .( a ) the spring constant of the second segment is .( b ) a comparison between elongations of the first and second segments with the total spring . and . here , and . ] applied force as a function of the displacement for the first and second spring segments , and the total spring .the mass of the spring was included in the experimental calculations , and the spring divided into two _ non - identical _ segments ( ) .( a ) the spring constants of the segments were calculated with the corrections to the mass .the second segment has a spring constant of .( b ) differences between the spring elongations of the two segments and the total spring are shown . here , .
|
springs are used for a wide range of applications in physics and engineering . possibly , one of their most common uses is to study the nature of restoring forces in oscillatory systems . while experiments that verify hooke s law using springs are abundant in the physics literature , those that explore the combination of several springs together are very rare . in this paper , an experiment designed to study the static properties of a combination of springs in series using only one single spring is presented . paint marks placed on the coils of the spring allowed us to divide it into segments , and to consider it as a collection of springs connected in series . the validity of hooke s law for the system and the relationship between the spring constants of the segments and the spring constant of the entire spring are verified experimentally . the easy setup , accurate results , and educational benefits make this experiment attractive and useful for high school and first - year college students .
|
the optimization problem appears in several fields of physics and mathematics .it is known from mathematics that every local minimum of a convex function defined over a convex set is also the global minimum of the function .but the main problem is to find this optimum . from the physical point of viewevery dynamical process can be considered in terms of finding the optimum of the action functional .the best example is the trajectory of the free point mass in mechanics which follows the shortest way between two points .let us assume one has successfully set up a mathematical model for the optimization problem under consideration in the form where is a scalar potential function ( fitness function ) defined over a d - dimensional vector space .let be the absolute minimum of which is the search target .problems of this type are called parameter optimization .typically the search space is high - dimensional ( ) .the idea of evolution is the consideration of an ensemble of searchers which move through the search space . as a illustrative example we consider the relation between the equation of motion in mechanics , obtained by variation of the action functional to get a trajectory of minimal action , and the introduction of the probability distribution for all possible trajectories by a functional integral . because of the weight factor for every trajectory , the trajectory of minimal action has the highest probability .the equation of the probability distribution deduced from the functional integral is the diffusion equation for a free classical particle .the same idea is behind the attempt to describe optimization processes with the help of dynamical processes .we will be concerned with the time evolution of an ensemble of searchers defined by a density .the search process defines a dynamics \ ] ] with continuous or discrete time steps .an optimization dynamics is considered as successful , if any ( or nearly any ) initial density converges to a target density which is concentrated around the minimum of .we restrict ourselves here to the case where is given as a second order partial differential equation . among the most successful strategies are the class of thermodynamic oriented and the class of biological oriented strategies ) .our aim is to compare on the basis of pde - models thermodynamical and biologically motivated strategies by reducing both to equivalent eigenvalue problems .further we introduce a model for mixed strategies and investigate their prospective power .at first we want to investigate the simplest case of an evolutionary dynamics known in the literature as `` simulated annealing '' .the analogy between equilibrium statistical mechanics and the metropolis algorithm was first discussed by kirkpatrick et al . in . there , an ensemble of searchers determined by the distribution move through the search space . in the following we consider only the case of a fixed temperature .then the dynamics is given by the fokker - planck equation =d\triangle p+d\nabla ( \beta p\nabla u ) \label{boltzmann1}\ ] ] with the `` diffusion '' constant , the reciprocal temperature and the state vector .the stationary distribution is equal to the extremum of functional ( [ liapunov1]) .for the case the complete analytical solution is known and may be expressed by the well - known heat kernel or green s function , respectively .the corresponding dynamics is a simulated annealing at an infinite temperature which describes the diffusion process . 
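equation ( [ boltzmann1 ] ) is the fokker - planck equation of an overdamped langevin process , dx = -d*beta*u'(x) dt + sqrt(2d) dw , so the fixed - temperature strategy can be sampled directly with an ensemble of walkers . the sketch below uses an arbitrary one - dimensional double - well potential and arbitrary parameter values , all of which are assumptions rather than quantities from the text ; after many steps most of the ensemble sits in the deeper well , as the boltzmann - like stationary distribution requires .

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_u(x):
    # illustrative double-well fitness u(x) = 0.5*(x**2 - 1)**2 + 0.5*x;
    # the left well (near x = -1) is the deeper one
    return 2.0 * x * (x**2 - 1.0) + 0.5

d, beta, dt, steps = 0.5, 3.0, 2e-3, 20_000
walkers = rng.uniform(-2.0, 2.0, 2_000)      # initial ensemble p(x, 0)

for _ in range(steps):
    # euler-maruyama step of the langevin process whose fokker-planck
    # equation is eq. (boltzmann1): dx = -d*beta*u'(x) dt + sqrt(2d) dw
    walkers += (-d * beta * grad_u(walkers) * dt
                + np.sqrt(2.0 * d * dt) * rng.normal(size=walkers.size))

# the stationary density ~ exp(-beta*u) puts most weight in the deeper well
print("fraction of walkers with x < 0:", np.mean(walkers < 0.0))
```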
in this casethe optimum of the potential will be found by a random walk , because the diffusion is not sensitive to the potential .the average time which the process requires to move from the initial state to the optimum is given by we note that in several cases a generalization from the number to the case where is a symmetrical matrix is possible .further solvable cases can be extracted from the ansatz (\vec{x},t)\label{ansatz1}\ ] ] which after separation of the time and space variables leads to the eigenvalue equation with eigenvalue and potential this equation is the well - known stationary schrdinger equation from quantum mechanics . under the consideration of a discrete spectrumthis leads to the general solution \sum\limits_{i=0}^\infty c_i\psi_i(\vec{x})\exp(-\epsilon_i t ) \label{generalsolution1 } \qquad .\ ] ] to discuss more properties of equation ( [ eigeneq1 ] ) , one has to introduce the concept of the liapunov functional defined by the formula : in the case of equation we obtain for the original equation ( [ boltzmann1 ] ) , the construction of such functional is impossible .we remark that the main difference between schrdinger equation and thermodynamical strategies is given by the time - dependent factor in the solution . in quantum mechanicsthis set of factors forms a complete basis in the hilbert space of functions over but in the solution of ( [ boltzmann1 ] ) this is not the case . because of the existence of an equilibrium distribution the first eigenvalue vanishes and the solutionis given by \qquad .\ ] ] that means that the equilibrium distribution is located around the optimum since the exponential is a monotonous function and the optimum is unchanged . in the limit the distribution converges to the equilibrium distribution and the strategy successfully terminates at the optimum .but this convergence is dependent on the positiveness of the operator defined in ( [ eigeneq1 ] ) .usually the laplace operator is strictly negative definite with respect to a scalar product in the hilbert space of the square integrable functions ( -space ) .therefore the potential alone determines the definiteness of the operator .we thus obtain a sufficient condition for that is which means , that the curvature of the landscape represented by must be smaller than the square of the gradient .depending on the potential it thus is possible to fix a subset of on which the operator is positive definite .now we approximate the fitness function by a taylor expansion around the optimum including the second order . because the first derivative vanishes one obtains the expression for we get the simple harmonic oscillator which is solved by separation of variables .the eigenfunctions are products of hermitian polynomials with respect to the dimension of the search space .apart from a constant the same result is obtained in the case .a collection of formulas can be found in appendix a. the approximation of the general solution ( [ generalsolution1 ] ) for large times leads to \psi_1(\vec{x})+\ldots\ ] ] because of the condition , the time can be interpreted as relaxation time , i.e. 
the time for the last jump to the optimum .even more interesting than the consideration of the time is the calculation of velocities .one can define two possible velocities .a first velocity on the fitness landscape and a second one in the -th direction of the search space .the measure of the velocities is given by the time - like change of the expectation values of the vector or the potential , respectively .with respect to equation ( [ boltzmann1 ] ) we obtain and the velocity depends on the curvature and the gradient .so we can deduce a sufficient condition for a positive velocity which is up to a factor the same as condition ( [ condition1 ] ) .for the quadratic potential ( [ quadratic ] ) one obtains this is a restriction to a subset of . in this caseit is also possible to explicitly calculate the velocities for it is interesting to note that only the first two eigenvalues are important for the velocities and that both velocities vanish in the limit . besides , the first velocity is independent of the parameter except for special initial conditions where the factor depends on the parameter .the other case is similar and can be found in appendix a.in principle , the biologically motivated strategy is different from the thermodynamical strategy .whereas in the thermodynamical strategy the population size remains constant , it is changed with respect to the fundamental processes reproduction and selection in the case of biologically motivated strategies , but is kept unchanged on average .the simplest model with a similar behaviour is the fisher - eigen equation given by (\vec{x},t)+d\triangle p(\vec{x},t ) \label{darwin}\\ <u>(t ) & = & \frac{\int u(x)p(\vec{x},t ) dx}{\int p(\vec{x},t ) dx } \nonumber\end{aligned}\ ] ] in this case one can also form a liapunov functional which satisfies the equation ( [ functionalequation ] ) similar to the thermodynamical strategy .one obtains the positive functional which also has the stationary distribution as an extremum . by using the ansatz (\vec{x},t)\label{ansatz2}\ ] ] and the separating time and space variables , the dynamics reduces to the stationary schrdinger equation \psi_i(\vec{x})=0 \label{eigenequation}\ ] ] where are the eigenvalues and are the eigenfunctions .this leads to the complete solution the difference to the thermodynamical strategy is given by the fact that the eigenvalue in the case of the fisher - eigen strategy is a non zero value , i.e. the relaxation time is modified and one obtains for the harmonic potential ( [ quadratic ] ) the problem is exactly solvable for any dimension and the solution is very similar to the thermodynamical strategy for . in the other case we obtain a different problem known from scattering theory .if the search space is unbound the spectrum of the operator is continuous . from the physical point of viewwe are interested in positive values of the potential or fitness function ( [ quadratic ] ) , respectively .this leads to a compact search space given by the interval ] . together with the expression for every dirac operator , one can simple calculate the squares of the dirac operators to establish with the adjoint operator .this means that both problems can be described by the motion of a fermion in a field or , respectively .the equilibrium state ( or stationary state ) is given by the kernel of the operator which is the direct sum . 
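both reductions above end in eigenvalue problems , and for a simple quadratic fitness they can be checked side by side by discretising the operators on a grid . in the sketch below the operator used for the fisher - eigen strategy is the standard reduction -d*psi'' + u*psi = eps*psi implied by the separation ansatz ; the potential u(x) = a*x**2/2 and all constants are illustrative assumptions . the top fokker - planck eigenvalue comes out close to zero ( the equilibrium distribution ) , while the lowest fisher - eigen eigenvalue is non - zero , which is exactly the difference in relaxation behaviour noted above .

```python
import numpy as np

# illustrative constants only
d_coef, beta, a = 0.5, 3.0, 1.0
n, length = 400, 16.0
x = np.linspace(-length / 2, length / 2, n)
h = x[1] - x[0]

lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2
d1 = (np.diag(np.full(n - 1, 1.0), 1)
      - np.diag(np.full(n - 1, 1.0), -1)) / (2.0 * h)
u = 0.5 * a * x**2

# thermodynamic strategy, eq. (boltzmann1): l[p] = d p'' + d*beta*(p u')'
fp = d_coef * lap + d_coef * beta * d1 @ np.diag(a * x)
ev_fp = np.sort(np.linalg.eigvals(fp).real)[::-1]

# fisher-eigen strategy: stationary schrdinger operator -d psi'' + u psi
fe = -d_coef * lap + np.diag(u)
ev_fe = np.sort(np.linalg.eigvalsh(fe))

print("fokker-planck spectrum (top) :", ev_fp[:3])   # ~ 0, -d*beta*a, ...
print("fisher-eigen eigenvalues     :", ev_fe[:3])   # eps_0 > 0 here
print("relaxation rates: thermo ~", -ev_fp[1],
      "  fisher-eigen ~", ev_fe[1] - ev_fe[0])
```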
if the dimension is even , which is always true for the high - dimensional case , this splitting can be introduced by the product of all -matrices usually denoted by . because of the compactness of the underlying space , the kernel of is finite - dimensional and the spectrum is discrete .we note , that the spectrum of both , and , is equal up to the kernel .so the interesting information about the problem is located in the asymmetric splitting of the kernel .the physical interpretation is given by an asymmetric ground state of the problem which is only connected to the geometry of the landscape .in more mathematical terms both dirac operators are described as covariant derivatives with a suitable connection . together with the fiber bundle theory and the classification given by the k - theory one obtains a possible classification of the fitness landscapes in dependence of convergence velocities given by the periodicity of the real k - theory with period 8 .each of these 8 classes describes a splitting of the kernel and leads to a different velocity .a complete description of this problem will be published later on .in physics the most classical dynamical processes follow the principle of minimization of a physical quantity which often leads to an extremum of the action functional .this problem frequently has a finite number of solutions given by the solutions of a differential equation also known as the equation of motion .investigating this fact in relation to optimization processes , one obtains in the simplest case the thermodynamical and the biological strategy .the description is given by the distribution of the searcher and a dynamics of the distribution converging to an equilibrium distribution located around the optimum of the optimization problem .with the help of the kinetics and the eigenfunction expansion , we investigated both strategies in view of the convergence velocity . in principleboth strategies are equal because one obtains a stationary schrdinger equation .but , the main difference is the transformation of the fitness function ( or potential ) from to ( see ( [ trafo ] ) ) in the case of the thermodynamical strategy .the difference of both strategies leads to the idea of adding a small amount of the `` complementary '' strategy in order to hope for an improvement .the difference in the velocity on the one hand and the similarity in the equation on the other look like a unified treatment of both strategies under consideration .this is represented in the last section in the formalism of fiber bundles and heat kernels to get the interesting result , that up to local coordinate transformations the strategies are split into 8 different classes .for the case of quadratic potentials ( [ quadratic ] ) the problem ( [ boltzmann1 ] ) may be solved explicitly . we get the eigenvalues and the eigenfunctions which lead to the solution \exp(-\epsilon_i t ) \label{solution1}\ ] ] with .next we have to fix initial conditions for this problem . atfirst one starts with a strong localized function , i.e. with a delta distribution . because of the relation : we obtain for the coefficients with . in the case of the full symmetry we can calculate the radial problem to obtain the eigenvalues with as the dimension of the landscape .the calculation of the velocities leads to two cases for the potential ( [ quadratic ] ) : 1 . 2 . 
where is the normalization factor and is the interval length .for a harmonic potential ( [ quadratic ] ) the problem ( [ darwin ] ) is exactly solvable for any dimension .we get the eigenvalues and the eigenfunctions {\frac{a_i}{2d } } x_i \right)\end{aligned}\ ] ] which lead to the solution \exp(-\epsilon_i t)\ ] ] with {a_k/(2d)}$ ] .we now come to the problem of the maximum , i.e. the potential ( [ quadratic ] ) with .the solution for one dimension ( direction ) is simply obtained as {\frac{a_i}{2d}}\ ; e^{-i\pi/4 } x_i \right)\ ] ] with and as parabolic bessel functions depending on the parameter . in practice oneis interested in positive values of the fitness function or potential which leads to a restriction of the search space to the a hypercube with length in every dimension .we claim that the solution vanishes at the boundary of the hypercube .this restriction leads to a discrete spectrum .the zeros of the parabolic bessel function are given by the solution of the equation where the coefficients can be found in the book and the zeros are defined by . for the eigenvalues in ( [ eigenequation ] )one obtains 1 . ( minimum ) : with as normalization .2 . case ( maximum ) : {\frac{|a_k|}{2d}}\end{aligned}\ ] ] with {\frac{|a_k|}{2d } } \qquad \qquad b_k=\frac{n_k}{2\sqrt{d |a_k| } } \\ n & = & \sum\limits_{n=1 \atop n_1+\ldots n_d = n}^{\infty } e^{\epsilon_n t}\prod\limits_{i=1}^d \left(\sqrt{\left| \frac{\gamma(\frac{1}{4}+ib_k/2)}{\gamma(\frac{3}{4}+ib_k/2)}\right|}\ , i_k()\ , \right)\end{aligned}\ ] ] and functions defined in page 692 as series
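to make the comparison of the two strategies concrete , the following is a small numerical sketch ( python / numpy , our own illustration and not part of the original calculation ) : the thermodynamical strategy is realized as an ensemble of searchers performing overdamped langevin dynamics in the quadratic potential , and the fisher - eigen strategy as mutation ( diffusion ) followed by fitness - proportional resampling at a fixed population size . starting from a strongly localized initial condition , both ensembles relax toward the optimum , and the decay of the mean potential gives a rough numerical counterpart of the velocities calculated above . all parameter values and function names are arbitrary choices made for illustration .

```python
import numpy as np

rng = np.random.default_rng(0)

def potential(x, a=1.0):
    # quadratic fitness landscape u(x) = a/2 * x^2 with the optimum at x = 0
    return 0.5 * a * x**2

def grad_potential(x, a=1.0):
    return a * x

def thermodynamical_step(x, dt, diff):
    # overdamped langevin dynamics: gradient drift plus diffusion
    noise = rng.standard_normal(x.size)
    return x - grad_potential(x) * dt + np.sqrt(2.0 * diff * dt) * noise

def fisher_eigen_step(x, dt, diff):
    # mutation: every searcher diffuses ...
    x = x + np.sqrt(2.0 * diff * dt) * rng.standard_normal(x.size)
    # ... then selection with reproduction weights exp(-u*dt); resampling keeps
    # the population size fixed on average, mimicking the fisher-eigen dynamics
    w = np.exp(-potential(x) * dt)
    return rng.choice(x, size=x.size, replace=True, p=w / w.sum())

n_searchers, dt, diff, t_end = 5000, 0.01, 0.05, 5.0
x_th = np.full(n_searchers, 3.0)   # delta-like start away from the optimum
x_fe = np.full(n_searchers, 3.0)
for _ in range(int(t_end / dt)):
    x_th = thermodynamical_step(x_th, dt, diff)
    x_fe = fisher_eigen_step(x_fe, dt, diff)

print("mean potential, thermodynamical strategy:", potential(x_th).mean())
print("mean potential, fisher-eigen strategy:   ", potential(x_fe).mean())
```

comparing the printed values for different choices of the curvature and the diffusion constant gives a simple numerical way to explore how the two strategies differ in their relaxation .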
|
several standard processes for searching minima of potential functions , such as thermodynamical strategies ( simulated annealing ) and biologically motivated self - reproduction strategies , are reduced to schrödinger problems . the properties of the landscape are encoded in the spectrum of the hamiltonian . we investigate this relation between landscape and spectrum by means of topological methods which lead to a possible classification of landscapes in the framework of operator theory . the influence of the dimension of the search space is discussed . the connection between thermodynamical strategies and biologically motivated self - reproduction strategies is analyzed and interpreted in the above context . mixing of both strategies is introduced as a new powerful tool of optimization .
|
natural proteins fold into unique compact structures in spite of the huge number of possible conformations . for most single domain proteins ,each of these native structures corresponds to the global minimum of the free energy .it has been proposed phenomenologically that the number of possible structures of natural proteins is only about one thousand , which suggests that many sequences can fold into one preferred structure .there have been theoretical studies for the existence of such preferred structures . in many of theoretical studies for the protein folding ,a simplified model called hp model is adopted .hp model is one of 2-letter codes lattice models where a protein is represented by a self - avoiding chain of beads placed on a lattice , with two types of beads , hydrophobic(h ) and polar(p ) . in the hp model ,the energy of a structure is given by the nearest - neighbor topological contact interactions as where and are monomer indexes , are monomer types ( h or p ) ; if and are topological nearest neighbors not along the sequence , and otherwise . based on the hp model , a concept of _ designability _ has recently been introduced ; the number of sequences that have a given structure as their non - degenerate ground state ( native state ) is called the _ designability _ of this structure .when many sequences have a common native structure , one say that the structure is _highly designable_. adding to the importance in the protein design problem , the designability also have evolutional significance because highly designable structures are found to be relatively stable against mutations . in the original study of h. li _et al_. , hp models on the square and cubic lattices are employed , with the energy parameters in eq.([hphamil ] ) being . for each sequence , they calculated the energy over all maximally compact structures and picked up the native structure .the results indicated that highly designable structures actually exist on both lattices .a. irbck and e. sandelin studied the hp models on the square and triangular lattices .they adopted different energy parameters from h. li _et al_. , namely , . in the calculation of the designability , they considered all the possible structures , not restricting to the maximally compact ones . for the square lattice , they confirmed the existence of the highly designable structures as in ref. .for the triangular lattice , however , no such structures were found .in addition to the nearest - neighbor topological contact interactions , they considered local interactions represented by the bend angle and calculated the designability . indeed the local interactions reduced degeneracy ( _ i.e. _ , the number of sequences which have non - degenerate ground state increased ) and made the designability higher .but they found that the designability on the square lattice was still much higher than that on the triangular lattice .they concluded that the difference in the designability for these two lattices are related to the even - odd problem , that is , whether the lattice structure is bipartite or not .quite recently , h. li _et al_. proposed a new model based on the hp model on the square lattice . 
in the model ,the hydrophobic interaction is treated in such a way that the energy decreases if the hydrophobic residue is buried in the core .they justify this treatment in two reasons : ( 1 ) the hydrophobic force which is dominant in folding originates from aversion of hydrophobic residues from water .( 2 ) the miyazawa - jernigan matrix contains a dominant hydrophobic interaction of the linear form .they took where represent a sequence : if the _i_-th amino acid is h - type and if it is p - type . and represent a structure : if the _i_-th amino acid is on the surface and if it is in the core .they calculated the designability over all maximally compact structures , whose result is consistent with their former study [ see table .[ table1 ] ] . in our view , there are many points to be explored further for the designability problem .first , since the structures of natural proteins are compact but not necessarily `` maximally compact '' in general , how can we justify the discussion where only the maximally compact structures are taken into account ?second , is it adequate to consider only nearest - neighbor interactions? properties of a system with only nearest - neighbor interactions are directly affected by the lattice structure , in particular , whether the lattice is bipartite or not .is it good , only from these facts , to conclude immediately that the absence of the highly designable structures on the triangular lattice should be ascribed to the even - odd problem associated with the triangular lattice? one should discuss the problem on the triangular lattice by using a model like the one in ref. where the interactions do not depend on the contact between monomers , hence , _ do not directly reflect the non - bipartiteness_. our aim of this paper is to examine the above points and clarify what determines the designability of protein structures .we use a new model with a 2-letter codes ( h and p ) on the square and triangular lattices and calculate the designability over _ all possible _ structures . in our model , based on ref. , the energy increases if the hydrophobic residue is exposed to the solvent . we will call this model `` solvation model '' . in brief , the solvation model is a 2-letter codes lattice model where the hydrophobic force to form a core is dominant and the interactions do not directly reflect the bipartiteness . using the solvation model and the hp model, we investigate model - independent properties of designability .in the solvation model , based on ref . , a protein is represented by a self - avoiding chain of beads with two types h and p , placed on a lattice .a sequence is specified by a choice of monomer types at each position on the chain .we used two - dimensional lattice models because a computable length by numerical enumeration of the full conformational space is limited ( square lattice : 18 , triangular and cubic lattices : 13 ) . even with this chain - length limitation, we can make a `` hydrophobic core '' in two dimensions , in contrast with the three - dimensional case .a structure is specified by a set of coordinates for all the monomers and is mapped into the number of contacts with the solvent . 
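as an illustration of this mapping , the following short python sketch ( our own , not code from the works cited above ) enumerates self - avoiding chains on the square lattice and counts , for every monomer , the number of neighbouring lattice sites that are not occupied by the chain , i.e. its contacts with the solvent . fixing the direction of the first bond removes the rotational redundancy ; reflections and reverse - labeling , as well as the loop over all sequences , would still have to be handled in a full designability calculation , as described below .

```python
# square-lattice version; a conformation is a self-avoiding chain of n beads
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def enumerate_chains(n):
    """Enumerate self-avoiding chains of n monomers on the square lattice.

    The first bond is fixed to (1, 0), which removes the rotational
    redundancy; reflections and reverse-labeling would still have to be
    removed in a full designability calculation.
    """
    chains = []

    def grow(path, occupied):
        if len(path) == n:
            chains.append(tuple(path))
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            site = (x + dx, y + dy)
            if site not in occupied:
                occupied.add(site)
                path.append(site)
                grow(path, occupied)
                path.pop()
                occupied.remove(site)

    start = [(0, 0), (1, 0)]
    grow(start, set(start))
    return chains

def solvent_contacts(chain):
    """Number of solvent contacts of every monomer: empty neighbouring sites."""
    occupied = set(chain)
    return [sum((x + dx, y + dy) not in occupied for dx, dy in MOVES)
            for (x, y) in chain]

chains = enumerate_chains(8)
print(len(chains), "chains of length 8 (first bond fixed)")
print("solvent contacts of the first chain:", solvent_contacts(chains[0]))
```

with the solvation - model energies defined just below , the designability of a structure is then obtained by finding , for every sequence , its lowest - energy structure and counting how many sequences have that structure as a non - degenerate ground state .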
in our model ,the total energy is given in terms of the monomer - solvent interactions , and depends only on the number of contacts with the solvent : where represent a sequence : if the _i_-th monomer is the h - type and if it is p - type .the variable denotes the number of contacts with the solvent , for example , on the square lattice and on the triangular lattice . in other words, means that the -th monomer is buried away from the solvent .we take .that is , the possible minimum energy is zero . and these parametersare selected so that the larger the number of contacts with the solvent is , the more the degree of energy increase is ; the hydrophobic residue is energetically unfavorable to be at the corner .although the choice of these values is somewhat arbitrary , we have considered the following points : ( 1 ) these values should not increase too much rapidly with the increase in the number of contacts with the solvent , and ( 2 ) the way of choosing these values must not bring about nonessential accidental degeneracies ( due to simple rational ratios between the parameters ) . using the model on the square and triangular lattices, we calculate the designability for all the sequences , where is the number of monomers , by exact computer - enumeration method over the full conformational space . to get correct data , we exclude overcounting coming from redundant structures which are mutually related by rotation , reflection and reverse - labeling . on the basis of data obtained by the solvation model and the hp model , we examine what determines the designability from three points of view : ( 1 ) the effect of the search - space restriction , namely , the search within maximally compact structures ( in this paper , we just used maximally compact structures as a simplest example of the search - space restriction , and we may consider other one , _e.g. _ , structures with the biggest core ) , ( 2 ) the effect of the lattice structure , namely , whether the lattice is bipartite or not ( or , equivalently , the even - odd problem ) , ( 3 ) the effect of the number of monomers ( or , the length of the chain ) .let us now give results of calculations .\(1 ) the effect of the search within maximally compact structures in fig .[ fig1 ] , we show the designability calculated on the square lattice for , using maximally compact structures . for comparison , in fig .[ fig2 ] , we show the designability of the same system without the search - space restriction ( _ i.e. _ , search over all possible structures ) . in both cases , there are some highly designable structures . however , these structures are not common to both cases . 
in fig .[ fig2 ] , the number of sequences that have native structures is 8277 , but the number of sequences that have maximally compact structures as native is only 1087 out of 8277 .that is , most sequences that have native structures have non - maximally compact structures as native .the importance of non - maximally compact structures has also been pointed out for the hp model .these facts imply that it is not good to calculate the designability over only maximally compact structures .such calculation picking up a `` native '' structure out of maximally compact structures , is not correct if the true native structure is non - maximally compact .further , when the lowest - energy non - maximally compact structure and the lowest - energy maximally compact structure are degenerate , there is no native structure ( native structure must be non - degenerate ) , but the restricted - search - space calculation gives a false result that there is a native ( and maximally compact ) structure .we should say that the designability calculated over only maximally compact structures may be erroneous .\(2 ) the effect of the lattice structure : bipartite or non - bipartite in two previous studies using the hp model , interactions of the system directly reflected whether the lattice is bipartite or not .moreover the designability on the triangular lattice was calculated with the energy parameters in eq.([hphamil ] ) being , which would cause accidental degeneracies . in their results ,highly designable structures were not found for the triangular lattice .also , it seemed that native structures are likely to contain the hydrophobic core where a group of hydrophobic monomers contact with each other ; such contact can be made only if the distance between the monomers along the sequence is odd .therefore , the bipartiteness has been thought to be a main source of the designability. .if so , highly designable structures do not actually exist , _i.e. _ , the concept of _ designability _ itself could be meaningless .on the other hand , if such preferred structures should exist on the basis of the proposal by c. chothia , the use of the lattice model would be inadequate .then , we used the solvation model , which does not directly reflect the bipartiteness , and calculated the designability on the square and triangular lattices .besides , we also calculated the designability on the triangular lattice using the hp model , with the energy parameters being . in table .[ table2 ] , we show the total number of sequences that have non - degenerate ground state ( ) and the highest designabilities ( ) on the triangular lattice for , obtained by using different interactions .this result shows that , even if we take different values of energy parameters , or even if we use the solvation model , the triangular lattice is still unfavorable for the designability although varies largely . 
on the other hand , for the square lattice , highly designable structures are found in the solvation model as well as in the hp model ( fig .[ fig2 ] ) .these results imply that the absence of the highly designable structures for the triangular lattice should not be ascribed to the even - odd problem ( or , the non - bipartiteness ) , but to other reasons .the properties that highly designable structures are found on the square lattice and no such structures are found on the triangular lattice might be general in 2-letter codes lattice models where the hydrophobic force is dominant .\(3 ) the effect of the number of monomers then , why are the highly designable structures absent for the triangular lattice ?smallness of number of monomers ( in other words , the length of a chain is too short ) , may be a possible reason .important object in the protein structure is the hydrophobic core which consists of buried monomers in no contact with the solvent .recall that the limit of a computable length by exact enumeration of the full conformational space on the triangular lattice is 13 .the biggest core which we can make by using this limited length is the one which consists of only three monomers ; the length is too short for the hydrophobic force to form a core .this monomer - number effect is also found on the square lattice .consider the following conditions : at least ten sequences have a given structure as their native state , and at the same time , there are at least five such structures .only if these conditions are satisfied , let us say that `` there are highly designable structures . ''then , at or less , there are no highly designable structures even for the square lattice [ table .[ table3 ] , table .[ table4 ] ] .this result implies that , when we discuss whether there are highly designable structures or not , we need a long chain enough to make a core of enough size .this further implies that , in three - dimensional case , we will need a chain of longer length than that in two - dimensional case to make a core .let us see table .[ table3 ] , table .[ table4 ] and table .[ table5 ] . in table .[ table5 ] , we show the designability calculated on the triangular lattice for . on the square lattice for ,the biggest core consists of two monomers . both on the triangular lattice for and on the square lattice for , the biggest core consists of three monomers .we see that the triangular lattice is unfavorable for designability compared with square lattice , even when the biggest possible core size is same or a little larger. a possible reason would be the number of all possible structures , particularly the number of structures with the biggest core . as the length of a chain becomes long, the number of all possible structures increases almost exponentially as ( for the square lattice , and for the triangular lattice) . on the triangular lattice for ,the number of all possible structures is 6,279,601 and the number of structures with the biggest core is 4,110 out of them .on the other hand , on the square lattice for , the number of all possible structures is 2,034 , 5,513 and the number of structures with the biggest core is 23 , 5 , respectively .thus the number of all possible structures and the number of structures with the biggest core on the triangular lattice are much larger than those on the square lattice . in consequence , the degeneracy tends to grow , which is unfavorable for designability . 
in this view, designable structures on the triangular lattice would be more difficult to appear than on the square lattice .we have calculated the designability of the protein structure using the solvation model and the hp model , to deduce model - independent properties of designability .the solvation model introduced in this paper satisfies two conditions : ( 1)the hydrophobic force is dominant , ( 2 ) the model does not directly reflect the bipartiteness .we have examined what determines the designability from three points of view : effect of restricted search within maximally compact structures , the bipartite / non - bipartite effect , the length of the chain . in result, we have found that it is inadequate to calculate the designability within maximally compact structures .our results imply that the reason why no highly designable structures on the triangular lattice have been found is not the non - bipartiteness .we suppose that the main factor which affects the designability is the chain length , because for sufficiently large hydrophobic core to form , long enough chains are required .triangular lattice is more unfavorable for the designability than square lattice irrespective of models or energy parameters , probably because the number of all possible structures is large .however , if we can deal with longer chain than in the present study , it is possible that we find highly designable structures even on the triangular lattice .the calculations of the designability for longer chains on the triangular lattice are highly desirable .these conclusions would apply to a wide variety of 2-letter codes lattice models , where the hydrophobic force is dominant , regardless of energy parameters and further details of the model .though a concept of designability is currently defined for a 2-letter codes lattice model , our final goal is to examine whether natural proteins have highly designable structures .therefore it is an interesting problem to extend the study of the designability for a 20-letter codes model ( _ e.g. _ , mj model , kgs model ) and an off - lattice model .substituting 20-letter codes for 2-letter codes certainly reduces degeneracy , and most of all sequences come to have a structure as non - degenerate ground state ( _ i.e. _ , native structure ) .we would like to thank y. akutsu and m. kikuchi for useful discussions and careful reading of the manuscript .m. vendruscolo , b. subramanian , i. kanter , e. domany and j. lebowitz , pre * 59 * , 977 ( 1999 ) : in this paper , it is showed that the larger the number of all possible structures is , the larger the number of structures with the same number of contacts between monomers is .
|
we examined what determines the designability of 2-letter codes ( h and p ) lattice proteins from three points of view . first , whether the native structure is searched within all possible structures or only within maximally compact structures . second , whether the lattice used is bipartite or not . third , the effect of the length of the chain , namely , the number of monomers on the chain . we found that the bipartiteness of the lattice structure is not a main factor determining the designability . our results suggest that highly designable structures will be found when the chain is long enough to form a hydrophobic core consisting of a sufficient number of monomers .
|
over the last two decades , it has become apparent that mechanical forces play a central role for cellular decision - making , leading to the emerging field of mechanobiology . in order to understand how forces impact cellular processes ,it is essential to measure them with high spatiotemporal resolution and to correlate them either statistically or causally with the cellular process of interest . the most common approach is to measure forces at the cell - matrix interface .this field has grown rapidly over the last years and has become to be known as _ traction force microscopy _ ( tfm ) . using this approach ,it has been shown e.g. that cellular traction often correlates with the size of adhesion contacts but also that this correlation depends on the growth history of the adhesion contact under consideration . for most tissue cell types ,high extracellular stiffness correlates with large traction forces and large cell - matrix adhesion contacts .these large contacts are thought to not only ensure higher mechanical stability , but also to reflect increased signaling activity .this leads to a stiffness - sensitive response of cells , e.g. during cell spreading and migration or stem cell differentiation . while tfm has become a standard tool in many labs working on mechanobiology , in practise the details of its implementation vary significantly and the development of new approaches is moving forward at a very fast pace . from a general point of view , forces are not an experimentally directly accessible quantity but have to be infered from the fact that they create some kind of motion . despite the fact that this motion can follow different laws depending on the details of the system under consideration ( e.g. being elastic or viscous ) , a force measurement essentially requires to monitor some kind of dynamics .this is illustrated best with a linear elastic spring . hereforce is defined as , with spring constant and displacement . without a measurement of , no statement on be possible ( is a constant that can be obtained from a calibration experiment ) . in order to measure ,the reference state has to be known , and therefore one typically needs a relaxation process to determine the absolute value of .thus even seemingly static situations require some dynamical measurement .another instructive example is the stress acting over a fictitious surface inside a static but strained elastic body . in order to measure this stress directly , in principle onehas to cut the surface open and to introduce a strain gauge that measures forces by the movement of a calibrated spring .alternatively one needs to use a model that allows one to predict this stress from an elastic calculation . in summary, each direct measurement of cellular forces has to start with the identification of a suitable strain gauge .thus a helpful classification of the wide field of tfm can be introduced by considering the different ways in which a strain gauge can be incorporated in a cell culture setup ( ) . the most obvious way to dothis is to replace the glass or plastic dishes of cell culture by an elastic system that can deform under cell forces .early attempts to do so used thin elastic sheets , which buckle under cellular traction and thus provide an immediate visual readout ( a ) .however , due to this non - linear response ,it is difficult to evaluate these experiments quantitatively . 
therefore this assay was first improved by using thin silicone films under tension and then thick polyacrylamide ( paa ) films that do not buckle but deform smoothly under cell traction ( b ) .today the use of thick films made of different materials is a standard approach in many mechanobiology labs .fiducial markers can be embedded into these substrates and their movement can be recorded to extract a displacement field . solving the inverse problem of elasticity theory , cellular traction forcescan be calculated from these data .an interesting alternative to solving the inverse problem is the direct method that constructs the stress tensor by a direct mapping from a strain tensor calculated from the image data . herewe will review these methods that are based on the experimental setup shown in b. a simple alternative to tfm on soft elastic substrates is the use of pillar arrays , where forces are decoupled in an array of local strain gauges ( c ) .pillars can be microfabricated from many different materials , including elastomeres like polydimethylsiloxane ( pdms ) or solid material like silicium , as long as they have a sufficiently high aspect ratio to deform under cellular traction .one disadvantage of this approach is that cells are presented with topographical cues and that their adhesion sites grow on laterally restricted islands , making this system fundamentally different from unconstrained adhesion on flat substrates .moreover it has recently been pointed out that substrate warping might occur if the base is made from the same elastic material , thus care has to be taken to correctly calibrate these systems .a very promising alternative to macroscopically large elastic strain gauges is the use of molecular force sensors ( d ) .such a sensor typically consists of two molecular domains connected by a calibrated elastic linker . in the example for an extracellular sensor shown in d, the distal domain is bound to a gold dot on the substrate that quenches the cell - bound domain and fluorescence ensues as the linker is stretched by cellular forces . for intracellular sensors , one can use frster resonance energy transfer ( fret ) , which means that fluorescence decreases as the linker is stretched .fluorescent stress sensors give a direct readout of molecular forces , but for several reasons one has to be careful when interpreting these signals .first the effective spring constant of the elastic linker might depend on the local environment in the cell , even if calibrated in a single - molecule force spectroscopy experiment .second the fluorescent signal is a sensitive function of domain separation and relative orientation , thus a direct conversion into force can be problematic .third it is difficult to control the number of engaged sensors , thus the fluorescent signal can not easily be integrated over a larger region .fourth the molecular stress sensor reads out only part of the force at work in the cellular structure of interest ( e.g. 
the adhesion contact ) .therefore fluorescent stress sensors are expected to complement but not to replace traditional tfm in the future .one advantage of fluorescent stress sensors over soft elastic substrates and pillar assays is that they can be more easily adapted to force measurements in tissue , for example in developmental systems with fast and complicated cell rearrangements , although the same issues might apply as discussed above for single cells .recently , however , it has been shown that macroscopic oil droplets can be used to monitor forces during developmental processes . in principle, also subcellular structures such as focal adhesions , stress fibers , mitochondria or nuclei can be used as fiducial markers for cell and tissue deformations .one disadvantage of this approach however is that subcellular structures are usually highly dynamic and can exhibit their own modes of movement , thus not necessarily following the overall deformation of the cell .nevertheless conceptually and methodologically these approaches are similar to traditional tfm and also work in the tissue context .another important subfield of tfm is estimating internal forces from cell traction using the concept of force balance . this concept has been implemented both for forces between few cells and for forces within laterally extended cell monolayers . in the latter case ( _ monolayer stress microscopy _ ), one assumes that the cell monolayer behaves like a thin elastic film coupled to the underlying matrix by stresses ( alternatively one can assume coupling by strain ) . combined with a negative pressure that represents the effect of actomyosin contractility , the physics of thin elastic films is now increasingly used to describe forces of cell monolayers in general .recently single cell and monolayer approaches for internal force reconstruction have been combined by tracking each cell inside a monolayer . for single cells ,the combination of modeling and tfm has recently been advanced to estimate the tensions in the whole set of stress fibers within cells on pillar arrays and soft elastic substrates . for the latter casean actively contracting cable network constructed from image data has been employed to model contractility in the set of stress fibers within u2os cells . despite the many exciting developments in the large field of tfm , the most commonly used setup to measure cellular forcesis traction force microscopy on soft elastic substrates .here we review the underlying principles and recent advances with a special focus on computational aspects . for the following ,it is helpful to classify the different approaches in this field .because cells become rather flat in mature adhesion on elastic substrates , traditionally only tangential deformations have been considered ( 2d tfm ) .more recently , tracking of bead movements in all three dimensions has been used to reconstruct also z - direction forces ( 3d tfm ) .for both 2d and 3d tfm , one further has to differ between linear and non - linear procedures .the central quantity in this context is strain , which is defined as relative deformation and therefore dimensionless .if the substrate is sufficiently stiff or the cell sufficiently weak to result in strain values much smaller than unity ( ) , one is in the linear regime and can work with the small strain approximation .the standard approach to estimating forces is the solution of the inverse problem of linear elasticity theory ( inverse tfm ) . 
in the linear case, one can use the green s function formalism that leads to very fast and efficient algorithms for force reconstruction using inverse procedures , both for the standard case of thick substrates and for substrates of finite thickness ( gf - tfm ) .if in contrast one is in the regime of large strain ( ) , green s functions can not be used .one way to deal with this problem is the use of the finite element method ( fem - tfm ) in a non - linear formulation .moreover fem - tfm can also be used in a linear formulation . for both small and large strain ,alternatively the stress can in principle be constructed directly from the displacement ( direct tfm ) . in all approachesused , an important issue is the role of noise on the force reconstruction .most tfm - approaches use an inversion of the elastic problem to calculate forces from displacement .however , this is an ill - posed problem in the sense that due to the long - ranged nature of elasticity , the calculated traction patterns are highly sensitive to small variations in the measured displacement field and thus solutions might be ambiguous in the presence of noise . also non - conforming discretization ofthe problem can cause ill - posedness , e.g. if the mesh is chosen too fine compared to the mean distance of measured bead displacements . in order to avoid ambiguous solutions ,a regularization procedure has to be employed in one way or the other , e.g. by filtering the image data or by adding additional constraints to the force estimation . in model - based tfm ( mb - tfm ) , this problem can be avoided if the model is sufficiently limiting . in direct tfm, care has to be taken how to calculate the derivatives from noisy data .traction reconstruction with point forces ( trpf ) can be considered to be a variant of mb - tfm , but requires regularization if one uses many point forces ..abbreviations for different variants of traction force microscopy ( tfm ) on flat elastic substrates as discussed in this review .the corresponding references are given in the main text .regularization is required to deal with experimental noise .the first five entries in the list exist in both linear and non - linear versions .the other entries are typically used with a linear substrate model . [cols="<,<,<",options="header " , ] [ tab_software ] over the last years , different software packages have been developed for the image processing and force reconstruction tasks . in tab .ii we list some of them for the convenience of the tfm - user .the imagej plugin for fttc is a ready - to - use solution to implement standard tfm .the finite thickness software is required when cells are very strong or the substrate is very thin .the regularization tools by hansen can be used for regularized optimization strategies in inverse tfm based on optimization , compare .helpful image processing tools are available also from other fields like environmental physics , fluids dynamics , microrheology , single molecule imaging or super resolution microscopy . a standard choice for correlative tracking is the matlab piv toolbox .alternatively one can use single particle tracking routines . 
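to illustrate how compact such a reconstruction can be , the following is a minimal python / numpy sketch of regularized fttc for the standard case of a thick , linearly elastic substrate ( boussinesq half - space , small strains ) . it is meant as an illustration of the principle rather than a replacement for the packages listed above ; the function name , its arguments and the simple per - mode zeroth - order tikhonov inversion are our own choices , and the regularization parameter still has to be chosen by the user .

```python
import numpy as np

def reg_fttc(ux, uy, pixel_size, E, nu, lam):
    """Minimal regularized Fourier-transform traction cytometry (sketch).

    ux, uy : 2-D arrays with the tangential substrate displacements on a
             regular grid (same length units as pixel_size).
    pixel_size : grid spacing.
    E, nu : Young's modulus and Poisson ratio of the substrate.
    lam : zeroth-order Tikhonov regularization parameter.

    Assumes a linearly elastic substrate of infinite thickness (Boussinesq
    half-space) and small strains; returns tangential tractions tx, ty.
    """
    ny, nx = ux.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    KX, KY = np.meshgrid(kx, ky)
    K = np.sqrt(KX**2 + KY**2)
    K[0, 0] = 1.0                      # avoid division by zero; mode fixed below

    # Fourier-space Green's function for tangential loads on a half-space
    pref = 2.0 * (1.0 + nu) / (E * K**3)
    Gxx = pref * ((1.0 - nu) * K**2 + nu * KY**2)
    Gyy = pref * ((1.0 - nu) * K**2 + nu * KX**2)
    Gxy = -pref * nu * KX * KY

    Ux, Uy = np.fft.fft2(ux), np.fft.fft2(uy)

    # per-mode Tikhonov inversion of the 2x2 system: T = (G^2 + lam^2 I)^-1 G U
    a = Gxx**2 + Gxy**2 + lam**2
    b = Gxy * (Gxx + Gyy)
    c = Gyy**2 + Gxy**2 + lam**2
    det = a * c - b**2
    rhs_x = Gxx * Ux + Gxy * Uy
    rhs_y = Gxy * Ux + Gyy * Uy
    Tx = (c * rhs_x - b * rhs_y) / det
    Ty = (-b * rhs_x + a * rhs_y) / det

    Tx[0, 0] = 0.0                     # zero-mean traction (global force balance)
    Ty[0, 0] = 0.0
    return np.real(np.fft.ifft2(Tx)), np.real(np.fft.ifft2(Ty))
```

in practice the displacement field would first be extracted from the bead images , e.g. with the correlation - based tracking tools mentioned above , and the choice of the regularization parameter controls the trade - off between noise amplification and over - smoothing of the traction field .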
for large deformation 3d tfm, a specialized code has been developed very recently .here we have reviewed recent progress in tfm on soft elastic substrates from the computational perspective .our overview shows that this field is moving at a very fast pace and that many different variants of this approach have been developed over the last two decades , each with its respective advantages and disadvantages .thus the situation is similar to the one for optical microscopy , a field in which also a lot of progress has been made over the last decades at many different fronts simultaneously , ranging from super resolution microscopy ( sted , palm , storm , sim ) through correlative and fluctuation microscopy to light sheet microscopy ( spim ) . like in this field , too , for tfm the choice of method depends on the experimental question one is addressing and in many case a combination of different approaches will work best .our review shows that force reconstruction can not be separated from data analysis and image processing , in particular due to the noise issue .irrespective of the approach used , care has to be taken to deal with the experimental noise that is always present in the displacement data .each of the methods discussed above includes some kind of regularization , either implicitly through image filtering or explicitly through some regularization scheme . for 2d tfm , the standard approach is fttc and this has become the common procedure in many labs working in mechanobiology due to its short computing times .reg - fttc makes computation time only slightly longer but introduces a more rigorous treatment of the noise issue .fttc can be extended easily to 3d tfm , but only if the image data is of very good quality .the simplest version of 3d tfm implies tracking of marker beads in z - direction on planar substrates .as such experiments usually require relatively soft substrates , one typically leaves the linear domain and large deformation methods have to be used for force reconstruction .similar techniques can then be used also in full 3d tfm , e.g. when cells are encapsulated in hydrogels .here however care has to be taken that the cellular environment is indeed elastic ; otherwise a viscoelastic or even plastic theory has to be employed .the more complex the questions and experiments become that are conducted in mechanobiology , the more difficult it will get to extract meaningful correlations and cause - effects relations .we therefore envision that in the future , such experiments will be increasingly combined with mathematical models that allow us to extract useful information from microscopy data in a quantitative manner .simple examples discussed above are trpf and mb - tfm , which use the assumptions of localization of force transmission to the adhesion contacts ( trpf ) and force generation in the actin cytoskeleton ( mb - tfm ) to improve the quality of the data that one can extract from tfm - experiments . 
in a similar vein, we expect that in the future , more and more data will be extracted from microscopy images based on some bayesian assumptions that have been validated before by other experimental results .another very exciting development is the combination of tfm with fluorescent stress sensors , which complement it with molecular information and which can be more easily used in a tissue context .the authors acknowledge support by the bmbf - programm mechanosys and by the heidelberg cluster of excellence cellnetworks through its program for emerging collaborative topics .we thank christoph brand for critical reading and help with for mb - tfm .we thank nils hersch , georg dreissen , bernd hoffmann and rudolf merkel for the data used in and jonathan stricker , patrick oakes and margaret gardel for the data used in .we apologize to all authors whose work we could not cite for space reasons .nathalie q. balaban , ulrich s. schwarz , daniel riveline , polina goichberg , gila tzur , ilana sabanay , diana mahalu , sam safran , alexander bershadsky , lia addadi , and benjamin geiger .force and focal adhesion assembly : a close relationship studied using elastic micropatterned substrates . 3 ( 2001 ) 466472 .jerome m. goffin , philippe pittet , gabor csucs , jost w. lussi , jean - jacques meister , and boris hinz .focal adhesion size controls tension - dependent recruitment of alpha - smooth muscle actin to stress fibers .172 ( 2006 ) 259268 .masha prager - khoutorsky , alexandra lichtenstein , ramaswamy krishnan , kavitha rajendran , avi mayo , zvi kam , benjamin geiger , and alexander d. bershadsky .fibroblast polarization is a matrix - rigidity - dependent process controlled by focal adhesion mechanosensing . 13( 2011 ) 14571465 .la trichet , jimmy le digabel , rhoda j hawkins , sri ram krishna vedula , mukund gupta , claire ribrault , pascal hersen , raphal voituriez , and benot ladoux .evidence of a large - scale mechanosensing mechanism for cellular adaptation to substrate stiffness . 109 ( 2012 ) 6933 - 6938 .britta trappmann , julien e. gautrot , john t. connelly , daniel g. t. strange , yuan li , michelle l. oyen , martien a. cohen stuart , heike boehm , bojun li , viola vogel , joachim p. spatz , fiona m. watt , and wilhelm t. s. huck .extracellular - matrix tethering regulates stem - cell fate .11 ( 2012 ) 642649 .jessica h. wen , ludovic g. vincent , alexander fuhrmann , yu suk choi , kolin c. hribar , hermes taylor - weiner , shaochen chen , and adam j. engler .interplay of matrix stiffness and protein tethering in stem cell differentiation .13 ( 2014 ) 979987 .schwarz , n.q .balaban , d. riveline , a. bershadsky , b. geiger , and s.a .calculation of forces at focal adhesions from elastic substrate data : the effect of localized force and the need for regularization .83 ( 2002 ) 13801394 .sergey v. plotnikov , benedikt sabass , ulrich s. schwarz , and clare m. waterman .high - resolution traction force microscopy . in jennifer c. waters and torsten wittman , editor , _ methods in cell biology _ , volume 123 ( 2014 ) 367394 .c. franck , s. hong , s. a. maskarinec , d. a. tirrell , and g. ravichandran .three - dimensional full - field measurements of large deformations in soft materials using confocal microscopy and digital volume correlation .47 ( 2007 ) 427438 .saba ghassemi , giovanni meacci , shuaimin liu , alexander a. gondarenko , anurag mathur , pere roca - cusachs , michael p. 
sheetz , and james hone .cells test substrate rigidity by local contractions on submicrometer pillars . 109 ( 2012 )53285333 .carsten grashoff , brenton d. hoffman , michael d. brenner , ruobo zhou , maddy parsons , michael t. yang , mark a. mclean , stephen g. sligar , christopher s. chen , taekjip ha , and martin a. schwartz .measuring mechanical tension across vinculin reveals regulation of focal adhesion dynamics . 466 ( 2010 )263266 . brandonl. blakely , christoph e. dumelin , britta trappmann , lynn m. mcgregor , colin k. choi , peter c. anthony , van k. duesterberg , brendon m. baker , steven m. block , david r. liu , and christopher s. chen . a dna - based molecular probe for optically reporting cellular traction forces .11 ( 2014 ) 12291232 .yang liu , rebecca medda , zheng liu , kornelia galior , kevin yehl , joachim p. spatz , elisabetta ada cavalcanti - adam , and khalid salaita .nanoparticle tension probes patterned at the nanoscale : impact of integrin clustering on force transmission .14 ( 2014 ) 55395546. anna - lena cost , pia ringer , anna chrostek - grashoff , and carsten grashoff . how to measure molecular forces in cells : a guide to evaluating genetically - encoded fret - based tension sensors .8 ( 2014 ) 96 - 105 .otger campas , tadanori mammoto , sean hasso , ralph a. sperling , daniel oconnell , ashley g. bischof , richard maas , david a. weitz , l. mahadevan , and donald e. ingber .quantifying cell - generated mechanical forces within living embryonic tissues .11 ( 2014 ) 183189 .ning wang , keiji naruse , dimitrije stamenovi , jeffrey j. fredberg , srboljub m. mijailovich , iva marija toli - nrrelykke , thomas polte , robert mannix , and donald e. ingber .mechanical behavior in living cells consistent with the tensegrity model .98 ( 2001 ) 77657770 .shaohua hu , jianxin chen , ben fabry , yasushi numaguchi , andrew gouldstone , donald e. ingber , jeffrey j. fredberg , james p. butler , and ning wang .intracellular stress tomography reveals stress focusing and structural anisotropy in cytoskeleton of living cells .285 ( 2003 ) c1082c1090 .zhijun liu , john l. tan , daniel m. cohen , michael t. yang , nathan j. sniadecki , sami alom ruiz , celeste m. nelson , and christopher s. chen .mechanical tugging force regulates the size of cell cell junctions .107 ( 2010 ) 99449949 .dhananjay t. tambe , c. corey hardin , thomas e. angelini , kavitha rajendran , chan young park , xavier serra - picamal , enhua h. zhou , muhammad h. zaman , james p. butler , david a. weitz , jeffrey j. fredberg , and xavier trepat . collective cell guidance by cooperative intercellular forces .10 ( 2011 ) 469475 .dhananjay t. tambe , ugo croutelle , xavier trepat , chan young park , jae hun kim , emil millet , james p. butler , and jeffrey j. fredberg .monolayer stress microscopy : limitations , artifacts , and accuracy of recovered intercellular stresses . 8( 2013 ) e55172 .michel moussus , christelle der loughian , david fuard , marie couron , danielle gulino - debrac , hlne delano - ayari , and alice nicolas .intracellular stresses in patterned cell assemblies .10 ( 2014 ) 24142423 .aaron f. mertz , shiladitya banerjee , yonglu che , guy k. german , ye xu , callen hyland , m. cristina marchetti , valerie horsley , and eric r. dufresne .scaling of traction forces with the size of cohesive cell colonies .108 ( 2012 ) 198101 .aaron f. mertz , yonglu che , shiladitya banerjee , jill m. goldstein , kathryn a. rosowski , stephen f. revilla , carien m. niessen , m. cristina marchetti , eric r. 
dufresne , and valerie horsley .cadherin - based intercellular adhesions organize epithelial cell matrix traction forces .110 ( 2013 ) 842847 .sebastian rausch , tamal das , jerome soine , tobias hofmann , christian boehm , ulrich schwarz , heike boehm , and joachim spatz . polarizing cytoskeletal tension to induce leader cell formation during collective cell migration .8 ( 2013 ) 32 .amit pathak , christopher s. chen , anthony g. evans , and robert m. mcmeeking .structural mechanics based model for the force - bearing elements within the cytoskeleton of a cell adhered on a bed of posts . 79 ( 2012 ) 061020061020 .jerome r. d. soine , christoph a. brand , jonathan stricker , patrick w. oakes , margaret l. gardel , and ulrich s. schwarz .model - based traction force microscopy reveals differential tension in cellular actin bundles .11 ( 2015 ) e1004076 .claudia m. cesa , norbert kirchgessner , dirk mayer , ulrich s. schwarz , bernd hoffmann , and rudolf merkel .micropatterned silicone elastomer substrates for high resolution analysis of cellular force patterns .78 ( 2007 ) 034301 .robert w. style , rostislav boltyanskiy , guy k. german , callen hyland , christopher w. macminn , aaron f. mertz , larry a. wilen , ye xu , and eric r. dufresne . traction force microscopy in physics and biology .10 ( 2014 ) 40474055 .wesley r. legant , colin k. choi , jordan s. miller , lin shao , liang gao , eric betzig , and christopher s. chen .multidimensional traction force microscopy reveals out - of - plane rotational moments about focal adhesions .110 ( 2013 ) 881886 .juan c. del alamo , ruedi meili , baldomero alonso - latorre , javier rodriguez - rodrguez , alberto aliseda , richard a. firtel , and juan c. lasheras .spatio - temporal analysis of eukaryotic cell motility by improved force cytometry . 104( 2007 ) 1334313348 .juan c. del alamo , ruedi meili , begoa alvarez - gonzalez , baldomero alonso - latorre , effie bastounis , richard firtel , and juan c. lasheras .three - dimensional quantification of cellular traction forces and mechanosensing of thin substrata by fourier traction force microscopy . 8( 2013 ) e69850 .wesley r legant , jordan s miller , brandon l blakely , daniel m cohen , guy m genin , and christopher s chen .measurement of mechanical tractions exerted by cells in three - dimensional matrices . 7 ( 2010 ) 969971 .casey e. kandow , penelope c. georges , paul a. janmey , and karen a. beningo .polyacrylamide hydrogels for cell mechanics : steps toward optimization and alternative uses . in yu - li wang and dennise. discher , editor , _ methods in cell biology _ , volume 83 , pages 2946 . academic press , 2007 .jean - louis martiel , aldo leal , laetitia kurzawa , martial balland , irene wang , timothe vignaud , qingzong tseng , and manuel thry .measurement of cell traction forces with imagej . in ewak. paluch , editor , _ methods in cell biology _ , volume 125 , pages 269287 . academic press , 2015 .ryan j. bloom , jerry p. george , alfredo celedon , sean x. sun , and denis wirtz .mapping local matrix remodeling induced by a migrating tumor cell using three - dimensional multiple - particle tracking .95 ( 2008 ) 40774088 .florian rehfeldt , andr e. x. brown , matthew raab , shenshen cai , allison l. zajac , assaf zemel , and dennis e. discher .hyaluronic acid matrices show matrix stiffness in 2d and 3d dictates cytoskeletal order and myosin - ii phosphorylation within stem cells . 4 ( 2012 ) 422430 .sung sik hur , juan c. del lamo , joon seok park , yi - shuan li , hong a. 
nguyen , dayu teng , kuei - chun wang , leona flores , baldomero alonso - latorre , juan c. lasheras , and shu chien .roles of cell confluency and fluid shear in 3-dimensional intracellular forces in endothelial cells . 109 ( 2012 )1111011115 .william ronan , vikram s. deshpande , robert m. mcmeeking , and j. patrick mcgarry .cellular contractility and substrate elasticity : a numerical investigation of the actin cytoskeleton and cell adhesion .13 ( 2013 ) 417435 .pouria moshayedi , luciano da f. costa , andreas christ , stephanie p. lacour , james fawcett , jochen guck , and kristian franze .mechanosensitivity of astrocytes on optimized polyacrylamide gels analyzed by quantitative morphometry . 22( 2010 ) 194114 .edgar gutierrez , eugene tkachenko , achim besser , prithu sundd , klaus ley , gaudenz danuser , mark h. ginsberg , and alex groisman .high refractive index silicone gels for simultaneous total internal reflection fluorescence and traction force microscopy of adherent cells . 6 ( 2011 ) .kevin kit parker , amy lepre brock , cliff brangwynne , robert j. mannix , ning wang , emanuele ostuni , nicholas a. geisse , josehphine c. adams , george m. whitesides , and donald e. ingber . directional control of lamellipodia extension by constraining cell shape and orienting cell tractional forces .16 ( 2002 ) 1195 1204 .v damljanovic , bc lagerholm , and k jacobson .bulk and micropatterned conjugation of extracellular matrix proteins to characterized polyacrylamide substrates for cell mechanotransduction assays . 39 ( 2005 )847851 .qingzong tseng , eve duchemin - pelletier , alexandre deshiere , martial balland , herv guillou , odile filhol , and manuel thry. spatial organization of the extracellular matrix regulates cell cell junction positioning . 109 ( 2012 ) 15061511 .nico hampe , thorsten jonas , benjamin wolters , nils hersch , bernd hoffmann , and rudolf merkel .defined 2-d microtissues on soft elastomeric silicone rubber using lift - off epoxy - membranes for biomechanical analyses . 10 ( 2014 ) 24312443 .franziska klein , thomas striebel , joachim fischer , zhongxiang jiang , clemens m franz , georg von freymann , martin wegener , and martin bastmeyer .elastic fully three - dimensional microstructure scaffolds for cell force measurements . 22 ( 2010 )868871 .franziska klein , benjamin richter , thomas striebel , clemens m franz , georg von freymann , martin wegener , and martin bastmeyer .two - component polymer scaffolds for controlled three - dimensional cell culture . 23 ( 2011 ) .
|
the measurement of cellular traction forces on soft elastic substrates has become a standard tool for many labs working on mechanobiology . here we review the basic principles and different variants of this approach . in general , the extraction of the substrate displacement field from image data and the reconstruction procedure for the forces are closely linked to each other and limited by the presence of experimental noise . we discuss different strategies to reconstruct cellular forces as they follow from the foundations of elasticity theory , including two- versus three - dimensional , inverse versus direct and linear versus non - linear approaches . we also discuss how biophysical models can improve force reconstruction and comment on practical issues like substrate preparation , image processing and the availability of software for traction force microscopy .
|
recent astronomical observations of distant supernovas snia type strongly indicate that the current universe is undergoing an accelerated phase of expansion .if the universe evolution is described by homogeneous and isotropic models filled with a perfect fluid then the acceleration should be driven by a perfect fluid violating the strong energy condition .if different candidates for a fluid termed dark energy are suggested , the simple candidates for the dark energy in the form of positive cosmological constant seems to be the best one .while the lambda cdm model is a good phenomenological description of the acceleration phase of the expansion of the universe there is serious problem with the interpretation of the lambda term as a quantum vacuum energy because of the fine tuning problem .our studies show that when the lcdm has the status of an effective theory which offers description of the observational facts rather than their explanation this theory introduces principally the new theoretical element which plays the role of an effective parameter changing dramatically the dynamics . the theory which is called an effective theory ( although it is not yet a technical notion ) is characterized by a few important features , as follow : * an effective theory _ works _ in a certain field of physics .in most cases this scope of application is described in terms of energy or distance scale .the theory which is effective in a specific physical regime describes behavior of elaborated objects but often does not explain the nature of them .for example the standard model is the effective theory of gluons and quarks in the distant scale of m. intuitively that feature of effective theories was described by h. georgi : + _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ( ... ) we can divide up parameter space of the world into different regions , in each of which there is a different appropriate description of the important physics .such an appropriate description of the important physics is an effective theory .the two key words here are appropriate and important .the word important is key because the physical processes that are relevant differ from one place in parameter space to another .the word appropriate is key because there is no single description of physics that is useful everywhere in parameter space . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * every effective theory uses parameters which can be called informational input being assigned to the theory without explanations ; i.e. we do not have to understand the nature of these _ input parameters _ as to successfully operate the theory .it is important to distinguish _ input parameters _ from any kind of information being used by the theory or model ( initial conditions for example ) .specific parameters do have the status of _ input parameters _ only in the frame of the effective theory .their values can be determined experimentally but only more fundamental theory in fact provides an explanation for them being like that . * therefore , we can say for example that the nucleus spin , the elementary charge or the magnetic property are _ input parameters _ for the effective theory which uses them successfully but without understanding nature of them .the effective theories can be put into specific series with respect to _ the input parameters_. this is called by some philosophers of science a tower of effective theories .* we have written that an effective theory could be used in a certain area of physics .indeed this kind of theory _ works _ successfully ( effectively ) on that level but it breaks down when we exceed the limits of its application .the very important feature of limited applicability criterium is an easily separation of these regimes . *the effective theories coexist with each other .let s consider two cases .we presented in table i a few examples of coexisting ( but independent ) different dark energy models . in this caseone can speak of sets of models .if two or more theories or models are used to solve current physical problem ( for example the problem of accelerating universe ) we can evaluate their _ effectiveness _ by pointing out what type of knowledge they provide .it is also possible to explain relations between effective theories in terms of structures .an effective theory is not a fundamental theory .we called that kind of structure of effective theories the series or tower of theories .it is possible because the theories of that kind not only coexist with each other ( but not in the same regime ) but also under condition of each other .the theory on the lower level determines some parameters for the theory of upper one .that kind of relation can be elaborated by using the notions of emergence or supervenience . 
if we study that series of the effective theories we can meet the problem of a fundamental theory , a base theory .this final theory actually_ ends _ that series in the terms of a methodological reconstruction , but at the same time _ begins _ the series in the framework of emergence .one can speculate about the possibility of existence of the fundamental theory , but there are also opinions that searching for the series of effective theories leads to theoretical view described by expression : _ never - ending tower of effective theories _ .we study methodological status of the concordance cosmological lcdm model from the view point of the debate on reductionism and emergence in science .our main result is that structural stability notion taken from the dynamical system theory may be useful in our understanding of the emergence cdm to lcdm model as well as in understanding of the reduction lcdm to cdm one .we argue that the concepts of structural stability might be a suitable setup for the philosophy of cosmology discussion .the lcdm model should be treated in our opinion as an effective theory for the following reasons : 1 .theory of gravity which describes the gravitational sector of cosmology is very complicated but if we postulate some simplified assumption like symmetry assumption idealization then we obtain simplest model which can be representing in the form of the dynamical system . in the cosmology assumption of homogeneity and isotropy of space like sections of constant cosmic time ( ) seems to be justified by the distribution of large scale structure of astronomical objects ( cosmological principle ) .if we assume that the topology of spacetime is , where is a maximally symmetric 3-dimensional space then we obtain a geometrical structure of the spacetime modulo a single function of called the scale factor . in orderif we postulate that source of gravity is in the form of perfect fluid with energy density and pressure then einstein field equation reduces to the ordinary system of differential equation determining a single function .these equations called friedmann equations frw model describe the evolution of the universe at the large scale .of course this model is very simplified but it can be very useful instrument of construction of a new effective theory of the universe ( heuristic function of the model ) .recently many authors ( see ) argue that observational cosmology will change significantly the essence of our world - view .basing on this simple toy model one can effectively derive observables ( for example cosmological tests ) which can be used in testing theory itself ( function of testing the theory ) . note that this is impossible basing on general relativity without the symmetry assumption where there is universal time conception .the general acceptance of the lcdm model as the working is good strategy but one may also seek alternative physics ( pragmatism ) .after many years of hypotheses and markets of models we have standard cosmological models which is leading as to joint the physical model of the world .einstein field equations constitute in general very complicated system of partial nonlinear differential equations but in the cosmology important role plays its solutions with some symmetry assumptions postulated at the very beginning .usually an isotropy , homogeneity itself and its symmetry are assumed . in this caseeinstein field equations can be reduced to the system of ordinary differential equations , i.e. dynamical systems . 
hence to the cosmologycould be applied the dynamical system methods in natural way .the application of these methods allows to reveal some stability properties of particular visualized in geometrical way as the trajectories in the phase space .hence one can see how large is the class of solutions leading to the desired property in tools of the attractors and the inset of limit set ( an attractor is a limit set with an open inset all the initial conditions that end up in the some equilibrium state ) .the attractors are the most prominent experimentally .it is because the probability of an initial state of the experiment to evolve asymptotically to the limit set it is proportional to the volume of inset .the idea , now called structural stability emerged early in the history of dynamics investigation in 1930 s the writings of andronov , leontovich and pontryagin in russia ( 1934 ) ( the authors do not use the name _ structural stability _ but rather the name roughly systems ) . this idea is based on an observation an actual state of the system can never be specified exactly and application of the dynamical systems might be useful anyway if it can describe the features of the phase portrait that persist when the state of the system is allowed to move around ( see for more comments ) . among all dynamicists there is shared prejudices that : 1 .there is a class of phase portraits that are far simpler than arbitrary ones which can explain why a considerable portion of the mathematical physics has been dominated by the search for the generic properties .the exceptional case should not arise very often in application and they de facto interrupt discussion ( classification ) .2 . the physically realistic models of the world should possess some kind of the structural stability because having so many dramatically different models all agreeing with observation would be fatal for the empirical method of science ( see also .these prejudices in the holton terminology can be treated as a thematic principles . in the cosmology a property ( for example acceleration ) is believed to be physically realistic if it can be attributed by the generic subsets of the models within a space of all admissible solutions or if it possesses a certain stability , i.e. if it is shared by a epsilon perturbed model .for example g. f. r. ellis formulated so called a probability principle the universe model should be one that is a probable model within the set of all universe models and a stability assumption which states that the universe should be stable to the perturbations .the problems are how to define : 1 . the space of state and its equivalence , 2 .the perturbation of the system .the dynamical system is called structurally stable if all -perturbation of it ( sufficiently small ) have the epsilon equivalent phase portrait . therefore for the conception of structural stability we considered a -perturbation of vector field determined by the right - hand sides of the system which is small ( measured by delta ) .we also need a conception of the the epsilon equivalence .this has the form of topological equivalence a homeomorphism of the state space preserving the arrow of time on each trajectory . 
in the definition of structural stabilityconsiders only the deformation of rubber sheet type stretches or slides the phase space a small amount measured by epsilon .there are developed other concepts of stability used by some authors .for example concepts of rigidity and fragility is used in the sense that the attractor solutions never change as long as some conditions are met . in the structural stability conceptionthe global dynamics is important rather than the fragility of solutions against changes in the shape of a functional form of the hubble function .it is also used the concept of rigidity in the context of a final theory of physics ( toe ) . roughly speaking a mathematical structureis said to be rigid , with respect to a certain deformation parameter , if its all deformation with respect to this parameter yields again the same structure ( see also ) .it is interesting that while the deformation parameter is not defined uniquely , the deformation procedure can be strictly defined .the main advantage of the structural stability is that it is the characterization of global dynamics itself .recently , properties of structural stability of cosmological models were investigated by s. kokarev ( see also and ) . in the introduction to the paper authorclaims that the history of cosmology shows that corrections of cosmological models are realized mainly by the sequence of their , in some sense small , modifications and some of them may survive after small changes , while the other may disappear . in the former case the propertyis referred to as rough or structurally stable , in the later one thin or structurally unstable .the author studies how some model properties , like singularities for example , will be present in the model if we perturb the model ( for example generalize the lagrangian of general relativity ) . in our approachthe property of structural stability is the property of the model itself .also the type of perturbations is not specified ( epsilon perturbation idea ) .therefore , if we prove the structural instability of cdm model , the result will not depend on the choice of the type of perturbation .then the property of structural stability becomes its constitutive property at the very beginning without restriction to the class of perturbation induced by considering new theories with generalized lagrangian .[ fig:1 ] illustrates the property of structural stability of single spiral attractor ( focus ) and saddle point and structural instability of center .the addition of a delta perturbation pointing outward ( no matter how weak ) results in a point repeller .we call such a system structurally unstable because the phase portrait of the center and focus are not topologically equivalent ( notice that all phase curves around the center are closed in contrast to the focus .hence one can claim that a pendulum system ( without friction ) is structurally unstable .idea of structural stability attempts to define the notion of stability of differential deterministic models of the physical processes . 
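a minimal numerical check of the perturbation argument above can make the center/focus distinction concrete. the sketch below compares the harmonic oscillator (a center) with the same vector field plus a weak perturbation pointing outward along the radial direction; the particular system and the value delta = 0.05 are illustrative assumptions, not taken from the text. the eigenvalues of the linearization move off the imaginary axis, and the closed orbit turns into an outward spiral, so the two phase portraits are not topologically equivalent.

```python
import numpy as np
from scipy.integrate import solve_ivp

def center(t, u):
    # Harmonic oscillator x' = y, y' = -x: eigenvalues +/- i, a center.
    x, y = u
    return [y, -x]

def perturbed(t, u, delta=0.05):
    # Same field plus a weak perturbation pointing outward: delta*(x, y).
    x, y = u
    return [y + delta * x, -x + delta * y]

# Eigenvalues of the linearizations at the origin.
J0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Jd = np.array([[0.05, 1.0], [-1.0, 0.05]])
print("center eigenvalues   :", np.linalg.eigvals(J0))   # purely imaginary
print("perturbed eigenvalues:", np.linalg.eigvals(Jd))   # positive real part -> repelling focus

# Trajectories: the unperturbed orbit stays on a closed curve, the perturbed one spirals outward.
t_span, u0 = (0.0, 40.0), [1.0, 0.0]
for rhs, label in [(center, "center"), (perturbed, "perturbed")]:
    sol = solve_ivp(rhs, t_span, u0, rtol=1e-9, atol=1e-12)
    r = np.hypot(sol.y[0], sol.y[1])
    print(f"{label:9s}: |u(0)| = {r[0]:.3f}, |u(T)| = {r[-1]:.3f}")
```

the frictionless pendulum behaves exactly like the first case, which is why it is structurally unstable in the sense used here.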
in the case of planar dynamical systems( as in the case of models under consideration ) there is true peixoto theorem ( peixoto 1982 ) which states that structurally stable dynamical systems form open and dense subsets in the space of all dynamical systems defined on the compact manifold .this theorem is basic characterization of structurally stable dynamical systems in the plane which offers the possibility of exact definition generic ( typical ) and nongeneric ( exceptional ) cases ( properties ) in tools of the notion of structural stability .unfortunately there is no counterparts of this theorem in more dimensional case when structurally unstable systems can also form open and dense subsets .for our aims , it is important that peixoto theorem can give the characterization of generic cosmological models in terms of potential function of the scale factor which determine the motion of the system of newtonian type : . therefore we can treat frw equation with various forms of dark energy as the two - dynamical systems which looks like newtonian type where the role of coordinate variable is played by the cosmological radius ( or redshift : ) . we can construct an effective potential , the second order acceleration equation has exactly the newtonian form , where the role of a coordinate variable is played by the cosmological radius . using the term of the structural stability introduced first by andronov , leontovich and pontryagin in thirties , one can classify different models of cosmic acceleration .it will be demonstrated that models with the accelerating phase which follows the deceleration are natural and typical from the point of view of the dynamical systems theory combined with the notion of structural stability in contrast to the models with bounces . in fig .[ fig:2 ] there are illustrated two cases : a ) inverted single - well potential and b ) more complicated form of the potential with two maxima corresponding to the saddle point and minimum corresponding to the center ( structurally unstable ) .let us introduce the following definition : if the set of all vector fields having a certain property contains an open dense subset of , then the property is called generic . from the physical point of viewit is interesting to know whether certain subset of ( representing a class of cosmological accelerating models in our case ) contains a dense subset because it means that this property ( acceleration ) is typical in ( see fig.1 ) .it is not difficult to establish some simple relation between the geometry of potential function and the localization of critical points and its character for the case of dynamical systems of newtonian type : 1 .the critical point of the system under consideration , lies always on -axis , i.e. they are representing static universe , ; 2 .the point is a critical point of the newtonian system if it is a critical point of the potential function , i.e. ( is total energy of the system ; for the case flat models and in general ) ; 3 .if is a strict local maximum of , it is a saddle type critical point ; 4 .if is a strict local minimum of the analytic function , it is a center ; 5 .if is a horizontal inflection point of the , it is a cusp .therefore the geometry of potential function will determine the critical points as well as its stability .the integral of energy defines the algebraic curves in the phase space which are representing the evolution of the system with time . 
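the relation between the geometry of the potential and the critical points listed above can be implemented directly for a newtonian-type system x'' = -dV/dx. the sketch below locates the zeros of V' on a grid and classifies each one by the sign of V''; the toy potential with two maxima is an assumed example, chosen only because it produces both saddles and a center, and the numerical step sizes are arbitrary.

```python
import numpy as np

def classify_critical_points(V, x_grid, h=1e-4, tol=1e-4):
    """Locate x* with V'(x*) = 0 for the system x' = y, y' = -V'(x),
    and classify each point by the sign of V''(x*):
      V'' < 0 (strict local max of V)  -> saddle,
      V'' > 0 (strict local min of V)  -> center,
      V'' = 0 (horizontal inflection)  -> degenerate (cusp)."""
    dV  = lambda x: (V(x + h) - V(x - h)) / (2 * h)
    d2V = lambda x: (V(x + h) - 2 * V(x) + V(x - h)) / h**2
    found = []
    for a, b in zip(x_grid[:-1], x_grid[1:]):
        if dV(a) * dV(b) < 0:                      # sign change -> bracket a root of V'
            lo, hi = a, b
            for _ in range(60):                    # bisection refinement
                mid = 0.5 * (lo + hi)
                if dV(lo) * dV(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            xs = 0.5 * (lo + hi)
            curv = d2V(xs)
            kind = "saddle" if curv < -tol else ("center" if curv > tol else "cusp/degenerate")
            found.append((xs, kind))
    return found

# Assumed toy potential with two maxima and one minimum (the shape discussed
# later for structurally unstable models):
V = lambda x: -0.25 * x**4 + 0.5 * x**2
for xs, kind in classify_critical_points(V, np.linspace(-2.0, 2.0, 401)):
    print(f"x* = {xs:+.4f}  ->  {kind}")
```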
in any casethe eigenvalues of the linearization matrix satisfy the characteristic equation .the cosmology is based on the einstein field equation which represents a very complicated system of partial nonlinear differential equation .fortunately , the majority of main class of cosmological models from the point of view of observational data , belong to the class of the spatially homogeneous ones , for which has sense the absolute cosmological time . as a consequence , the evolution of such models can be reduced to the systems of ordinary differential equations . hence to the cosmology could be naturally applied the methods of dynamical system theory or qualitative theory of differential equation . among these class of modelsespecially interesting are the cosmological models with maximally symmetric space sections , i.e. homogeneous and isotropic .they are called frw models ( friedmann - robertson - walker ) if source of the gravity is a perfect fluid described in terms of energy density and pressure , both are the functions of cosmological time .the frw dynamics is described by two basic equations : where the potential , is the scale factor and is hubble s function , a overdot means the differentiation with respect to the cosmological time .the first equation is a consequence of the einstein equations for the component ( 1,1 ) , ( 2,2 ) , ( 3,3 ) and the energy momentum tensor .this equation is called the raychaudhuri or acceleration equation .the second equation represents the conservation condition .it is very strange and unreasonable that such two simple equations satisfactorily describe the universe evolution at the large scales .of course there is a more general class of cosmological models called the bianchi models which has only the symmetry of homogeneity but they do not describe the current universe which is isotropic as indicated measurement of the cosmic microwave background ( cmb)radiation .the system of equations ( 1 ) and ( 2 ) admit the first integral called the friedmann equation where is curvature constant and plays the role of effective energy density . if we consider the lambda cdm model then i.e. energy density is a sum of dust matter ( cold ) and dark energy .therefore the potential function for the flat frw model assumes the following form : or in terms of redshift formally the curvature effects as well as the cosmological constant term can be incorporated into the effective energy density ( ; ; ) . to represent the evolutional paths of cosmological models in this form is popular since peebles monograph ( see also and modern applications and references therein ) .the form of equation ( 1 ) suggests the possible interpretation for the evolutional paths of cosmological models as a motion of a fictitious particle of unit mass in a one - dimensional potential parameterized by the scale factor . following this interpretation the universe is accelerating in the domain of configuration space in which the potential is a decreasing function of the scale factor . in the opposite caseif potential is a growing function of the universe is decelerating .the limit case of zero acceleration corresponds to an extremum of the potential function .it is useful to represent evolution of the systems in terms of the dimensionless density parameter , where is present value of hubble s function . 
for this aimit is sufficient to introduce the dimensionless scale factor which measures the value of in the units of the present value ( which we choose ) and reparameterize the cosmological time following rule .hence we obtain a 2-dimensional dynamical system describing the evolution of cosmological models : and where is redshift ; for dust matter and quintessence matter satisfying the equation of state , .the form ( 6 ) of the dynamical system opens the possibility of adopting dynamical system methods in investigations of all possible evolutional scenarios for all possible initial conditions .theoretical research in this area obviously shift from founding and analyzing particular cosmological solution to investigating a space of all admissible solutions and discovering how certain properties ( like acceleration , existence of singularities for example ) are distributed in this space .the system ( 6 ) is hamiltonian one and adopting hamiltonian formalism into the admissible motion seems to be natural .this gives at once insight into dynamics of accelerating universe because our problem is similar to the problems of classical mechanics .it is achieved due to particle like description of accelerating cosmology .this cosmology identifies the unique form of the potential function .different potential functions for different propositions of solving the acceleration problem contains table [ tab:1 ] ..the potential function for different dark energy models [ cols="^,^,^ " , ] it is the interesting question whether the global dynamics of the cdm model is structurally stable under a perturbation term .the global structure of dynamics ( phase portraits ) depends on the geometry of potential function because its localization as well as character depends on the first and the second derivatives of potential function , respectively , where are eigenvalues of the linearization matrix of the system and are a solution of the characteristic equation .we reduce the dynamics to the 2-dimensional system in the form ( or ; ) , where is the constant of energy . from the above equationone can be seen that all critical points ( right - hand sides of the system are vanishing ) are situated on the axis ( ) .from the characteristic equation we obtain that for the dynamical system under consideration only three types of critical points are admissible 1 .saddle if and ; 2 .focus if ; 3 . degenerated critical point if .therefore in the first case the eigenvalues are real of opposite signs , and in the second one they are purely imaginary . because the center and degenerated ( non - hyperbolic ) critical points are structurally unstable only in the presence of single saddle point to guarantee the structural stability of the system at finite domain .the critical points of the perturbed system satisfy the condition therefore at least there is only present such a single critical point . because the second derivative of the potential function is always upper convex ,i.e. , and critical point if exists is saddle type .if we consider only lambda term in the perturbation ( i.e. the lcdm model ) then other terms do not change the global phase portraits of the lcdm system or all perturbed systems are topologically equivalent .the relation of global dynamics on the phase plane is the equivalence relation , therefore the lcdm model can be treated as a representative model in this class . 
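a short worked example makes the single-saddle statement above explicit. the explicit potentials in the table did not survive extraction, so the flat lcdm form used here, V(a) = -(1/2)(omega_m / a + omega_lambda a^2) in units with h_0 = 1 and a_0 = 1, together with the values omega_m = 0.3 and omega_lambda = 0.7, should be treated as assumptions rather than values taken from the text. the sketch checks that V'' < 0 everywhere (so at most one critical point can exist and it must be a saddle) and finds the redshift at which the potential starts to decrease, i.e. the transition from deceleration to acceleration.

```python
import numpy as np

# Assumed flat LCDM effective potential (units H0 = 1, a0 = 1).
Om, OL = 0.3, 0.7                       # assumed present-day density parameters

def V(a):   return -0.5 * (Om / a + OL * a**2)
def dV(a):  return  0.5 * Om / a**2 - OL * a
def d2V(a): return -Om / a**3 - OL

a = np.linspace(0.05, 3.0, 2000)
print("max of V''(a) on the grid:", d2V(a).max())   # negative everywhere -> V concave,
                                                    # so at most one critical point, a saddle

# The single critical point of the Newtonian-type system a'' = -dV/da:
a_star = (Om / (2 * OL)) ** (1.0 / 3.0)
print(f"a* = {a_star:.4f},  V''(a*) = {d2V(a_star):.4f}  (< 0 -> saddle)")

# Acceleration (a'' > 0) exactly where the potential decreases, dV/da < 0, i.e. a > a*:
z_transition = 1.0 / a_star - 1.0
print(f"deceleration -> acceleration at redshift z_t = {z_transition:.3f}")
```

with the assumed parameter values this gives z_t close to 0.67, a deceleration phase followed by acceleration, as described above.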
after introducing the projective map and compactifying the phase plane with a circle at infinity, one can check that the system admits a critical point which corresponds to an infinite value of the scale factor together with an infinite value of its time derivative, i.e., the big-rip singularity. this critical point is degenerate, therefore the whole system is structurally unstable. the phase portraits of the cdm model and of the lcdm models with the adjoined circle at infinity are shown in fig. [fig:3] (drawn both with and without the circle at infinity). [figure: the critical point represents the big-bang singularity (an unstable node).] [figure: the critical point at infinity represents a big-rip singularity characteristic for phantom cosmology; in this case the scale factor as well as its time derivative are infinite at a finite time; in the right panel an additional critical point of saddle type appears in the case of a negative cosmological constant.] [figure: the big-bang singularity is glued with the big-rip one; note that both phase portraits are topologically equivalent.] note that if we consider oscillatory universes whose evolution is described by a center-type critical point (not a limit cycle), then such models are nontypical from the point of view of structural stability. on the other hand, if we consider the cdm model perturbed by the chaplygin gas, then we obtain a phase portrait equivalent to the lcdm one. while the cdm system is structurally unstable because of the presence of non-hyperbolic critical points, the lcdm model is structurally stable. the following statement characterizes the structurally stable dynamical systems of newtonian type which describe perturbations of the cdm model. let us consider a class of potential functions more complicated than the single inverted-well potential (see fig. [fig:6]), for example a potential with two maximum points. then a minimum must exist between them, but its presence means that we have a center in the phase space, i.e. the system is structurally unstable. if the potential is a concave function of the scale factor (or of the redshift), then there is only one admissible type of critical point (modulo diffeomorphism), and it determines the structurally stable global phase portrait. this global dynamics is equivalent to the lcdm one. finally, the lcdm model is the simplest structurally stable generic perturbation of the cdm model, which itself is nongeneric. the emergence of the lcdm model can be understood as a transition from a zero-measure set of dynamical systems on the plane toward the set which forms an open and dense subset in the ensemble of planar dynamical systems, the models of deterministic processes. in our discussion it will be useful to consider a common approach to reduction in physics, the so-called _deductive criterion of reducibility_ of nagel.
in this conception reduction is a relation of derivation between upper-level and base-level theories. if the upper-level theory contains terms that do not already appear in the base-level one, these terms must be connected using bridge laws. let us consider two models which are connected through the cosmological constant parameter. this parameter plays the role of a control parameter in the model, and we assume that it vanishes in the basal model. we are looking for weakly emergent properties of the model which can be derived (via bifurcation) from complete knowledge of the basal model information. for this aim we use bifurcation theory: from that information, the new properties unveiled by the system can be predicted, at least in principle, as we change the control parameter. then in principle we can derive the system behavior, because we can perform a bifurcation analysis answering the question of how the structure of the phase space changes qualitatively as the parameter is moved. as a result we can predict its future behavior with complete certainty. such a point of view seems to be very close to the traditional conceptions of emergence (broad, popper, nagel) that focus on the unpredictability of upper-level properties even given complete basal information. let us illustrate our point of view with a very simple but instructive example. the dynamics of the flat cosmological model with the robertson-walker symmetry of the space-like sections, with a cosmological constant and without matter (only for simplicity of presentation), is governed by the very simple equation (a one-dimensional system)
$$\dot{x} = -x^{2} + \frac{\lambda}{3}, \qquad (24)$$
where $x=h$ is the hubble parameter which measures the average rate of expansion of the universe, $\lambda$ is the cosmological constant parameter, and an overdot denotes differentiation with respect to the cosmological time. of course the above system can be simply integrated in quadratures. the calculation gives
$$\tanh\left[\sqrt{\frac{\lambda}{3}}\,(t-t_0)\right] = \sqrt{\frac{3}{\lambda}}\, x, \quad x(t)=\sqrt{\frac{\lambda}{3}}\tanh\left[\sqrt{\frac{\lambda}{3}}\,(t - t_0)\right], \qquad (25)$$
where $t_0$ is an integration constant. equation (24) can also be integrated for the special case of $\lambda = 0$, which gives
$$x(t) = \frac{1}{t-t_0}. \qquad (26)$$
note that there is no transition from the solution (25) to (26) as $\lambda \to 0$, although such a transition exists on the level of the dynamical equation. one can observe in this example how a small change of the right-hand side of the system dramatically changes its solution. as a result, new asymptotic states representing the de sitter model emerge in this system and its solution. bifurcation theory serves to clarify the emergence of new (sometimes unexpected) properties of the system without solving this equation. let us consider the system in the framework of bifurcation theory. for $\lambda > 0$ there are two critical points, at $x = \pm\sqrt{\lambda/3}$. from the physical point of view they represent the de sitter model (expanding and contracting). the derivative of the right-hand side is $-2x$, and we can see that the critical point at $x = +\sqrt{\lambda/3}$ is stable while the critical point at $x = -\sqrt{\lambda/3}$ is unstable. for $\lambda = 0$ there is only one critical point, at $x = 0$, and it is a non-hyperbolic critical point since the derivative vanishes there; the vector field is structurally unstable; $\lambda = 0$ is a bifurcation value. for $\lambda < 0$ there are no critical points. the phase portraits for this differential equation are shown in fig. [fig:7]. [figure: for $\lambda < 0$ there is no critical point; for $\lambda = 0$ there is a single degenerate critical point at the origin; for $\lambda > 0$ there are two critical points, an unstable and a stable de sitter state.]
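the bifurcation analysis just described can be reproduced mechanically. the sketch below lists the critical points of $\dot{x} = -x^{2} + \lambda/3$ and their stability for negative, zero and positive $\lambda$, and assembles the data for the saddle-node bifurcation diagram discussed below; the particular $\lambda$ values scanned are arbitrary choices for the demonstration.

```python
import numpy as np

# Saddle-node bifurcation of  x' = f(x, lam) = -x**2 + lam/3  (x is the Hubble function).
def f(x, lam):  return -x**2 + lam / 3.0
def fx(x, lam): return -2.0 * x                      # d f / d x

for lam in (-1.0, 0.0, 1.0):
    if lam < 0:
        roots = []                                   # no critical points
    elif lam == 0:
        roots = [0.0]                                # single non-hyperbolic point
    else:
        r = np.sqrt(lam / 3.0)
        roots = [-r, +r]                             # contracting / expanding de Sitter states
    desc = []
    for x0 in roots:
        s = fx(x0, lam)
        kind = "stable" if s < 0 else ("unstable" if s > 0 else "non-hyperbolic (degenerate)")
        desc.append(f"x* = {x0:+.3f} ({kind})")
    print(f"lambda = {lam:+.1f}: " + ("no critical points" if not roots else ", ".join(desc)))

# Data for the bifurcation diagram: the curve lam = 3*x**2 of equilibria in the
# (lam, x) plane, stable for x > 0 and unstable for x < 0.
lam_grid = np.linspace(0.0, 2.0, 50)
stable_branch, unstable_branch = np.sqrt(lam_grid / 3.0), -np.sqrt(lam_grid / 3.0)
print("branch endpoints at lam = 2:", stable_branch[-1], unstable_branch[-1])
```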
in this case we have the stable and the unstable manifold attached to the two critical points respectively, and for $\lambda = 0$ there is a one-dimensional center manifold. all of the pertinent information concerning the bifurcation that takes place in this system at $\lambda = 0$ is captured in the bifurcation diagram shown in fig. [fig:8]. the curve $\lambda = 3x^{2}$ determines the position of the critical points of the system; a solid curve is used to indicate a family of stable critical points, while a dashed curve is used to indicate a family of unstable critical points. this type of bifurcation is called a saddle-node bifurcation. [figure: bifurcation diagram; the upper (stable) branch and the lower (unstable) branch correspond to the expanding and contracting de sitter models.] the system under consideration is only one example of the dynamical system analysis of a system of cosmological origin, but there are many other systems with a parameter which show hidden and unexpected properties as the parameter varies. let us recall some of them. in the problem of the motion of a star around an elliptic galaxy the henon-heiles hamiltonian system appears. this system possesses the energy first integral, and if the energy exceeds a critical value then a transition to chaotic behavior appears. another example of bifurcation and of the emergence of cyclic behavior of the limit cycle type is offered by the famous van der pol equation. for $\epsilon = 0$ the system is of harmonic oscillator type, and for $\epsilon > 0$ van der pol's equation has a unique limit cycle and it is stable. the limit cycle represents a closed trajectory in the phase space which attracts all trajectories from its neighborhood. in this case $\epsilon = 0$ is a bifurcation value of the parameter, and the limit cycle behavior is an upper-level emergent property. for an interesting discussion on emergence, basal and upper-level models and reducibility see . interesting experience of the emergence of a new type of dynamical behavior is also given by the hopf bifurcation phenomenon. this bifurcation can occur in a system with a parameter at a non-hyperbolic equilibrium point, when the linearization matrix has a simple pair of purely imaginary eigenvalues and no other eigenvalues with zero real part. in the generic case the hopf bifurcation occurs where a periodic orbit is created as the stability of the equilibrium point changes. this type of behavior plays an important role in the description of the route-to-turbulence scenario. it is worth mentioning the important role of the hopf bifurcation in the ruelle-takens scenario of the route to deterministic chaos. the concept of turbulence was originally introduced by landau in 1944 and later revised by ruelle and takens in 1971. according to landau, turbulence is reached at the end of an indefinite superposition of oscillatory bifurcations, each bringing a new frequency into the dynamics of the system.
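the van der pol example mentioned above can be checked numerically before moving on. in the sketch below, for $\epsilon = 0$ the long-time amplitude depends on the initial condition (harmonic oscillator), while for $\epsilon > 0$ trajectories started at different amplitudes all settle onto the same stable limit cycle; the value $\epsilon = 0.5$, the initial conditions and the integration time are assumptions chosen only for the demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, u, eps):
    # x'' - eps*(1 - x**2)*x' + x = 0, written as a first-order system.
    x, y = u
    return [y, eps * (1.0 - x**2) * y - x]

# eps = 0: no limit cycle, the amplitude is set by the initial condition.
# eps > 0: a unique stable limit cycle attracts all nearby trajectories,
#          so the late-time amplitude no longer depends on where we start.
for eps in (0.0, 0.5):
    amps = []
    for r0 in (0.5, 2.0, 4.0):
        sol = solve_ivp(van_der_pol, (0.0, 200.0), [r0, 0.0], args=(eps,),
                        rtol=1e-9, atol=1e-12, dense_output=True)
        t_late = np.linspace(150.0, 200.0, 5000)     # discard the transient
        amps.append(np.abs(sol.sol(t_late)[0]).max())
    print(f"eps = {eps}: late-time amplitudes from three initial conditions:",
          [f"{a:.2f}" for a in amps])
```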
in the ruelle - takens scenarioinfinite number of periodic behavior is not required when nonlinearities are acting .they argue that turbulence should be treated as a stochastic regime of deterministic chaos at which we have long term unpredictability due to property of sensitive dependence on initial condition .this stage is reached only after a finite and small number of bifurcation .for some recent philosophical discussion of significance of chaos see .in conventional methodology of deriving einstein equation one derives the equation of motion from the lagrangian which is a sum of - lagrangian for gravity and - lagrangian for matter source which we assume that depends on both metric and the scalar field .therefore there are two different ways of introducing the cosmological constant . in the first approach we put them into the gravitational lagrangian , i.e. , where is a parameter in the ( low energy effective ) action just like the newtonian gravitational constant .the second route is by shifting the matter lagrangian .therefore a shift is clearly equivalent to adding the cosmological constant to the .the symmetry is a symmetry of matter sector .the matter equation of motion do not care about . in the conventional approach gravitybreaks this symmetry .this is the root case of the so - called cosmological constant problem .as long as gravitational field equation are in the form , where is some geometrical quantity ( in g.r . )the theory can not be invariant under the shifts of the form . since such shift are allowed by the matter sector it is very difficult to imagine a solution to the cosmological constant within the conventional approach to gravity .if the metric represents the gravitational degree of freedom that is varied in the action and we demand full general covariance , we can not avoid coupling and can not obtain of the equation of motion which are invariant under the shift .clearly a new dramatically different approach to gravity is required .the main aim of this paper was to show the effectiveness of using the framework of dynamical system theories ( especially the notion of structural stability and tools of bifurcation analysis ) in study on a weak emergence and reducibility of two cosmological models cold dark matter model and lambda cold dark matter model .the latter model contains the cosmological constant term which is favored by recent observational data of the current universe .we have shown the structural instability of cdm model and that value of lambda zero is bifurcation value of cosmological constant .for interpretation of this analysis in the weak emergence context we consider levels the notion of states ( the same theory with the lambda term ) instead of the notions of basal and upper level .the level is corresponding to cdm model and any other level is parametrized by the cosmological constant .the bifurcation analysis reveals a new property of the upper level namely de sitter state at which the universe permanently stays in the accelerating phase of evolution .bifurcation analysis shows that lcdm model has a new property de sitter asymptotic state ( property offering explanation of the observational data ) .the structural instability of cdm model informs us that this model is fragile and its small perturbation changes the structure of evolutional laws .this result has a purely qualitative character and does not depend on postulated type of perturbation . 
in some sense structural instability propertyinforms us about a new property of the model which appears in the model after its perturbation .we have also demonstrated nonreducibility of solutions of lcdm upper state after taking the limit .it is consequence of structural instability of the lower state representing cdm model . of course it is a reduction of the level of basic equation but never on the level of its solutions.the analogical problem appear in the wayne analysis of limit cycle behavior emergence in nonlinear system . in this case is bifurcation value of parameter . in our interpretation space of state of the systemcan be parametrized by epsilon parameter which measures the strength of the nonlinear term . as a consequence bifurcation analysis reveals a new type of dynamical behavior for any value of epsilon parameter .this upper state can not be reduced to the basal state because there is a place for limit cycle behavior in the linear systems .we can find strict analogy to the system under consideration of cosmological origin and wayne analysis of emergence and singular limits . following the common approach to reductionism in physics, so called deductive criterion of reducibility of nagel(1961 ) , the reduction is a derivational relation between upper level and base level theories .the structural instability of the models learn us that one should distinguish the derivational relation on the both level of basic equation and the level of solutions .notice that , if the model is structurally stable , such distinct is not necessary. we always , in the mathematical modeling of physical processes , try to convey the features of typical , garden - variety , dynamical systems . in mathematicsthe exceptional cases are more complicated and numerous , and they interrupt the physical discussion . moreover dynamicists shared an opinion that such exceptional systems not arise very often because they are not typical . in the history of mathematical dynamicswe observe how we have searched for generic properties .we would like to distinguish a class of phase portraits that are far simpler than the arbitrary ones .this program was achieved for dynamical systems on the plane by peixoto due to the conception of structural stability introduced by andronov and leontovich in 1934 . the criteria for structural stability rely upon two supplementary notions : a perturbation of the phase portraits ( or vector field ) and the topological equivalence ( homeomorphism of the state phase ) .a phase portrait has the property of structural stability if all sufficiently small perturbations of it have equivalent phase portraits .for example if we consider a center type of critical points then the addition of perturbation pointing outward results in a point repellor which is not topologically equivalent to the center .this is a primary example of structurally unstable system . in the opposite case saddle type of critical pointis structurally stable and the phase portrait does not change under small perturbation .in this paper we define the class of frw cosmological models filled by dark energy as a two - dimensional dynamical systems of a newtonian type .they are characterized through the single smooth effective potential function of the scale factor or redshift . 
among these class of models we distinguish typical ( generic ) and exceptional ( nongeneric ) cases with the help of structural stability notion and the peixoto theorem .we find that the lcdm model in opposition to the cdm model is structurally stable .we demonstrate that this model represents a typical structurally stable perturbation of cdm one .therefore , the transition from the cdm model of the universe toward the lcdm one , which includes the effects of the cosmological constant , can be understood as an emergence of the model from the exceptional case to the generic one .this case represents a generic model in this sense that small changes of its right - hand sides do not change the global phase portraits . in the terms of the potential ,the second order differential equation one can classify different models of cosmic acceleration . it is shown that models with the accelerating phase ( which follows the deceleration ) are natural and typical from the point of view of the dynamical systems theory combined with the notion of structural stability . it is interesting that the new class of lambda perturbated solutions does not reduce to the cdm model solutions ( which reveals their new quality ) , although the corresponding equation reduces to the cdm one after taken limit .the small value of lambda parameter dramatically changes its asymptotic states ( de sitter asymptotic is emerged ) .the universe is accelerating for some value of redshift transition and this phase of acceleration is followed by the deceleration phase dominated by matter .one can say that the lcdm model is emerging from the cdm model as the universe evolves .this very simple two phases model of past evolution of the universe give rise to its present acceleration detected by distant supernovae .therefore , the simplicity and genericity are the best guides to understanding of our universe and its acceleration .more complicated evolutional scenarios are exceptional in the space of all models with a 2-dimensional phase space .there are many different theoretical possibilities of explaining accelerating universe in terms of dark energy ( substantial approach ) or using modification of gravity ( nonsubstantial approach ) . 
among all candidates the lcdm model is favored by bayesian selection methods .these methods indicate the best model in respect to admissible data .one can ask why the lcdm model is the best one .our answer is that the lcdm model possesses a property of simplicity and in the same time flexibility with respect to the data .the latter can be interpreted in the tools of the structural stability notion .the observations indicates that we live in expanding universe with current accelerations .it seems that this acceleration phase proceeded the deceleration phase .provided that we assume that there was no other qualitative dynamical changes in whole evolution of the universe ( at early as well as late time ) the lcdm model is sufficiently complex to explain such a simple evolution of the universe .no simpler neither the more complex model can be better description of the universe dynamics .the future evolution of our universe is eternal expansion with the accelerating phase according to the lcdm scenario .other possible futures given by other models are unjustified because of the structural instability .such futures are highly improbable because they require a very special fine - tuned model to the reality .it seems that there is possibility of an ideal description of the physical reality in such a way that our model is no more a model but described reality itself . in this casethe structural stability or instability does not matter .but when as in cosmology we have a bunch of models which very roughly describe the universe evolution ( the effective theories ) they should accommodate the reality inside the error margin generated by the perturbation . but this feature is possessed by the structural stable models only .this is an argument in favor of dealing with structural stable models in cosmology .we have found the only structural stable two - phase model of universe dynamics with a deceleration and then acceleration phase is the lcdm model . this work has been supported by the marie curie actions transfer of knowledge project cocos ( contract mtkd - ct-2004 - 517186 ) .the authors are grateful to m. heller , o. hrycyna , u. czyewska and a. krawiec for useful discussion .
|
recent astronomical observations strongly indicate that the current universe is undergoing an accelerated phase of its expansion. if the evolution of the universe is described by the frw model, then the acceleration should be driven by some perfect substance violating the strong energy condition; hence negative pressure is required to explain the acceleration. while different candidates for the fluid termed dark energy have been suggested, the positive cosmological constant seems to be the simplest candidate for the description of dark energy. however, the lambda term treated as a quantum vacuum energy has no simple physical interpretation because of the fine-tuning problem. the paper concerns the methodological status of effective theories in the context of cosmological investigations. we point out that modern effective cosmological theories may provide an interesting case study for current philosophical discussions. we argue that the standard cosmological model (the lcdm model) as well as the cdm (cold dark matter) model have the status of effective theories only, similarly to the standard model of particle physics. the lcdm model is studied from the point of view of the debate on reductionism and epistemological emergence in science. it is shown that the notions of bifurcation and structural instability can be useful in detecting the emergence of the lcdm model from the cdm model. we demonstrate that the structural stability of the lcdm model can explain the flexibility of the model in accommodating the observational data, and therefore why the lcdm model is favored over other models when confronted with observations.
|
multiple - input multiple - output ( mimo ) transmission has been of special interest in wireless communication for the past one decade .the alamouti code for two transmit antennas , due to its orthogonality properties , allows a low complexity maximum - likelihood ( ml ) decoder .this scheme paved the way for generalized orthogonal stbcs .such codes allow the transmitted symbols to be decoupled from one another and single - symbol ml decoding is achieved over quasi - static rayleigh fading channels .another aspect of these codes is that they achieve the maximum diversity gain for any number of transmit and receive antennas and for any arbitrary complex constellations .unfortunately , for more than two antennas , rate 1 codes can not be constructed using orthogonal designs . with a view of increasing the transmission rate , quasi - orthogonal designs ( qods ) were proposed in .however , these codes come at the price of a smaller diversity gain and are also double symbol decodable for 4 antennas . as an improvement ,coordinate interleaved orthogonal designs ( ciods ) were proposed .these codes have the same transmission rate as qods but additionally enjoy full diversity while being single symbol decodable for certain complex constellations . but none of the above class of codes is _ full - rate _ , where an stbc is said to be of full - rate if its rate in complex symbols per channel use is equal to the minimum of the number of transmit and the receive antennas . full - rate , full - diversity stbcs are of prime importance in systems like wimax .low - decoding complexity , full - rate stbcs have been proposed in and for and in for mimo systems .these codes allow a simplified ml decoding when compared with codes from division algebras , which are not amenable for low decoding complexity though they offer full - rate .the fast decodable code proposed in for systems , which we call the bhv code , outperforms the best known djabba code only at low snrs while allowing a reduction in the ml decoding complexity .the bhv code does not have full - diversity as it is based on the quasi orthogonal design for 4 antennas , when all the symbols are take values from one constellation . in this paper, we propose a new stbc for mimo transmission .our code is based on the coordinate interleaved orthogonal designs ( ciods ) proposed in ( defined in section [ sec3 ] ) .the major contributions of this paper can be summarized as follows : * our code has a decoding complexity of the order of , for all complex constellations , where is the size of the signal constellation , whereas the djabba code has the corresponding complexity of order and the bhv code has order ( for square qam constellations - though this has not been claimed in ) . *our code has a better cer ( codeword error rate ) performance than the best code in the djabba family due to a higher coding gain for qam constellations . *our code outperforms the bhv code for qam constellations due to its higher diversity gain . * combining the above , it can be seen that when qam constellations are used , our code is the best among all known codes for systems .the remaining content of the paper is organized as follows : in section [ sec2 ] , the system model and the code design criteria is given .the proposed stbc and its decoding complexity are discussed in section [ sec3 ] . in section [ sec4 ] , the decoding scheme for the proposed stbc using sphere decodingis discussed . 
in section [ sec5 ] ,simulation results are presented to illustrate the comparisons with best known codes . concluding remarks constitute section [ sec6 ] ._ notations : _ let be a complex matrix. then , and ] denotes the trace operation . for a matrix the vector obtained by columnwise concatenation one below the otheris denoted by the kronecker product is denoted by and denotes the identity matrix .given a complex vector ^t, ] is the information symbol vector .code design is based on the analysis of pairwise error probability ( pep ) given by , which is the probability that a transmitted codeword is detected as .the goal is to minimize the error probability , which is upper bounded by the following union bound . where denotes the signal constellation size and is the number of independent information symbols in the codeword .it is well known , that an analysis of the pep leads to the following design criteria : . : to achieve maximum diversity , the codeword difference matrix must be full rank for all possible pairs of codeword pairs and the diversity gain is given by .if full rank is not achievable , then , the diversity gain is given by , where is the minimum rank of the codeword difference matrix over all possible codeword pairs . : for a full ranked stbc , the minimum determinant , defined as \ ] ] should be maximized .the coding gain is given by , with being the number of transmit antennas .if the stbc is non full - diversity and is the minimum rank of the codeword difference matrix over all possible codeword pairs , then , the coding gain is given by where , are the non - zero eigen values of the matrix it should be noted that , for high signal - to - noise ratio ( snr ) values at each receive antenna , the dominant parameter is the diversity gain which defines the slope of the cer curve .this implies that it is important to first ensure full diversity of the stbc and then try to maximize the coding gain . for the mimo system ,the objective is to design a code that is full - rate , i.e transmits 2 symbols per channel use , has full diversity and allows simplified ml decoding .in this section , we present our stbc for the mimo system .the design is based on the ciod for 4 antennas , whose structure is as defined below .ciod for transmit antennas is as follows : + \ ] ] where are the information symbols and and are the real and imaginary parts of respectively .notice that in order to make the above stbc full rank , the signal constellation from which the symbols are chosen should be such that the real part ( imaginary part , resp . ) of any signal point in is not equal to the real part ( imaginary part , resp . ) of any other signal point in .so if square or rectangular qam constellations are chosen , they have to be rotated .the optimum angle of rotation , which we denote by , has been found in to be degrees and this maximizes the diversity and coding gain .our stbc is obtained as follows .our code matrix , denoted by encodes eight symbols drawn from a qam constellation , denoted by .we denote the rotated version of by , with the angle of rotation chosen to be degrees .let , so that the symbols are drawn from the constellation .the codeword matrix is defined as with ] . 
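the explicit ciod display and the numerical value of the optimum rotation angle did not survive in the text above, so the construction in the sketch below is an assumption drawn from the ciod literature rather than a reproduction of this paper's codeword: two alamouti blocks whose symbols exchange imaginary parts (coordinate interleaving), with the commonly cited rotation angle theta = (1/2) arctan 2, about 31.7 degrees, for qam. it only covers the rate-1 ciod building block, not the full eight-symbol code matrix proposed here.

```python
import numpy as np

def ciod4(x, theta=0.5 * np.arctan(2.0)):
    """One common form of the rate-1 CIOD for 4 transmit antennas: two Alamouti
    blocks whose symbols swap imaginary parts (coordinate interleaving).  The
    rotation angle and the exact interleaving pattern are assumptions, not
    necessarily the paper's construction."""
    x = np.asarray(x, dtype=complex) * np.exp(1j * theta)        # rotate the QAM symbols
    xt = [x[0].real + 1j * x[2].imag,                            # coordinate interleaving
          x[1].real + 1j * x[3].imag,
          x[2].real + 1j * x[0].imag,
          x[3].real + 1j * x[1].imag]
    A = lambda a, b: np.array([[a, b], [-np.conj(b), np.conj(a)]])   # Alamouti block
    top = np.hstack([A(xt[0], xt[1]), np.zeros((2, 2))])
    bot = np.hstack([np.zeros((2, 2)), A(xt[2], xt[3])])
    return np.vstack([top, bot])                                 # 4 x 4 codeword (T = 4)

# Example with 4-QAM symbols:
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = ciod4(qam4)
print(X.round(3))
# Why the rotation matters for full diversity: with theta = 0, two codewords that
# differ only in the imaginary part of one symbol leave one Alamouti block of the
# difference matrix equal to zero, so the difference matrix is rank deficient.
```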
from , we obtain = \\\sum_{m=1}^{4}\vert y - ht_m\vert_f^2 - 3tr\left(yy^h\right)\end{aligned}\ ] ] therefore , +tr\left[hs_1\left(hs_2\right)^h\right ] { } \nonumber\\ & & { } -tr\left[hs_2 y^h\right]-tr\left[y\left(hs_2 \right)^h\right ] { } \nonumber\\ & & { } + tr\left[hs_2 \left(hs_2 \right)^h\right]\end{aligned}\ ] ] { } \nonumber\\ & & { } + \sum_{m=1}^{4}tr\left[ht_m\left(hs_2 \right)^h\right ] + \vert y - hs_2 \vert_f^2 { } \nonumber\\ & & { } -4tr(yy^h ) \end{aligned}\ ] ] hence , when is given , i.e , symbols and are given , the ml metric can be decomposed as with and being a function of symbol alone .thus decoding can be done as follows : choose the quadruplet and then parallelly decode and so as to minimize the ml decoding metric . with this approach ,there are values of the decoding metric that need to be computed in the worst case .so , the decoding complexity is of the order of .now , we show how the sphere decoding can be used to achieve the decoding complexity of .it can be shown that can be written as where is given by with being the generator matrix for the stbc as defined in definition [ def3 ] and ^t.\ ] ] with , drawn from , which is a rotation of the regular qam constellation .let ^t ] with being a rotation matrix and defined as follows \ ] ] so , can be written as where . using this equivalent model ,the ml decoding metric can be written as on obtaining the qr decomposition of , we get = , where is an orthonormal matrix and is an upper triangular matrix .the ml decoding metric now can be written as if ] and hence obtain the symbols and .having found these , and can be decoded independently .observe that the real and imaginary parts of symbol are entangled with one another because of constellation rotation but are independent of the real and imaginary parts of , and when and are conditionally given .similarly , , and are independent of one another although their own real and imaginary parts are coupled with one another .having found the partial vector ^t$ ] , we proceed to find the rest of the symbols as follows .we do four parallel 2 dimensional real search to decode the symbols , , and .so , overall , the worst case decoding complexity of the proposed stbc is 4 .this due to the fact + 1 ) .an 8 dimensional real sd requires metric computations in the worst possible case .four parallel 2 dimensional real sd require metric computations in the worst case .+ this decoding complexity is significantly less than that for the bhv code proposed in , which is 2 ( as claimed in ) .we provide performance comparisons between the proposed code and the existing full - rate codes - the djabba code , and the bhv code .fig [ 4qam ] shows the codeword error rate ( cer ) performance plots for uncorrelated quasi - static rayleigh flat - fading channel as a function of the received snr at the receiver for 4-qam signaling .all the codes perform similarly at low and medium snr .but at high snr , the full diversity property of the djabba code and the proposed code enables them to outperform the bhv code .in fact , our code slightly outperforms the djabba code at high snr .fig [ 16qam ] shows the cer performance for 16-qam signaling , which shows a similar result .table [ table1 ] gives a comparision of some of the well known codes for mimo systems [ cols="^,^,^,^,^ " , ]in this paper , we have presented a full - rate , full diversity stbcs for mimo transmission which enables a significant reduction in the decoding complexity without having to pay in cer performance .in fact , our code 
performs better than the best known full-rate codes for mimo systems. so, to summarize, among the existing codes for 4 transmit antennas and 2 receive antennas, the proposed code is the best for qam constellations.

j. paredes, a. b. gershman and m. gharavi-alkhansari, ``a space-time code with non-vanishing determinants and fast maximum likelihood decoding,'' in proc. _ieee international conference on acoustics, speech and signal processing (icassp 2007)_, vol. 2, pp. 877-880, april 2007.
s. sezginer and h. sari, ``a full-rate full-diversity space-time code for mobile wimax systems,'' in proc. _ieee international conference on signal processing and communications_, dubai, july 2007.
j. c. belfiore, g. rekaya and e. viterbo, ``the golden code: a full-rate space-time code with non-vanishing determinants,'' _ieee trans. inf. theory_, vol. 51, no. 4, pp. 1432-1436, april 2005.
v. tarokh, n. seshadri and a. r. calderbank, ``space-time codes for high data rate wireless communication: performance criterion and code construction,'' _ieee trans. inf. theory_, vol. 44, no. 2, pp. 744-765, march 1998.
a. hottinen, y. hong, e. viterbo, c. mehlfuhrer and c. f. mecklenbrauker, ``a comparison of high rate algebraic and non-orthogonal stbcs,'' in _proc. itg/ieee workshop on smart antennas wsa 2007_, vienna, austria, february 2007.
|
this paper proposes a low-decoding-complexity, full-diversity and full-rate space-time block code (stbc) for 4 transmit and 2 receive antenna multiple-input multiple-output (mimo) systems. for such systems, the best known code is the djabba code; recently, biglieri, hong and viterbo have proposed another stbc (the bhv code) which has lower decoding complexity than djabba but, unlike the djabba code, does not have full diversity. the code proposed in this paper has the same decoding complexity as the bhv code for square qam constellations but has full diversity as well. compared to the best code in the djabba family of codes, our code has lower decoding complexity, a better coding gain and hence a better error performance as well. simulation results confirming these claims are presented.
|
the classical problem of estimating a continuous - valued function from noisy observations , known as _ regression _ , is of central importance in statical theory with a broad range of applications , see e.g. .when no structural assumptions concerning the target function are made , the regression problem is termed _nonparametric_. informally , the main objective in the study of nonparametric regression is to understand the relationship between the regularity conditions that a function class might satisfy ( e.g. , lipschitz or hlder continuity , or sparsity in some representation ) and the minimax risk convergence rates . a further consideration is the computational efficiency of constructing the regression function .the general ( univariate ) nonparametric regression problem may be stated as follows .let be a metric space , namely is a set of points and a distance function , and let be a collection of functions ( `` hypotheses '' ) ] is endowed with some fixed , unknown probability distribution , and the learner observes i.i.d. draws .the learner then seeks to fit the observed data with some hypothesis so as to minimize the _, usually defined as the expected loss for and some .two limiting assumptions have traditionally been made when approaching this problem : ( i ) the space is euclidean and ( ii ) , where is the target function and is an i.i.d .noise process , often taken to be gaussian . although our understanding of nonparametric regression under these assumptions is quite elaborate , little is known about nonparametric regression in the absence of either assumption .the present work takes a step towards bridging this gap .specifically , we consider nonparametric regression in an arbitrary metric space , while making no assumptions on the distribution of the data or the noise .our results rely on the structure of the metric space only to the extent of assuming that the metric space has a low `` intrinsic '' dimensionality . specifically , we employ the doubling dimension of , denoted , which was introduced by based on earlier work of , and has been since utilized in several algorithmic contexts , including networking , combinatorial optimization , and similarity search , see e.g. .( a formal definition and prevailing examples appear in section [ sec : tech ] . ) following the work in on classification problems , our risk bounds and algorithmic runtime bounds are stated in terms of the doubling dimension of the ambient space and the lipschitz constant of the regression hypothesis , although neither of these quantities need be known in advance . [ [ our - results . ] ] our results .+ + + + + + + + + + + + we consider two kinds of risk : ( mean absolute ) and ( mean square ) . more precisely ,for we associate to each hypothesis the empirical -risk [ eq : emprisk ] r_n(h ) = r_n(h ,q ) = n_i=1^n ^q and the ( expected ) -risk [ eq : exprisk ] r(h ) = r(h , q ) = ^q = _ ^q ( dx , dy ) .it is well - known that ] and ] , the induced family of functions mapping \mapsto[0,1]]-valued functions over ]-valued -lipschitz functions on .we proceed to bound the -fat - shattering dimension of .[ thm : fatf ] let be defined on a metric space , where . then _ ( ^q__l ) & & ^()+1 holds for and all .the notation is a convenient shorthand for combining the results for and and is not intended to imply an interpolation for intermediate values .fix a and recall what it means for to -shatter a set ( where and ^{|s|} ] to ] be the -perturbation of , as defined in ( [ eq : hetadef ] ). 
then \eta ) \leq { \mathrm{fat}}_{\gamma-\eta}({\mathcal{g}}).\ ] ] holds for all .suppose that \eta ] such that [ eq : gameps ] b(x)(f_b(x)-r(x))for all .now by definition , for each there is some so that .define to be such an -approximation for each .we claim that the collection shatters at level . indeed , replacing with in ( [ eq : gameps ] ) perturbs the left - hand side by an additive term of at most .[ cor : fatfeta ] let be defined on a metric space and \eta ] be the induced family of functions \to[0,1] ] be the -perturbation of for .then for all , [ eq : delprob ] p(_n([_l]_,q ) > ) & & 24n ^d(24en/ ) ( -^2n/36 ) where is the uniform deviation defined in ( [ eq : deltah ] ) and d = d(l , ) = ^()+1 . inverting the relation in ( [ eq : delprob ] ) we get an estimate analogous to ( [ eq : delta - bound ] ) : with probability at least , \eta , q)\le \epsilon(n , l,\delta)+24q\eta,\ ] ] where is as in ( [ eq : eps(n , l , d ) ] ) , implying a risk bound of so far , we have established the following .let be a doubling metric space and a collection of -lipschitz ] and let ] is a function such that for each , with probability at least , we have then , whenever some \eta ] achieves empirical risk on a sample of size , we have the following bound on , the true risk of : [ eq : riskbound ] r(h ) r_n(h ) + ( n , k , p_k ) + 24q , with probability at least ( where the diameter of the point set has been taken as 1 , and is the minimum value of for which the right - hand side of equation is at most ) . in the rest of this section , we devise an algorithm that computes a hypothesis that approximately minimizes our bound from on the true risk , denoted henceforth notice that on the right - hand side , the first two terms depend on , but only the first term depends on the choice of , and only the third term depends on . [ thm : risk - minimization ] let for be an i.i.d .sample drawn from , let , and let be a hypothesis that minimizes over all \eta ] with [ eq : risk - minimization ] r_(h ) 2r_(h^ * ) .we show in theorem [ thm : lipext ] how to quickly evaluate the hypothesis on new points . in proving the theorem, we will find it convenient to compare the output to a hypothesis that is smooth ( i.e. lipschitz but unperturbed ) . indeed , let be as in the theorem , and be a hypothesis that minimizes .then , and we get . accordingly , the analysis below will actually prove that , and then would follow easily , essentially increasing the additive error by .moreover , once equation is proved , we can use the above to conclude that , which compares the risk bound of our algorithm s output to what we could possibly get using smooth hypotheses . in the rest of this sectionwe consider the observed samples as fixed values , given as input to the algorithm , so we will write instead of .suppose that the lipschitz constant of an optimal _ unperturbed _ hypothesis were known to be . then is fixed , and the problem of computing both and its empirical risk can be described as the following optimization program with variables for ] with stretch ( i.e. , set ) and retain a constraint in lp if and only if its two variables correspond to two nodes that are connected in the spanner .it follows from the bounded degree of the spanner that each variable appears in constraints , which implies that there are total constraints .[ [ modifying - remaining - constraints . 
] ] modifying remaining constraints .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + each spanner - edge constraint is replaced by a set of two constraints by the guarantees of the lp solver we have that in the returned solution , each spanner edge constraint will satisfy \\ & = & \beta + ( 1+\beta ) l ' \cdot \rho(x_i , x_j ) \end{array}\ ] ] now consider the lipschitz condition for two points not connected by a spanner edge : let be a -stretch -hop spanner path connecting points and .then the spanner stretch guarantees that \\ & \le & \beta c ' \log n + ( 1+\beta)l ' \cdot ( 1+\eta ) \rho(x , x ' ) \end{array}\ ] ] choosing , and noting that , we have that for all point pairs we claim that the above inequality ensures that the computed hypothesis ( represented by variables above ) is a -perturbation of some hypothesis with lipschitz constant . to prove this , first note that if , then the statement follows trivially .assume then that ( by the discretization of ) , .now note that a hypothesis with lipschitz constant is a -perturbation of some hypothesis with lipschitz constant .( this follows easily by scaling down this hypothesis by a factor of , and recalling that all values are in the range ] a -net , is at least ; and ( ii ) every point in is within distance from at least one point in .it can be easily constructed by a greedy process . ] then for every net - point set , and extend this function from to all of without increasing lipschitz constant by using the mcshane - whitney extension theorem for real - valued functions .observe that for every two net - points , it follows that ( defined on all of ) has lipschitz constant . now , consider any point and its closest net - point ; then .using the fact , we have that + ( 1 + 3\eta)l ' \cdot \rho(x , y ) \le \frac{\eta^2}{24q } + 2 + 5\eta^2 \le 3 \eta ] with .the parameters in theorem [ thm : risk - minimization ] are achieved by scaling down to and the simple manipulation .finally , we turn to analyze the algorithmic runtime .the spanner may be constructed in time .young s lp solver is invoked times , where the term is due to the binary search for , and the term is due to the binary search for . to determine the runtime per invocation , recall that each variable of the program appears in constraints , implying that there exist total constraints .since we set , we have that each call to the solver takes time , for a total runtime of .this completes the proof of theorem [ thm : risk - minimization ] for .above , we considered the case when the loss function is linear . herewe modify the objective function construction to cover the case when the loss function is quadratic , that is .we then use the lp solver to solve our quadratic program .( note that the spanner - edge construction above remains as before , and only the objective function construction is modified . )let us first redefine by the constraints it follows from the guarantees of the lp solver that in the returned solution , and . now note that a quadratic inequality can be approximated for ] , we will consider an equation set of the form which satisfies that the minimum feasible value of is in the range ] , we wish to evaluate a minimum lipschitz extension of on a new point . that is , denoting , we wish to return a value that minimizes . 
necessarily , this value is not greater than the lipschitz constant of the classifier , meaning that the extension of to the new point does not increase the lipschitz constant of and so theorem [ thm : delta - strat ] holds for the single new point .( by this local regression analysis , it is not necessary for newly evaluated points to have low lipschitz constant with respect to each other , since theorem [ thm : delta - strat ] holds for each point individually . )first note that the lipschitz extension label of will be determined by two points of .that is , there are two points , one with label greater than and one with a label less than , such that the lipschitz constant of relative to each of these points ( that is , ) is maximum over the lipschitz constant of relative to any point in .hence , can not be increased or decreased without increasing the lipschitz constant with respect to one of these points .note then that an exact lipschitz extension may be derived in time in brute - force fashion , by enumerating all point pairs in , calculating the optimal lipschitz extension for with respect to each pair alone , and then choosing the candidate value for with the highest lipschitz constant .however , we demonstrate that an approximate solution to the lipschitz extension problem can be derived more efficiently .[ thm : lipext ] an -additive approximation to the lipschitz extension problem can be computed in time .the algorithm is as follows : round up all labels to the nearest term ( for any integer ) , and call the new label function .we seek the value of , the optimal lipschitz extension value for for the new function .trivially , .now , if we were given for each the point with label that is the nearest neighbor of ( among all points with this label ) , then we could run the brute - force algorithm described above on these points in time and derive . however , exact metric nearest neighbor search is potentially expensive , and so we can not find these points efficiently .we instead find for each a point with label that is a -approximate nearest neighbor of among points with this label .( this can be done by presorting the points of into buckets based on their label , and once is received , running on each bucket a -approximate nearest neighbor search algorithm due to that takes time . )we then run the brute force algorithm on these points in time .the nearest neighbor search achieves approximation factor , implying a similar multiplicative approximation to , and thus also to , which means at most additive error in the value .we conclude that the algorithm s output solves the lipschitz extension problem within additive approximation .in previous sections , we defined an efficient regression algorithm and analyzed its finite - sample performance . here we show that it enjoys the additional property of being strongly consistent .note that our regression hypothesis is constructed via approximate nearest neighbors ; see for consistency results of nearest - neighbor regression functions in euclidean spaces .we say that a regression estimator is _ strongly consistent _ if its expected risk converges almost surely to the optimal expected risk .further , it is called _ universal _ if this rate of convergence does not depend on the sampling distribution . 
in this section ,we establish the strong , universal consistency of our regression estimate .[ thm : exact - consist ] let be a compact metric space and suppose ] that achieves where is defined in ( [ eq : exprisk ] ) and the infimum is taken over all continuous ] , we have consider the case .then = which proves the claim for this case . for ,recall that the lipschitz constant of a differentiable real function is bounded by the maximal absolute value of its derivative .the function \to[0,1] ] has , which proves the claim for .in this section , we prove the following theorem . see section [ sec : tech ] for the definition of a spanner . [ thm : spanner ]every finite metric space on points admits a -stretch spanner with degree ( for ) and hop - diameter , that can be constructed in time .gottlieb and roditty presented for general metrics a -stretch spanner with degree and construction time , but this spanner has potentially large hop - diameter .our goal is to modify this spanner to have low hop - diameter , without significantly increasing the spanner degree .now , as described in , the points of are arranged in a tree of degree , and a spanner path is composed of three consecutive parts : ( a ) a path ascending the edges of the tree ; ( b ) a single edge ; and ( c ) a path descending the edges of the tree .we will show to decrease the number of hops in parts ( a ) and ( c ) . below we will prove the following lemma .[ lem : tree ] let be a tree containing directed child - parent edges ( ) , and let be the degree of . then may be augmented with directed descendant - ancestor edges to create a dag with the following properties : ( i ) has degree ; and ( ii ) the hop - distance in from any node to each of its ancestors is . note that theorem [ thm : spanner ] is an immediate consequence of lemma [ lem : tree ] applied to the spanner of .it remains only to prove lemma [ lem : tree ] .we will first need a simply preliminary lemma : [ lem : path ] consider an ordered path on nodes .let these nodes be assigned positive weights , and let the weight of the path be .there exists a dag on these nodes with the following properties : 1 .edges in always point to the antecedent node in the ordering .the hop - distance from any node to the root node is not more than .3 . the hop - distance from any node to an antecedent is not more than .4 . has degree 3 .the construction is essentially the same as in the biased skip - lists of bagchi et al .let and be the left and right _ end nodes _ of the path , and let the other nodes be the _ middle nodes_. partition the middle nodes into two child subpaths ( the left child path ) and ( the right child path ) , where is chosen so that the weight of the middle nodes of each child path is not more than half the weight of the middle nodes of the parent path .( if the parent path has three middle nodes or fewer , then there will be a single child path . )the child paths are then recursively partitioned , until the recursion reaches paths with no middle nodes .the edges are assigned as follows .a right end node of a path has two edges leaving it .one points to the left end node of the path ( unless the path has only one node ) .the other edge points to the right end node of the right ( or single ) child path .a left end node of a path has one edge leaving it : if this path is a right child path , the edge points to the left sibling path s right end node . 
if this path is a left or single child path , then the edge points to the parent s left end node .the lemma follows via standard analysis .given tree , decompose into _ heavy paths _ :a heavy path is one that begins at the root and continues with the heaviest child , the child with the most descendants . in a heavy path decomposition, all off - path subtrees are recursively decomposed . for each heavy path , let the weight of each node in the path be the number of descendant nodes in its off - path subtrees . for each heavy path, we build the weighted construction of lemma [ lem : path ] .now , a path from node to traverses a set of at most heavy paths , say paths .the number of hops from to is bounded by ) , and the degree of is at most .
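to make the heavy - path step of lemma [ lem : tree ] concrete , the decomposition described above can be sketched as follows . this is an illustrative sketch only : the tree is assumed to be given as a child - list dictionary , and the weighted path dag of lemma [ lem : path ] is not reproduced here .

```python
def heavy_path_decomposition(children, root):
    """Decompose a rooted tree into heavy paths.

    children: dict mapping each node to the list of its children.
    Returns a list of paths; every path starts at the top of a heavy path
    and always continues with the child that has the most descendants,
    as described in the text.
    """
    # subtree sizes via an iterative preorder followed by a reverse sweep
    stack, order, size = [root], [], {}
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(children.get(u, []))
    for u in reversed(order):
        size[u] = 1 + sum(size[c] for c in children.get(u, []))

    paths, heads = [], [root]
    while heads:
        path, v = [], heads.pop()
        while True:
            path.append(v)
            kids = children.get(v, [])
            if not kids:
                break
            heavy = max(kids, key=lambda c: size[c])
            # off-path subtrees are decomposed later, each rooted at its own head
            heads.extend(c for c in kids if c != heavy)
            v = heavy
        paths.append(path)
    return paths
```

in the construction of lemma [ lem : tree ] , each node on a heavy path would then be weighted by the number of descendants in its off - path subtrees ( its subtree size minus one minus the size of its heavy child ) , and the weighted dag of lemma [ lem : path ] is built on every such path .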
|
we present a framework for performing efficient regression in general metric spaces . roughly speaking , our regressor predicts the value at a new point by computing a lipschitz extension , the smoothest function consistent with the observed data , while performing an optimized structural risk minimization to avoid overfitting . the offline ( learning ) and online ( inference ) stages can be solved by convex programming , but this naive approach has a runtime complexity that is prohibitive for large datasets . we instead design an algorithm that is fast when the doubling dimension , which measures the `` intrinsic '' dimensionality of the metric space , is low . we use the doubling dimension twice : first , on the statistical front , to bound the fat - shattering dimension of the class of lipschitz functions ( and obtain risk bounds ) ; and second , on the computational front , to quickly compute a hypothesis function and a prediction based on lipschitz extension . the resulting regressor is asymptotically strongly consistent and comes with finite - sample risk bounds , while making minimal structural and noise assumptions .
|
restoration is one of the most fundamental issues in imaging science and plays an important role in many mid - level and high - level image processing applications . on account of the imperfection of an imaging system ,a recorded image may be inevitably degraded during the process of image capture , transmission , and storage .the image formation process is commonly modeled as the following linear system where the vectors and represent the true scene and observation whose column vectors are the successive -vectors of and , respectively . is gaussian white noise with zero mean , and is a blurring matrix constructed from the discrete point spread function , together with the given boundary conditions .it is well known that image restoration belongs to a general class of problems which are rigorously classified as ill - posed problems . to tackle the ill - posed nature of the problem , regularization techniques are usually considered to obtain a stable and accurate solution . in other words , we seek to approximately recover by minimizing the following variational problem : where denotes the euclidean norm , is conventionally called a regularization functional , and is referred to as a regularization parameter which controls the balance between fidelity and regularization terms in ( [ eq2 ] ) . how to choose a good functional is an active area of research in imaging science . in the early 1960s ,d. l. phillips and a. n. tikhonov proposed the definition of as an -type norm ( often called tikhonov regularization in the literature ) , that is , with an identity operator or difference operator .the functional of this type has the advantage of simple calculations , however , it produces a smoothing effect on the restored image , i.e. , it overly smoothes edges which are important features in human perception .therefore , it is not a good choice since natural images have many edges . to overcome this shortcoming , rudin , osher and fatemi proposed to replace the -type norm with the total variation ( tv ) seminorm , that is , they set . then the corresponding minimization problem is where and the discrete gradient operator is defined by with and for and refers to the entry of the vector ( it is the pixel location of the image , and this notation is valid throughout the paper unless otherwise specified ) . the problem ( [ eq3 ] ) is commonly referred to as the rof model .the tv is isotropic if the norm is the euclidean norm and anisotropic if 1-norm is defined . in this work ,we only consider the isotropic case since the isotropic tv usually behaves better than the anisotropic version . in the literature , many algorithms have been proposed for solving ( [ eq3 ] ) . in case is the identity matrix , then the problem ( [ eq3 ] ) is referred to as the denoising problem . in the pioneering work , the authors proposed to employ a time marching scheme to solve the associated euler - lagrange equation of ( [ eq3 ] ) .however , their method is very slow due to cfl stability constraints .later , vogel and oman proposed a lagged diffusivity fixed point method to solve the same euler - lagrange equation of ( [ eq3 ] ) . 
in ,chan and mulet proved this method had a global convergent property and was asymptotically faster than the explicit time marching scheme .chambolle studied a dual formulation of the tv denoising problem and proposed a semi - implicit gradient descent algorithm to solve the resulting constrained optimization problem .he also proved his algorithm is globally convergent with a suitable step size . in , goldstein and osher proposed the novel split bregman iterative algorithm to deal with the artificial constraints , their method has several advantages such as fast convergence rate and stability , etc .in , chan , golub and mulet considered to apply newton s method to solve the nonlinear primal - dual system of the system ( [ eq3 ] ) for image deblurring problem .recently , wang _ proposed a fast total variation deconvolution ( ftvd ) method which used splitting technique and constructs an iterative procedure of alternately solving a pair of easy subproblems associated with an increasing sequence of penalty parameter values . almost at the same time, huang , ng and wen proposed a fast total variation ( fast - tv ) minimization method by introducing an auxiliary variable to replace the true image .their methods belong to penalty methods from the perspective of optimization . in ,beck and teboulle studied a fast iterative shrinkage - thresholding algorithm ( fista ) which is a non - smooth variant of nesterov s optimal gradient - based algorithm for smooth convex problems .later , afonso _ et al . _ proposed an augmented lagrangian shrinkage algorithm ( salsa ) which is an instance of the so - called alternating direction method of multipliers ( admm ) .more recently , chan , tao and yuan proposed an efficient and effective method by imposing box constraint on the rof model ( [ eq3 ] ) .their numerical experiments showed that their method could obtain much more accurate solutions and was superior to other state - of - the - art methods .the methods of solving rof model ( [ eq3 ] ) mentioned above are just a few examples , we refer the interested readers to and the references therein for further details .although total variation regularization has been proven to be extremely useful in a variety of applications , it is well known that tv yields staircase artifacts .therefore , the approaches involving the classical tv regularization often develop false edges that do not exist in the true image since they tend to transform smooth regions ( ramps ) into piecewise constant regions ( stairs ) .to avoid these drawbacks , nonlocal methods were considered in .besides , in the literature , there is a growing interest for replacing the tv regularizer by the high - order total variation ( htv ) regularizer , which can comprise more than merely piecewise constant regions .the majority of the high - order norms involve second - order differential operators because piecewise - vanishing second - order derivatives lead to piecewise - linear solutions that better fit smooth intensity changes , namely , we choose the regularization functional . then the minimization problem ( [ eq2 ] ) is treated as following htv - based problem : where with .note that , denotes the second order difference of at pixel . the minimization problem ( [ eq4 ] ) is usually called llt model which was first proposed by lysaker , lundervold , and tai . in , the authors applied gradient descent algorithm to solve the corresponding fourth - order partial differential equation . 
later in ,chen , song and tai employed the dual algorithm of chambolle for solving ( [ eq4 ] ) and they verified that their method was faster than the original gradient descent algorithm .a similar dual method was also proposed by steidl but from the linear algebra point of view by consequently using matrix - vector notation .recently , wu and tai considered to employ the alternating direction method of multipliers ( admm ) to tackle the problem ( [ eq3 ] ) .also , some other high order models have been proposed in the literature , we refer the interested reader to see and references therein for details .note that there exist other different types of regularization functionals , such as the markov random field ( mrf ) regularization , the mumford - shah regularization , and frame - based regularization . in this paper , however , we consider to set in ( [ eq2 ] ) to be the overlapping group sparsity total variation ( ogs - tv ) functional which we have introduced in for the one - dimension signal denoising problem .the numerical experiments there showed that the ogs - tv regularizer can alleviate staircase effect effectively .then it is natural to extend this idea to the 2-dimensional case such as image restoration considered in this work .the rest of the paper is organized as follows . in the next section, we will briefly introduce the definition of the overlapping group sparsity total variation functional for image restoration .we will also review the majorization - minimization ( mm ) methods and admm , which are the essential tools for us to propose our efficient method . in section 3 ,we derive an efficient algorithm for solving the considered minimization problem .consequently , in section 4 , we give a number of numerical experiments of image denoising and image deblurring to demonstrate the effectiveness of the proposed method , as compared to some other state - of - the - art methods .finally , discussions and conclusions are made in section 5 .in , we have denoted a -point group of the vector by \in \mathbb{r}^k\ ] ] note that can be seen as a block of contiguous samples of staring at index . with the notation ( [ eq5 ] ) ,a group sparsity regularizer is defined as the group size is denoted by . for the two - dimensional case , we define a -point group of the image ( note that the vector is obtained by stacking the columns of the matrix ) \\ & \in \mathbb{r}^{k\times k}\\ \end{split}\ ] ] with , m_2 = [ \frac{k}{2}] ] denotes the greatest integer not greater than .let be a vector which is obtained by stacking the columns of the matrix , i.e. , .then the overlapping group sparsity functional of a two - dimensional array can be defined by the group size of functional ( [ eq8 ] ) is denoted by .consequently , we set the regularization functional in ( [ eq2 ] ) to be of the form in ( [ eq9 ] ) , if , then is the commonly used anisotropic tv functional. then we refer to the regularizer in ( [ eq9 ] ) as the overlapping group sparsity anisotropic total variation functional ( ogs - atv ) .the admm technique was initially proposed to solve the following constrained separable convex optimization problem : where are closed convex functions , are linear transforms , are nonempty closed convex sets , and is a given vector . 
using a lagrangian multiplier to the linear constraint in ( [ cop ] ) , the augmented lagrangian function for problem ( [ cop ] ) is where is the lagrange multiplier and is a penalty parameter , which controls the linear constraint .the idea of the admm is to find a saddle point of .usually , the admm consists in minimizing alternatively , subject to , such as minimizing with respect to , keeping and fixed .notice that the term in the definition of the augmented lagrangian functional in ( [ alf ] ) can be written as a single quadratic term after simple mathematical operations , leading to the following alternative form for a simple but powerful algorithm : the admm + + _ _ l + _ * initialization * _ : starting point , , + _ * iteration * _ : + + + + + * until a stopping criterion is satisfied . *+ + + an important advantage of the admm is to make full use of the separable structure of the objective function .note that the admm is a splitting version of the augmented lagrangian method where the augmented lagrangian method s subproblem is decomposed into two subproblems in the gauss - seidel fashion at each iteration , and thus the variables and can be solved separably in alternating order .the convergence of the alternating direction method can be found in .moreover , we have hence , and . especially , if matrices and have full column rank , it leads to and .[ [ mm ] ] mm ~~ the mm method substitutes a simple optimization problem for a difficult optimization problem .that is to say , instead of minimizing a difficult cost functional directly , the mm approach solves a sequence of optimization problems , .the idea is that each is easier to solve that . of course , iteration is the price we pay for simplifying the original problem .generally , a mm iterative algorithm for minimizing has the form where , for any , and , i.e. , each functional is a majorizor of .when is convex , then under mild conditions , the sequence produced by ( [ eq13 ] ) converges to the minimizer of .a good majorizing functional usually satisfies the following characteristics : ( a ) avoiding large matrix inversions , ( b ) linearizing an optimization problem , ( c ) separating the parameters of an optimization problem , ( d ) dealing with equality and inequality constraints gracefully , or ( e ) turning a nondifferentiable problem into a smooth problem .more details about the mm procedure can be found in and the references therein .before we proceed with the discussion of the proposed method , we consider a minimization problem of the form where is a positive parameter and the functional is given by ( [ eq8 ] ) . in , we analysed this problem elaborately .however , for the sake of completeness , we briefly introduce the solving method here . to derive an effective and efficient algorithm with the mm approach for solving the problem ( [ eq14 ] ) , we need a majorizor of , and fortunately , we only need to find a majorizor of because of the simple quadratic term of the first term in ( [ eq14 ] ) . to this end , note that for all and with equality when . substituting each group of into ( [ eq15 ] ) and summing them, we get a majorizor of \end{split}\ ] ] with provided for all . 
with a simple calculation , can be rewritten as where is a constant that does not depend on , and is a diagonal matrix with each diagonal component {l , l } = \sqrt{\sum_{i , j =- m_1}^{m_2 } \left[\sum_{k_1,k_2=-m_1}^{m_2}\left|u_{r - i+k_1,t - j+k_2}\right|^2 \right]^{-\frac{1}{2}}}\ ] ] with .the entries of can be easily computed using matlab built - in function ` conv2 ` .then a majorizor of can be easily given by with for all , and .to minimize , the mm aims to iteratively solve which has the solution where is a identity matrix with the same size of .note that the inversion of the matrix can be computed very efficiently via simple component - wise calculation .to summerize , we obtain the algorithm 2 for solving the problem ( [ eq14 ] ) .+ + _ _ l + + _ * initialization * _ : + starting point , , , maximum inner + iterations .+ _ * iteration * _ : + .{l , l}= \sqrt{\sum\limits_{i , j =- m_1}^{m_2 } \left[\sum\limits_{k_1,k_2=-m_1}^{m_2}\left|u_{r - i+k_1,t - j+k_2}\right|^2 \right]^{-\frac{1}{2}}} ] .such a constraint is called the box constraint .for instance , the images considered in this work are all 8-bit images , we would like to restore them in a dynamic range ] , \\ & a_u , & f_{i , j}>a_u\quad \ \ \\ \end{aligned } \right.\ ] ] by introducing new auxiliary variables , we change the minimization problem ( [ eq23 ] ) together with a constraint ( [ eq24 ] ) to the equivalent constrained minimization problem thus , problem ( [ eq25 ] ) satisfiesthe framework in ( [ cop ] ) with the following specifications : + 1 ) ; + 2 ) ; + 3 ) according to algorithm 1 , we get the iterative scheme we now investigate these subproblems one by one .the minimization problem ( [ eq26 ] ) with respect to is a least square problem which is equivalent to the corresponding normal equation since the parameters is positive , the coefficient matrix in ( [ eq29 ] ) is always invertible even when is singular .note that , and block circulant with circulant blocks ( bccb ) when the periodic boundary conditions are used .we know that the computations with bccb matrices can be very efficiently performed by using fast fourier transforms ( ffts ) . clearly ,the minimization ( [ eq27 ] ) with respect to are decoupled , i.e. , they can be solved separately .considering , we have + the corresponds to the following optimization problem for simplicity , we denote . it can be observed that problems ( [ eq30 ] ) and ( [ eq31 ] ) match the framework of the problem ( [ eq14 ] ) , thus the solutions of ( [ eq30 ] ) and ( [ eq31 ] ) can be obtained by using algorithm 3 , respectively . to .,scaledwidth=45.0% ] besides , the can be solved by the simple projection onto the box \ ] ] based on the discussions above , we get the resulting algorithm for solving ( [ eq23 ] ) shown as algorithm 3 .+ + _ _ l + + _ * initialization * _ : + starting point , , , + =0 , maximum inner iterations .+ _ * iteration * _ : + . compute according to + .compute according to + .compute according to + . compute to + .update according to + .k = k+1 ; + * until a stopping criterion is satisfied . * + + + + obviously , ogsatv - adm4 is an instance of admm if the minimizations in steps 1 are solved exactly ( i.e. 
, the subproblems have closed - form solutions ) , the convergence of ogsatv - adm4 is guaranteed .note that , although steps ( 2 ) and ( 3 ) in algorithm 3 can not be solved exactly , the convergence of algorithm 3 is not compromised as long as the sequence of errors of successive solutions in ( 2 ) and ( 3 ) are absolutely summable , respectively .the corresponding theoretical proof is given elaborately in and we will also verify this property in our numerical experiments .in this section , we present some numerical results to illustrate the performance of the proposed method .the test images are shown in fig .1 with sizes from to .all experiments are carried out on windows 7 32-bit and matlab v7.10 running on a desktop equipped with an intel core i3 - 2130 cpu 3.4 ghz and 4 gb of ram .+ .restoration results for different numbers ( ) of mm iterations in the ogsatv - adm4 [ cols="^,^,^,^,^,^",options="header " , ] [ tab : addlabel ] we make use of 9 test images for comparisons .we simulate the noisy images with two different noise levels .all of these images were corrupted by the zero - mean additive gaussian noise with the standard deviation and , respectively . in fig .3 , we show a noisy ( ) vase image and the corresponding denoised version using proposed ogsatv - adm4 . visually , we see that ogsatv - adm4 works very well for image denoising .the evolutions of relerrs vs iterations and cpu time using four different methods are plotted in fig .4 . from this figure, we observe that split bregman method converges extremely fast and consumes the least cpu time , and chambolle s method is the slowest one no matter in terms of iterations or consuming time . however , our method ogsatv - adm4 reach the lowest relerr value at the convergence point in reasonable time .moreover , the denoising comparison among four different methods is further illustrated in fig .5 , where we show fragments of two true images , noisy images ( ) and the corresponding denoised ones . as can be seen from the fig . 5 ,denoised images obtained by using tv methods ( the split bregman method and chambolle s dual method ) have apparent staircase effect ( such as the parts pointed by the left below arrow and upper right arrow of boats image , and the nose and the lower jaw of man image ) , while the llt - alm method and the ogsatv - adm4 overcome this drawback to a great extent .however , there also exists shortcoming caused by llt - alm , i.e. , some parts of the restored images are overly smoothed .the parts pointed by the upper left arrow and lower right arrow of boats image , and the eyelid and lips ( the parts pointed by left two arrows ) of man image show this effect .note that our method can avoid this drawback effectively .+ the output results in terms of psnr , relerr , cpu time , and iterations of four methods are given in table ii . from the table , we observe that the split bregman method and chambolle s dual method achieve similar psnr results , while the llt - alm method can sometimes performs better than both of them in terms of psnr .overall , our method ogsatv - adm4 can reach the highest psnr results among the four methods .we should also note that the split bregman usually costs least cpu time .+ + : _ image deblurring _ + in case is a blurring matrix , then the problem we aim to solve is deblurring . 
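before turning to the deblurring comparisons , the component - wise mm inner solve that powers algorithms 2 and 3 ( the step noted above as a simple component - wise calculation ) can be sketched as follows . this is a simplified illustration rather than the authors ' implementation : the group is taken as a k x k window of squared magnitudes centered at each pixel , boundaries are handled by zero padding , and the small constant guarding the square root is an added safeguard .

```python
import numpy as np
from scipy.signal import convolve2d

def ogs_mm_step(v, beta, K=3, iters=20, eps=1e-12):
    """Approximately solve  min_u 0.5*||u - v||^2 + beta*phi(u)
    by majorization-minimization, where phi is the overlapping group
    sparsity functional with K x K groups.  Each MM iteration reduces
    to a component-wise division, as noted in the text."""
    u = v.astype(float).copy()
    window = np.ones((K, K))
    for _ in range(iters):
        # energy of the K x K group around every pixel
        group_energy = convolve2d(u ** 2, window, mode="same") + eps
        # diagonal majorizer weight: for each pixel, the sum over all groups
        # containing it of the inverse group norm (two convolutions in total)
        weight = convolve2d(1.0 / np.sqrt(group_energy), window, mode="same")
        u = v / (1.0 + beta * weight)
    return u
```

within the full scheme , this routine would play the role of steps ( 2 ) and ( 3 ) of algorithm 3 , while the other subproblems are the fft - based solve of the normal equation and the projection onto the box .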
for this example, we compare our method with the method fasttv proposed by huang , ng , and wen , the admm method for solving constrained tv - l2 model ( cadmtvl2 ) by chan , tao , and yuan and the method llt - alm .note that the test images used in cadmtvl2 are all scaled to the interval ] .+ in this example , we test two different types of blurring kernels : gaussian blur ( g ) and average blur ( a ) , which can be generated by the matlab built - in function ` fspecial ` , more specifically , ` fspecial('gaussian',[7 7],2 ) ` and ` fspecial('average',9 ) ` . for each blurring case ,the blurred images are further corrupted by zero mean gaussian noise with bsnr = 40 .two image estimates obtained by ogsatv - adm4 are shown in fig .6 , with the blurred images also shown for illustration .it is clear from fig . 6that the proposed method can restore blurred images effectively and in high quality .7 shows the evolution of the psnr over computing time and iterations for four different methods with respect to restoration of the lena " image with average blur .it is obvious that our method reaches the highest psnr with least iterations .it is obvious that cadmtvl2 needs fewer computing time to achieve the convergency point .we also observe that the penalty method fasttv needs the maximum computing time and iterations comparing with other three methods .table iii shows the output results in terms of psnr , relerr , cpu time and iterations of four methods . from the table, we see that the quality in terms of psnr of the restored images by fasttv and cadmtvl2 is almost the same .however , cadmtvl2 consumes much less time and needs much fewer iterations than fasttv .overall , the proposed method reaches the highest psnr compared with other three state - of - the - art methods , and needs less computing time and iterations than fasttv and llt - alm ( except in the case of the jellyfish " image , the computing time and iterations by using llt - alm are less than our method ) .quantitatively , however , our method can obtain db improvement in psnr on average .+ moreover , in order to illustrate the superior capability of our method for image deblurring .we show the fragments of restored images w.station " and lena " in fig .8 . in the top row of fig .8 , we observe that the fences of the w.station image estimates obtained by fasttv and cadmtvl2 are very blocky ( staircase effect ) , however , they are restored very well by both llt - alm and the proposed method . on the other hand, llt - alm makes the white boxes locally over - smoothed while our method , together with fasttv and cadmtvl2 , can restore them almost the same as the true image .similar phenomena can also be seen from the bottom row of fig .8 , llt - alm and the proposed method can avoid staircase effect effectively , such as the lips and cheek .however , we notice that llt - alm fails to recover the brim of the hat ( edges ) correctly since it makes the brim over - smoothed . in contrast, our method can not only recover the edges very well , but avoids staircase effect as well .in this paper , we study the image restoration problem based on the overlapping group sparsity total variation regularizer . to solve the corresponding minimization problem, we proposed a very efficient algorithm ogsatv - adm4 under the framework of the classic admm and using mm method to tackle the associated subproblem . 
the numerical comparisons with many state - of - the - art methodsshow that our method is very effective and efficient .the results verify that the proposed method avoids staircase effect and yet preserves edges .we are currently working on extending our method to real applications involving compressed sensing , blind deconvolution , image enhancement and so on .the authors would like to thank prof .m. tao for providing us the code ( cadmtvl2 ) in and prof .m. ng for making their code ( fasttv ) in available online .n. b. karayiannis and a. n. venetsanopoulos , regularization theory in image restoration - the stabilizing functional approach , " _ ieee trans .speech signal processing , _ vol .1155 - 1179 , 1990 .r. chan , m. tao , and x. m. yuan , constrained total variational deblurring models and fast algorithms based on alternating direction method of multipliers , " _ siam j. imag .6 , 680 - 697 , 2013 .m. lysaker , a. lundervold , and x .- c .tai , noise removal using fourth - order partial differential equation with applications to medical magnetic resonance images in space and time , " _ ieee trans . image process .1579 - 1590 , 2003 .
|
image restoration is one of the most fundamental issues in imaging science . total variation ( tv ) regularization is widely used in image restoration problems for its ability to preserve edges . in the literature , however , it is also well known for producing staircase - like artifacts . the high - order total variation ( htv ) regularizer is usually a good alternative , except for its over - smoothing property . in this work , we study a minimization problem whose objective includes a standard data - fidelity term and an overlapping group sparsity total variation regularizer , which avoids the staircase effect while preserving edges in the restored image . we also propose a fast algorithm for solving the corresponding minimization problem and compare it with state - of - the - art tv - based methods and an htv - based method . the numerical experiments illustrate the efficiency and effectiveness of the proposed method in terms of psnr , relative error and computing time . image restoration , convex optimization , total variation , overlapping group sparsity , admm , mm .
|
for power expanding feynman integrals several methods exist , where all of them have their limitations .mellin - barnes techniques provides a very general method to obtain all powers .this method however fails if the integrals are getting too complex . on the other hand _method of regions _ is a convenient way to obtain the leading power , whereas it is getting rather complicated for higher powers because of the many contributing regions and because it is difficult to automatize .furthermore it is a very non - trivial task to make sure that one has not forgotten or counted twice any region .however in the euclidean limit , where no collinear divergences arise , automatizations exist , which rely on graph theory . another way to expand feynman integrals , which has been proposed and worked out in ,is based on differential equations .differential equation techniques , which has been proposed first in , is easy to automatize in a computer algebra system .this makes it a convenient method to obtain subleading powers , whereas the leading power is in most cases needed as an input like a boundary condition .another limitation is the fact that this method relies on a correct ansatz in terms of powers of the expansion parameter .however it is a priori not obvious which powers of the expansion parameter occur ( e.g. only integer powers or also half - integer powers ) . in the present paperi present a semi - numerical method , that provides the power expansion of feynman integrals by giving explicit expressions of the expansion coefficients in form of finite integrals the can be solved numerically . in particularthis method gives the contributing powers of the expansion parameter , from where one can read off the correct ansatz to solve the differential equations that determine the set of feynman integrals .the algorithm that is worked out in the present paper combines sector decomposition with mellin - barnes techniques .it is completely independent from any power counting argument such that it can be used as a cross check for method of regions .this is very useful in cases , where method of regions becomes involved because of many contributing regions .the paper is organized as follows . in section [ s1 ]the algorithm is explained in detail . in section [ s2 ]i apply this algorithm to a set of two feynman integrals , that are power expanded by differential equation techniques , where the leading powers are obtained by method of regions .i will show explicitly how this algorithms gives the correct ansatz for the differential equations and provides a non - trivial check for method of regions .we follow the steps of section 2 of . we start with a -loop feynman integral which using the feynman parameterization can be cast into the form : ^{-n}. \label{1.1}\ ] ] we define as usual . after performing the integration over the loop momentawe obtain : where \label{1.3}\ ] ] and let us assume ( [ 1.3 ] ) contains the parameter , in which we want to expand ( [ 1.1 ] ) . 
using the mellin - barnes representation where the integration contour over has to be chosen such that we modify ( [ 1.2 ] ) in the following way where the main idea behind the procedure below is the following : by closing the integration path to the right hand side of the imaginary axis we sum up all the residua on the positive real axis and obtain an expansion in .powers of appear because of poles of order higher than one and because of terms of the form in the expansion in .these terms turn after expanding in into powers of .we continue with part i and ii of .first we split the integral over the feynman parameters into and integrate out the -function by the substitution such that we obtain where is obtained by the substitution ( [ 1.4.1 ] ) . in ( [ 1.5 ] ) the integration over small leads to poles in .this behavior is made explicit , if we follow the steps of part ii of : look for a minimal set such that , or vanish , if these parameters are set to zero .we decompose the integral into subsectors and substitute which leads to the jacobian factor .now we are able to factorize out from , or . after repeating these steps , until , and contain terms that are constant in , we end up with integrals over the feynman parameters of the form where , and contain terms that are constant in . the procedure above can in principal lead to infinite loops .this problem was addressed in , where algorithms are proposed that avoid these endless loops by choosing appropriate subsectors .i have not yet faced any endless loop in the problems i dealt with .however one should keep in mind that they can occur and adapt the implementation of the algorithm if needed . from ( [1.6 ] ) we can read off that the poles in are located at : where .( [ 1.7 ] ) becomes clear if one taylor expands in ( [ 1.6 ] ) the terms outside the brackets with respect to and performs the integration . in ( [ 1.5 ] )we have to choose the contour of the integration over such that the integration over the feynman parameters converges .this leads to the condition the poles in ( [ 1.7 ] ) that have to be taken into account are those that are located on the right hand side of the integration contour , i.e. from ( [ 1.7 ] ) and ( [ 1.7.1 ] ) we conclude that ( [ 1.7.2 ] ) is fulfilled if and only if .in the next step we calculate the residue of ( [ 1.6 ] ) at .we write the feynman integral in the form and note that this term is singular in if and only if so following part iii of we expand around and obtain with a rest term , such that ( [ 1.8 ] ) becomes where we used that .we repeat this procedure for all where condition ( [ 1.9 ] ) is fulfilled .the remaining integrals do not diverge for .so it is save to expand them around and we can easily calculate the residue at . what is left is to calculate the laurent expansion in . from the previous procedure we obtain terms of the form the logarithms arise from taking the residues of terms of the form with .in ( [ 1.12 ] ) we wrote these logarithms explicitly such that we can expand around .the poles in in ( [ 1.12 ] ) originate from integrals with . repeating the procedure above weexpand and obtain for ( [ 1.13 ] ) all the remaining integrals over are finite and can in principle be calculated numerically .finally the original integral in ( [ 1.1 ] ) obtains the form where the contain finite integrals that can be numerically evaluated .the logarithms arise both due to poles of higher order in the mellin - barnes parameter and to the expansion in from terms of the form . 
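the mechanism behind ( [ 1.16 ] ) , namely closing the mellin - barnes contour to the right and summing residues to generate the power expansion , can be checked numerically in the simplest setting of a single factor , i.e. the standard identity of the type used in ( [ 1.4 ] ) . the following sketch ( using mpmath ; the values of the exponent , the contour abscissa and the truncation order are purely illustrative ) verifies that the contour integral agrees with the residue sum , which is just the binomial series in the small ratio .

```python
import mpmath as mp

mp.mp.dps = 30
lam, X, Y = mp.mpf(2.5), mp.mpf(2.0), mp.mpf(0.3)   # expand in the small ratio Y/X

# left-hand side of the Mellin-Barnes identity: (X + Y)^(-lam)
lhs = (X + Y) ** (-lam)

# contour Re(s) = c with -lam < c < 0, as required for convergence
c = mp.mpf(-0.5)
f = lambda t: (mp.gamma(-(c + 1j * t)) * mp.gamma(lam + c + 1j * t)
               * Y ** (c + 1j * t) * X ** (-lam - c - 1j * t))
mb = mp.quad(f, [-mp.inf, mp.inf]) / (2 * mp.pi * mp.gamma(lam))

# closing the contour to the right picks up the poles of gamma(-s) at s = 0, 1, 2, ...
# and reproduces the power expansion (here simply the binomial series in Y/X)
series = X ** (-lam) * sum((-1) ** n * mp.gamma(lam + n) / (mp.gamma(lam) * mp.factorial(n))
                           * (Y / X) ** n for n in range(15))

print(lhs, mp.chop(mb), series)   # the three numbers agree to high precision
```

in the full algorithm the analogue of the small ratio is the expansion parameter , higher - order poles produce the logarithms , and the residue at each contributing pole supplies one of the finite coefficient integrals in ( [ 1.16 ] ) .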
depending on the values of in ( [ 1.7 ] )the sum over does not only run over integer numbers but also over numbers of the form where is integer .i stress that even if a numerical evaluation of the integrals is not possible , we can obtain non - trivial statements about the power expansion of from ( [ 1.16 ] ) together with ( [ 1.7 ] ) .that is to say ( [ 1.7 ] ) gives us information about the possible powers of e.g. we know if we only get integer powers or also powers of . andfrom ( [ 1.16 ] ) we can read off up to which power appears . as we will see in the next sectionthis information will prove to be useful to obtain the power expansion by means of differential equations .the idea to get the expansion of feynman integrals by differential equations has been proposed and worked out in . by the following examplewe will see that the algorithm shown in the last section will give us the correct ansatz to solve the given system of differential equations and help us with the calculation of the initial conditions .we start with the integrals given by fig .[ f1 ] , where we assume : let us assume that we want to expand these integrals in and need the result up to order . for simplicity let us also set and .using integration - by - parts identities , we get the following differential equations for and : with and \nonumber\\ & = & \frac{1}{(4\pi)^d}\gamma(\epsilon)^2 \frac{\lambda^{1 - 2\epsilon}-\lambda^{-\epsilon}}{4\lambda(1-\lambda ) } , \label{2.4}\end{aligned}\ ] ] where ( [ 2.3 ] ) and ( [ 2.4 ] ) are exact in and . by defining ( [ 2.2 ] ) becomes in ( [ 2.5 ] ) we have not yet specified which values the summation index takes and up to which maximum value the finite sum over runs . by implementing the steps of the last section , which led to ( [ 1.7 ] ) , in a computer algebra system we obtain from ( [ 1.7 ] ) that comes with the powers of and with where . from ( [ 2.7 ] ) and ( [ 2.8 ] ) we read off that takes the values . in ( [ 2.6 ] ) integer - valued and half - integer - valued do not mix .so we would have missed powers of , if we had made the nave ansatz that only come with integer powers of .now one could argue that is already contained in the sum over .however in order to solve ( [ 2.6 ] ) we have to assume that there exists such that for all .a computer algebra analysis of the algorithm in the previous section tells us that in our special case . solving ( [ 2.6 ] ) up to we note that we need and as initial conditions , which can be obtained by method of regions . in the case of we note that only the region participates where both integration momenta are hard : in this region we obtain which is the leading power of . for need the region where both and are soft , i.e. this region starts participating at : by comparing these results to ( [ 2.7 ] ) and ( [ 2.8 ] ) we note that ( [ 2.9 ] ) and ( [ 2.10 ] ) correspond to definite poles in the mellin - barnes representation i.e. at and . 
by ( [ 1.7 ] ) and ( [ 1.11 ] ) we can calculate the coefficients of and in the -expansion of and numerically .this is a non - trivial test that we have not forgotten a contributing region , which is in general a problem of method of regions .we normalize our integrals by multiplication with and obtain from the solution of ( [ 2.6 ] ) the analytical expansion in and : +\mathcal{o}(\epsilon^2 ) \end{split } \nonumber \\ \begin{split } & i_2 = \\ & \quad\quad\frac{1}{(4\pi)^4}\bigg [ -\frac{1}{2\epsilon^2}+\frac{-1 + 2\ln\lambda}{2\epsilon } + \frac{1}{2}+\frac{\pi^2}{4}+2\ln\lambda-\frac{1}{2}(\ln\lambda)^2 + \\ & \quad\quad\quad \epsilon\left(\frac{11}{2}+\frac{11\pi^2}{12}+ \frac{13}{3}\zeta(3)+\left(4+\frac{\pi^2}{6}\right)\ln\lambda- ( \ln\lambda)^2+\frac{1}{6}(\ln\lambda)^3 \right)+ \\ & \quad\quad \lambda^{\frac{1}{2}}(-4\epsilon\pi^2)+ \\ & \quad\quad \lambda\bigg ( -1-\frac{\pi^2}{3}+\ln\lambda-\frac{1}{2}(\ln\lambda)^2 + \\ & \quad\quad\quad \epsilon\left(11+\frac{2\pi^2}{3}-4\zeta(3)-3\ln\lambda-\frac{1}{2}(\ln\lambda)^2 + \frac{1}{2}(\ln\lambda)^3 \right ) \bigg)+ \\ & \quad\quad \lambda^{\frac{3}{2}}\epsilon\frac{4\pi^2}{3}+ \mathcal{o}(\lambda^2 ) \bigg]+\mathcal{o}(\epsilon^2 ) .\end{split } \label{2.11}\end{gathered}\ ] ] on the other hand our numeric method of section [ s1 ] gives +\mathcal{o}(\epsilon^2 ) \end{split } \nonumber\ ] ] +\mathcal{o}(\epsilon^2 ) , \end{split}\ ] ] which is consistent with ( [ 2.11 ] ) .by combining sector decomposition with mellin - barnes techniques i developed an algorithm for power expanding feynman integrals , where the coefficients in the expansion are given by finite integrals . even if these integrals can not be evaluated numerically , we can read off , which powers of the expansion parameter contribute and up to which power the logarithms occur .this non - trivial information provides the correct ansatz for solving the set of differential equations that determine the feynman integrals .another application of the presented algorithm is testing method of regions numerically . we have seen that every region , that has a unique scaling in the expansion parameter , corresponds to a definite power in the mellin - barnes expansion . so it can be tested separately . for method of regionsit is often an involved problem to make sure not to have missed or counted twice any region .this algorithm provides a test of method of regions that is independent of any power counting argument .i thank guido bell and christoph greub for helpful discussions and comments on the manuscript .the author is partially supported by the swiss national foundation as well as ec - contract mrtn - ct-2006 - 035482 ( flavianet ) . c. greub , t. hurth and d. wyler , phys. rev . * d54 * , 3350 ( 1996 ) , [ hep - ph/9603404 ] . v. a. smirnov , springer tracts mod .177 * , 1 ( 2002 ) .s. g. gorishnii , nucl .b319 * , 633 ( 1989 ) .m. beneke and v. a. smirnov , nucl . phys . *b522 * , 321 ( 1998 ) , [ hep - ph/9711391 ] .v. a. smirnov , commun .phys . * 134 * , 109 ( 1990 ) .k. g. chetyrkin , r. harlander , j. h. kuhn and m. steinhauser , nucl .a389 * , 354 ( 1997 ) , [ hep - ph/9611354 ] .t. seidensticker , .e. remiddi , nuovo cim . * a110 * , 1435 ( 1997 ) , [ hep - th/9711188 ] . v. pilipp , .v. pilipp , nucl . phys . *b794 * , 154 ( 2008 ) , [ arxiv:0709.3214 ] .r. boughezal , m. czakon and t. schutzmeier , jhep * 09 * , 072 ( 2007 ) , [ arxiv:0707.3090 ] .a. v. kotikov , phys . lett . *b254 * , 158 ( 1991 ) .t. binoth and g. heinrich , nucl . phys . 
*b585 * , 741 ( 2000 ) , [ hep - ph/0004013 ] .g. heinrich , nucl .* 116 * , 368 ( 2003 ) , [ hep - ph/0211144 ] .t. binoth and g. heinrich , nucl . phys . *b680 * , 375 ( 2004 ) , [ hep - ph/0305234 ] .g. heinrich , . c. bogner and s. weinzierl , comput .. commun . * 178 * , 596 ( 2008 ) , [ arxiv:0709.4092 ] .a. v. smirnov and m. n. tentyukov , .k. g. chetyrkin and f. v. tkachov , nucl .b192 * , 159 ( 1981 ) .f. v. tkachov , phys .b100 * , 65 ( 1981 ) .s. laporta , int .phys . * a15 * , 5087 ( 2000 ) , [ hep - ph/0102033 ] .
|
i present an algorithm based on sector decomposition and mellin - barnes techniques to power expand feynman integrals . the coefficients of this expansion are given in terms of finite integrals that can be calculated numerically . i show in an example how this method helps to obtain the full analytic power expansion from differential equations by providing the correct ansatz for the solution . for the method of regions , the presented algorithm provides a numerical check that is independent of any power counting argument .
|
learning and classification with a large amount of data raises the need for algorithms that scale well in time and space usage with the number of data points being trained on ._ streaming _ algorithms have properties that do just that : they run in a single pass over the data and use space polylogarithmic in the total number of points .the technique of making a single pass over the data has three key advantages : 1 ) points may be seen once and then discarded so they do not take up additional storage space ; 2 ) the running time scales linearly in the size of the input , a practical necessity for data sets with sizes in the millions , and 3 ) it enables these algorithms to function in a streaming model , where instead of data is not immediately available , individual data points may arrive slowly over time .this third feature enables data to be learned and models to be updated `` online '' in real time , instead of periodically running a batch update over all existing data .support vector machines ( svms ) are one such learning model that would benefit from an efficient and accurate streaming representation .standard 2-class svms models attempt to find the maximum - margin linear separator ( i.e. hyperplane ) between positive and negative instances and as such , they have a very small hypothesis complexity but provable generalization bounds .there have been several implementations of a streaming svm classifier ( , , ) , but so the most effective version has been based off the reduction from svm to the minimum enclosing ball ( meb ) problem introduced by .the connection between these two problems has made it possible to harness the work done on streaming mebs and apply them to svms , as was done in . in this paper , we utilize the blurred ball cover approximation to the meb problem proposed in to obtain a streaming svm that is both fast and space efficient . we also show that our implementation not only outperforms existing streaming svm implementations ( including those not based off of meb reductions ) but also that our error rates are competitive with libsvm , a state - of - the - art batch svm open - source project available [ here ] .the core vector machine ( cvm ) was introduced by as an new take on the standard support vector machine ( svm ) . instead of attempting to solve a quadratic system, the cvm makes use of the observation that many common kernels for the standard svm can be viewed as minimum enclosing ball ( meb ) problems .consider the following svm optimization problem on inputs : where and are the data points and labels , respectively , and is a feature map induced by the kernel of choice . here, the are error cushions that specify the cost of misclassifying .let be the optimal separating hyperplane . showed that if the kernel satisfies , a constant , then can be found by finding the minimum enclosing ball of the points , where \ ] ] where is the standard basis element ( all zeroes except for the position , which is 1 ) . if is the optimal meb center , then .a couple of things to note about this equivalence : first , it transforms the supervised svm training problem into an unsupervised one - the meb is blind to the true label of the point .second , the notion of a _ margin _ in the original svm problem is transformed into a _ core set _ , a subset of inputs such that finding the meb of the corset is a good approximation to finding the meb over the entire input . 
as such, core sets can be thought of as the minimal amount of information that defines the meb. the implementation of the cvm in follows the meb approximation algorithm described in : given , add to the core set the point that is farthest from the current meb center. recompute the meb from the core set and continue until all points are within from , where is the radius of the meb. the core vector machine achieves a approximation factor but makes passes over the data and requires storage space linear in the size of the input, an approach that does not scale for streaming applications. to this end, presented the streamsvm, a streaming version of the cvm, which computes the meb over the input in a streaming fashion, keeping a running approximate meb of all the data seen so far and adjusting it upon receiving a new input point. the streamsvm used only constant space and, using a small lookahead, achieved favorable performance compared to libsvm (batch) as well as the streaming perceptron, pegasos, and lasvm implementations. however, the streaming meb algorithm that powers streamsvm is only approximate and offers a worst-case approximation ratio of between and of the true meb, leaving open the possibility of a better streaming algorithm to improve the performance of streamsvm. in this paper, we present the blurred ball svm, a streaming algorithm based on the blurred ball cover proposed in . it takes a parameter and keeps track of multiple core sets (and their corresponding mebs), performing updates only when an incoming data point lies outside the union of the expansions of all the maintained mebs. the blurred ball svm also makes a single pass over the input, with space complexity independent of . the algorithm consists of two parts: a _ training _ procedure to update the blurred ball cover and a _ classification _ method, which uses the blurred ball cover to label a point with one of the two possible classes. for simplicity, we choose the classes to be . as described above, a ball with radius and center is a linear classifier consisting of a hyperplane passing through the origin with normal . for the rest of this paper, we will require the following assumptions, established by tsang et al.: * the data points, denoted by , are linearly separable (this is always the case if ). * , a constant .
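the farthest-point strategy described above can be sketched in a few lines for points in ordinary euclidean space. the version below is a simplified stand-in: instead of recomputing the exact meb of the core set at every step, it nudges the center toward the newly added farthest point in the badoiu-clarkson manner, which is enough to illustrate how a small core set yields a (1 + eps)-approximate ball; all names are mine.

```python
import numpy as np

def approx_meb(points, eps=0.1):
    """badoiu-clarkson style (1 + eps)-approximate minimum enclosing ball.
    returns (center, radius, core_set_indices).  at every step the point
    farthest from the current center is added to the core set and the
    center is nudged toward it."""
    pts = np.asarray(points, dtype=float)
    center = pts[0].copy()
    core = [0]
    for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        dists = np.linalg.norm(pts - center, axis=1)
        far = int(np.argmax(dists))
        core.append(far)
        # move the center a shrinking step toward the farthest point
        center += (pts[far] - center) / (t + 1.0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    return center, radius, sorted(set(core))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.normal(size=(500, 3))
    c, r, core = approx_meb(cloud, eps=0.1)
    print("radius ~", round(r, 3), "core set size", len(core))
```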
with these assumptions, the training procedure is described in algorithm [ updatealgo ] and is identical to the blurred ball cover update described in algorithm 1 of . we ran the blurred ball svm on several canonical datasets and compared the accuracy of each run with the batch libsvm implementation, the stream svm proposed by subramanian, and the streaming setting of the perceptron (which runs through the data only once but is otherwise identical to the perceptron training algorithm). table [ results ] shows the experimental results. all trials were averaged over 20 runs with respect to random orderings of the input data stream. the perceptron, lasvm and stream svm data were taken from the experiments documented in . the blurred ball svm on the mnist dataset was run with and , and on the ijcnn dataset was run with and . the choice of and was determined coarsely through experimentation. we offer two versions of the blurred ball svm, using lookahead buffer sizes of and . figures [ lookaheadgraph ] and [ lookaheadgraph - time ] compare the performance of different lookaheads as is varied. all experiments were run on a macintosh laptop with a 1.7 ghz processor and 4 gb of 1600 mhz memory. it is clear that the blurred ball svm outperforms other streaming svms, but even more surprising is that it also manages to outperform the batch implementation on the mnist dataset. we suspect that this is due to the fact that our classifier allows for non-convex separators. (table [ results ] caption: results on standard datasets comparing the performance of the blurred ball svm with other streaming svms and the batch libsvm baseline. is the size of the lookahead used in the streaming algorithms. measurements were averaged over 20 runs with respect to random orderings of the input stream. the bold number for each dataset marks the streaming svm with the highest accuracy on that dataset.) (figures [ lookaheadgraph ] and [ lookaheadgraph - time ] caption: performance as is varied, with lookaheads and ; despite diverging for large , the accuracies with both lookaheads were much more similar for small .)
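as a rough illustration of the update and classification procedures described above, the sketch below keeps a list of (core set, center, radius) balls, triggers an update only when an incoming point falls outside every (1 + eps)-expanded ball, and classifies by treating each ball center as the normal of a hyperplane through the origin. the rebuild rule, the bounded core-set buffer, and the majority vote over balls are my own simplifications and are not taken from algorithm [ updatealgo ] or from the blurred ball cover paper.

```python
import numpy as np

def simple_meb(pts, eps=0.1):
    """crude (1 + eps)-approximate meb used as a subroutine (badoiu-clarkson steps)."""
    center = pts[0].astype(float).copy()
    for t in range(1, int(np.ceil(1.0 / eps ** 2)) + 1):
        far = pts[np.argmax(np.linalg.norm(pts - center, axis=1))]
        center += (far - center) / (t + 1.0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    return center, radius

class BlurredBallCoverSketch:
    """simplified streaming cover: a list of (core set, center, radius) balls.
    a new point triggers an update only if it lies outside every
    (1 + eps)-expanded ball, mirroring the update condition described above."""

    def __init__(self, eps=0.2, max_core=50):
        self.eps = eps
        self.max_core = max_core
        self.balls = []            # list of (core_points, center, radius)

    def covered(self, x):
        return any(np.linalg.norm(x - c) <= (1.0 + self.eps) * r
                   for _, c, r in self.balls)

    def update(self, x):
        if self.covered(x):
            return
        # rebuild a ball from the new point plus the stored core points
        core = [x] + [p for pts, _, _ in self.balls for p in pts]
        core = core[: self.max_core]                 # keep the memory bounded
        center, radius = simple_meb(np.array(core), self.eps)
        self.balls.append((core, center, radius))

    def classify(self, x):
        """label by the sign of <center, x>: each ball center acts as the
        normal of a separating hyperplane through the origin (majority vote)."""
        if not self.balls:
            return 0
        votes = [np.sign(np.dot(c, x)) for _, c, _ in self.balls]
        return int(np.sign(sum(votes)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    stream = rng.normal(size=(1000, 5))
    cover = BlurredBallCoverSketch(eps=0.2)
    for point in stream:
        cover.update(point)
    print("balls kept:", len(cover.balls))
```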
being able to learn an svm model in an online setting opens up myriad possibilities in the analysis of large amounts of data. there are several open questions whose answers may shed light on a streaming approach with higher accuracy than the blurred ball svm presented here: 1. is there a streaming algorithm for maintaining an meb with better guarantees than the blurred ball cover proposed by ? the paper originally provided a bound of , which was improved by to less than . although showed that it is impossible to achieve an arbitrarily small approximation factor, with for any , it is possible that a better streaming meb algorithm exists with provable bounds better than the 1.22 factor demonstrated by . 2. the structure of the points in this svm setup is unique: all data points lie on a sphere of radius centered at the origin. although there is no streaming meb algorithm for unrestricted points, does this specific structure lend itself to a meb approximation? if so, we would be able to construct an svm with a separator arbitrarily close to the optimal one. we have presented a streaming, or ``online'', algorithm for svm learning by making use of a reduction from the minimum enclosing ball problem. our training algorithm is tunable using the parameter to adjust the desired approximation ratio. we also developed multiple types of classifiers, some of them non-convex, and showed that our implementation surpassed the accuracy of other streaming implementations. one surprising finding is that our implementation surpasses the standard libsvm baseline on the canonical mnist binary digit classification datasets. tests on other digit recognition datasets show similar results, suggesting that this better performance could be due to structural idiosyncrasies of the data.
|
a widely - used tool for binary classification is the support vector machine ( svm ) , a supervised learning technique that finds the `` maximum margin '' linear separator between the two classes . while svms have been well studied in the batch ( offline ) setting , there is considerably less work on the streaming ( online ) setting , which requires only a single pass over the data using sub - linear space . existing streaming algorithms are not yet competitive with the batch implementation . in this paper , we use the formulation of the svm as a minimum enclosing ball ( meb ) problem to provide a streaming svm algorithm based off of the blurred ball cover originally proposed by agarwal and sharathkumar . our implementation consistently outperforms existing streaming svm approaches and provides higher accuracies than libsvm on several datasets , thus making it competitive with the standard svm batch implementation .
|
with the advancement of modern technology , data sets which contain repeated measurements obtained on a dense grid are becoming ubiquitous .such data can be viewed as a sample of curves or functions and are referred to as functional data .we consider here the extension of the linear regression model to the case of functional data . in this extension ,both predictors and responses are random functions rather than random vectors .it is well known ( ramsay and dalzell ( ) ; ramsay and silverman ( ) ) that the traditional linear regression model for multivariate data , defined as may be extended to the functional setting by postulating the model , for , writing all vectors as row vectors in the classical model ( [ basic ] ) , and are random vectors in , is a random vector in , and and are , respectively , and matrices containing the regression parameters . the vector has the usual interpretation of an error vector , with =0 ] , denoting the identity matrix . in the functional model ( [ linear ] ) , random vectors and in ( [ basic ] ) are replaced by random functions defined on the intervals and .the extension of the classical linear model ( [ basic ] ) to the functional linear model ( [ linear ] ) is obtained by replacing the matrix operation on the right - hand side of ( [ basic ] ) with an integral operator in ( [ linear ] ) . in the original approach of ramsay and dalzell ( ) , a penalized least - squares approach using l - splines was adopted and applied to a study in temperature - precipitation patterns , based on data from canadian weather stations .the functional regression model ( [ linear ] ) for the case of scalar responses has attracted much recent interest ( cardot and sarda ( ) ; mller and stadtmller ( ) ; hall and horowitz ( ) ) , while the case of functional responses has been much less thoroughly investigated ( ramsay and dalzell ( ) ; yao , mller and wang ( ) ) .discussions on various approaches and estimation procedures can be found in the insightful monograph of ramsay and silverman ( ) . in this paper, we propose an alternative approach to predict from , by adopting a novel canonical representation of the regression parameter function .several distinctive features of functional linear models emerge in the development of this canonical expansion approach .it is well known that in the classical multivariate linear model , the regression slope parameter matrix is uniquely determined by , as long as the covariance matrix is invertible .in contrast , the corresponding parameter function , appearing in ( [ linear ] ) , is typically not identifiable .this identifiability issue is discussed in section [ sec2 ] .it relates to the compactness of the covariance operator of the process which makes it non - invertible . in section [ sec2 ] , we demonstrate how restriction to a subspace allows this problem to be circumvented . 
under suitable restrictions , the components of model ( [ linear ] ) are then well defined .utilizing the canonical decomposition in theorem [ th3.3 ] below leads to an alternative approach to estimating the parameter function .the canonical decomposition links and through their functional canonical correlation structure .the corresponding canonical components form a bridge between canonical analysis and linear regression modeling .canonical components provide a decomposition of the structure of the dependency between and and lead to a natural expansion of the regression parameter function , thus aiding in its interpretation .the canonical regression decomposition also suggests a new family of estimation procedures for functional regression analysis .we refer to this methodology as _ functional canonical regression analysis_. classical canonical correlation analysis ( cca ) was introduced by hotelling ( ) and was connected to function spaces by hannan ( ) .substantial extensions and connections to reproducing kernel hilbert spaces were recently developed in eubank and hsing ( ) ; for other recent developments see cupidon _ et al . _ ( ) .canonical correlation is known not to work particularly well for very high - dimensional multivariate data , as it involves an inverse problem .leurgans , moyeed and silverman ( ) tackled the difficult problem of extending cca to the case of infinite - dimensional functional data and discussed the precarious regularization issues which are faced ; he , mller and wang ( ) further explored various aspects and proposed practically feasible regularization procedures for functional cca . while cca for functional data is worthwhile , but difficult to implement and interpret , the canonical approach to functional regression is here found to compare favorably with the well established principal - component - based regression approach in an example of an application ( section [ sec5 ] ) .this demonstrates a potentially important new role for canonical decompositions in functional regression analysis .the functional linear model ( [ linear ] ) includes the varying coefficient linear model studied in hoover _et al . _ ( ) and fan and zhang ( ) as a special case , where ; here , is a delta function centered at and is the varying coefficient function . other forms of functional regression models with vector - valued predictors and functional responses were considered by faraway ( ) , shi , weiss and taylor ( ) , rice and wu ( ) , chiou , mller and wang ( ) and ritz and streibig ( ) .the paper is organized as follows . functional canonical analysis and functional linear models for -processesare introduced in section [ sec2 ] .sufficient conditions for the existence of functional normal equations are given in proposition [ pr2.2 ] .the canonical regression decomposition and its properties are the theme of section [ sec3 ] . 
in section [ sec4 ] ,we propose a novel estimation technique to obtain regression parameter function estimates based on functional canonical components .the regression parameter function is the basic model component of interest in functional linear models , in analogy to the parameter vector in classical linear models .the proposed estimation method , based on a canonical regression decomposition , is contrasted with an established functional regression method based on a principal component decomposition .these methods utilize a dimension reduction step to regularize the solution of the inverse problems posed by both functional regression and functional canonical analysis . as a selection criterion for tuning parameters , such as bandwidths or numbers of canonical components , we use minimization of prediction error via leave - one - curve - out cross - validation ( rice and silverman ( ) ) .the proposed estimation procedures are applied to mortality data obtained for cohorts of medflies ( section [ sec5 ] ) .our goal in this application is to predict a random trajectory of mortality for a female cohort of flies from the trajectory of mortality for a male cohort which was raised in the same cage .we find that the proposed functional canonical regression method gains an advantage over functional principal component regression in terms of prediction error .additional results on canonical regression decompositions and properties of functional regression operators are compiled in section [ sec6 ] .all proofs are collected in section [ sec7 ] .in this section , we explore the formal setting as well as identifiability issues for functional linear regression models . both response and predictor functions are considered to come from a sample of pairs of random curves .a basic assumption is that all random curves or functions are square - integrable stochastic processes . consider a measure on a real index set and let be the class of real - valued functions such that .this is a hilbert space with the inner product and we write if .the index set can be a set of time points , such as , a compact interval ] for all .let and .processes are subject to a functional linear model if where is the parameter function , is a random error process with =0 ] for all . without loss of generality ,we assume from now on that all processes considered have zero mean functions , and for all , .we define the regression integral operator by equation ( [ linear1 ] ) can then be rewritten as denote the auto- and cross - covariance functions of and by ,\qquad s , t\in t_{1},\\ r_{yy}(s , t)&=&\operatorname{cov}[y(s),y(t)],\qquad s , t \in t_{2 } , \quad\mbox{and}\\ r_{xy}(s , t)&=&\operatorname{cov}[x(s),y(t)],\qquad s\in t_{1 } , t\in t_{2}.\end{aligned}\ ] ] the autocovariance operator of is the integral operator , defined by replacing by , , we analogously define operators and , similarly . then and are compact , self - adjoint and non - negative definite operators , and and are compact operators ( conway ( ) ) . we refer to he _et al . 
_ ( ) for a discussion of various properties of these operators .another linear operator of interest is the integral operator , the operator equation is a direct extension of the least - squares normal equation and may be referred to as the functional population normal equation .[ pr2.2 ] the following statements are equivalent for a function : satisfies the linear model ( [ linear2 ] ) ; is a solution of the functional normal equation ( [ norm ] ) ; minimizes among all .the proof is found section [ sec7 ] . in the infinite - dimensional case ,the operator is a hilbert schmidt operator in the hilbert space , according to proposition [ pr6.6 ] below .a problem we face is that it is known from functional analysis that a bounded inverse does not exist for such operators .a consequence is that the parameter function in ( [ linear1 ] ) , ( [ linear2 ] ) is not identifiable without additional constraints . in a situation where the inverse of the covariance matrix does not exist in the multivariate case, a unique solution of the normal equation always exists within the column space of and this solution then minimizes on that space . our idea to get around the non - invertibility issue in the functional infinite - dimensional case is to extend this approach for the non - invertible multivariate case to the functional case .indeed , as is demonstrated in theorem [ th2.3 ] below , under the additional condition [ condc1 ] , the solution of ( [ norm ] ) exists in the subspace defined by the range of .this unique solution indeed minimizes .we will make use of the karhunen love decompositions ( ash and gardner ( ) ) for -processes and , with random variables , , , and orthonormal families of -functions and . here , , , and are the eigenvalues and eigenfunctions of the covariance operators and , respectively , with , .note that is the kronecker symbol with for , for .we consider a subset of on which inverses of the operator can be defined . as a hilbert schmidt operator , is compact and therefore not invertible on according to conway ( ) , page 50 , the range of is characterized by where defining we find that is a one - to - one mapping from the vector space onto the vector space thus , restricting to a subdomain defined by the subspace we can define its inverse for as then satisfies the usual properties of an inverse , in the sense that for all and for all the following condition [ condc1 ] for processes is of interest .[ condc1 ] the -processes with karhunen love decompositions ( [ kl ] ) satisfy }{\lambda_{xm}}\biggr\}^2 < \infty.\ ] ] if [ condc1 ] is satisfied , then the solution to the non - invertibility problem as outlined above is viable in the functional case , as demonstrated by the following basic result on functional linear models .[ th2.3 ] a unique solution of the linear model ( [ linear2 ] ) exists in if and only if and satisfy condition . 
in this case , the unique solution is of the form as a consequence of proposition [ pr2.2 ] , solutions of the functional linear model ( [ linear2 ] ) , solutions of the functional population normal equation ( [ norm ] ) and minimizers of are all equivalent and allow the usual projection interpretation .[ pr2.4 ] assume and satisfy condition .the following are then equivalent : 1 .the set of all solutions of the functional linear model ( [ linear2 ] ) ; 2 .the set of all solutions of the population normal equation ( [ norm ] ) ; 3 .the set of all minimizers of for ; 4 .the set .it is well known that in a finite - dimensional situation , the linear model ( [ norm ] ) always has a unique solution in the column space of , which may be obtained by using a generalized inverse of the matrix .however , in the infinite - dimensional case , such a solution does not always exist .the following example demonstrates that a pair of -processes does not necessarily satisfy condition [ condc1 ] . in this case , the linear model ( [ norm ] ) does not have a solution .[ ex2.5 ] assume processes and have karhunen love expansions ( [ kl ] ) , where the random variables , satisfy =\frac{1}{m^{2 } } , \qquad \lambda _ { yj}=e[\zeta_{j}^{2}]=\frac{1}{j^{2}}\ ] ] and let =\frac{1}{(m+1)^{2}(j+1)^{2 } } \qquad \mathrm{for } \m , j\geq1.\ ] ] as shown in he _et al . _ ( ) , ( [ ex1 ] ) and ( [ ex2 ] ) can be satisfied by a pair of -processes with appropriate operators , and .then }{\lambda_{xm}}\biggr\ } ^{2}&=&\lim_{n\rightarrow\infty } \sum_{m ,j=1}^{n}\biggl [ \frac{m}{(m+1)(j+1)}\biggr ] ^{4}\\ & = & \lim_{n\rightarrow\infty } \sum_{m=1}^{n}\biggl [ \frac{m}{(m+1)}\biggr ] ^{4}\sum_{j=1}^{\infty}\frac{1}{(j+1)^{4}}=\infty\end{aligned}\ ] ] and , therefore , condition [ condc1 ] is not satisfied .canonical analysis is a time - honored tool for studying the dependency between the components of a pair of random vectors or stochastic processes ; for multivariate stationary time series , its utility was established in the work of brillinger ( ) . in this section ,we demonstrate that functional canonical decomposition provides a useful tool to represent functional linear models .the definition of functional canonical correlation for -processes is as follows .[ def3.1 ] the first canonical correlation and weight functions and for -processes and are defined as where and are subject to for . the canonical correlation and weight functions , for processes and for are defined as where and are subject to ( [ def2 ] ) for and for we refer to and as the canonical variates and to as the canonical components .it has been shown in he _et al . _( ) that canonical correlations do not exist for all -processes , but that condition [ condc2 ] below is sufficient for the existence of canonical correlations and weight functions .we remark that condition [ condc2 ] implies condition [ condc1 ] .[ condc2 ] let and be -processes , with karhunen love decompositions ( [ kl ] ) satisfying }{\lambda_{xm}\lambda_{yj}^{1/2}}\biggr\ } ^{2}<\infty.\ ] ] the proposed functional canonical regression analysis exploits features of functional principal components and of functional canonical analysis . in functional principal component analysis ,one studies the structure of an -process via its decomposition into the eigenfunctions of its autocovariance operator , the karhunen love decomposition ( rice and silverman ( ) ) . 
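since the estimation procedures later in the paper work with discretely observed curves, the karhunen-loeve decomposition used throughout this section is in practice approximated by an eigendecomposition of a sample covariance matrix on a grid. the sketch below illustrates only that discretization step; the grid, the toy data, and all names are mine and are not taken from the paper.

```python
import numpy as np

def functional_pca(curves, n_components=3):
    """approximate karhunen-loeve decomposition of curves sampled on a common grid.
    curves: array of shape (n_curves, n_grid).  returns eigenvalues, discretized
    eigenfunctions (columns) and principal component scores.
    (a grid-spacing factor would turn this into a quadrature approximation of
    the integral covariance operator; it is dropped here for simplicity.)"""
    n, p = curves.shape
    mean = curves.mean(axis=0)
    centered = curves - mean
    cov = centered.T @ centered / n                 # discretized covariance operator r_xx
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_components]  # largest eigenvalues first
    lam, phi = evals[order], evecs[:, order]
    scores = centered @ phi                          # xi_m = <x - mean, phi_m>
    return lam, phi, scores, mean

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 100)
    # toy sample: two smooth random components plus a little noise
    x = (rng.normal(size=(200, 1)) * np.sin(2 * np.pi * t)
         + rng.normal(size=(200, 1)) * np.cos(2 * np.pi * t)
         + 0.05 * rng.normal(size=(200, 100)))
    lam, phi, scores, mean = functional_pca(x, n_components=2)
    print("leading eigenvalues:", np.round(lam, 3))
```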
in functional canonical analysis ,the relation between a pair of -processes is analyzed by decomposing the processes into their canonical components .the idea of canonical regression analysis is to expand the regression parameter function in terms of functional canonical components for predictor and response processes .the canonical regression decomposition ( theorem [ th3.3 ] ) below provides insights into the structure of the regression parameter functions and not only aids in the understanding of functional linear models , but also leads to promising estimation procedures for functional regression analysis .the details of these estimation procedures will be discussed in section [ sec4 ] .we demonstrate in section [ sec5 ] that these estimates can lead to competitive prediction errors in a finite - sample situation .we now state two key results .the first of these ( theorem [ th3.2 ] ) provides the canonical decomposition of the cross - covariance function of processes and .this result plays a central role in the solution of the population normal equation ( [ norm ] ) .this solution is referred to as _ canonical regression decomposition _ and it leads to an explicit representation of the underlying regression parameter function of the functional linear model ( [ linear2 ] ) .the decomposition is in terms of functional canonical correlations and canonical weight functions and .given a predictor process , we obtain , as a consequence , an explicit representation for , where is as in ( [ linear2 ] ) . for the following main results , we refer to the definitions of , , , , in definition [ def3.1 ] .all proofs are found in section [ sec7 ] .[ th3.2 ] assume that -processes and satisfy condition .the cross - covariance function then allows the following representation in terms of canonical correlations and weight functions and : [ th3.3 ] assume that the -processes and satisfy condition .one then obtains , for the regression parameter function ( [ regsol ] ) , the following explicit solution : to obtain the predicted value of the response process , we use the linear predictor this canonical regression decomposition leads to approximations of the regression parameter function and the predicted process via a finitely truncated version of the canonical expansions ( [ canreg1 ] ) and ( [ canreg2 ] ) .the following result provides approximation errors incurred from finite truncation .thus , we have a vehicle to achieve practically feasible estimation of and associated predictions ( section [ sec4 ] ) .[ th3.4 ] for , let be the finitely truncated version of the canonical regression decomposition ( [ canreg1 ] ) for and define .then , with =0 ]in this section , we provide sketches of proofs and some auxiliary results .we use tensor notation to define an operator proof of proposition [ pr2.2 ] to prove ( a ) ( b ) , we multiply equation ( [ linear2 ] ) by on both sides and take expected values to obtain .equation ( [ norm ] ) then follows from ( by propositions [ pr6.5 ] and [ pr6.6 ] ) and . for ( b ) ( c ) ,let be a solution of equation ( [ norm ] ) .for any , we then have . ] it follows that ,\beta \rangle|^{2}/\break e\|\mathcal{l}_x\beta\|^{2}\leq0 ] since is arbitrary , =0 ] thus , condition [ condc1 ] is equivalent to .suppose that a unique solution of ( [ linear2 ] ) exists in this solution is then also a solution of ( [ norm ] ) , by proposition [ pr2.2](b ) .therefore , , which implies [ condc1 ] . 
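a discretized, heavily truncated version of the canonical regression idea can be written compactly: estimate the covariance matrices on a grid, whiten with truncated inverse square roots (truncation playing the role of regularization), read the canonical correlations and weight functions off a singular value decomposition, and assemble the truncated regression surface following the decomposition stated above. this is only a schematic of the construction, not the estimation procedure of section [ sec4 ]; the truncation level, the toy data, and all names are my own choices.

```python
import numpy as np

def truncated_inv_sqrt(mat, k):
    """inverse square root of a symmetric psd matrix restricted to its
    top-k eigen-subspace; truncation regularizes the ill-posed inversion."""
    evals, evecs = np.linalg.eigh(mat)
    order = np.argsort(evals)[::-1][:k]
    lam, phi = evals[order], evecs[:, order]
    return phi @ np.diag(1.0 / np.sqrt(lam)) @ phi.T

def canonical_regression(x, y, k=3):
    """x: (n, p) predictor curves, y: (n, q) response curves on common grids.
    returns a truncated canonical-regression estimate of beta (p x q), the
    canonical correlations, and a prediction function.  the grid measure is
    taken to be counting measure for simplicity."""
    xc, yc = x - x.mean(axis=0), y - y.mean(axis=0)
    n = x.shape[0]
    rxx, ryy, rxy = xc.T @ xc / n, yc.T @ yc / n, xc.T @ yc / n
    rx_isqrt, ry_isqrt = truncated_inv_sqrt(rxx, k), truncated_inv_sqrt(ryy, k)
    # singular values of the whitened cross-covariance are the canonical correlations
    a, rho, bt = np.linalg.svd(rx_isqrt @ rxy @ ry_isqrt)
    u = rx_isqrt @ a[:, :k]          # canonical weight functions for x
    v = ry_isqrt @ bt.T[:, :k]       # canonical weight functions for y
    beta = sum(rho[m] * np.outer(u[:, m], ryy @ v[:, m]) for m in range(k))
    predict = lambda xnew: (xnew - x.mean(axis=0)) @ beta + y.mean(axis=0)
    return beta, rho[:k], predict

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 50)
    xs = rng.normal(size=(300, 3)) @ np.vstack([np.sin(np.pi * t), np.cos(np.pi * t), t])
    ys = xs @ np.outer(np.sin(np.pi * t), np.cos(np.pi * t)) / 50 + 0.1 * rng.normal(size=(300, 50))
    beta, rho, predict = canonical_regression(xs, ys, k=2)
    print("estimated canonical correlations:", np.round(rho, 3))
```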
on the other hand ,if [ condc1 ] holds , then which implies that is a solution of ( [ norm ] ) , is in and , therefore , is the unique solution in and also the unique solution of ( [ linear2 ] ) in proof of proposition [ pr2.4 ] the equivalence of ( a ) , ( b ) and ( c ) follows from proposition [ pr2.2 ] and ( d ) ( b ) is a consequence of theorem [ th2.3 ]. we now prove ( b ) ( d ) .let be a solution of ( [ norm ] ) .proposition [ pr2.2 ] and theorem [ th2.3 ] imply that both and minimize for hence , which , by proposition [ pr6.6 ] , implies that therefore , it follows that or for an proof of theorem [ th3.2 ] according to lemma [ le6.2](b ) , condition [ condc2 ] guarantees the existence of the canonical components and canonical decomposition of and .moreover , = e\bigl[\bigl(x_{c,\infty}(s)+x_{c,\infty } ^{\bot}(s)\bigr)\bigl(y_{c,\infty}(t)+y_{c,\infty}^{\bot}(t)\bigr)\bigr]\\ & = & e[x_{c,\infty}(s)y_{c,\infty}(t ) ] = e\biggl[\sum_{m=1}^{\infty } u_{m}r_{xx}u_{m}(s)\sum_{m=1}^{\infty}v_{m}r_{yy}v_{m}(t)\biggr ] \\&=&\sum_{m , j=1}^{\infty } e[u_{m}v_{j}]r_{xx}u_{m}(s)r_{yy}v_{m}(t ) = \sum_{m=1}^{\infty}\rho _ { m}r_{xx}u_{m}(s)r_{yy}v_{m}(t).\end{aligned}\ ] ] we now show that the exchange of the expectation with the summation above is valid . from lemma [ le6.1](b ) , for any and the spectral decomposition , \|r_{xx}^{1/2}p_{m}\|^{2}=\sum _{ m=1}^{k}\langle p_{m},r_{xx}p_{m}\rangle \\ & = & \sum_{m=1}^{k } \sum_{j=1}^{\infty}\lambda_{xj}\langle p_{m},\theta_{j}\rangle^{2}=\sum_{j=1}^{\infty}\lambda _ { xj}\biggl(\sum_{m=1}^{k}\langle p_{m},\theta_{j}\rangle^{2}\biggr)\\ \\ & \leq & \sum_{j=1}^{\infty}\lambda_{xj}\|\theta _ { j}\|^{2}=\sum_{j=1}^{\infty}\lambda_{xj}<\infty,\end{aligned}\ ] ] where the inequality follows from the fact that is the square length of the projection of onto the linear subspace spanned by .similarly , we can show that for any , proof of theorem [ th3.3 ] note that condition [ condc2 ] implies condition [ condc1 ] .hence , from theorem [ th2.3 ] , exists and is unique in . we can show ( [ canreg1 ] ) by applying to both sides of ( [ norm ] ) , exchanging the order of summation and integration . to establish ( [ canreg2 ] ) , it remains to show that where in note that where the operator is defined in lemma [ le6.1 ] and can be written as with /\sqrt{\lambda_{xk}\lambda_{y\ell}}, ] for we have =0 ] for all -functions and .[ le7.1] and are uncorrelated . for any , write , with , which is equivalent to and . 
then with write furthermore , from lemma [ le6.2](b ) , =0 ] proof of theorem [ th6.4 ] calculating the covariance operators for , = \sum_{m , j}\rho_{m}\rho _ { j}e[u_{m}u_{j}]r_{yy}u_{m}(s)r_{yy}v_{j}(t ) \\ & = & \sum_{m}\rho_{m}^{2}r_{yy}u_{m}(s)r_{yy}v_{m}(t ) = \sum_{m}\rho _ { m}^{2}r_{yy}^{1/2}q_{m}(s)r_{yy}^{1/2}q_{m}(t)\end{aligned}\ ] ] so that {yy}^{1/2 } = r_{yy}^{1/2}\biggl[\sum_{m}\rho_{m}^{2}q_{m}\otimes q_{m}\biggr]r_{yy}^{1/2}=r_{yy}^{1/2}r_{0}r_{yy}^{1/2}.\ ] ] now , from lemmas [ le6.2 ] and [ le7.1 ] , =e\bigl[\bigl(y_{c,\infty } ( s)+y_{c,\infty}^{\bot}(s)\bigr)y^{\ast}(t)\bigr ] \\ & = & e[y_{c,\infty}(s)y^{\ast } ( t)]=e\biggl[\sum_{m}v_{m}r_{yy}v_{m}(s)\sum_{j}\rho _ { j}u_{j}r_{yy}v_{j}(t)\biggr ] \\ & = & \sum_{m , j}e[v_{m}u_{j}\rho _{ j}r_{yy}v_{m}(s)r_{yy}v_{j}(t)]\\ & = & \sum_{m}\rho _ { m}^{2}r_{yy}v_{m}(s)r_{yy}v_{j}(t)=r_{y^{\ast}y^{\ast}}(s , t).\end{aligned}\ ] ] hence , .the correlation operator for is with hence , and moreover , and note that with substituting into the equation on the left - hand side of ( [ t6.4 ] ) , one obtains the equation on the right - hand side of ( [ t6.4 ] ) .proof of proposition [ pr6.5 ] from the definition , must satisfy and note that and for the differences , we obtain \,\mathrm{d}s\,\mathrm{d}t=0 ] .since the integral operator has the -integral kernel , it is a hilbert schmidt operator ( conway ( ) ) .moreover , for , implying that is self - adjoint .furthermore , is non - negative definite because , for arbitrary , \beta(w , t)\beta(s , t)\,\mathrm{d}w\,\mathrm{d}s\,\mathrm{d}t\\ & = & e\biggl[\int(\mathcal{l}_x\beta)(t)(\mathcal{l}_x\beta ) ( t)\,\mathrm{d}t\biggr]=e\|\mathcal{l}_x\beta\|^{2}\geq0.\end{aligned}\ ] ]we wish to thank two referees for careful reading and are especially indebted to one reviewer and the associate editor for comments which led to substantial changes and various corrections .this research was supported in part by nsf grants dms-03 - 54448 , dms-04 - 06430 , dms-05 - 05537 and dms-08 - 06199 .mller , h.g . ,wang , j.l . , capra , w.b ., liedo , p. and carey , j.r .( 1997b ) .early mortality surge in protein - deprived females causes reversal of sex differential of life expectancy in mediterranean fruit flies .usa _ * 94 * 27622765 .
|
we study regression models for the situation where both dependent and independent variables are square-integrable stochastic processes. questions concerning the definition and existence of the corresponding functional linear regression models and some basic properties are explored for this situation. we derive a representation of the regression parameter function in terms of the canonical components of the processes involved. this representation establishes a connection between functional regression and functional canonical analysis and suggests alternative approaches for the implementation of functional linear regression analysis. a specific procedure for the estimation of the regression parameter function using canonical expansions is proposed and compared with an established functional principal component regression approach. as an example of an application, we present an analysis of mortality data for cohorts of medflies, obtained in experimental studies of aging and longevity.
|
weak gravitational lensing has been widely used as a direct probe of the large scale structure ( see reviews by ) . by measuring the systematic distortions of background galaxy images ,one can place constraints on the cosmological parameters ( ) . with accurate redshift information , the geometry and the structure growth rate of our universe can be constrained as functions of redshift separately , providing a consistency test of the gravity theory ( ) .a key issue in weak lensing is about how to measure the cosmic shear with galaxy shapes .this is difficult mainly because the signal - to - noise ratio of the measurement on one galaxy is typically only a few percent .it is therefore extremely important for any shear measurement method to carefully treat any possible systematic errors , including at least the following : the correction due to the image smearing by the point spread function ( psf , including the pixel response function ) ; the photon noise ; the pixelation effect due to the discrete nature of the ccd pixels .there have been a number of methods proposed to deal with the corrections due to the psf ( see ) .however , the photon noise and the pixelation effect remain to be treated in a more systematic way . for example , existing model fitting methods ( , ) use the variance of the noise to weight the pixels in their chi - square fittings .it is not clear to what level the noise contamination to the shear recovery can be removed in this way , especially for correlated background noise ( see , , ) . by integrating the model over the pixels , the model fitting methods essentially use linear interpolations to treat the pixelation effectthis is found not accurate as we will discuss in [ pixelation ] . indeed ,as we will discuss later in the paper , there are two types of noise : the astronomical photon noise and the `` photon counting '' shot noise . while the second type diminishes when the exposure time increases , the first type does not .we will mainly focus on the astronomical noise in this paper . for the counting shot noise , we will simply argue in [ summary ] that its contamination to the shear recovery can be significantly suppressed by increasing the exposure time .since these issues are not specifically addressed in any previous weak lensing literatures , it is important to point them out in this paper . 
in a recent work by zhang ( 2008 ) ( z08 hereafter ), a new and simple way of measuring the cosmic shear is found .its main advantages includes : 1 .it is mathematically simple ; 2 .it is free of assumptions on the morphologies of the galaxies and the psf ; 3 .it enables us to probe the shear information from galaxy substructures , thereby improving the signal - to - noise ratio .these facts encourage us to extend the method further by including a treatment of the photon noise and the pixelation effect .fortunately , we find that these two types of systematic errors can be treated in a simple and model - independent way based on the method of z08 .as will become clear later in this paper , the method we adopt to remove the noise contamination can also be considered in other shear measurement methods , and our treatment of the pixelation effect is generally useful for image processing of all purposes .the paper is organized as follows : in [ review ] , we briefly review the shear measurement method of z08 ; in [ systematics ] , we show how to treat the photon noise ( in [ noise ] ) and the pixelation effect ( in [ pixelation ] ) in weak lensing ; in [ numerics ] , we use computer - generated mock galaxy images to test the performance of our method ; finally , we summarize and discuss remaining issues in [ summary ] .z08 proposes a way of measuring the cosmic shear with the spatial derivatives of the galaxy surface brightness field . to do so ,let us define the surface brightness on the image plane as , and that on the source plane as , where and are the position angles on the image and source plane respectively .these quantities are related in a simple way as : where , and are the spatial derivatives of the lensing deflection angle .matrix is often expressed in terms of the convergence and the two shear components and . assuming the intrinsic galaxy image is statistically isotropic , the shear components can be simply related to the derivatives of the surface brightness field as ( ) : where the averages are taken over the galaxy .eq.[[shear12 ] ] is useful only when the angular resolution of the observation is infinitely high . in practice ,the observed galaxy surface brightness distribution is always equal to the lensed galaxy image convoluted with the psf , : where is the psf .z08 has shown how to modify eq.([shear12 ] ) when the psf is an isotropic gaussian function , which can be written as : where is the scale radius of the gaussian function .the new relation between the shear components and the derivatives of the surface brightness field is : where for a general psf , one can transform it into the desired isotropic gaussian form through a convolution in fourier space. 
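both fourier-space operations just mentioned, converting a general psf into a somewhat broader isotropic gaussian and evaluating spatial derivatives of the smoothed brightness field, amount to simple multiplications of the fourier transform. the sketch below illustrates them with numpy ffts; the target scale radius, the crude regularization floor on the psf transform, and all names are my own choices rather than prescriptions from z08.

```python
import numpy as np

def regaussianize(image, psf, target_sigma):
    """convolve `image` so that its effective psf becomes an isotropic gaussian
    of scale radius `target_sigma` (in pixels), by multiplying in fourier space
    by w_target(k) / w_psf(k).  target_sigma should exceed the original psf
    scale radius so the ratio stays well behaved."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny) * 2.0 * np.pi
    kx = np.fft.fftfreq(nx) * 2.0 * np.pi
    kxg, kyg = np.meshgrid(kx, ky)
    w_target = np.exp(-0.5 * target_sigma ** 2 * (kxg ** 2 + kyg ** 2))
    w_psf = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))
    ratio = w_target / np.where(np.abs(w_psf) > 1e-8, w_psf, 1e-8)  # crude floor
    return np.real(np.fft.ifft2(np.fft.fft2(image) * ratio))

def derivatives(image):
    """spatial derivatives via multiplication by i*k in fourier space."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny) * 2.0 * np.pi
    kx = np.fft.fftfreq(nx) * 2.0 * np.pi
    kxg, kyg = np.meshgrid(kx, ky)
    ft = np.fft.fft2(image)
    fx = np.real(np.fft.ifft2(1j * kxg * ft))
    fy = np.real(np.fft.ifft2(1j * kyg * ft))
    return fx, fy

if __name__ == "__main__":
    y, x = np.mgrid[-32:32, -32:32]
    psf = np.exp(-(x ** 2 + 0.8 * y ** 2) / (2.0 * 3.0 ** 2))   # mildly elliptical psf
    galaxy = np.exp(-(x ** 2 + y ** 2) / (2.0 * 6.0 ** 2))
    smoothed = regaussianize(galaxy, psf, target_sigma=4.5)
    fx, fy = derivatives(smoothed)
    print("moment <fx^2 - fy^2> =", np.mean(fx ** 2 - fy ** 2))
```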
the scale radius of the target psf should be larger than that of the original psf to avoid singularities in the convolution .furthermore , as shown in z08 , the spatial derivatives required by eq.([shear12psf ] ) can also be easily evaluated in fourier space .in this section , we introduce the basic ideas for treating the photon noise and the pixelation effect in [ noise ] and [ pixelation ] respectively .numerical examples are given in the next section .first of all , there are two types of photon noise : the astronomical photon noise due to the fluctuation of the background , and the `` photon counting '' shot noise due to finite exposure time .note that the first type of noise , like the source , is convoluted by the psf , while the second type of noise varies from pixel to pixel even if the pixel size is much smaller than the psf size . in the rest of the paper, we mainly deal with the astronomical noise . for the `` photon counting '' shot noise, we will simply argue in [ summary ] that its contamination to the shear recovery can be significantly suppressed by increasing the exposure time .the presence of the photon noise makes the measurement of the cosmic shear more complicated in two ways : 1 .the observed surface brightness is from both the lensed source and the un - lensed foreground noise ; 2 .because of the aliasing power caused by the non - periodic boundaries of the _ noisy _ map , the measurement of the spatial derivatives of the surface brightness field can not be accurately performed in fourier space .note that simple treatments such as filtering out the noise outside of the source image do not completely fix this problem , because the noise inside the image can still bias the shear estimate .fortunately , as we show in the rest of this section , our master equation [ eq.([shear12psf ] ) ] for estimating the cosmic shear can be easily adapted to solve both problems . in the method of z08 , to isolate the source signals in a noisy map , let us first write the total observed surface brightness as the sum of the contributions from the source and the noise , , .note that in this case , instead of , should be used in eq.([shear12psf ] ) to correctly measure the shear components .for this purpose , let us use the following relation : for simplicity , in this paper , we only consider the photon noise that is from the foreground or the instruments .the surface brightness distribution of the noise is therefore uncorrelated with that of the background sources . under this assumption, the cross - correlations between the source and the noise terms ( such as , , ) should vanish .eq.([osn ] ) therefore becomes : the relations in eq.([osn3 ] ) suggest an easy way of removing the contaminations from the photon noise : one can use a neighboring map of pure noise to estimate , , , , and subtract them from their counterparts evaluated from the noisy source map to get the source terms required by eq.([shear12psf ] ) . notethat since the noise photons are distributed differently in each map , the above procedure does not exactly remove the noise contribution for each source image .however , the method is statistically accurate as long as the statistical properties of the photon noise are stable over a reasonably large scale . 
in other words , the differences in the noise distributions of two mapsadd statistical errors to the measured cosmic shear through this procedure , but no systematic errors .finally , the pure noise map should be a close neighbor of the source map so that they share the same point spread function .to evaluate the derivatives of the surface brightness field of a noisy map in the fourier space , we need to deal with the non - periodic boundaries appropriately to avoid aliasing powers .this can be done by gradually attenuating the noise towards the boundaries of the map .the attenuation can take an arbitrary form as long as the following criterions are satisfied : 1 .the source region is not affected ; 2 .the edges of the map should be rendered sufficiently faint ; 3 .the attenuation amplitude should not have abrupt spatial variations ; 4 . to properly remove the noise contamination , the same attenuationshould also be applied to the neighboring map of pure noise . in [ numerics ] , we show numerical examples to support the noise treatment discussed above .modern astronomical images are commonly recorded on ccd pixels , the discrete nature of which may affect the accuracy of the cosmic shear measurements . in general , to avoid significant shear measurement errors , the pixel size should be at least a few times smaller than the scale radius ( or fwhm ) of the psf .for instance , we find that the method of z08 requires the scale radius of the psf to be roughly 3 or 4 times larger than the pixel size .this requirement is often not satisfied in space - based observations .it is therefore useful to have a method which can reconstruct continuous images from under - sampled ones .this is indeed a well - defined interpolation problem : how to reconstruct a continuous function if its value is only given at a set of discrete points . in the context of weak lensing, there is a quantitative way of testing the performance of the interpolation method , which is to check the accuracy of the shear recovery using the reconstructed images .the best method should yield the fastest convergence to an accurate image reconstruction ( or shear recovery ) as the pixel size becomes smaller .we have tested several standard 2d interpolation methods , including bilinear interpolation , bicubic interpolation , and bicubic spline interpolation ( see for details ) .although these conventional methods perform reasonably well , we find that they can all be significantly improved by interpolating the natural logarithms of the data instead of the data themselves .this is mainly due to two reasons : 1 .the values of the data have a lower bound zero ; 2 . at large distances, the psf typically falls off exponentially . for convenience , in the rest of the paper , we call such extensions of the three classical interpolation methods their original names with the prefix `` log- '' , and always abbreviate `` bicubic spline '' to `` spline '' .the mathematical definitions of the six methods ( , bilinear , bicubic , spline , log - bilinear , log - bicubic , log - spline ) are given in the appendix . in [ numerics ] , we show that the log - bicubic and log - spline methods are most accurate among the six . 
as will be shown in our numerical examples , continuous images that are reconstructed by these two interpolation methods yield negligible systematic errors in shear recovery as long as the pixel size is about smaller than the psf size ( twice its scale radius ) .note that the pixel size is _ rarely _ larger than the psf size in practice , because the pixel response function is a part of the psf .we present numerical examples to support the ideas introduced in the previous section .the general setup of our numerical simulations are given in [ setup ] . in [ test_noise ] and [ test_pixelation ] , we test our treatments of the photon noise and the pixelation effect separately . finally , the overall performance of our method is shown in [ overall ] .each of our galaxy images is placed on a grid , where is an integer ( typical chosen to be ) .note that such a choice facilitates the fast fourier transformation ( fft ) .each grid point is treated as the location of the center of a ccd pixel , whose side length is equal to the grid size .all sizes in our simulations are expressed in units of the grid size , , the pixel size .the form of the psf is chosen from the following two functions rotated by certain angles : \\ \nonumber & & w_b(x , y)\propto \exp\left[-\frac{1}{2r_{psf}^2}\left(x^2 + 0.8y^2\right)\right]\\ \nonumber & + & 0.03\exp\left[-\left(\frac{x^2}{r_{psf}^2}+0.2\right)\cdot\left(\frac{y^2}{r_{psf}^2}+0.2\right)\right]\end{aligned}\ ] ] where is the scale radius of the psf. a schematic view of the psf functions is shown in fig.[psfs ] .note that the additional term in mimics the diffraction spikes . as discussed in [ review ] , before measuring the shear using eq.([shear12psf ] ) , the psf is always transformed into the desired isotropic gaussian form , whose scale radius should be slightly larger than defined here to avoid singularities in the transformation . note that we have reserved the greek letter for the scale radius of the target psf to distinguish it from . )( rotated by certain angles ) .the contours mark 0.0025% , 0.025% , 0.25% , 2.5% , and 25% of the peak intensity . ]both the galaxies and the noise are treated as collections of point sources . for example , the image of a galaxy in our simulation is typically made of a few hundred or thousand points .the advantage of doing so is that one can easily lense the galaxy by displacing its point sources and modifying their amplitudes .the intensities of the point sources are distributed to the neighboring grid points of their locations according to the psf .for example , a point source of intensity at location contributes an intensity of to the grid point at location .the total intensity on a grid point is the sum of contributions from all the point sources .since everything is composed of point sources in our simulation , we will mostly call the surface brightness the flux density in the rest of the paper .there are two types of mock galaxies we use in this paper : regular disk galaxies and irregular galaxies .our regular galaxy contains a thin circular disk of an exponential profile and a co - axial de vaucouleurs - type bulge ( ) . on average ,its face - on surface brightness distribution is parameterized as : \ ] ] where is the distance to the galaxy center , and are the scale radii of the bulge and the disk respectively , and determines the relative brightness of the bulge with respect to the disk . 
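the image-generation recipe described above (sample point sources from a bulge-plus-disk profile, then spread each source's flux over neighboring pixels with the psf) can be reproduced schematically as follows. the profile normalizations, the scale radii, and the circular gaussian psf are arbitrary stand-ins, and the bulge term below uses the usual de vaucouleurs r^{1/4} form, which is only implied by the garbled formula in the text; all names are mine.

```python
import numpy as np

def sample_disk_bulge(n_points, r_disk=6.0, r_bulge=2.0, bulge_frac=0.3,
                      r_cut=20.0, rng=None):
    """draw point-source positions from an exponential disk plus a de
    vaucouleurs-like bulge by rejection sampling on the radius, then assign
    random position angles.  the extra factor of r accounts for the 2-d area
    element, so the surface brightness follows the stated profile."""
    rng = rng or np.random.default_rng()
    radii = []
    while len(radii) < n_points:
        r = rng.uniform(0.0, r_cut)
        profile = ((1.0 - bulge_frac) * np.exp(-r / r_disk)
                   + bulge_frac * np.exp(-(r / r_bulge) ** 0.25))
        if rng.uniform() < (r / r_cut) * profile:
            radii.append(r)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_points)
    r = np.array(radii)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def render(points, grid=64, sigma_psf=2.0):
    """spread unit-flux point sources over the pixel grid with a gaussian psf."""
    yy, xx = np.mgrid[0:grid, 0:grid]
    image = np.zeros((grid, grid))
    for px, py in points + grid / 2.0:       # shift sources to the grid center
        image += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2.0 * sigma_psf ** 2))
    return image

if __name__ == "__main__":
    pts = sample_disk_bulge(500, rng=np.random.default_rng(5))
    img = render(pts)
    print("total flux ~", round(float(img.sum()), 1))
```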
in the simulation ,this profile is realized by properly and randomly placing a certain number ( typically a few hundred ) of point sources .these point sources are projected onto a randomly oriented image plane , lensed , and finally assigned to the ccd pixels according to the psf to yield the galaxy image .our irregular galaxies are generated by 2d random walks .the random walk starts from the center of the grid , and continues for a certain number of steps .each step has a fixed size and a completely random orientation in the image plane .the joint of every two adjacent steps gives the pre - lensing position of a point source of the galaxy in the image plane .the resulting irregular galaxies usually contain abundant substructures . for numerical manageability, we always cutoff the galaxy profile at a certain radius , which is denoted as the scale radius of the galaxy .this is done by excluding the points that are outside of radius in generating our regular galaxies .for the irregular galaxies , the random walker is sent back to the origin to continue from there when it reaches the radius . without loss of generality ,we set , , and for the regular galaxies . as our first example, we use mock regular galaxies to test the treatment of photon noise discussed in [ noise ] .each galaxy is made of point sources .the galaxy size is fixed at .the angle between the line - of - sight direction and the normal vector of the disk plane is randomly chosen from r > r_{core}$}.\end{array } \right.\end{aligned}\ ] ] where is the distance to the map / galaxy center , is the radius encircling the unaffected region , and determines the width of the transition area .note that the flat core of the filter should be sufficiently large to avoid affecting the galaxy images . in our simulations , we always choose , and .finally , to correct for the noise , for each galaxy map , we generate a map of pure noise to measure the noise properties required by eq.([osn3 ] ) .the same filter is also applied to the pure noise map before the fourier transformation . in fig.[shear_regular_noise_only ], we plot the recovered shear values for different noise - to - galaxy flux density ratios .the input cosmic shear is , displayed as dotted lines .the red data points with error bars are our main results achieved through the complete treatment of noise . for a comparison ,the blue ones show the shear values measured directly from the noisy galaxy maps using eq.([shear12psf ] ) without correcting for the noise ( but the filter given by eq.([window_noise ] ) is still applied to avoid the aliasing power in fourier transformation ) .the figure clearly demonstrates that our noise treatment works remarkably well even when the mean flux density of the noise is comparable with that of the galaxy , though we caution that the size of the error bar grows with increasing noise intensity . on the other hand ,the blue data points indicates that without a proper treatment of the noise , the measured shear values quickly drops to zero as the noise flux becomes dominant , deviating significantly from the input shear values .finally , it is worth noting that both here and in the simulations reported in the rest of the paper , the number density of the noise points is always chosen to be roughly one point per psf area ( ) or slightly more .this is due to two reasons : 1 . 
for a fixed mean noise flux density ,a higher number density of the poisson distributed noise points indeed leads to lower spatial fluctuations of the noise surface brightness field , therefore a less contamination to the shear signal , or a less challenging condition ; 2 . on the other hand, if the number density is much smaller than one point per psf area , the image turns into a collection of discrete point - like sources , causing both the nominator and the denominator of eq.([shear12psf ] ) to become small differences of large numbers , which are well known sources for numerical errors .the second point simply means it is hard to measure the shapes of sources that are much smaller than the size of the psf . in practice, we can avoid the second situation by increasing the observation / integration time ., , , respectively . ]error bars measured from images of different noise - to - galaxy flux density ratios . the measurement uses mock regular galaxies .the dotted lines indicate the input shear values .the red data points are our main results achieved through the complete treatment of noise introduced in [ noise ] .the blue ones show the shear values measured directly from the noisy galaxy maps using eq.([shear12psf ] ) without removing the noise contaminations . ] when the scale radius of the psf is less than or times the pixel size , the galaxy / noise images start to look pixelated , and the shear recovery accuracy may be strongly affected by the discrete nature of the ccd pixels . to reconstruct continuous images , we use 2d interpolation methods to insert finer grid points .the finer grid size is chosen to be ( is an integer ) times smaller than the original grid size , and at least less than a quarter of the psf scale radius .it is worth noting that interpolation of the pixelated image , if necessary , is always the first step in our shear measurement procedure .an example of a pixelated image is shown in the up - left corner of fig.[maps_interpolation ] , for which the scale radius of the psf is .the high resolution images reconstructed by the three interpolation methods and their logarithmic extensions discussed in [ pixelation ] are show in the lower half of fig.[maps_interpolation ] .the true high resolution image is on the up - right corner of the figure . by simply comparing the morphologies of the interpolated images by eye, one may already tend to conclude that the performances of the three conventional methods are improved if we interpolate the log of the data instead of the data itself .for instance , negative intensities ( denoted as blue regions in the figure ) are commonly found in the interpolated maps by the bicubic and spline methods , but absent in those by the log - bicubic or log - spline methods ; the filamentary features produced by the bilinear method become somewhat less prominent in the map processed by the log - bilinear method . to test the interpolation methods more quantitatively, we may compare the shear values measured from the interpolated maps using eq.([shear12psf ] ) ( in the absence of noise ) .clearly , all the interpolation methods should yield the same and correct shear estimates for a given galaxy and psf if the psf scale radius is much larger than the pixel size , , in the absence of the pixelation effect . for small psf sizes , as we have seen , the continuous images interpolated by different methods look unlike each other , resulting in possibly very different shear values . 
moreover , since the source is sparsely sampled in this case , the original ( pixelated ) image , the interpolated image , and the measured shear values all depend on the relative positions of the pixels with respect to the source . in fig.[test_single_image ] , we show the distributions ( as histograms ) of the shear values estimated from a _ single _ galaxy that is placed at different / random locations on the grid . for simplicity and clarity , we do not include any photon noise here .the ratio of the galaxy size to the psf scale radius is fixed at .the ratio of the pixel size to is chosen to be , , , , the results of which are represented by the purple , blue , red , and black histograms respectively .the figure shows that as the pixel size decreases relative to the psf size , the shear distributions converge more rapidly to a delta function at the correct position in methods with the prefix `` log- '' , manifesting again the value of the logarithmic extensions of the three classic interpolation methods . , , , respectively .each panel shows the results from a single interpolation method , whose name is indicated in the upper - left corner of the plot .all the histograms are normalized so that their peak values are one . ] finally , let us find out which interpolation method is best suited to weak lensing .for this purpose , we test the accuracy of shear recovery with a large number of interpolated galaxy images .to make it a more convincing test , we use our morphologically rich irregular galaxies , each of which is generated by 1000 random steps .we consider three choices for the random walk step size and the galaxy scale radius : ( , ) , ( , ) , and ( , ) , referring to large , medium , and small galaxies respectively .the psf we use is . of the target isotropic gaussian psf is set to be of .the cosmic shear ( , ) is chosen to be ( , ) .no photon noise is included .our results are summarized in fig.[test_multi_image ] , in which we plot the measured shear values against the ratio of the pixel size to . in the upper , middle , and lower panels of the figure , we report the results from averaging over 10000 large , medium , and small size galaxies respectively .the dotted lines refer to the input shear values .the cyan , blue , magenta , green , red , and black data points with error bars are from the bilinear , bicubic , spline , log - bilinear , log - bicubic , and log - spline methods respectively . according to the figure, we can draw several conclusions : \1 . as a sanity check, we confirm that all the interpolation methods work well when the psf size is much larger than the pixel size ; \2 .log - bicubic and log - spline are the two most successful methods. 
both of them can correctly recover the input shear as long as is about larger than a half of the pixel size , regardless of the galaxy size .note that one should not expect any interpolation method to work well when unless the source images are sufficiently smooth over the scale of the pixel size .the pixelation effect is less important for larger galaxies .for instance , by comparing the results in the three panels of fig.[test_multi_image ] , we see that the quality of shear recovery becomes increasingly poor for galaxies of smaller sizes for a given .this is not surprising because the structures / shapes of large galaxies are better resolved than those of smaller ones .meanwhile , it is encouraging to note that the log - bicubic and log - spline methods perform fairly well for even when the galaxy size is comparable to the psf size . .the upper , middle , and lower windows show the results from averaging over 10000 relatively large , medium , and small size galaxies respectively .the definition of the galaxy size in this example can be found in [ test_pixelation ] .the cyan , blue , magenta , green , red , and black data points with error bars are the results from the bilinear , bicubic , spline , log - bilinear , log - bicubic , and log - spline interpolation methods respectively . the input shear values are shown as dotted lines . ]the purpose of this section is to test our shear measurement method under general conditions , , in the presence of both photon noise and the pixelation effect .fig.[pipeline ] shows the pipeline of the numerical procedures we take in general cases .a detailed explanation of each item in the graph has been given in [ systematics ] . in fig.[overall_test ], we show the shear recovery results for both regular and irregular galaxies with different psf forms . in each panel, we use mock galaxies and scale them to four different sizes ( galaxy radius ranges from 2.5 to 10 times the scale radius of the psf ) to test the accuracy of shear recovery . in all panels ,the psf scale radius ( ) is half of the pixel size , corresponding to roughly the maximum pixelation effect that can be treated by an interpolation method .the red and black data points are from the log - bicubic and log - spline methods respectively .the input shear values are shown by dashed lines .the scale radius of the target isotropic gaussian psf is always . to avoid aliasing powers in fourier transformation , we use eq.([window_noise ] ) to filter the noise near the boundaries of the map . from the top to the bottom panel , the ratio of the mean flux density of the noise to that of the galaxy ( ) is 0.1 , 0.5 , 0.6 , and 0.2 respectively .the figure indicates that our method generally works well on galaxies of sizes that are at least a few times larger than the psf size .small discrepancies between the input shear values and the measured ones do exist when the galaxy size is comparable to the psf size .the residual systematic errors will be studied with a much larger ensemble of galaxies in another work ./ ) as denoted at the top of each panel . for the results in each panel ,we use 10000 mock galaxies , and scale them to four different sizes ( galaxy radius equal to 2.5 , 5 , 7.5 , 10 times the psf scale radius ) to recovery the input shear values . in all panels , the psf scale radius is half of the pixel size .the red and black data points are from the log - bicubic and log - spline interpolation methods respectively .the input shear values are shown by dashed lines . 
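fig.[pipeline] can be summarized schematically as follows: upsample the pixelated stamps if necessary (for example with the log-spline sketch given earlier), transform the psf to the target gaussian, measure the quadratic derivative moments of each stamp, subtract the same moments measured on a neighboring pure-noise stamp, and accumulate over many galaxies before forming the shear estimate. because the exact z08 estimator is not reproduced in this text, the sketch below stops at the accumulated, noise-corrected moments; combining them into the two shear components follows eq. ([shear12psf]) and is deliberately left out. the white noise in the demo is also a simplification, since the astronomical noise discussed above is psf-convolved.

```python
import numpy as np

def fourier_grids(shape):
    ky = np.fft.fftfreq(shape[0]) * 2.0 * np.pi
    kx = np.fft.fftfreq(shape[1]) * 2.0 * np.pi
    return np.meshgrid(kx, ky)

def derivative_moments(image):
    """quadratic moments of the brightness derivatives, evaluated via ffts."""
    kxg, kyg = fourier_grids(image.shape)
    ft = np.fft.fft2(image)
    fx = np.real(np.fft.ifft2(1j * kxg * ft))
    fy = np.real(np.fft.ifft2(1j * kyg * ft))
    return {"xx": np.sum(fx * fx), "yy": np.sum(fy * fy), "xy": np.sum(fx * fy)}

def pipeline(source_stamps, noise_stamps):
    """accumulate noise-corrected derivative moments over an ensemble of stamps.
    the returned dictionary holds the raw ingredients of the shear estimator;
    the actual combination into (g1, g2) follows eq. ([shear12psf]) of the text
    and is not reproduced here."""
    totals = {"xx": 0.0, "yy": 0.0, "xy": 0.0}
    for src, noise in zip(source_stamps, noise_stamps):
        m_src, m_noise = derivative_moments(src), derivative_moments(noise)
        for key in totals:                      # statistical noise subtraction
            totals[key] += m_src[key] - m_noise[key]
    return totals

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    yy, xx = np.mgrid[-24:24, -24:24]
    galaxy = np.exp(-(xx ** 2 + 0.8 * yy ** 2) / (2.0 * 5.0 ** 2))
    sources = [galaxy + 0.05 * rng.normal(size=galaxy.shape) for _ in range(10)]
    noises = [0.05 * rng.normal(size=galaxy.shape) for _ in range(10)]
    print(pipeline(sources, noises))
```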
from the top to the bottom panel , the ratio of the mean flux density of the noise to that of the galaxy ( ) is 0.1 , 0.5 , 0.6 , and 0.2 respectively . ]we have discussed how to correct for the systematic errors due to the photon noise and the pixelation effect in cosmic shear measurements .our treatment of photon noise allows us to reliably remove the noise contamination to the cosmic shear even when the noise flux density is comparable with that of the sources . in principle , our method works regardless of the brightness of the noise , though when the noise is much brighter than the sources , one needs to worry about image selections . to deal with pixelated images ,our approach is to reconstruct continuous images by interpolating the natural logarithms of the pixel readouts with either the bicubic or bicubic spline method .this technique is accurate for the purpose of shear recovery as long as the scale radius of the psf is larger than about a half of the pixel size , a condition which is almost always satisfied in practice .despite the fact that our study has been based on the shear measurement method of z08 , a part of our methodology is generally useful for other shear measurement methods , or even other astronomical measurements as well .the most obvious thing to note is that the log - bicubic and log - spline interpolation methods are accurate image reconstruction approaches not only for weak lensing , but also for all kinds of other purposes .the way we remove the noise contamination from the shear signal can in principle also be considered in other shear measurements , in particular those that are based on measuring the multipole moments of the source images ( , and its various extensions ) .so far , our discussion has neglected the `` photon counting '' shot noise , which is always present due to the finite telescope exposure time .indeed , it becomes the dominant source of photon noise for ground - based weak lensing survey because of the large sky background . unlike the astronomical photon noise that we have discussed ,the photon shot noise varies from pixel to pixel independent of the size of the psf .furthermore , the fluctuation amplitude of the shot noise is also dependent on the photon flux of the source .therefore , it is hard to cleanly remove the contamination from the photon counting shot noise in our shear measurement .we do not intend to deal with this problem in this paper .alternatively , we argue that the systematic shear measurement error due to the shot noise can be suppressed by simply increasing the telescope exposure time .more specifically , we argue that for any given tolerance level of the systematic error , there is a critical exposure time , beyond which the contamination from the shot noise is adequately suppressed . to demonstrate this statement , in fig.[poisson ] , we plot the accuracy of shear recovery under three different conditions : high , medium , and low noise level , which correspond to noise - to - source mean surface brightness ratio of 100 , 10 , and 1 , or the signal - to - noise ratio of 0.01 , 0.1 , 1 respectively . the x axis in each plotis the mean number of background noise photons recorded on each pixel , which is proportional to the exposure time .the accuracy of shear recovery is expressed in terms of the multiplicative and additive bias parameters ( commonly used by people in the weak lensing community ) , which are defined as : for a perfect shear measurement , one should have . 
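Since the defining relation for the bias parameters did not survive extraction, note the convention assumed in the sketch below: the recovered shear is modelled as g_meas = (1 + m) * g_true + c, so m and c follow from a straight-line fit of measured against input shear for each component. The snippet (Python with NumPy; variable names are placeholders) only illustrates how such a fit could be carried out on the five input shear values used in this test.

```python
import numpy as np

def shear_bias(g_true, g_meas):
    """Fit g_meas ~= (1 + m) * g_true + c and return (m, c).

    g_true : array of input (true) shear values, one per realisation
    g_meas : array of the corresponding recovered shear values
    """
    g_true = np.asarray(g_true, dtype=float)
    g_meas = np.asarray(g_meas, dtype=float)
    # Least-squares straight line through the (g_true, g_meas) points.
    slope, intercept = np.polyfit(g_true, g_meas, 1)
    return slope - 1.0, intercept          # m = slope - 1, c = intercept

# Toy usage: five input shears as in the test above, with a small made-up bias.
g_in  = np.array([-0.04, -0.02, 0.0, 0.02, 0.04])
g_out = (1.0 + 0.01) * g_in + 2e-4         # pretend measurement with m = 1%, c = 2e-4
m, c = shear_bias(g_in, g_out)
print(f"m = {m:.4f}, c = {c:.2e}")
```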
to produce each ( , ) pair in fig.[poisson ] ,we use five different input shear values , , ( 0.04 , 0.04 ) , ( 0.02 , 0.02 ) , ( 0 , 0 ) , ( -0.02 , -0.02 ) , ( -0.04 , -0.04 ) , for ( , ) ( which are reported using red and blue colors respectively ) , and 10000 mock irregular galaxies for each input ( , ) . note that to see the dependence of the shear recovery accuracy on the exposure time more clearly , we repeatedly use the same set of galaxies for different exposure times . in all cases, we set the galaxy radius to be 7.5 times the psf radius .the later is set to be equal to the pixel size .we use log - spline as the interpolation method .the astronomical photon noise is not included , , the photon counting noise is the only photon noise in this test . according to the figure , the systematic errors due to the photon counting noiseis clearly suppressed when exposure time is beyond some threshold .not surprisingly , the low noise case requires the shortest exposure time to achieve the same shear recovery accuracy level .a more comprehensive test with a much larger ensemble of galaxies will be shown in a separate paper . ) .each data point is achieved using five different sets of ( , ) , , ( 0.04 , 0.04 ) , ( 0.02 , 0.02 ) , ( 0 , 0 ) , ( -0.02 , -0.02 ) , ( -0.04 , -0.04 ) .the red and blue data points are for and respectively .for each input shear , 10000 mock galaxies are used to recover the shear .the same galaxies are used repeatedly for different exposure times so that the dependence of the shear recovery accuracy on the exposure time can be more clearly seen .the exposure time is denoted by the mean number of background photons received per pixel ( x - axis ) .the three panels from the top to the bottom correspond to the signal - to - noise ratios ( , the ratio of the mean surface brightness of the source to that of the background noise ) of 0.01 , 0.1 , 1 respectively , referring to the high , medium , and low noise cases as denoted at the top of the panels . ] the other sources of systematic errors that we have neglected include the high order corrections ( , , , ) to our master equation [ eq.([shear12psf ] ) ] , the spatial variations of the cosmic shear , etc .. these factors likely affect the measured shear values at percent levels on cosmic scales , which is important in the era of precision cosmology . for clustering lensing , the high order shear terms are more important because the shear is of order ten percent .this subject will be studied in a companion paper .this paper is a natural continuation of z08 on the methodology of cosmic shear measurement . in another paper, we will further test this method with the data from the shear testing program ( ) and the great08 program ( ) , and also present results measured with real astronomical data .jz would like to thank anthony tyson and the anonymous referee for pointing out the importance of the photon counting shot noise , which was neglected in an earlier version of this paper .jz is currently supported by the tcc fellowship of texas cosmology center of the university of texas at austin .jz was previously supported by the tac fellowship of the theoretical astrophysics center of uc berkeley , where the majority of this work was done .abazajian k. & dodelson s. , 2003 , prl , 91 , 041301 acquaviva v. , baccigalupi c. & perrotta f. , 2004 , prd , 70 , 023515 bacon d. , massey r. , refregier a. , ellis r. , 2003 , mnras , 344 , 673 bacon d. , refregier a. & ellis r. , 2000 , mnras , 318 , 625 bartelmann m. & schneider p. 
, 2001 ,physics reports , 340 , 291 bernstein g. & jain b. , 2004 , apj , 600 , 17 bernstein g. & jarvis m. , 2002 , aj , 123 , 583 bonnet h. & mellier y. , 1995 , a&a , 303 , 331 bridle s. , gull s. , bardeau s. , kneib j. , 2001 , in scientific n. w. , ed . , proceedings of the yale cosmology workshop bridle s. et al . , 2009 , annals of applied statistics , vol.3 , no.1 , 6 , arxiv : 0802.1214 brown m. , taylor a. , bacon d. , gray m. , dye s. , meisenheimer k. , wolf c. , 2003 , mnras , 341 , 100 dahle h. , 2006 , apj , 653 , 954 hamana t. et al . , 2003 , apj , 597 , 98 hannestad s. , tu h. & wong y. , 2006 , jcap , 0606 , 025 hetterscheidt m. , simon p. , schirmer m. , hildebrandt h. , schrabback t. , erben t. , schneider p. , 2007, a&a , 468 , 859 heymans c. et al . , 2005 , mnras , 361 , 160 heymans c. et al . , 2006 , mnras , 368 , 1323 hoekstra h. , franx m. , kuijken k. , squires g. , 1998 , apj , 504 , 636 hoekstra h. , 2006 , apj , 647 , 116h hoekstra h. , yee h. & gladders m. , 2002 , apj , 577 , 595 hu w. , 2002 , prd , 66 , 083515 hu w. & jain b. , 2004 , prd , 70 , 043009 ishak m. , 2005 , mnras , 363 , 469 ishak m. , upadhye a. & spergel d. , 2006 , prd , 74 , 043513 jain b. & taylor a. , 2003 , prl , 91 , 141302 jarvis m. , bernstein g. , jain b. , fischer p. , smith d. , tyson j. , wittman d. , 2003 , apj , 125 , 1014 jarvis m. , jain b. , bernstein g. , dolney d. , 2006 , apj , 644 , 71 kaiser n. , 2000 , apj , 537 , 555 kaiser n. , squires g. & broadhurst t. , 1995 , apj , 449 , 460 kaiser n. , wilson g. & luppino g. , astro - ph/0003338 kitching t. , miller l. , heymans c. , van waerbeke l. , heavens a. , 2008 , mnras , 390 , 149 , arxiv : 0802.1528 knox l. , song y. & tyson j. , 2006 , prd , 74 , 023512 kratochvil j. , linde a. , linder e. , shmakova m. , 2004 , jcap , 0407 , 001 kuijken k. , 2006 , a&a , 456 , 827k luppino g. & kaiser n. , 1997 , apj , 475 , 20 maoli r. , van waerbeke l. , mellier y. , schneider p. , jain b. , bernardeau f. , erben t. , 2001 , a&a , 368 , 766 massey r. , bacon d. , refregier a. , ellis r. , 2005 , mnras , 359 , 1277 massey r. et al ., 2007 , mnras , 376 , 13 massey r. & refregier a. , 2005 , mnras , 363 , 197 miller l. , kitching t. , heymans c. , heavens a. , van waerbeke l. , 2007 , mnras , 382 , 315 , arxiv : 0708.2340 nakajima r. & bernstein g. , 2007 , aj , 133 , 1763 press w. , flannery b. , teukolsky s. , vetterling w. , 1992 , _ numerical recipes _, cambridge univ .press , 2nd ed .refregier a. , 2003 , ara&a , 41 , 645 refregier a. & bacon d. , 2003 , mnras , 338 , 48 refregier a. , rhodes j. & groth e. , 2002 , apjl , 572 , l131 rhodes j. , refregier a. , collins n. , gardner j. , groth e. , hill r. , 2004 , apj , 605 , 29 rhodes j. , refregier a. & groth e. , 2000 , apj , 536 , 79 rhodes j. , refregier a. & groth e. , 2001 , apjl , 552 , l85 schimd c. et al . , 2007 , a&a , 463 , 405 schrabback t. et al . , 2007 , a&a , 468 , 823 seljak u. & zaldarriaga m. , 1999 , prl , 82 , 2636 semboloni e. et al . , 2006 , a&a , 452 , 51 simpson f. & bridle s. , 2005 , prd , 71 , 083501 song y. & knox l. , 2004 , prd , 70 , 063510 song y. , 2005 , prd , 71 , 024026 takada m. & jain b. , 2004 , mnras , 348 , 897 takada m. & white m. , 2004 , apj , 601 , l1 taylor a. , kitching t. , bacon d. , heavens a. , 2007 , mnras , 374 , 1377 tyson j. , wenk r. & valdes f. , 1990 , apjl , 349 , l1 de vaucouleurs g. , de vaucouleurs a. , corwin h. , buta r. , paturel g. , fouqu p. 
, 1991, _ third reference catalogue of bright galaxies _ , springer , new york van waerbeke l. et al . , 2000 ,a&a , 358 , 30 van waerbeke l. , mellier y. & hoekstra h. , 2005 , a&a , 429 , 75 van waerbeke l. et al ., 2001 , a&a , 374 , 757 wittman d. , 2002 , _ dark matter and gravitational lensing _ , _ lnp top .courbin f. , minniti d. , springer - verlag . , astro - ph/0208063 wittman d. , tyson j. , kirkman d. , dellantonio i. , bernstein g. , 2000 , nature , 405 , 143 zhan h. , 2006 , jcap , 0608 , 008 zhang j. , hui l. & stebbins a. , 2005 , apj , 635 , 806 zhang j. , 2008 , mnras , 383 , 113in this appendix , we give the mathematical definitions of the three classic 2d interpolation methods : bilinear , bicubic , spline . for their logarithmic extensions ( , log - bilinear , log - bicubic , log - spline ) , we only have one minor point to address at the end of this section .the bilinear method is the simplest of the three .let us write the coordinates of the grid points as ( ) , and the signals as .suppose the point of our interest is , which satisfies and , the bilinear method defines in the following way : where the bicubic method includes higher order terms of and to achieve smoothness of the interpolated function .it requires the user to specify not only the signal , but also the spatial derivatives , , and at every grid point .since the spatial derivatives of the signal are usually not known a priori , we estimate them using the finite - difference method : \\ \nonumber & /&\left[(x_{i+1}-x_{i-1})(y_{j+1}-y_{j-1})\right]\end{aligned}\ ] ] the interpolated function inside each grid square is written in the following polynomial form : the values of the sixteen parameters are constrained using eq.([a_bicubic ] ) and the following three equations at the four corners of the grid square : where and have been defined in eq.([tu ] ) .given the values of a function at a set of points ( ) , the form of the function in the interval between and is written as : where and is the second derivative of the function at . as a consistency check , one can easily show that . the value of the second derivatives are specified by requiring the first derivatives evaluated from the two sides of the grid point to be equal .note that this requirement only provides equations , while there are second derivatives in total .the rest of the constraint comes from the boundary conditions on and . in this paper, we simply set , which yields the so - called _ natural cubic spline_. finally , we note that the `` log '' based interpolation methods are all well defined except when the readouts of some pixels are zero .this is a very rare case in practice due to the presence of noise .however , this situation can in principle exist in simulations . 
to cure this problem , one can either change the zeros into tiny positive numbers , or simply avoid interpolating the regions with zeros .the second option says that if a grid square ( regarding the log - bilinear and log - bicubic methods ) or a unitary segment ( regarding the log - spline method ) contains any zero readouts in their four corners or two ends , the finer grid points within them are all set to have zero values .the rest of the grid squares / segments are interpolated independently as usual .note that in the log - spline method , this means the spline interpolations are carried out only in those nonzero segments that are isolated by the zeros .these two choices usually work similarly well .however , when there are extended regions of zero readouts , we find that the second option is better , because it avoids introducing artificial high order fluctuations in the zero regions by methods like log - bicubic or log - spline .
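To make the "log" prescription and the zero-readout handling concrete, the following sketch implements a minimal log-bilinear interpolation on a unit-spaced grid, with a grid square returning zero whenever any of its four corners reads zero (the second option above). It is a schematic only, assuming NumPy and ignoring boundary checks; the bicubic and spline variants and the actual pipeline's grids are not reproduced.

```python
import numpy as np

def log_bilinear(img, x, y):
    """Bilinear interpolation of log(img) at (x, y); grid spacing is 1 pixel.

    If any of the four surrounding pixels reads zero, the interpolated value
    is set to zero, i.e. that grid square is excluded from the interpolation.
    No bounds checking is done; (x, y) must lie inside the image.
    """
    i, j = int(np.floor(x)), int(np.floor(y))
    t, u = x - i, y - j                       # fractional offsets in the square
    corners = img[i:i + 2, j:j + 2]           # the four surrounding readouts
    if np.any(corners <= 0.0):
        return 0.0                            # square touches a zero readout
    w = np.log(corners)                       # interpolate the logarithms ...
    val = ((1 - t) * (1 - u) * w[0, 0] + t * (1 - u) * w[1, 0]
           + (1 - t) * u * w[0, 1] + t * u * w[1, 1])
    return np.exp(val)                        # ... and exponentiate back

# Toy usage on a 4x4 "image" with one zero pixel.
img = np.array([[1., 2., 4., 8.],
                [2., 4., 8., 16.],
                [4., 8., 0., 32.],
                [8., 16., 32., 64.]])
print(log_bilinear(img, 0.5, 0.5))   # smooth region: geometric-mean-like value
print(log_bilinear(img, 2.2, 1.7))   # square containing the zero readout: 0.0
```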
|
we propose easy ways of correcting for the systematic errors caused by the photon noise and the pixelation effect in cosmic shear measurements . our treatment of noise can reliably remove the noise contamination to the cosmic shear even when the flux density of the noise is comparable with those of the sources . for pixelated images , we find that one can accurately reconstruct their corresponding continuous images by interpolating the logarithms of the pixel readouts with either the bicubic or the bicubic spline method as long as the pixel size is about less than the scale size of the point spread function ( psf , including the pixel response function ) , a condition which is almost always satisfied in practice . our methodology is well defined regardless of the morphologies of the galaxies and the psf . despite that our discussion is based on the shear measurement method of zhang ( 2008 ) , our way of treating the noise can in principle be considered in other methods , and the interpolation method that we introduce for reconstructing continuous images from pixelated ones is generally useful for digital image processing of all purposes . [ firstpage ] cosmology : gravitational lensing - methods : data analysis - techniques : image processing : large scale structure
|
in the course of signal and/or data processing fast classification of the input data is often helpful as a preprocessing step for decision preparation . assuming that the to be classified data is well defined and it came under a given number of classes or sets , . to perform the classification is in such a way equivalent to a set separation task .the problem of separation could be manifold : sparsely distributed input data makes the determination of the decision lines between the classes to a hard ( often nonlinear ) task , or even the probability distribution of the input data is not known a - priori which is resulted in an unsupervised classification problem also known as clustering . further `` open question '' is to classify input sequences in the case of only the original measurement / information data is known almost sure , but the observed system adds a stochastically changing behavior to it , in this manner the classification becomes a statistical decision problem , which could be extremely hard to solve if the number of `` possibilities '' is increasing . due to this fact to find an optimal solution is time consuming and yields broad ground to suboptimal ones .with assistance of quantum computation we introduce an optimal solution whose computational complexity is much lower contrary to the classical cases .this paper is organized as follows . in sect .[ sec : comp ] .the set separation related quantum computation basics are highlighted .the system model is described in sect .[ sec : system ] . together with the proposed set separation algorithm in sect .[ sec : sep ] .the main achievements are revised in sect .[ sec : conc ] .in this section we give a brief overview about quantum computation which is relevant to this paper . for more detailed description, please , refer to . in the classical information theorythe smallest information conveying unit is the _bit_. the counterpart unit in quantum information is called the _ `` quantum bit '' _ , the qubit. its state can be described by means of the state , , where refers to the complex probability amplitudes and .the expression denotes the probability that after measuring the qubit it can be found in computational base , and shows the probability to be in computational base . in more general description an -bit _ `` quantum register '' _ ( qregister ) is set up from qubits spanned by computational bases , where states can be stored in the qregisters at the same time where denotes the number of states and , , , , respectively .it is worth mentioning , that a transformation on a qregister is executed parallel on all stored states , which is called _quantum parallelizm_. to provide irreversibility of transformation , must be unitary , where the superscript refers to the hermitian conjugate or adjoint of .the quantum registers can be set in a general state using quantum gates which can be represented by means of a unitary operation , described by a quadratic matrix .for the sake of simplicity a 2-dimensional set separation is assumed , where the original source data can take the values } ] , and , means a 100 percent sure decision , following the decision rules in table [ tab:1 ] .this areas are the non - overlapping parts of the sets in fig .[ fig : set2 ] . 
and the outer parts ( until the vertical dashed black lines ) in fig .[ fig : inter2 ] .however , in the case of non zero and values an accurate prediction can be given relating to the maximum likelihood decision rule ..set separation decision rules [ cols="^,^,^",options="header " , ] all the possible states from the qregister will be evaluated by the function ( [ eq:2 ] ) for and also for , simultaneously , which will be collated with the system output .if at least one output or with the parameter settings is matched to the system output , it will be put to the set or , respectively . in a more exciting case at least one similarity of and also at least one of to is given , the system output could be classified to the both sets , an intersection is drawn up .this result in a not certainty prediction , which piques our interest and sets our focus not this juncture .we assume no a - priori knowledge on the probability distribution of the input sequence , so it is assumed to be equally distributed .henceforward we suppose that after counting the evaluated values the number of similarity to the system output is higher than in case of , where ] in 2-dimensional case , 3 .count the identical entries in the virtual databases which are equal to the observed data , , ( see fig .[ fig : inter2 ] ) . , 4 .use the decision table table [ tab:1 ] to assign to the sets or .in this paper we showed a connection between _ maximum likelihood _ hypothesis testing and quantum counting used for quantum set separation .we introduced a set separation algorithm based on quantum counting which was employed to estimate the conditional probability density function of the observed data in consideration to the belonging sets . in our casethe _ pdf s _ are estimated fully at a single point by invoking the quantum counting operation only once , that makes the decision facile and sure .in addition one should keep in mind that the qregister have to be set up only once before the separation .the virtual databases are generated once and directly leaded to the oracle of the grover block in the quantum counting circuite , which reduce the computational complexity , substantially .
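The decision procedure described above can be emulated classically, which may help to fix ideas: every candidate input stored in the register is pushed through the system function for the parameter settings of each set, the matches with the observed output are counted, and the decision table with the maximum-likelihood tie-break is applied. In the sketch below the system function, its parameters and the candidate list are placeholders, and the explicit loop stands in for the counts that the proposed scheme obtains from quantum counting.

```python
def count_matches(candidates, system_fn, params, observed):
    """Number of stored candidate inputs that the system maps onto `observed`."""
    return sum(1 for x in candidates if system_fn(x, params) == observed)

def separate(observed, candidates, system_fn, params_a, params_b):
    """Assign `observed` to set A, set B, an intersection, or neither."""
    n_a = count_matches(candidates, system_fn, params_a, observed)
    n_b = count_matches(candidates, system_fn, params_b, observed)
    if n_a and not n_b:
        return "A"
    if n_b and not n_a:
        return "B"
    if n_a and n_b:
        # intersection: the maximum-likelihood rule favours the larger count
        return "A" if n_a >= n_b else "B"
    return "undecided"

# Toy usage with a placeholder system function y = a*x + b.
system = lambda x, p: p[0] * x + p[1]
inputs = range(-8, 9)                  # stands in for the states of the qregister
print(separate(6, inputs, system, (2, 0), (3, 1)))   # only set A explains y = 6
```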
|
In this paper we introduce a method for set separation based on quantum computation. When no a-priori knowledge about the source signal distribution is available, finding an optimal decision rule that can be implemented in the separating algorithm is a challenging task. We adopt the maximum likelihood approach and build a bridge between this approach and quantum counting. The proposed method is also able to distinguish between disjoint sets and intersecting sets.

|
recent years have seen a tremendous progress in the development of various matter wave interference experiments , using electrons , large ultra - cold atomic ensembles , cold clusters or hot macromolecules .interferometry is also expected to lead to interesting applications in molecule metrology and molecule lithography . in particular quantum interference of complex systemsis intriguing as it opens new ways for testing fundamental decoherence mechanisms .further progress along this line of research requires an efficient source , a versatile interferometer and a scalable detection scheme .scalable sources represent still a significant technological challenge , but an appropriate interferometer scheme has already been suggested and successfully implemented for large molecules .all coherence experiments with clusters or molecules up to date finally employed ion detection .however , most ionization schemes run into efficiency limits when the mass and complexity of the particles increases .surface adsorption in combination with fluorescence detection is therefore a promising alternative .its high efficiency will reduce the intensity constraints on future molecular beam sources for interferometry .it appears to be an important prerequisite not only for experiments exploring molecular coherence beyond 10,000amu but also for decoherence and dephasing experiments with various dyes below 1,000amu . in the present articlewe demonstrate the feasibility of optically detecting matter wave interference fringes of dye molecules .such structures usually have periods between nm and would be hardly resolved in direct imaging .but we show that a mechanical magnification step is a simple and very efficient technique to circumvent the optical resolution limit for interferograms .we can thus combine the high sensitivity of the fluorescence method with the high spatial resolution of our interferometer setup .the idea behind the talbot lau interferometer has been previously described for instance in and the modification by our new detection scheme is shown in fig .[ fig1:setup ] .molecules , which pass the device , reveal their quantum wave nature by forming a regular density pattern at the location of the third grating .the distances between the gratings are equal in the experiment and corresponds to the talbot length m of molecules with a velocity of 250 m / s .it is chosen such that the period of the molecular fringe system is the same as that of the third grating. the regular interference pattern can then be visualized by recording the total transmitted molecular flux as a function of the position of the transversely scanning third grating .the quantum wave nature of various large molecules was already studied in a similar interferometer but using either laser ionization or electron impact ionization in combination with quadrupole mass spectrometry for the detection of the molecules . 
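For orientation, the snippet below evaluates the de Broglie wavelength and the Talbot length from the standard relations lambda = h / (m v) and L_T = d^2 / lambda, using parameter values quoted for this experiment (molecular mass 614 amu, velocity 250 m/s, grating period 991 nm). The factor-of-two convention for the Talbot length varies in the literature, so the numerical value is indicative only.

```python
H = 6.62607015e-34          # Planck constant [J s]
AMU = 1.66053907e-27        # atomic mass unit [kg]

def de_broglie(mass_amu, v):
    """de Broglie wavelength [m] for a particle of given mass [amu] and speed [m/s]."""
    return H / (mass_amu * AMU * v)

def talbot_length(grating_period, wavelength):
    """Talbot length L_T = d^2 / lambda (convention assumed here)."""
    return grating_period ** 2 / wavelength

lam = de_broglie(614.0, 250.0)          # TPP at the quoted velocity
print(f"de Broglie wavelength: {lam * 1e12:.2f} pm")
print(f"Talbot length for d = 991 nm: {talbot_length(991e-9, lam) * 100:.1f} cm")
```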
in spite of their success , both previous methodswill probably be limited to masses below 10,000amu .it is therefore important to develop a scalable detection scheme , such as fluorescence recording , which does not degrade but rather improve with molecular mass .molecule interferometry often operates with low particle numbers .direct molecule counting in free flight therefore typically exhibits too weak signals .in contrast to that , the light - exposure time of surface adsorbed molecules can exceed that of free - flying particles by orders of magnitude .when bound to a surface the molecules may also release part of their internal energy to the substrate .this helps in limiting the internal molecular temperature and in maximizing the number of fluorescent cycles . in our experimentthe molecular beam was collected on a quartz plate behind the third grating .several studies were done with similar aromatic molecules on silica or quartz surfaces . due to their insulating propertiesthe molecular fluorescence yield exceeds that on simple metals or semi - conductors .for demonstrating the feasibility of this novel detection method we chose meso - tetraphenylporphyrin ( tpp , porphyrin systems po890001 ) , a biodye with a mass of 614amu .it exhibits sufficiently strong fluorescence , and a sufficiently high vapor pressure to be evaporated in a thermal source which was set to a temperature of about 420 .moreover it was known to show quantum wave behavior in our setup .quadrupole mass spectroscopy allowed us to determine a mass purity of approximately 93% . a small contribution ( 7% ) of porphyrin moleculeslacked one phenyl ring .smaller contaminations may contribute up to 2% to the mass in the initial powder but not to the fluorescence on the surface .the adsorbing quartz surface was mounted on a motorized translation stage and it was shifted stepwise , parallel to the third grating as shown in fig .[ fig1:setup ] . a fixed slit between the third grating and the quartz plate with a width of 170 m limited the exposed area on the surface .molecules were deposited and accumulated over a time span of eight minutes under stationary conditions .then the third grating , with a grating period of 991 nm , ( about 400 nm open slits and a thickness of 500 nm ) was shifted by 100 nm , and the adsorber plate was simultaneously displaced by 425 m to an unexposed spot . by repeating this process more than 30times the third gratingwas moved over three periods and 30stripes of fluorescent tpp molecules were accumulated on the surface . this way the molecular interference pattern was recorded with a mechanical magnification factor of 4250 .the large ( 260 m ) gap between two stripes prevented any mixing of the molecules which could otherwise be caused by surface diffusion between the stripes . we have verified in independent diffusion experiments with tpp on quartz surfaces that the molecules aggregate andget immobilized on the 300 ... 400 nm scale at room temperature . with our new methodthe resolution is only limited by the dimensions of the gratings in the interferometer .these may have openings down to 50 nm and periods as small as 100 nm as already used in earlier molecule interference experiments .high contrast interferences fringes require that the molecular velocity spread be not too large . actually for tpp and our present grating period of 990 nm a width of is sufficient . 
as in earlier experiments ,this is done by selecting certain free - flight parabola of the molecules in the earth s gravitational field using three horizontally oriented slits .the first slit is provided by the oven aperture of 200 m , another slit of 150 m width is placed 1.2 m away from the oven .the third point of the parabola is given by the vertical position on the detecting surface , which is located 2.9 m behind the oven .fast molecules arrive at the top of the plate .slow molecules , with a longer falling time , reach the surface at a lower position . in principleour method therefore provides the option to select the longitudinal coherence length a posteriori , after the experiment is already finished .however , in the present configuration the width of the velocity distribution is essentially determined by the size of the first two slits , since we integrate typically only over a much smaller position interval of 33 m on the surface .an important requirement for the experiment is to have a perfectly clean substrate of low self - fluorescence .we used fused silica ( suprasil i ) of 500 m thickness .it was cleaned from dust and organic solvents using the rca-1 cleaning procedure followed by methanol sonication and rinsing with ultrapure water .the clean surface was then bleached by an expanded 16w argon ion laser beam for 30minutes with an intensity of about 3w/ . owing to this preparation ,no further bleaching could be observed and the background fluorescence was correspondingly low .after depositing the molecules , the quartz plate was removed from the vacuum chamber and put under a fluorescence microscope ( zeiss ; axioskop 2 mot plus ) in air , where a picture of each stripe was taken with an optical magnification factor of 20 and an integration time of 20s .the irradiation intensity was 1.5w/ .tpp absorbs well in the blue and emits in the red .correspondingly , we used a standard mercury lamp ( hbo 100 ) with an excitation filter transmitting wavelengths between 405 and 445 nm , a dichroic beam splitter with a pass band above 460 nm and an emission filter which transmitted above 600 nm .m ( v m / s ) , b=371 m ( v m / s ) , c=938 m ( v m / s ) and d=1234 m ( v m / s ) . ] at the chosen optical magnification and because of the limited size of the ccd camera ( 1megapixels ) a single microscope image covers a height of 340 m . as the molecular beamis spread over about 3000 m due to its velocity distribution , a whole picture matrix had to be recorded to image all velocity classes .four rows of this 9 x 30 matrix selected around the positions with the highest molecular coverage are shown in fig .[ fig2:stripes ] .the high quality of the data could be obtained because of the initial surface preparation and no smoothing was needed .the single images of one matrix row were arranged a bit closer to each other than they lie on the surface .most of the empty gap between the stripes was removed for presentation purposes and their upper and lower ends were clipped to avoid regions of optical aberration . from available vapor pressure data for porphyrins , we estimate that around 0.1monolayers of tpp reach the surface in eight minutes . 
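The mapping from vertical position on the plate to molecular velocity can be sketched with elementary kinematics: a molecule that leaves the source roughly horizontally and falls a height Delta over the flight distance L obeys Delta = g L^2 / (2 v^2), so v = L * sqrt(g / (2 Delta)). The snippet below evaluates this for the 2.9 m flight path mentioned above; the real geometry with three selecting apertures is more involved, so the numbers are illustrative rather than a calibration.

```python
import numpy as np

G = 9.81      # gravitational acceleration [m/s^2]
L = 2.9       # oven-to-surface flight distance [m], as quoted above

def velocity_from_drop(drop):
    """Forward speed [m/s] of a molecule that falls `drop` metres over the path L.

    Simplified model: horizontal emission, drop measured from the undeflected
    straight-line position, i.e. drop = 0.5 * g * (L / v)**2.
    """
    drop = np.asarray(drop, dtype=float)
    return L * np.sqrt(G / (2.0 * drop))

# Vertical drops (metres below the straight-through line) mapped to velocities.
for h in (0.5e-3, 1.0e-3, 2.0e-3, 3.0e-3):
    print(f"drop = {h * 1e3:4.1f} mm  ->  v = {velocity_from_drop(h):6.1f} m/s")
```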
based on the work by assume that the fluorescence signal grows linearly with the deposition time within our experimental parameter range .hence the fluorescence signal is proportional to the incident intensity , , the molecular fluorescence efficiency , a geometrical collection factor and to the molecular surface number density , which we want to determine : \cdot k(x , y)i_{i}(x , y)+i_{c}(x , y ) \label{fl}\]]where is the background fluorescence emitted by the illuminated substrate and represents the intrinsic detector noise .the surface sticking coefficient is assumed to be independent of the molecular surface coverage in our density regime and is included in n. the intensity of a reference image on a clean portion of the surface without molecules is eq . ( [ fl ] ) and eq .( [ ref ] ) we can evaluate from the experimental data the molecular surface density up to a constant factor fig . [ fig2:stripes ] shows the corrected intensity distribution . for each vertical stripethe total signal is computed by integrating over a rectangle centered at position in the middle of the stripe .the integration height is m and the width is m .the resulting intensity cross sections for four heights selected in fig .[ fig2:stripes ] ( a , b , c and d ) are shown in fig .[ fig3:fits ] .an evaluation of altogether 43 such interference curves allows to create a smooth plot of the interference fringe visibility versus the molecular velocity , as shown in fig .[ fig4:vvsv ] .note , that for tpp the velocity class with the highest contrast ( b in fig.2 at h=350 m ) is very close but not equal to the most probable velocity ( h=500 m ) .the experimental fringe visibility ( full squares in fig . [ fig4:vvsv ] ) clearly varies in a non - monotonic and quasi - periodic way with the vertical molecular position on the detector , i.e. with the velocity or the de broglie wavelength .the classical model , shown as the falling green line , can not even qualitatively reproduce the velocity dependence of the fringe contrast even if we take into account the van der waals interaction with the grating walls as done here .the quantum prediction ( dashed curve ) also includes the molecule - grating interaction .it is computed by averaging the theoretical visibility over the velocity distribution , which is obtained from the geometry of our setup .the experimental contrast is well reproduced for fast molecules ( about 250 m/s ) and falls below the quantum model for velocities around 130 m/s .slower molecules are more sensitive to both laboratory noise causing interferometer vibrations and to collisional decoherence .the observed contrast reduction for porphyrins at about 130 m/s is consistent with these effects .this is however not a fundamental limit , as in future experiments the present base pressure of mbar in the interferometer chamber can certainly be improved by about two orders of magnitude . and also mechanical vibrations should be suppressed by a factor of ten in future experiments with additional passive damping systems .the deviation at medium velocities is ascribed to molecules which do not follow a perfect free fall trajectory , because of scattering at edges along the beam path . 
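Because the explicit form of the density estimate was garbled above, the sketch below only illustrates one plausible background correction: the molecular signal is taken as the excess of the stripe image over a molecule-free reference image, normalised by that reference, and the stripe signal is then obtained by summing the corrected map over a rectangle. The normalisation actually used in the experiment may differ, and all array names here are placeholders.

```python
import numpy as np

def surface_density(img, ref, dark=0.0):
    """Molecular surface density up to a constant factor (one possible correction).

    img  : fluorescence image of a molecule-covered stripe
    ref  : image of a clean part of the substrate (background fluorescence)
    dark : detector offset, assumed constant here
    Assumed relation: n proportional to (img - ref) / (ref - dark).
    """
    return (img - ref) / np.clip(ref - dark, 1e-12, None)

def stripe_signal(density, x0, half_width, y0, half_height):
    """Sum the corrected density over a rectangle centred at (x0, y0)."""
    return density[x0 - half_width:x0 + half_width + 1,
                   y0 - half_height:y0 + half_height + 1].sum()

# Toy usage with random images standing in for the microscope frames.
rng = np.random.default_rng(0)
ref = 100.0 + rng.normal(0.0, 1.0, size=(200, 200))
img = ref + 20.0 * np.exp(-((np.arange(200) - 100) / 15.0) ** 2)[:, None]
n = surface_density(img, ref)
print(stripe_signal(n, x0=100, half_width=20, y0=100, half_height=40))
```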
in independent velocity measurementswe have already observed before the effect of scattering , which deflects molecules of a given speed into the trajectory of another free - fall velocity class .the red continuous line shows the results of a model , which allows about 20% of the molecules at the most probable velocity to be spread out over the whole detector area .the resulting curve then fits indeed all experimental points , except those at low velocities , as discussed above .our accumulation and imaging method requires a good mechanical stability of the whole setup . from the good reproducibility of the expected and observed fringe period we derive an upper limit for the slow grating drift of 50 nm over four hours .a drift of 10 nm over this period is realistic in a second generation experiment .a clear advantage of our new detection scheme is that all velocity classes are simultaneously recorded and encoded in the vertical position on the screen .this ensures utmost mechanical stability between the interferograms belonging to different velocities .the simultaneous recording can therefore be used to measure a possible phase shift between these interference fringes .ideally , we should not expect any velocity dependent phase shift in a symmetric talbot lau interferometer , where the distance between the first and second grating equals the distance between the second and the third one .but an evaluation of fig .[ fig2:stripes ] yields a phase variation in the vertical direction of about 0.4 mm in our experiment .this effect can be traced back to a small angular misalignment between the gratings around the molecular beam axis .our observation is for example consistent with a tilt as small as 200 between the second and the third grating .the present detection scheme is therefore a very sensitive method for identifying the presence of such tilts , which will be important for interferometry with very massive molecules .generally , the alignment requirements increase critically with increasing mass of the interfering particles .in contrast to the present setup , other grating configurations may show additional non - classical effects , for instance the fractional talbot effect .in particular we do expect a phase jump of between fringes of certain velocity classes in an asymmetric talbot lau configuration .this is a non - classical feature which can still be observed in a regime where the classical moir effect and quantum interference are expected to yield comparable fringe visibilities .and the present experiment indicates that such features should be stably recorded using our new detection method , even in the presence of overall drifts of the interferometer .mechanically magnified fluorescence imaging offers several advantages for future experiments aiming at recording interferograms of nanometer - sized objects .the demonstrated method scales favorably with the complexity of the observed particles : organic molecules can be tagged with several dye molecules or semiconductor nanocrystals and large proteins , such as gfp , or again nanocrystals will even exhibit a much higher fluorescence quantum yield and a significantly smaller bleaching rate than the molecules in our current experiments . 
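The fringe visibility and phase quoted here can be extracted by fitting a sinusoid with the third-grating period to the per-stripe signals; for a fit of the form A + B*cos(2*pi*x/d + phi) the visibility is B/A. The short least-squares sketch below (NumPy assumed) shows one way to do this; it is a generic fringe fit, not the exact evaluation procedure of this work.

```python
import numpy as np

def fit_fringe(positions, counts, period):
    """Fit counts ~= A + B*cos(2*pi*positions/period + phi).

    Returns (visibility, phase) with visibility = B / A, the usual definition
    for a sinusoidal fringe. Linear least squares in the cos/sin components.
    """
    w = 2.0 * np.pi * np.asarray(positions) / period
    M = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    a, c, s = np.linalg.lstsq(M, np.asarray(counts, dtype=float), rcond=None)[0]
    return np.hypot(c, s) / a, np.arctan2(-s, c)

# Toy usage: 30 grating positions spaced by 100 nm, period 991 nm, 40% contrast.
d = 991e-9
x = 100e-9 * np.arange(30)
y = 1000.0 * (1.0 + 0.4 * np.cos(2 * np.pi * x / d + 0.3))
vis, phi = fit_fringe(x, y, d)
print(f"visibility = {vis:.2f}, phase = {phi:.2f} rad")
```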
at present ,the smallest commercially available fluorescent nanocrystals have a mass around 3000amu in the core and roughly the same mass in the ligand shell .the high efficiency of our optical detection method will also allow to study the relevance of different electric and magnetic dipole moments in interference with molecules of rather similar masses , such as for example various porphyrin derivatives .some of them have too low vapor pressures for experiments with ionization detectors , but will still be detectable in fluorescence .mechanically magnified fluorescence imaging is therefore expected to be a scalable method for exploring the wave - particle duality of a large class of nanosized materials .it is an enabling technique for a range of dephasing and decoherence studies , which will also be useful in molecule metrology .this work has been supported by by the austrian science fund ( fwf ) , within the projects start y177 and f1505 and by the european commission under contract no.hprn-ct-2002-00309 ( quacs ) .we acknowledge fruitful discussions with klaus hornberger , lucia hackermller and sarayut deachapunya .10 freimund d l , aflatooni k and batelaan h 2001 _ nature _ * 413 * 142
|
imaging of surface adsorbed molecules is investigated as a novel detection method for matter wave interferometry with fluorescent particles . mechanically magnified fluorescence imaging turns out to be an excellent tool for recording quantum interference patterns . it has a good sensitivity and yields patterns of high visibility . the spatial resolution of this technique is only determined by the talbot gratings and can exceed the optical resolution limit by an order of magnitude . a unique advantage of this approach is its scalability : for certain classes of nanosized objects , the detection sensitivity will even increase significantly with increasing size of the particle .
|
over the past decades , battery - powered devices have been deployed in many wireless communication networks . however , since batteries have limited energy storage capacity and their replacement can be costly or even infeasible , harvesting energy from the environment provides a viable solution for prolonging the network lifetime .although conventional natural energy resources , such as solar and wind energy , are perpetual , they are weather - dependent and location - dependent , which may not suitable for mobile communication devices .alternatively , background radio frequency ( rf ) signals from ambient transmitters are also an abundant source of energy for energy harvesting ( eh ) . unlike the natural energy sources ,rf energy is weather - independent and can be available on demand . nowadays , eh circuits are able to harvest microwatt to milliwatt of power over the range of several meters for a transmit power of watt and a carrier frequency less than ghz .thus , rf energy can be a viable energy source for devices with low - power consumption , e.g. wireless sensors .moreover , rf eh provides the possibility for simultaneous wireless information and power transfer ( swipt ) since rf signals carry both information and energy .the integration of rf eh into communication systems introduces a paradigm shift in system and resource allocation algorithm design . a fundamental tradeoff between information and energy transfer rateswas studied in . however , current practical rf eh circuits are not yet able to harvest energy from an rf signal which was already used for information decoding ( i d ) . to facilitate simultaneous i d and eh, a power splitting receiver was proposed in and .the energy efficiency of a communication system with power splitting receivers was investigated in .in addition , a simple time - switching receiver has been proposed which switches between i d and eh in time .furthermore , multiuser multiple input single output swipt systems were studied in , where beamformers were optimized for maximization of the sum harvested energy under minimum required signal - to - interference - plus - noise ratio constraints for multiple i d receivers . in ,the optimal energy transfer downlink duration was optimized to maximize the uplink average information transmission rate . in - , beamforming design was studied for secure swipt networks with different system configurations . in , a multiuser time - division - multiple - access system with energy transfer in the downlink ( dl ) and information transfer in the uplink was studied .the authors proposed a protocol for sum - throughput maximization and enhanced it by fair rate allocation among users with different channel conditions .nevertheless , multiuser scheduling , which exploits multiuser diversity for improving the system performance of multiuser systems , has not been considered in - .recently , simple suboptimal order - based schemes were proposed to balance the tradeoff between the users ergodic achievable rates and their average amounts of harvested energy in . 
however , the scheduling schemes proposed in are unable to guarantee quality of service with respect to the minimum energy transfer .in fact , optimal multiuser scheduling schemes that guarantee a long - term minimum harvested energy for swipt systems have not been considered in the literature so far .motivated by the above observations , we study optimal scheduling schemes for long - term optimization which control the rate - energy ( r - e ) tradeoff under the consideration of proportional fairness and equal throughput fairness .we consider a swipt system that consists of one access point ( ap ) with a fixed power supply and battery - powered user terminals ( uts ) , see fig .[ fig : fig4 ] . the ap and the utsare equipped with single antennas .besides , we adopt time - switching receivers at the uts to ensure low hardware complexity . [ fig : fig4 ] we study the user scheduling for dl transmission .we assume that the transmission is divided into time slots and in each time slot perfect csi is available at the ap .also , the data buffer for the users at the ap is always full such that enough data packets are available for transmission for every scheduled ut . in each time slot , the ap schedules one user for i d , while the remaining users opportunistically harvest energy from the received signal .we assume block fading channels .in particular , the channels remain constant during a time slot and change independently over different time slots . besides, the users are physically separated from one another such that they experience independent fading .furthermore , we adopt the eh receiver model from . the rf energy harvested by user in time slot given by where is the constant ap transmit power , is the rf - to - direct - current ( dc ) conversion efficiency of the eh receiver of user , and is the channel power gain between the ap and user in time slot .in the following , we propose three optimal multiuser scheduling schemes that control the r - e tradeoff under different fairness considerations .first , we consider a scheduling scheme which maximizes the average sum rate subject to a constraint on the minimum required average aggregate harvested energy .we note that this scheme aims to reveal the best system performance , and fairness in resource allocation for uts is not considered .to facilitate the following presentation , we introduce the user selection variables , where and . in timeslot , if user is scheduled to perform i d , , whereas , i.e. , all the remaining idle users harvest energy from the transmitted signal .now , we formulate the mt optimization problem as follows .maximum throughput optimization : where here , is the additive white gaussian noise power at ut . in the considered problem ,we focus on the long - term system performance for . constraints c1 and c2 ensure that in each time slot only one user is selected to receive information .c3 ensures that the average amount of harvested energy is no less than the minimum required amount .since the user selection variables , are binary , problem is non - convex . in order to handle the non - convexity, we adopt the time - sharing relaxation .in particular , we relax the binary constraint c2 such that is a continuous value between zero and one . then , the relaxed version of problem ( [ eq : mtbin ] ) can be written in minimization form as : now , we introduce the following theorem that reveals the tightness of the binary constraint relaxation . problems ( [ eq : mtbin ] ) and ( [ eq : mtrel ] ) are equivalent . 
] with probability one , when are independent and continuously distributed . in particular , the constraint relaxation of c2 is tight , i.e. , theorem 1 will be proved in the following based on the optimal solution of ( [ eq : mtrel ] ) . in other words , we can solve ( [ eq : mtbin ] ) via solving ( [ eq : mtrel ] ) .it can be verified that the relaxed problem is convex with respect to the relaxed optimization variables and satisfies the slater s constraint qualification .therefore , strong duality holds and the optimal solution of is equal to the optimal solution of its dual problem .thus , we solve via the dual problem .to this end , we first define the lagrangian function for the above optimization problem as , and are the lagrange multipliers corresponding to constraints c1 , , and c3 , respectively .thus , the dual problem of ( [ eq : mtrel ] ) is given by in order to determine the optimal user selection policy , we apply standard convex optimization techniques and the karush - kuhn - tucker ( kkt ) conditions .thereby , we differentiate the lagrangian in with respect to and set it to zero which yields : we define as the optimal user selection index for i d at time slot , i.e. , and . from the complementary slackness condition ,we obtain and .now , we denote the optimal dual variable for constraint c3 as and substitute it into .then , the selection metric for ut is given as [ eq : metricall ] subtracting from yields since from the dual feasibility conditions , we obtain .furthermore , , are continuous random variables , therefore , where denotes the probability of an event .thus , and the optimal selection criterion for the mt scheme in time slot reduces to in other words , the solution of the relaxed problem is itself of the boolean type .therefore , the adopted binary relaxation is tight .besides , depends only on the statistics of the channels .hence , it can be calculated offline , e.g. using the gradient method , and then used for online scheduling as long as the channel statistics remain unchanged .we emphasize that although the original problem in considers infinite number of time slots and long - term averages for the sum rate and the total harvested energy , the optimal scheduling rule in depends only on the current time slot , i.e. , online scheduling is optimal . in the mt scheme ,uts with weak channel conditions may be deprived from gaining access to the channel which leads to user starvation . in order to strike a balance between system throughput and fairness ,we introduce proportional fairness into our scheduler , which aims to provide each ut with a performance proportional to its channel conditions .this is achieved by allowing all uts to access the channel with equal chances . in this case , the optimization problem with the relaxed binary constraint on the user selection variables is formulated as : optimal proportional fair optimization : where c4 specifies that each ut has to access the channel for number of time slots . for the tightness of the binary relaxation , please refer to theorem 1 .now , we solve ( [ eq : pfrel ] ) via convex optimization techniques by following a similar approach as in the previous section .the lagrangian function for problem is given by where and are the lagrange multipliers corresponding to constraints c1 , , c3 , and c4 , respectively . 
by using the kkt conditions , we obtain the following ut selection metric : where the optimal lagrange multipliers ensure that each user accesses the channel on average an equal number of times .thus , the optimal selection criterion for the pf scheme is we note that the optimal pf scheduling rule is similar to the mt scheduling rule in , but the pf seelction metric in ( [ eqn : ut_metric ] ) contains an additional term that provides proportional fairness .also , and can be calculated offline using the gradient method .although the pf scheduler enables equal channel access probability for all uts , it does not provide any guaranteed minimum data rate to them . on the contrary ,the et criterion is more fair from the users prospective compared to the pf criterion , as all the uts achieve the same average throughput asymptotically for .therefore , in this section , we design a scheduler which achieves et fairness .thus , the objective is to maximize the minimum average achievable rates among all the uts , i.e. , maximize where . using theorem 1 , we formulate our equivalent convex optimization problem in its hypograph form .optimal equal throughput optimization : where is an auxiliary variable .the lagrangian function for problem in is given by where , and are the lagrange multipliers corresponding to constraints c1 , , c3 , and c5 , respectively . by using the kkt conditions, we obtain the ut selection metric for et scheduling : where the optimal lagrange multipliers ensure that all users have et .thus , the optimal selection criterion for the et scheme is given by again , the gradient method can be used to obtain the optimal values for and offline by utilizing the channel statistics .we note that the above considered problems can be formulated as markov decision process ( mpd ) or solved via lyapunov optimization approach , please refer to for details .in this section , we evaluate the performance of the proposed scheduling schemes using simulations .the important simulation parameters are summarized in table 1 .we adopt the path loss model from and the uts are randomly and uniformly distributed between the reference distance and maximum service distance . for comparison , we also show the performance of the following suboptimal scheduling schemes from : .simulation parameters . [ cols="<,<",options="header " , ] [ tab : res ] [ fig : mt2 ] 1 .order - based mt scheduler : the scheduling rule is where is defined as the argument of a certain selection order . in other words ,the user whose channel power gain has order is scheduled for i d .order - based pf scheduler : the scheduling rule is where denotes the mean channel power gain of ut .order - based et scheduler : the scheduling rule is where is the order of the instantaneous normalized signal - to - noise - ratio of user , is a predefined set of orders , where only the users set fall into are eligible for being scheduled , and is the throughput of user averaged over all previous time slots up to time slot .[ fig : mt2 ] shows the average sum rate ( bits/(channel use ) ) versus the average sum harvested energy ( watts ) of the mt schemes for different numbers of users .we note that the suboptimal order - based scheme can only achieve discrete points on the r - e curves , corresponding to the selection orders . 
on the contrary, the proposed optimal mt scheduling scheme can achieve any feasible point on the r - e curve , which provides a higher flexibility for the system designer to strike a balance between average sum rate and average harvested energy .besides , as expected , the average system sum rate increases with the number of uts as the proposed scheme is able to exploit multiuser diversity .furthermore , the average sum harvested energy also increases with the number of uts since more idle users participate in energy harvesting in any given time slot .[ fig : pf2 ] and fig .[ fig : et2 ] depict the average sum rate ( bits/(channel use ) ) versus the average sum harvested energy ( watts ) for the pf and et schemes , respectively .it can be seen that the feasible r - e region of all schemes decreases compared to the mt scheduler in fig .[ fig : mt2 ] .this is because both the pf and the et schedulers take fairness into account in the resource allocation and , as a result , can not fully exploit the multiuser diversity for improving the average system sum rate . on the other hand, it can be seen that our proposed optimal schemes provide a substantial average sum rate gain compared to the corresponding suboptimal order - based schemes , especially for a high amount of average harvested energy in the system .in fact , the proposed optimization framework provides more degrees of freedom across different time slots in resource allocation compared to the suboptimal scheduling schemes .this allows the system to exploit the multiuser diversity to some extent for resource allocation even if fairness is taken into consideration .in this paper , we have proposed optimal multiuser scheduling schemes for swipt systems considering different notions of fairness in resource allocation .the designed schemes enable the control of the tradeoff between the average sum rate and the average amount of sum harvested energy .our results reveal that for the maximization of the system sum rate with or without fairness constraints , the optimal scheduling algorithm requires only causal instantaneous and statistical channel knowledge .simulation results revealed that substantial performance gains can be achieved by the proposed optimization framework compared to existing suboptimal scheduling schemes .
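To give a feeling for the rate-energy tradeoff controlled by the schedulers compared above, the sketch below runs a metric-based scheduler over Rayleigh-faded slots and reports the long-term average rate and average sum harvested energy. Since the exact optimal selection metrics did not survive extraction, the metric used here is only an illustrative one of similar flavour, namely the instantaneous rate minus a price on the energy the scheduled user forgoes harvesting; the price lambda is a hand-tuned knob standing in for the offline-computed Lagrange multiplier.

```python
import numpy as np

def simulate(metric, n_users=4, n_slots=20_000, p_tx=1.0, eta=0.5, noise=1.0, seed=1):
    """Greedy metric-based scheduler over i.i.d. Rayleigh-faded slots.

    In every slot the user maximising metric(h, k) is scheduled for information
    decoding; all idle users harvest eta * p_tx * h_j.
    Returns (average sum rate, average sum harvested energy).
    """
    rng = np.random.default_rng(seed)
    h = rng.exponential(1.0, size=(n_slots, n_users))        # channel power gains
    scores = np.array([[metric(h[t], k) for k in range(n_users)]
                       for t in range(n_slots)])
    sel = scores.argmax(axis=1)
    picked = h[np.arange(n_slots), sel]
    rate = np.log2(1.0 + p_tx * picked / noise)
    harvested = eta * p_tx * (h.sum(axis=1) - picked)
    return rate.mean(), harvested.mean()

# Sweeping the price lambda trades average rate against harvested energy.
for lam in (0.0, 0.5, 1.0):
    r, e = simulate(lambda hvec, k, lam=lam: np.log2(1 + hvec[k]) - lam * 0.5 * hvec[k])
    print(f"lambda = {lam:3.1f}: rate = {r:.3f} bit/use, harvested = {e:.3f}")
```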
|
In this paper, we study the downlink multiuser scheduling problem for systems with simultaneous wireless information and power transfer (SWIPT). We design optimal scheduling algorithms that maximize the long-term average system throughput under different fairness requirements, such as proportional fairness and equal throughput fairness. In particular, the algorithm designs are formulated as non-convex optimization problems which take into account the minimum required average sum harvested energy in the system. The problems are solved using convex optimization techniques, and the proposed optimization framework reveals the tradeoff between the long-term average system throughput and the sum harvested energy in multiuser systems with fairness constraints. Simulation results demonstrate that substantial performance gains can be achieved by the proposed optimization framework compared to existing suboptimal scheduling algorithms from the literature. Keywords: RF energy harvesting, wireless information and power transfer, optimal multiuser scheduling.
|
communications often suffer from severe inter - symbol interference ( isi ) due to doubly selective fading . in order to suppress the channel distortion ,channel equalization techniques are essential , and indeed have received considerable attention for many years .maximum a priori probability ( map ) equalization is the optimum equalization procedure in terms of minimum symbol error rate ( ser ) , but requires a prohibitive computational complexity for many applications , being exponential in the channel length and constellation size .maximum likelihood sequence estimation ( mlse ) can obtain ser performance very close to map , but its complexity is still extremely high . as a result , many sub - optimal , low - complexity equalization techniques have been proposed , such as the popular minimum mean square error decision - feedback equalizer , which is very effective in certain multipath environments and has a complexity that is only dependent on forward and backward filter lengths . however , there is a non - negligible performance loss of mmse based equalizers in comparison to mlse .further still , while lots of research have been conducted on the time - domain equalization , few works take the special form of the channel representation into good account .two properties of the channel matrix in time domain are effectively utilized in this paper : 1 ) the toeplitz - like channel matrix significantly contributes to the equalizer design ; 2 ) the large number of zero elements reduces the computational complexity . as a result, we propose a robust _ approximate ml based decision feedback block equalizer _( a - ml - dfbe ) to combat isi over doubly selective fading channels with low computational complexity .the proposed equalizer exploits substantial benefit from the special time domain representation of the multipath channels by using a _ matched filter _, a _ sliding window _ ,a _ gaussian approximation _ , and a _ decision feedback_. the main ideas are firstly to subtract the effect of the already - detected signals obtained from past decisions .this can be treated as a decision feedback process .secondly we apply gaussian approximation to realize near maximum likelihood detection .the accuracy of this procedure can be improved by adjusting the length of the sliding window due to the central limit theorem .consequently , a complexity and performance trade - off can be realized , and a convergence in ser performance can also be obtained by adjusting the length of the sliding window . note that and can be used only for frequency flat fading channels , and aims to recover signals for multiuser systems .although in a probabilistic data association ( pda ) based equalizer is reported , there are several major differences compared to the proposed approach : in , it requires to update the mean and the variance for all detected symbols ; many iterations have to be used in order to make the performance converge ; there is no feedback process ; and no matched filter is employed . in ,bidirectional arbitrated decision - feedback equalization ( bad ) algorithm was presented which has complexity at least two times of the mmse - dfe but can achieve better performance . in ,a class of block dfe is presented for frequency domain equalization , but it assumes that the length of the channel , forward filter , and backward filter are infinitely long which is not practical . 
besides , it requires large number of iterations to make the performance converge , which increases the system delay and the computational complexity .the rest of the paper is organized as follows : in section ii , we present the channel and signal models . the proposed a - ml - dfbe scheme and complexity comparisons are discussed in section iii .the performance is analyzed in section iv .simulation results are presented in section v. in section vi , we draw the main conclusions . the proof is given in the appendix ._ * notation * _ : boldface upper - case letters denote matrices , boldface lower - case letters denote vectors , and denote the set of complex and real matrices , respectively , stands for transpose , denotes complex conjugate , represents conjugate transpose , stands for an identity matrix , is used for expectation , is used for variance , and .the doubly selective fading channel can be modeled using a finite impulse response ( fir ) filter where denotes the transformation at time , represents the -th path s channel coefficient , and the length of the fir filter is . for simplicity, we only consider a single input and single output system .the received signals can be written in vector form as ( for convenience , we drop the time index for each transmission frame ) where the received signals ^\emph{t} ] , and ^\emph{t} ] , in which ] . the time domain representation of the doubly selective fading channel , can be written as .\label{eq : slicej}\ ] ] note that has a structure similar to toeplitz form , and some form of guard interval is necessary to avoid inter - block interference between the received signals . the symbols in ( [ eq : recmodel ] ) can be recovered by mlse .alternatively , they can also be decoded in complex form using standard zero forcing ( zf ) or mmse approaches , linear or decision feedback equalization .the proposed equalization algorithm can be summarized into three steps : 1 ) forward process , which builds up the forward filter by a temporal sub matched filter ; 2 ) decision feedback process , which cancels the interference by a fixed length backward filter , and 3 ) approximate ml process , which realizes the final signal detection by the aid of gaussian approximation . the detailed description of each step is given below . supposing we start decoding , a temporal sub matched filter ( forward filter ) is applied to ( [ eq : recmodel ] ) where denotes the matrix of size , which is made of the entries in , from the -th column to the -th column and from the -st row to the -th . ( ) is the length of the sliding window that must be equal or larger than for smaller inter - symbol interference and larger diversity gain , and smaller than or equal to . when , the matched filter becomes . for simplicity , we may rewrite ( [ eq : hrec ] ) as where , , and .we call this process _horizonal slicing _ , since it takes rows of . is given by , \label{eq : partj}\ ] ] where denotes the -th column of matrix .the length of the forward filter has been defined as in ( [ eq : hrec ] ) .the function of this step is to suppress the effects of the detected terms . in order to further decrease the complexity of ( [ eq : party ] ) , we can just consider a certain number of the transmitted symbols , and have where can be constructed by taking the first column to the -th column of in ( [ eq : partj ] ) , and ^\emph{t} ] has size . 
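before proceeding with the feedback - matrix details , a minimal numpy sketch of the block model and the window - based forward / feedback steps described above is given below . it is illustrative only : the channel is taken as time - invariant within the block ( in the doubly selective case the taps would also vary along the rows ) , zero - padding plays the role of the guard interval , the hard bpsk slicer stands in for the gaussian - approximation step , and all names and sizes are our own rather than the paper s notation .

```python
import numpy as np

rng = np.random.default_rng(0)

def toeplitz_like_channel(h, N):
    # Banded, Toeplitz-like time-domain channel matrix for a block of N symbols.
    # h holds the L channel taps (time-invariant here for simplicity); the extra
    # L-1 rows correspond to the zero-padding guard interval.
    L = len(h)
    H = np.zeros((N + L - 1, N), dtype=complex)
    for k in range(N):
        H[k:k + L, k] = h
    return H

def detect_block(y, H, Lf):
    # One pass of the three-step procedure sketched above (BPSK slicer):
    # sliding-window matched filtering, cancellation of already-detected
    # symbols, and a hard decision in place of the Gaussian-approximation step.
    N = H.shape[1]
    decisions = {}
    for k in range(N):
        rows = slice(k, min(k + Lf, H.shape[0]))      # sliding window over rows
        Hk = H[rows, :]
        y_win = y[rows].copy()
        for j, xj in decisions.items():               # decision feedback
            y_win -= Hk[:, j] * xj
        z = np.vdot(Hk[:, k], y_win)                  # temporal sub matched filter
        decisions[k] = 1.0 if z.real >= 0 else -1.0
    return np.array([decisions[k] for k in range(N)])

# toy end-to-end check
N, L, snr_db = 8, 3, 15.0
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = toeplitz_like_channel(h, N)
x = rng.choice([-1.0, 1.0], size=N)
sigma = np.sqrt(10 ** (-snr_db / 10) / 2)
noise = sigma * (rng.standard_normal(N + L - 1) + 1j * rng.standard_normal(N + L - 1))
x_hat = detect_block(H @ x + noise, H, Lf=2 * L)
print(np.mean(x_hat != x))
```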
with respect to the diagonal element in , when , we can find that and thus , has the following form ,\ ] ] which has size .we can observe that there are only non - zero elements in so that the reconstruction of the detected terms can be further simplified .similarly , in ( [ eq : a_ml_decoder ] ) , the calculation of and can be simplified as well .now , we discuss the complexity of the a - ml - dfbe , linear - mmse , mmse - dfe , and bad detectors in terms of the number of additions and multiplications .the resulting values are given in table [ tab : complexity ] , obtained by inspection of the relevant algorithms in table [ tab : algorithm ] , , and .details of the computation of complexity , for example the matrix inversion , can be found in .the computational complexity of the a - ml - dfbe algorithm is a function of the frame length ( ) , the impulse response length ( ) , and the length of the forward filter ( ) , which is obtained on the basis of table .[ tab : algorithm ] . from the table , we observe that a - ml - dfbe has the same order of complexity as the linear - mmse and mmse - dfe .but a - ml - dfbe is less complex than mmse - dfe since the a - ml - dfbe requires smaller value , and it does not require to build up the backward filter . in comparison to linear - mmse , the a - ml - dfbe needs relatively even shorter forward filter and thus has lower complexity . the relation between the filter length and the performancecan be clearly observed in the simulation results section .bad requires complexity at least double of mmse - dfe .note that with regard to computational complexity , we focus on time - domain implementation even though a low - complexity frequency - domain implementation is also possible by making use of the block - circulant structure that can be created by the guard interval .in addition , note that the matrix inversion lemma can be used to reduce the complexity from cubic to quadratic order , but it does not affect the above conclusions .in this subsection , we analyze the symbol error rate ( ser ) as well as the bit error rate ( ber ) performance of the a - ml - dfbe .note that the tail detection only contains the operation of very few symbols , and thus , the performance is dominated by step 2 of the a - ml - dfbe process in table [ tab : algorithm ] , which will now be analyzed .we assume that all the decisions are accurate for analysis , which is a normal assumption in decision feedback theory .in ( [ eq : a_ml ] ) , which contains correlated noise , , the pre - whitening filter , , can be applied to make the variance of the noise uncorrelated where with size has a gaussian distribution with zero mean and all components have unit variance .since the noise now has become white gaussian , the matched filter , , can be employed and we have the following received signal equation in scalar form where , , and , which is a scalar with zero mean and variance .the ser for -psk constellation is given by where , , and denotes the constellation size .the average ber for -`psk ` can be written as : where for high snr and gray mapping . since the tail is normally short , which has length , in comparison to the whole frame length , hence itseffects can be neglected .note that in time - invariant channel , due to the property of ( ) by assuming perfect decision feedback at high snr .next , we analyze further the behavior of the proposed a - ml - dfbe at high snr . 
assuming perfect channel estimation at the receiver , and taking ( [ eq : serbpskkth ] ) as an example, it can be upper bounded by where at high snr ( refer to appendix i for the derivation ) . in order to obtain good performance in terms of multipath combining and inter - symbol interference suppression , we should choose . then , by averaging ( [ eq : upperbound ] ) over the rayleigh pdf , equation ( [ eq : upperbound ] ) becomes which indicates that the a - ml - dfbe achieve the maximum multipath diversity order .it has been shown that the forward filter length , , is a very important parameter in the proposed a - ml - dfbe . in this subsection, we discuss the behaviors of : 1 ) increasing the value of can improve the robustness of ( [ eq : a_ml_decoder ] ) due to the following reasons : firstly , as shown in ( [ eq : partj ] ) , larger value of can incorporate more received signals as well as channel information in the forward filter ; secondly , indicated by ( [ eq : gaussaprox ] ) , increasing can make the gaussian assumption more accurate ; 2 ) while the performance can be enhanced , as shown in table [ tab : complexity ] , the complexity will correspondingly go up . hence , for a - ml - dfbe , a complexity and performance tradeoff can be realized by adjusting ; 3 ) performance gets converged by increasing the value of as the gaussian assumption becomes accurate enough .this implies that moderate length of the forward filter can deliver good performance ; 4 ) given by subsection - iv - b , should be equal or larger than for maximum diversity order ; 5 ) the length of the backward , , always equals due to the special structure of . note that the matched filter in ( [ eq : hrec ] ) can obtain some additional information from the received signals outside the slicing window . recalling ( [ eq : party])([eq : recforwardbackward ] ) , can be written as ^t.\ ] ] although some information is lost after horizonal and vertical slicing , some gains can be still realized by considering the whole received signal , .supposing the matched filter is removed , the detection procedures in table [ tab : algorithm ] can be used but it will lead to performance degradation since only the received signals inside the sliding window will be considered , where ^t$ ] . as a result ,the length of the forward filter has to be increased to make up the performance loss caused by the slicing processes in order to obtain the same performance .note also if the length of the forward filter is equal to , the a - ml - dfbe directly enters the tail detection step ( step * 3 * ) in table [ tab : algorithm ] , which will make no difference in performance whether or not the matched filter is used since there is no slicing operations at all .however , the value of is normally much less than . theoretically , using the same methods as shown in appendix i , it is easy to obtain the snr for the a - ml - dfbe when the matched filter is removed . due to the space limitation , we drop the detailed derivation part .but we can conclude that the performance of a - ml - dfbe can be upper - bounded by the same equalize without using the matched filter .in all simulations , bpsk constellation is used to generate a rate 1bps / hz transmission .we plot the ber versus the signal - to - noise ratio ( snr ) . 
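the analytical curves discussed next are built from standard m - psk error - rate expressions evaluated at the post - detection snr . since the paper s exact expressions above were lost in extraction , the sketch below uses the usual textbook forms ( exact for the bpsk case used in the simulations , the common high - snr approximation otherwise ) rather than the paper s own formula .

```python
from math import erfc, log2, pi, sin, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2.0))

def mpsk_ser(gamma, M):
    # Standard M-PSK symbol error rate at post-detection SNR gamma:
    # exact for BPSK, the usual high-SNR approximation for M > 2.
    if M == 2:
        return qfunc(sqrt(2.0 * gamma))
    return 2.0 * qfunc(sqrt(2.0 * gamma) * sin(pi / M))

def mpsk_ber(gamma, M):
    # Gray-mapping approximation: each symbol error causes about one bit error.
    return mpsk_ser(gamma, M) / log2(M)

for snr_db in (0, 5, 10, 15):
    gamma = 10 ** (snr_db / 10)
    print(snr_db, round(mpsk_ber(gamma, 2), 5), round(mpsk_ber(gamma, 4), 5))
```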
for analytical results ,we assume perfect decision feedback , but for simulated results we use the feedback decisions .the performance is determined over doubly selective rayleigh fading channels .the impulse response length is , and , thus , the length of the backward filter of the a - ml - dfbe can be fixed as .jakes model is applied to construct time - selective rayleigh fading channel for each subpath .the carrier frequency ghz and the symbol period , where is the speed of light .the simulation results are plotted with two speeds : and ( corresponding to and , where doppler frequency ) .the frame length is 128 . in fig .[ fig : analyticalber ] and fig .[ fig : analyticalberdifftaps ] , we examine the analytical ber performance obtained in ( [ eq : serbpskvblast ] ) assuming that the channel estimation is perfect .the simulations are plotted with the vehicle speed : . in fig .[ fig : analyticalber ] , we compare the analytical ber with the simulated ber .it can be observed that the analytical ber is close and asymptotically converges to the simulated curves at high snr . in fig .[ fig : analyticalberdifftaps ] , the analytical ber for a - ml - dfbe is plotted employing different forward filter lengths . as discussed earlier, the length of the forward filter , , should be at least equal to in order to realize good performance . from fig .[ fig : analyticalberdifftaps ] , we can see that the proposed a - ml - dfbe with provides much better performance than that with , and as the value of increases , the performance begins to converge .it can be also seen that for a - ml - dfbe , ( two times ) is enough to obtain good ber performance . in fig .[ fig : randomh ] , simulation results for the a - ml - dfbe detector are illustrated in comparison with conventional linear mmse , mmse - dfe , bad , and mlse decoders .the simulations are plotted with the vehicle speed : .least square ( ls ) channel estimation is used . from fig .[ fig : randomh ] , it can be observed that at ber= , the performance of a - ml - dfbe with is far better than the linear mmse and the mmse - dfe equalizers .there is only 2 db loss compared to the mlse decoder at ber= . at ,there is about 0.8 db loss compared to mlse .almost no difference can be observed for a - ml - dfbe when is increased to 15 since is sufficient to make the performance converge .note that when , a - ml - dfbe gives almost the same performance as , which demonstrates that only a small value of is required to achieve good performance .we can also see that a - ml - dfbe with can provide much better performance than bad with .note that our a - ml - dfbe has lower complexity than mmse - dfe , and thus , lower than bad . clearly , from fig .[ fig : analyticalber ] to fig .[ fig : randomh],we can see that there exists a complexity and performance tradeoff in terms of .performance can be improved by increasing the length of the forward filter ( slicing window ) .in addition , performance convergence can be also observed , which indicates that limited value of is enough to deliver most of the performance gain . 
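the time - selective taps in these simulations follow jakes model ; the paper does not state which implementation it uses , and its carrier - frequency and symbol - period values did not survive extraction , so the sketch below is one common sum - of - sinusoids variant with purely illustrative parameter values ( 2 ghz carrier , 5 km/h , microsecond symbol period ) .

```python
import numpy as np

def sos_rayleigh(fd, Ts, num_samples, num_sinusoids=16, seed=0):
    # Sum-of-sinusoids Rayleigh fading generator (one common Jakes-style variant).
    # fd: maximum Doppler frequency in Hz, Ts: symbol period in seconds.
    rng = np.random.default_rng(seed)
    t = np.arange(num_samples) * Ts
    g = np.zeros(num_samples, dtype=complex)
    for _ in range(num_sinusoids):
        alpha = rng.uniform(0.0, 2.0 * np.pi)   # angle of arrival
        phi = rng.uniform(0.0, 2.0 * np.pi)     # initial phase
        g += np.exp(1j * (2.0 * np.pi * fd * np.cos(alpha) * t + phi))
    return g / np.sqrt(num_sinusoids)           # roughly unit average power

# illustrative numbers only: 2 GHz carrier, 5 km/h, 128-symbol frame, 4 taps
c, fc, v = 3e8, 2e9, 5.0 / 3.6
fd = v * fc / c
taps = [sos_rayleigh(fd, Ts=1e-6, num_samples=128, seed=k) for k in range(4)]
print(fd, np.mean(np.abs(taps[0]) ** 2))
```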
in fig .[ fig : matchedfilter ] , simulation comparisons are made for a - ml - dfbe without using the matched filter .perfect channel estimation is assumed .vehicle speed , , is adopted .we choose different values for the no matched filter case : 5 , 10 , and 15 and remains the same : 4 .it is shown that at , the performance without the matched filter is worse than with it .we can also observe the significant performance loss due to the small value of .it is shown that must be 15 for the system with no matched filter to provide the same performance as the matched filter system with .hence , from the simulation results we can see that the matched filter is very important for system performance . note that as discussed in the complexity analysis part , subsection iii - b , the forward and backward filter taps are actually fixed and can be obtained before the a - ml - dfbe detection .the complexity increase by the use of the matched filter is much more worthwhile than to increase the length of the forward filter without using the matched filter . in fig .[ fig : timevariantche ] , simulation results for the a - ml - dfe detector are illustrated in comparison with conventional linear mmse , mmse - dfe , bad , and mlse decoders using ls channel estimation and the vehicle speed is . here , we choose different values for for a - ml - dfbe . from the simulation results, we can still observe that the performance of a - ml - dfbe converged at , and no gain can be obtained at . due to the time - variant effects , the performance is degraded compared to the results in fig .[ fig : randomh ] .we can see about 1 db loss between mlse and a - ml - dfbe with when ber= .however , the proposed equalizer can still substantially outperform linear mmse and mmse - dfe in all snr regime . around8 db performance gain can be obtained by the proposed scheme with compared to the bad at ber= .in this paper , we have proposed a simple approximate ml decision feedback equalizer for doubly selective fading environment . from the analytical and simulation results , we conclude that the a - ml - dfbe significantly outperforms the linear mmse , mmse - dfe , and bad detectors , and provides performance very close to mlse .we have shown that when is large enough , further increases in do not improve performance much .this implies that the proposed equalizer is quite robust against isi . a tradeoff in terms of the complexity and the performancecan be achieved by adjusting the value of .computational complexity comparison has demonstrated that the a - ml - dfbe requires fewer additions and multiplications than mmse based schemes .in addition , the implementation of the matched filter is very important and the a - ml - dfbe obtains maximum diversity order when . due to the dfe processing ,parallel computing is difficult to achieve for the proposed equalizer .however , by adjusting the size of the data block or the filters ( back and forward ) , or both , the latency can be reduced .the proposed equalizer can be easily used for radar communication systems as it is suitable to solve time - domain equalization problems . in current wireless systems like umts , hsdpa or hsupa, the a - ml - dfbe can be used to recover signals similar to mmse or mmse - dfe .for lte or lte advance , the proposed algorithm can be extended to realize frequency - domain equalizations .now , the closed - form expression of at high snr is derived in terms of and . from subsectioniv - a , can be written as where for convenience has size , and . 
by using the kailath variant , the inversion term on the right side of ( [ eq : distance1kk1 ] )can be further written as at high snr , as , the effect of is comparatively small , which can be ignored from an asymptotic point of view .hence , we have the following approximation for the second term in ( [ eq : distance1kk ] ) where with size , and represents the unique positive definite hermitian root .let be the moore - penrose inverse of matrix , and of size .note that and has size . by eigenvalue decomposition, we can get where is the unitary eigenvector matrix and . from the definition of , we have .therefore , is idempotent , and any idempotent matrix has eigenvalue 1 or 0 , and thus .we can then get from ( [ eq : distance1kk1 ] ) , ( [ eq : distance1kk ] ) , ( [ eq : distance1kk2 ] ) , and ( [ eq : distance1kk3 ] ) , at high snr , we can obtain from ( [ eq : partj ] ) , we can get where and has size and rank .since has the same structure as in ( [ eq : distance1kk3 ] ) , we can get the corresponding eigenvalues as finally , combining ( [ eq : fin1 ] ) and ( [ eq : fin2 ] ) , at high snr as , finally we have y. jia , c. andrieu , r. j. piechocki , and m. sandell , `` gaussian approximation based mixture reduction for near optimum detection in mimo systems , '' _ ieee commun . letters _, vol . 9 , no . 11 , pp .997999 , nov .j. luo , k. r. pattipati , p. k. willett , and f. hasegawa , `` near - optimal multiuser detection in synchronous cdma using probabilistic data association , '' _ ieee commun . letters _ , vol . 5 , no .9 , pp . 361363 , sep . 2001 . a. m. chan and g. w. wornell , `` a class of block - iterative equalizers for intersymbol interference channels : fixed channel results , '' _ ieee trans . on commun . _ ,19661976 , nov .2001 v. tarokh , n. seshadri , and a. r. calderbank , `` space - time codes for high data rate wireless communication : performance criterion and code construction , '' _ ieee trans .inform . theory _2 , pp . 744765 , mar .1998 ..[tab : complexity ] computational complexity of various schemes for one sliding window with length ; is the number of paths ; bpsk constellations ; and stand for the length of the forward and backward filters , respectively . [cols="^,^,^",options="header " , ]
|
in order to effectively suppress intersymbol interference ( isi ) at low complexity , we propose in this paper an approximate maximum likelihood ( ml ) decision feedback block equalizer ( a - ml - dfbe ) for doubly selective ( frequency - selective , time - selective ) fading channels . the proposed equalizer design makes efficient use of the special time - domain representation of the multipath channels through a matched filter , a sliding window , a gaussian approximation , and a decision feedback . the a - ml - dfbe has the following features : 1 ) it achieves performance close to maximum likelihood sequence estimation ( mlse ) , and significantly outperforms the minimum mean square error ( mmse ) based detectors ; 2 ) it has substantially lower complexity than the conventional equalizers ; 3 ) it easily realizes the complexity and performance tradeoff by adjusting the length of the sliding window ; 4 ) it has a simple and fixed - length feedback filter . the symbol error rate ( ser ) is derived to characterize the behaviour of the a - ml - dfbe , and it can also be used to find the key parameters of the proposed equalizer . in addition , we further prove that the a - ml - dfbe obtains full multipath diversity . doubly selective fading channels , equalization , matched filter , linear mmse , mmse - dfe , maximum likelihood sequence estimation .
|
distributed energy resources ( ders ) have the capability of assisting consumers is reducing their dependence on the main grid as their primary source of electricity , and thus , lowering their costs of energy purchase .they are also critical to the reduction of green house emissions and alleviation of climate change . as a result, there has been an increasing interest in deploying ders in the smart grid .the majority of recent works in managing energy using ders have mainly focussed on two areas : 1 ) the study of feasibility and control of ders for their use in designing efficient micro - grids , e.g. , see and the references therein ; and 2 ) scheduling energy consumption of household equipment by exploiting the use of ders to optimize different grid operational objectives such as minimizing the energy consumption costs of users . in most casesit is assumed that the users with ders also possess storage devices . however , there are also some cases in which users might not want to store energy .rather , they are more inclined to consume or trade energy as soon as it is generated , e.g. , as in a grid - tie solar system without battery back up .furthermore , the majority of research on energy management emphasizes energy trading between two energy entities , i.e. , two - way energy flow .for example , a considerable number of references that use such models can be found in . in this paper ,a three party energy management scheme is proposed for a smart community that consists of multiple residential units ( rus ) , a shared facility controller ( sfc ) and the main grid . to the best of our knowledge ,this paper is the first that introduces the idea of a shared facility and considers a 3-party energy management problem in smart grid . with the development of modern residential communities ,shared facilities provide essential public services to the rus , e.g. , maintenance of lifts in community apartments .hence , it is necessary to study the energy demand management of shared facilities for expediting effective community work .in particular , for the considered setting , as will be seen shortly , energy trading of rus with the grid and the sfc constitutes an important energy management problem for both the sfc and rus . on the one hand ,each ru is interested in selling its energy either to the sfc or to the grid at a higher price to increase revenue . on the other hand, the sfc wants to minimize its cost of energy purchased by making a price offer to rus to encourage them to sell their energy to the sfc instead of the grid .this enables the sfc to be less dependent on expensive electricity from the grid . as an energy management tool ,the framework of a noncooperative stackelberg game ( nsg ) is considered .in fact , nsgs have been used extensively in designing different energy management solutions .for example , maximizing revenues of multiple utility companies and customers , minimizing customers bills to retailers while maximizing retailers profits , prioritizing consumers interests in designing energy management solutions , and managing energy between multiple micro - grids in the smart grid , among many others .however , the choice of players and their strategies significantly differ from one game to another based on the system model , the objective of energy management design and the use of algorithms . 
to that end, an nsg is proposed for the considered scenario to capture the interaction between the sfc and rus and it is shown that the maximum benefits to the sfc and rus are achieved at the se of the game .the properties of the game are studied , and it is proven that there exists a unique se . finally , a novel algorithm , which is guaranteed to reach the se , and can be implemented in a distributed fashion by the sfc and the rus is introduced .the effectiveness of the proposed scheme is confirmed by numerical simulations .consider a smart grid network consisting of the main grid and a smart community with rus and an sfc , which are connected to one another via communication and power lines .each ru , which is equipped with ders such as solar panels or wind turbines , can be a single residential unit or group of units connected via an aggregator that acts as a single entity .all rus are considered to belong to the set . here ,on the one hand , the sfc does not have any electricity generation capacity . hence , at any time of the day , it needs to rely on the grid and rus for required energy to run equipment and machines in the shared facility such as lifts , water pumps , parking gates and lights that are shared and used by the residences on daily basis . on the other hand ,each ru is considered to have no storage capability , and therefore , wants to consume or sell its generated energy either to the main grid or to the sfc to raise revenue .it is assumed that each ru can manage its consumption , and thus sell the rest of the generated energy to the sfc or to the grid . clearly ,if , where is the base load for ru , the ru can not take part in the energy management . otherwise , which is the considered case , the ru sells after controlling its consumption amount . in general ,the buying price of a grid is noticeably lower than its selling price . to this end, it is assumed that the price per unit of energy that the sfc pays to each ru is set between the buying and selling price of the grid .therefore , each ru can sell its energy at a higher price and the sfc can buy at a lower price by trading energy among themselves rather than trading with the grid . under this condition ,it is reasonable to assume that the ru would be more inclined to sell to the sfc instead of to the grid . to that end, the amount of utility that an ru achieves from its energy consumption and trading the rest with the sfc can be modeled as in , is the utility that the ru achieves from consuming , and is a preference parameter . is the revenue that the ru receives from selling the rest of its energy to the sfc .please note that the natural logarithm has been used extensively for utility functions , and has particularly been shown to be suitable for modeling the utility for power consumers . from, the ru would be interested in selling more energy to the sfc , e.g. , by scheduling its use of devices at a later time , if the values of and are high and vice - versa .the effect of on the achieved utility by an ru is illustrated in fig .[ fig : utilityvsprice ] .the figure clearly shows that at a higher maximum utility is achieved by an ru when it consumes less , i.e. , it sells more to the sfc . on the other hand, the sfc buys all its required energy from rus and the grid . due to the choice of price , i.e. , , the sfc is more interested in buying its energy from rus and then procuring the rest , if there is any , from the grid at a price . 
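before turning to the sfc side , a small sketch of the ru best response is given below . the exact utility expression above did not survive extraction , so a representative form is assumed -- a natural - log consumption benefit weighted by the preference parameter plus linear revenue from selling the surplus to the sfc -- purely to make the trade - off concrete ; all names and numbers are ours , not the paper s .

```python
import numpy as np

def ru_utility(e, E_gen, price, k=1.0):
    # Assumed RU utility: log benefit of own consumption e plus revenue from
    # selling the remaining generation (E_gen - e) to the SFC at the offered price.
    return k * np.log(1.0 + e) + price * (E_gen - e)

def ru_best_consumption(E_gen, E_base, price, k=1.0):
    # Maximiser of the assumed utility: d/de [k*ln(1+e) + p*(E_gen - e)] = 0
    # gives e* = k/p - 1, clipped to the feasible range [E_base, E_gen].
    return float(np.clip(k / price - 1.0, E_base, E_gen))

# a higher offered price pushes the RU to consume less and sell more
for price in (0.08, 0.12, 0.16):                 # illustrative $/kWh values
    e = ru_best_consumption(E_gen=8.0, E_base=2.0, price=price, k=0.6)
    print(price, round(e, 2), round(8.0 - e, 2))
```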
to this end, a cost function for the sfc is defined as to capture its total cost of buying energy from rus and the grid . in, is the amount of energy that the sfc buys from ru .now if is too low it might cause an ru to refrain from selling its energy to the sfc . as a result ,the sfc would need to buy all its from the grid at a higher rate .on the contrary , if is very high , it will increase the cost to the sfc significantly .hence , should be within a legitimate range to encourage the rus to sell their energy to the sfc , while at the same time , keeping the cost to the sfc at a minimum .now , to decide on the energy trading parameters and , on the one hand , the sfc interacts with each ru to minimize by choosing a suitable price to pay to each . on the other hand , each ru decides on the amount of energy that it wants to consume and thus maximize . to capture this interaction , an nsg between the sfc and rusis proposed in the next section .first , the objective of each ru is to decide on the amount of energy that it wants to consume , and thus to determine based on the offered price to sell to the sfc such that possesses the maximum value .mathematically , .\label{eqn:3}\end{aligned}\ ] ] conversely , having the offered energy from all rus , i.e. , , the sfc determines the price so as to minimize the cost captured via .therefore , the objective of the sfc is .\label{eqn:4}\end{aligned}\ ] ] here , and are concave and convex functions respectively , and are coupled via common parameters and . therefore , it would be possible to solve the problem in an optimal centralized fashion if private information such as and were available to the central controller . however , to protect the privacy of each ru as well as to reduce the demand on communications bandwidth , it is useful to develop a distributed mechanism . with these considerations in mind , we study the problem using an nsg . a stackelberg game , also known as a leader - follower game , studies the multi - level decision making processes of a number of independent players , i.e. , followers , in response to the decision made by the leader ( or , leaders ) of the game . in the proposed nsg ,the sfc and each ru are modeled as the leader and a follower respectively .formally , the nsg can be defined by its strategic form as which has following components : a. the set of all followers in the game .b. the set of leaders in the game that has only one element in our case , i.e. , a single leader . c. the strategy set of each ru to choose an amount of energy to be consumed during the game .d. the utility function of each ru to capture the benefit from consuming , and the utility from selling to the sfc .e. the price set by the sfc to buy its energy from rus .f. the cost function of the sfc that quantifies the total cost of energy purchase from rus and the grid . 
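to make the leader s side concrete , the sketch below uses a linear stand - in for the ( stripped ) cost expression : the sfc pays the offered price for what it buys from the rus and covers any shortfall from the grid at the grid s higher selling price . this is only an assumed form consistent with the description above , with illustrative numbers .

```python
def sfc_cost(price, offers, E_req, p_grid_sell):
    # Assumed SFC cost: buy what is needed from the RUs at `price` (never more
    # than E_req) and cover any shortfall from the grid at its higher selling
    # price p_grid_sell.
    bought = min(sum(offers), E_req)
    shortfall = E_req - bought
    return price * bought + p_grid_sell * shortfall

# a low offer price leaves the SFC exposed to expensive grid energy,
# a high offer price inflates its payments to the RUs -- hence the optimisation
print(sfc_cost(0.09, offers=[1.5, 2.0, 3.0], E_req=12.0, p_grid_sell=0.145))
print(sfc_cost(0.13, offers=[4.0, 3.5, 5.0], E_req=12.0, p_grid_sell=0.145))
```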
through ,all rus that want to trade their energy and the sfc interact with each other and decide on the decision vector ] , that maximizes .it is also noted that reaches se when all players including the sfc and each ru have their best cost and utilities respectively with respect to the strategies chosen by all players in the game .thereby , it is indisputable that the proposed game would find an se as soon as the sfc is able to find an optimal price while all rus play their unique strategy vector .now the second derivative of with respect to is which is greater than .therefore , is strictly convex with respect to .consequently , the sfc is able to find a unique price in response to the strategy vector . thus , there exists a unique se in the proposed nsg , and theorem [ thm:1 ] is proved .initialization : ru adjusts its energy consumption according to .\ ] ] + the sfc computes the cost according to + the sfc keeps records of the optimal price and minimal cost + * the se is achieved .* in this section , an iterative algorithm that the sfc and rus can implement in a distributed fashion is proposed to reach the se of the game . in order to attain the unique se, the sfc needs to communicate with each ru . at each iteration , on the one hand , the ru chooses its best energy consumption amount in response to the price set by the sfc , calculates and sends it to the sfc .on the other hand , having the information on the choice of energy , the sfc derives its price to minimize its cost in and resends it to each ru .the interaction between the sfc and all rus continues iteratively until and are satisfied .as soon as these conditions are met , the proposed nsg reaches the se .details are given in algorithm [ alg:1 ] .the proposed algorithm [ alg:1 ] is always guaranteed to reach the se of the game . according to the proposed algorithm , the conflict between rus choices of strategies stem from their impact on the choice of by the sfc . due to the strict convexity of ,the choice of lowers the cost of the sfc to the minimum .now , as the algorithm is designed , in response to the , each ru chooses its strategy from the bounded range ] for this case study , such that and in are always positive .the grid s per unit selling price is assumed to be cents / kwh whereby the sfc sets its initial price equal to the grid s buying price of cents / kwh to pay to each ru .nonetheless , it is very important to highlight that all parameter values are particular to this study and may vary according to the need of the sfc , power generation of the grid and ders , and the energy policy of a country . in fig .[ fig : convergence ] , the sfc s total cost is shown to converge to the se by following algorithm [ alg:1 ] for a network with five rus .it can be seen that although the sfc wants to minimize its total cost , it can not do so with its initial choice of price for payment to the rus .in fact , through interaction with each ru of the network the sfc eventually increases its price in each iteration to encourage the rus to sell more , and consequently the cost continuously reduces . as can be seen from fig .[ fig : convergence ] , the sfc s choice of equilibrium price and consequently also the minimum total cost reach their se after the iteration . next , the effectiveness of the proposed scheme is demonstrated by comparing its performance with a standard baseline scheme that does not contain any der facility , i.e. , the sfc depends on the grid for all its energy . 
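before turning to the numerical results , a compact sketch of the leader - follower interaction behind algorithm 1 is given below , reusing ru_best_consumption and sfc_cost from the two sketches above ( it is meant to be read together with them ) . the real algorithm updates the price iteratively until the equilibrium conditions are met ; here a simple sweep over admissible prices between the grid s buying and selling price is used only to illustrate the fixed point -- the stackelberg equilibrium -- being sought .

```python
import numpy as np

def stackelberg_sweep(rus, E_req, p_buy, p_sell, k=0.6, n_grid=200):
    # Leader-follower interaction: for each candidate price the followers (RUs)
    # reply with their best-response consumption, hence an energy offer, and the
    # leader (SFC) keeps the price that minimises its cost.
    best_price, best_cost = None, np.inf
    for price in np.linspace(p_buy, p_sell, n_grid):
        offers = [E_gen - ru_best_consumption(E_gen, E_base, price, k)
                  for (E_gen, E_base) in rus]
        cost = sfc_cost(price, offers, E_req, p_sell)
        if cost < best_cost:
            best_price, best_cost = price, cost
    return best_price, best_cost

rus = [(8.0, 2.0), (6.0, 1.5), (10.0, 3.0)]   # (generation, base load) per RU, kWh
print(stackelberg_sweep(rus, E_req=12.0, p_buy=0.08, p_sell=0.145))
```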
in thisregard , considering rus in the system , the total cost of energy trading that is incurred by the sfc is plotted in fig .[ fig : costvsreq ] for both the proposed and baseline approaches as the amount of energy required by the sfc increases . as shown in the figure , the cost to the sfc increases for both cases as the energy requirement increases from to kwh .in fact , it is a trivial result that a greater energy requirement leads the sfc to spend more money on buying energy , which consequently increases the cost .nonetheless , the proposed scheme needs to spend significantly less to buy the same amount of energy due to the presence of the ders of the rus , and thus noticeably benefits from its energy trading in terms of total cost compared to the baseline scheme .as shown in fig [ fig : costvsreq ] , the sfc s cost is , on average , lower than that of the baseline approach for the considered change in the sfc s energy requirement . nevertheless , as mentioned in section [ sec : game - formulation ] , it is also possible to optimally manage energy between rus and the sfc via a centralized control system to minimize the social cost if private information such as and is available to the controller . in this regard , the performance in terms of social cost for both the centralized and proposed distributed schemes is observed in fig . [fig : centralvspropose ] .as can be seen from the figure , the social cost attained by adopting the distributed scheme is _ very close _ to the optimal scheme at the se of the game .however , the centralized scheme has access to the private information of each ru .hence , the controller can optimally manage the energy , and as a result shows better performance in terms of reducing the sfc s cost compared to the proposed scheme . according to fig .[ fig : centralvspropose ] , as the number of rus changes in the network from to , the average social cost for the proposed distributed scheme is only higher than that obtained via the centralized scheme .this is a promising result considering the distributed nature of the system .in this paper , a user interactive energy management scheme has been proposed for a smart grid network that consists of a shared facility , the main grid and a large number of residential units ( rus ) .a noncooperative stackelberg game ( nsg ) has been proposed that captures the interaction between the shared facility controller ( sfc ) and each ru and it has been shown to have a unique stackelberg equilibrium ( se ) . it has been shown that the use of ders for each ru is beneficial for both the sfc and rus in terms of their incurred cost and achieved utilities respectively .further , a distributed algorithm has been proposed , which is guaranteed to reach the se and can be implemented by the players in a distributed fashion .significant cost savings have been demonstrated for the sfc by comparing the proposed scheme with a standard baseline approach without any ders .the proposed work can be extended in different directions .an interesting extension would be to examine the impact of discriminate pricing among the rus on the outcome of the scheme .another compelling augmentation would be to determine how to set the threshold on the grid s price .further , quantifying the inconvenience that the sfc / rus face during their interaction and quantifying the effect of the inclusion of storage devices could be other potential future extensions of the proposed work .p. s. georgilakis and n. 
hatziargyriou , `` optimal distributed generation placement in power distribution networks : models , methods , and future research , '' _ ieee transactions on power systems _ , vol .28 , no . 3 , pp .34203428 , 2013 .j. j. justo , f. mwasilu , j. lee , and j .- w .jung , `` ac - microgrids versus dc - microgrids with distributed energy resources : a review , '' _ renewable and sustainable energy reviews _ , vol .24 , pp . 387405 , 2013 .n. u. hassan , m. a. pasha , c. yuen , s. huang , and x. wang , `` impact of scheduling flexibility on demand profile flatness and user inconvenience in residential smart grid system , '' _ energies _ , vol . 6 , no . 12 , pp . 66086635 , 2013 .y. liu , n. u. hassan , s. huang , and c. yuen , `` electricity cost minimization for a residential smart grid with distributed generation and bidirectional power transactions , '' in _ proc .ieee pes innovative smart grid technologies ( isgt ) _ , washington , dc , feb 2013 , pp .n. u. hassan , x. wang , s. huang , and c. yuen , `` demand shaping to achieve steady electricity consumption with load balancing in a smart grid , '' in _ proc .ieee pes innovative smart grid technologies ( isgt ) _ , washington , dc , feb 2013 , pp . 16 .s. maharjan , q. zhu , y. zhang , s. gjessing , and t. baar , `` dependable demand response management in the smart grid : a stackelberg game approach , '' _ ieee transactions on smart grid _ , vol . 4 , no . 1 ,pp . 120132 , 2013 .e. mckenna and m.thomson , `` photovoltaic metering configurations , feed - in tariffs and the variable effective electricity prices that result , '' _ iet renewable power generation _ , vol . 7 , no . 3 , pp .235245 , 2013 .p. samadi , a .-mohsenian - rad , r. schober , v. wong , and j. jatskevich , `` optimal real - time pricing algorithm based on utility maximization for smart grid , '' in _ proc .ieee international conference on smart grid communications ( smartgridcomm ) _ , gaithersburg , md , oct .2010 , pp .. t. forsyth , `` small wind technology , '' website , national renewable energy laboratory of us department of energy , 2009 , http://ww2.wapa.gov/sites/western/renewables/documents/webcast\/2smallwindtech.pdf .
|
this paper studies a three party energy management problem in a user interactive smart community that consists of a large number of residential units ( rus ) with distributed energy resources ( ders ) , a shared facility controller ( sfc ) and the main grid . a stackelberg game is formulated to benefit both the sfc and rus , in terms of incurred cost and achieved utility respectively , from their energy trading with each other and the grid . the properties of the game are studied and it is shown that there exists a unique stackelberg equilibrium ( se ) . a novel algorithm is proposed that can be implemented in a distributed fashion by both rus and the sfc to reach the se . the convergence of the algorithm is also proven , and shown to always reach the se . numerical examples are used to assess the properties and effectiveness of the proposed scheme . smart grid , distributed energy resources , game theory , energy management .
|
current decision - making systems face high levels of uncertainty resulting from data , which is either missing or untrustworthy .these systems usually turn to probability theory as a mathematical framework to deal with uncertainty .one problem , however , is that it is hard for these systems to make reliable predictions in situations where the laws of probability are being violated . these situations happen quite frequently in systems which try to model human decisions .uncertainty in decision problems arises , because of limitations in our ability to observe the world and in limitations in our ability to model it .if we could have access to all observations of the world and extract all the information it contained , then one could have access to the full joint probability distribution describing the relation between every possible random variable .this knowledge would eliminate uncertainty and would enable any prediction .this information , however , is not available and not possible to obtain as a full , leading to uncertainty .a formal framework capable of representing multiple outcomes and their likelihoods under uncertainty is probability theory . in an attempt to explain the decisions that people make under risk , cognitive scientists started to search for other mathematical frameworks that could also deal with uncertainty .recent literature suggests that quantum probability can accommodate these violations and improve the probabilistic inferences of such systems .quantum cognition is a research field that aims at using the mathematical principles of quantum mechanics to model cognitive systems for human decision making .given that bayesian probability theory is very rigid in the sense that it poses many constraints and assumptions ( single trajectory principle , obeys set theory , etc . ), it becomes too limited to provide simple models that can capture human judgments and decisions , since people are constantly violating the laws of logic and probability theory .recent literature suggests that quantum probability can be used as a mathematical alternative to the classical theory and can accommodate these violations .it has been showed that quantum models provide significant advantages towards classical models . in this work ,we explore the implications of causal relationships in quantum - like probabilistic graphical models and also the implications of semantic similarities between quantum events .these semantic similarities provide new relationships to the graphical models and enables the computation of quantum parameters through vector similarities .this work is organised as follows . in sections 2 and 3 ,we address two types of relationships , respectively : cause / effect and acausal relationships . in section 4 ,we describe a quantum - like bayesian network that takes advantages of both cause / effect relationships and semantic similarities ( acausal events ) . in section 5 ,we show and analyse the applications of the proposed model in current decision problems . finally ,in section 6 , we conclude with some final remarks regarding the application of quantum - like bayesian networks to decision problems .most events are reduced to the principle of causality , which is the connection of phenomena where the cause gives rise to some effect .this is the philosophical principle that underlies our conception of natural law . 
under the principle of causality, some event can have more than one cause , in which none of them alone is sufficient to produce .causality is usually : ( 1 ) transitive , if some event is a cause of and is a cause of , then is also a cause of ; ( 2 ) irreflexible , an event can not cause itself ; and ( 3 ) antisymmetric , if is a cause of , then is not a cause of .the essence of causality is the generation and determination of one phenomenon by another .causality enables the representation of our knowledge regarding a given context through_ experience_. by experience , we mean that the observation of the relationships between events enables the detection of irrelevancies in the domain .this will lead to the construction of causal models with minimised relationships between events .bayesian networks are examples of such models . under the principle of causality ,two events that are not causally connected should not produce any effects .when some acausal events occur by producing an effect , it is called a coincidence .carl jung , believed that nothing happens by chance and , consequently , all events had to be connected between each other , not in a causal setting , but rather in a meaningful way . under this point of view , jung proposed the synchronicity principle .the synchronicity principle may occur as a single event of a chain of related events and can be defined by a significant coincidence which appears between a mental state and an event occurring in the external world .jung believed that two acausal events did not occur by chance , but rather by a shared meaning .therefore , in order to experience a synchronised event , one needs to extract the meaning of its symbols for the interpretation of the synchronicity .so , the synchronicity principle can be seen as a correlation between two acausal events which are connected through meaning .jung defended that the connection between a mental state and matter is due to the energy emerged from the emotional state associated to the synchronicity event .this metaphysical assertion was based on the fact that it is the person s interpretation that defines the meaning of a synchronous event .this implies a strong relation between the extraction of the semantic meaning of events and how one interprets it .if there is no semantic extraction , then there is no meaningful interpretation of the event , and consequently , there is no synchronicity .it is important to mention that the synchronicity principle is a concept that does not question or compete with the notion of causality .instead , it maintains that just as events may be connected by a causal line , they may also be connected by meaning .a grouping of events attached by meaning do not need to have an explanation in terms of cause and effect . in this work, we explore the consequences of the synchronicity principle applied to quantum states with high levels of uncertainty as a way to provide additional information to quantum - like probabilistic graphical models , which mainly contain cause / effect relationships .although the principles of probability are well established , such that synchronicity might be seen as the occurrence of coincidences , in the quantum mechanics realm , given the high levels of uncertainty that describe the quantum states , the coincidences or improbable occurrences happen quite often .the reason why we are turning to bayesian networks is because they are inspired in human cognition . 
it is easier for a person to combine pieces of evidence and to reason about them , instead of calculating all possible events and their respective beliefs . in the same way, bayesian networks also provide this link between human cognition and rational inductive inference . instead of representing the full joint distribution ,bayesian networks represent the decision problem in small modules that can be combined to perform inferences .only the probabilities which are actually needed to perform the inferences are computed .a classical bayesian network is a directed acyclic graph structure .each node represents a different random variable from a specific domain and each edge represents a direct influence from the source node to the target node .the graph also represents independence relationships between random variables and is followed by a conditional probability table which specifies the probability distribution of the current node given its parents .suppose that we have a bayesian network with three random variables with the following structure : . in order to determine the probability of node b , we would need to make the following computation based on equation [ eq : inference ] . a classical probability can be converted into a quantum probability amplitude in the following way .suppose that events form a set of mutually disjoint events , such that their union is all in the sample space , , for any other event .the classical law of total probability can be formulated like in equation [ eq : law_total_prob_c ] . the quantum law of total probability can be derived through equation [ eq : law_total_prob_c ] by applying born s rule : returning to our example , in order to convert the real probabilities in equation [ eq : bn_1 ] into quantum amplitudes , one needs to apply born s rule . in equation [ eq : qbn_1 ], the term corresponds to the quantum probability amplitude of the term ; the term corresponds to the quantum probability amplitude of the term and so on . expanding equation [ eq : qbn_1 ] , knowing that , , then equation [ eq : qbn_2 ] becomes : equation [ eq : qbn_2 ] can be rewritten as : a quantum - like bayesian network can be defined in the same way as a classical bayesian network with the difference that real probability numbers are replaced by quantum probability amplitudes .the quantum counterpart of the full joint probability distribution corresponds to the application of born s rule to equation [ eq : joint ] .this results in equation [ eq : joint_q ] , where corresponds to a quantum amplitude . when performing probabilistic inferences in bayesian networks , the probability amplitude of each assignment of the network is propagated and influences the probabilities of the remaining nodes . in order to perform inferences on the network, one needs to apply born s rule to the classical marginal probability distribution , just like in was presented in equation [ eq : qbn_3 ] .if we rewrite this equation with the notation presented in equation [ eq : bn_1 ] , then the quantum counterpart of the classical marginalization formula for inferences in bayesian networks becomes : in classical bayesian inference , normalisation of the inference scores is necessary due to the independence assumptions made in bayes rule . 
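a small numerical illustration of the two marginalisation rules above is given below for a toy two - node network ( the numbers and the single - phase assignment are ours , not those of the example network in the paper ) : the classical rule sums probabilities , while the quantum - like rule sums amplitudes of the form sqrt(probability) times a phase factor and only then takes the squared modulus , which is where the interference term comes from ; the normalisation it performs is exactly the one discussed next .

```python
import numpy as np

# toy network A -> B with binary variables (illustrative numbers only)
P_A = {0: 0.3, 1: 0.7}
P_B_given_A = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.4, (1, 1): 0.6}

def classical_marginal(b):
    # classical law of total probability
    return sum(P_A[a] * P_B_given_A[(a, b)] for a in (0, 1))

def quantum_marginal(b, theta):
    # Born rule: amplitudes sqrt(p) * exp(i*theta) are summed over the unknown
    # variable and only then squared, producing classical terms + interference.
    amp = sum(np.sqrt(P_A[a] * P_B_given_A[(a, b)]) * np.exp(1j * theta[a])
              for a in (0, 1))
    return abs(amp) ** 2

theta = {0: 0.0, 1: np.pi / 3}              # quantum phase parameters
raw = {b: quantum_marginal(b, theta) for b in (0, 1)}
norm = sum(raw.values())                    # normalisation of the final scores
print({b: round(classical_marginal(b), 4) for b in (0, 1)})
print({b: round(raw[b] / norm, 4) for b in (0, 1)})
```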
in quantum - like inferences , we need to normalize the final scores , not only because of the same independence assumptions , but also because of the quantum interference term . if the conditional probability tables of the proposed quantum - like bayesian network were double stochastic , then this normalization would not be necessary . but , since in the proposed model we do not have this constraint , then a normalization is required after the computation of the probabilistic inference . following equation [ eq : final1 ] , when equals zero , then it is straightforward that quantum probability theory converges to its classical counterpart , because the interference term will be zero . for non - zero values , equation [ eq : final1 ] will produce interference effects that can affect the classical probability destructively ( when the interference term is smaller than zero ) or constructively ( when it is bigger than zero ) . additionally , equation [ eq : final1 ] will lead to a large amount of parameters when the number of events increases . for binary random variables , we will end up with parameters . a semantic network is often used for knowledge representation . it corresponds to a directed or undirected graph in which nodes represent concepts and edges reflect semantic relations . the extraction of the semantic network from the original bayesian network is a necessary step in order to find variables that are only connected in a meaningful way ( and not necessarily connected by cause / effect relationships ) , just as stated in the synchronicity principle . consider the bayesian network in figure [ fig : structure ] . in order to extract its semantic meaning , we need to take into account the context of the network . suppose that you have a new burglar alarm installed at home . it can detect burglary , but also sometimes responds to earthquakes . john and mary are two neighbours , who promised to call you when they hear the alarm . john always calls when he hears the alarm , but sometimes confuses telephone ringing with the alarm and calls too . mary likes loud music and sometimes misses the alarm . from this description , we extracted the semantic network , illustrated in figure [ fig : semantic_web ] , which represents the meaningful connections between concepts . the following knowledge was extracted . it is well known that catastrophes cause panic among people and , consequently , increase crime rates , more specifically burglaries . so , a new pair of synchronised variables between _ earthquake _ and _ burglar _ emerges . moreover , _ john _ and _ mary _ both derive from the same concept , so these two nodes will also be synchronised . these synchronised variables mean that , although there is no explicit causal connection between these nodes in the bayesian network , they can become correlated through their meaning .
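returning to the growth of quantum parameters mentioned earlier , the sketch below makes it concrete before the heuristic is introduced : marginalising over k unknown binary variables leaves 2^k amplitude terms in the born - rule sum , and every unordered pair of them contributes one interference term . the paper s exact parameter count did not survive extraction , so only this generic counting argument is shown .

```python
def interference_terms(k):
    # Marginalising over k unknown binary variables gives 2**k amplitudes;
    # expanding |sum of amplitudes|**2 yields one cross (interference) term
    # per unordered pair of amplitudes.
    n = 2 ** k
    return n * (n - 1) // 2

for k in range(1, 6):
    print(k, 2 ** k, interference_terms(k))
```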
in section [ sec : qlbn ] , it was presented that equation [ eq : final1 ] generates an exponential number of quantum parameters according to the number of unknown variables . if nothing is told about how to assign these quantum parameters , then we end up with an interval of possible probabilities . for instance , figure [ fig : interval ] shows that the probabilities for the different random variables of the quantum - like bayesian network can range over an interval of possible probability values . this means that one needs some kind of heuristic function that is able to assign these quantum parameters automatically . we define the synchronicity heuristic in a similar way to jung s principle : two variables are said to be synchronised , if they share a meaningful connection between them . this meaningful connection can be obtained through a semantic network representation of the variables in question . this will enable the emergence of new meaningful connections that would be nonexistent when considering only cause / effect relationships . the quantum parameters are then tuned in such a way that the angle formed by these two variables , in a hilbert space , is the smallest possible , this way forcing acausal events to be correlated . for the case of binary variables , the synchronicity heuristic is associated with a set of two variables , which can be in one of four possible states . the hilbert space is partitioned according to these four states , as exemplified in figure [ fig : synchr ] . the angles formed by the combination of these four possible states are detailed in the table also in figure [ fig : synchr ] . in the right extreme of the hilbert space represented in figure [ fig : synchr ] , we encoded it as the occurrence of a pair of synchronised variables . so , when two synchronised variables occur , the smallest angle that these vectors make between each other corresponds to . the most dissimilar vector corresponds to the situation where two synchronised variables do not occur . so , we set to be the largest angle possible , that is . the other situations correspond to the scenarios where one synchronised variable occurs and the other one does not . in figure [ fig : synchr ] , the parameter is chosen according to the smallest angle that these two vectors , and , make between each other , that is . we are choosing the smallest angle , because we want to correlate these two acausal events by forcing the occurrence of _ coincidences _ between them , just like described in the synchronicity principle . the axis corresponding to and were ignored , because they correspond to classical probabilities ( ) . we are taking steps of inspired by the quantum law of interference proposed by , in which the authors suggest to replace the quantum interference term by . we queried each variable of the network in figure [ fig : structure ] without providing any observation . we performed the following queries : , , , and . we then extracted both classical and quantum inferences and represented the results in the graph in figure [ fig : results1 ] . figure [ fig : results1 ] shows that , when nothing is known about the state of the world , quantum probabilities tend to increase and overcome their classical counterpart . in quantum theory , when nothing is observed , all nodes of the bayesian network are in a superposition state . for each possible configuration in this superposition state , a probability amplitude is associated to it .
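the angle assignment just described can be summarised by a small helper . the concrete angle values in the original table were lost in extraction , so the defaults below only preserve the ordering stated above -- smallest phase difference when both synchronised events occur , largest when neither occurs , an intermediate value otherwise -- and should be read as placeholders rather than the paper s values .

```python
import numpy as np

def synchronicity_angle(x_occurs, y_occurs,
                        both=0.0, neither=np.pi, mixed=np.pi / 2):
    # Map the joint state of a pair of semantically linked (acausal) binary
    # variables to a phase difference for the interference term.  Default
    # angles are placeholders; only their ordering follows the heuristic.
    if x_occurs and y_occurs:
        return both                  # strongest constructive correlation
    if not x_occurs and not y_occurs:
        return neither               # most dissimilar configuration
    return mixed                     # one occurs, the other does not

for state in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(state, synchronicity_angle(*state))
```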
during the superposition state ,the amplitudes of the probabilities of the nodes of the bayesian network start to be modified due to the interference effects .if one looks at the nodes as waves crossing the network from different locations , these waves can crash between each other , causing them to be either destroyed or to be merged together .this interference of the waves is controlled through the synchronicity principle by linking acausal events . when one starts to provide information to the bayesian network , then the superposition state collapses into another quantum state , affecting the configuration of the remaining possible states of the network . moreover , by making some observation to the network , we are reducing the total amount of uncertainty and , consequently , the reduction of the waves crossing the network ( table [ tab : results_one_ev ] ) . in table [ tab : results_one_ev ] there are two pairs of synchronised variables : ( earthquake , burglar ) and ( marycalls , johncalls ) .the quantum probability of has increased almost the same quantity as for the probability ( 56.37% for earthquake and 59.34% for burglar ) . in the same way , when we observe that , then the percentage of a burglary increased 11.38% , whereas earthquake increased a percentage of 10.13% towards its classical counterpart .in this work , we analysed a quantum - like bayesian network that puts together cause / effect relationships and semantic similarities between events .these similarities constitute acausal connections according to the synchronicity principle and provide new relationships to the graphical models . as a consequence , events can be represented in vector spaces , in which quantum parameters are determined by the similarities that these vectors share between them . in the realm of quantum cognition , quantum parameters might represent the correlation between events ( beliefs ) in a meaningful acausal relationship .the proposed quantum - like bayesian network benefits from the same advantages of classical bayesian networks : ( 1 ) it enables a visual representation of all relationships between all random variables of a given decision scenario , ( 2 ) can perform inferences over unobserved variables , that is , can deal with uncertainty , ( 3 ) enables the detection of independent and dependent variables more easily .moreover , the mapping to a quantum - like approach leads to a new mathematical formalism for computing inferences in bayesian networks that takes into account quantum interference effects .these effects can accommodate puzzling phenomena that could not be explained through a classical bayesian network .this is probably the biggest advantage of the proposed model . a network structure that can combine different sources of knowledge in order to model a more complex decision scenario and accommodate violations to the sure thing principle . with this work , we argue that , when presented with a problem , we perform a semantic categorisation of the symbols that we extract from the given problem through our thoughts .since our thoughts are abstract , cause / effect relationships might not be the most appropriate mechanisms to simulate interferences between them . the synchronicity principle seems to fit more in this context , since our thoughts can relate to each other from meaningful connections , rather than cause / effect relationships .we end this work with some reflections . 
throughout the quantum cognition literature, quantum models have been proposed in order to explain paradoxical findings. the decision problems addressed, however, are very small: they are modelled with at most two random variables. decision problems with more random variables suffer from the exponential generation of quantum parameters (as in the burglar/alarm bayesian network). how can one model such more complex problems, given that the only apparent way to do so is through heuristic functions that assign values to the quantum parameters? and even with this method, given the lack of experimental data, how can one validate such functions? is the use of these functions a correct way to tackle the problem, or is it wrong to proceed in this direction? how could such an experiment be conducted? is it even possible to show violations of the laws of probability theory for more complex problems?
|
we analyse a quantum-like bayesian network that puts together cause/effect relationships and semantic similarities between events. these semantic similarities constitute acausal connections according to the synchronicity principle and provide new relationships to quantum-like probabilistic graphical models. as a consequence, beliefs (or any other events) can be represented in vector spaces, in which quantum parameters are determined by the similarities that these vectors share between them. events connected by a semantic meaning do not need to have an explanation in terms of cause and effect. keywords: quantum cognition; quantum-like bayesian networks; synchronicity principle
|
this is a reaction to leslie lamport s `` processes are in the eye of the beholder '' .lamport writes : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a concurrent algorithm is traditionally represented as the composition of processes .we show by an example that processes are an artifact of how an algorithm is represented .the difference between a two - process representation and a four - process representation of the same algorithm is no more fundamental than the difference between and . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ to demonstrate his thesis , lamport uses two different programs for a first - in , first - out ring buffer of size .he represents the two algorithms by temporal formulas and proves the equivalence of the two temporal formulas .we analyze in what sense the two algorithms are and are not equivalent . there is no one notion of equivalence appropriate for all purposes and thus the `` insubstantiality of processes '' may itself be in the eye of the beholder .there are other issues where we disagree with lamport .in particular , we give a direct equivalence proof for two programs without representing them by means of temporal formulas .this paper is self - contained . in the remainder of this section, we explain the two ring buffer algorithms and discuss our disagreements with lamport . in section [ eaintro ] ,we give a brief introduction to evolving algebras . in section [ ringeas ], we present our formalizations of the ring buffer algorithms as evolving algebras . in section [ equivpf ] , we define a version of lock - step equivalence and prove that our formalizations of these algorithms are equivalent in that sense .finally , we discuss the inequivalence of these algorithms in section [ inequiv ] .the ring buffer in question is implemented by means of an array of elements . the input ( starting with )is stored in slot until it is sent out as the output .items may be placed in the buffer if and only if the buffer is not full ; of course , items may be sent from the buffer if and only if the buffer is not empty. 
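for concreteness, here is a minimal sequential python sketch of such an n-slot ring buffer with exactly the two guards just described (an item may be put only when the buffer is not full, and taken only when it is not empty); the counter names are mine, chosen to suggest put and get, and nothing here is meant to render lamport's programs themselves.

class RingBuffer:
    # an n-slot fifo buffer; input number v is kept in slot v mod n until it
    # is sent out

    def __init__(self, n):
        self.n = n
        self.buf = [None] * n
        self.p = 0          # number of items received so far
        self.g = 0          # number of items sent out so far

    def can_put(self):      # buffer not full
        return self.p - self.g < self.n

    def can_get(self):      # buffer not empty
        return self.g < self.p

    def put(self, datum):
        assert self.can_put()
        self.buf[self.p % self.n] = datum
        self.p += 1

    def get(self):
        assert self.can_get()
        datum = self.buf[self.g % self.n]
        self.g += 1
        return datum

rb = RingBuffer(3)
for v in "abcde":
    if rb.can_put():
        rb.put(v)
print([rb.get() for _ in range(rb.p - rb.g)])   # -> ['a', 'b', 'c']

the two decompositions discussed in what follows split exactly these guarded actions among processes: one by action (input versus output), the other by buffer slot.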
input number can not occur until ( 1 ) all previous inputs have occurred and ( 2 ) either or else output number has occurred .output number can not occur until ( 1 ) all previous outputs have occurred and ( 2 ) input number has occurred .these dependencies are illustrated pictorially in figure [ equivsketch1 ] , where circles represent the actions to be taken and arrows represent dependency relationships between actions .lamport writes the two programs in a semi - formal language reminiscent of csp which we call pseudo - csp . the first program , which we denote by , is shown in figure [ rpcsp ] .it operates the buffer using two processes ; one handles input into the buffer and the other handles output from the buffer .it gives rise to a row - wise decomposition of the graph of moves , as shown in figure [ equivsketch2 ] .the second program , which we denote by , is shown in figure [ cpcsp ] .it uses processes , each managing input and output for one particular slot in the buffer .it gives rise to a column - wise decomposition of the graph of moves , as shown in figure [ equivsketch3 ] .in pseudo - csp , the semicolon represents sequential composition , represents parallel composition , and represents iteration .the general meanings of ? and !are more complicated ; they indicate synchronization . in the context of and ,`` in ? '' is essentially a command to place the current input into the given slot , and `` out ! ''is essentially a command to send out the datum in the given slot as an output . in section [ ringeas ] , we will give a more complete explanation of the two programs in terms of evolving algebras .after presenting the two algorithms in pseudo - csp , lamport describes them by means of formulas in tla , the temporal logic of actions , and proves the equivalence of the two formulas in tla . he does not prove that the tla formulas are equivalent to the corresponding pseudo - csp programs .the pseudo - csp presentations are there only to guide the reader s intuition .as we have mentioned , pseudo - csp is only semi - formal ; neither the syntax nor the semantics of it is given precisely .however , lamport provides a hint as to why the two programs themselves are equivalent .there is a close correspondence of values between and , and between and .figure [ pptable ] , taken from , illustrates the correspondence between and for .the row describes the values of variables and after inputs .the predicate isnext(pp , i ) is intended to be true only for one array position at any state ( the position that is going to be active ) ; the box indicates that position .there are three issues where we disagree with lamport .[ [ issue-1-the - notion - of - equivalence . ] ] issue 1 : the notion of equivalence .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + what does it mean that two programs are equivalent ? in our opinion , the answer to the question depends on the desired abstraction .there are many reasonable definitions of equivalence . hereare some examples . 
1 .the two programs produce the same output on the same input .the two programs produce the same output on the same input , and the two programs are of the same time complexity ( with respect to your favorite definition of time complexity ) .[ ordertime ] 3 .given the same input , the two programs produce the same output and take _ precisely _ the same amount of time .[ sametime ] 4 .no observer of the execution of the two programs can detect any difference .[ obs ] the reader will be able to suggest numerous other reasonable definitions for equivalence .for example , one could substitute space for time in conditions ( [ ordertime ] ) and ( [ sametime ] ) above . the nature of an `` observer '' in condition ( [ obs ] ) admits different plausible interpretations , depending upon what aspects of the execution the observer is allowed to observe .let us stress that we do not promote any particular notion of equivalence or any particular class of such notions .we only note that there are different reasonable notions of equivalence and there is no one notion of equivalence that is best for all purposes .the two ring - buffer programs are indeed `` strongly equivalent '' ; in particular , they are equivalent in the sense of definition ( [ sametime ] ) above .however , they are not equivalent in the sense of definition ( [ obs ] ) for certain observers , or in the sense of some space - complexity versions of definitions ( [ ordertime ] ) and ( [ sametime ] ) .see section [ inequiv ] in this connection .[ [ issue-2-representing - programs - as - formulas . ] ] issue 2 : representing programs as formulas .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + again , we quote lamport : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we will not attempt to give a rigorous meaning to the program text .programming languages evolved as a method of describing algorithms to compilers , not as a method for reasoning about them .we do not know how to write a completely formal proof that two programming language representations of the ring buffer are equivalent . in section 2, we represent the program formally in tla , the temporal logic of actions . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we believe that it is not only possible but also beneficial to give a rigorous meaning to one s programming language and to prove the desired equivalence of programs directly .the evolving algebra method has been used to give rigorous meaning to various programming languages . in a similar way, one may try to give formal semantics to pseudo - csp ( which is used in fact for describing algorithms to humans , not compilers ) .taking into account the modesty of our goals in this paper , we do not do that and represent and directly as evolving algebra programs and then work with the two evolving algebras .one may argue that our translation is not perfectly faithful .of course , no translation from a semi - formal to a formal language can be proved to be faithful .we believe that our translation is reasonably faithful ; we certainly did not worry about the complexity of our proofs as we did our translations .also , we do not think that lamport s tla description of the pseudo - csp is perfectly faithful ( see the discussion in subsection 3.2 ) and thus we have two slightly different ideals to which we can be faithful .in fact , we do not think that perfect faithfulness is crucially important here .we give two programming language representations and of the ring buffer reflecting different decompositions of the buffer into processes .confirming lamport s thesis , we prove that the two programs are equivalent in a very strong sense ; our equivalence proof is direct .then we point out that our programs are inequivalent according to some natural definitions of equivalence .moreover , the same inequivalence arguments apply to and as well .[ [ issue-3-the - formality - of - proofs . 
] ] issue 3 : the formality of proofs .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + continuing , lamport writes : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we now give a hierarchically structured proof that and [ the tla translations of and gh ] are equivalent .the proof is completely formal , meaning that each step is a mathematical formula .english is used only to explain the low - level reasoning .the entire proof could be carried down to a level at which each step follows from the simple application of formal rules , but such a detailed proof is more suitable for machine checking than human reading . our complete proof , with `` q.e.d . ''steps and low - level reasoning omitted , appears in appendix a. 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we prefer to separate the process of explaining a proof to people from the process of computer - aided verification of the same proof .a human - oriented exposition is much easier for humans to read and understand than expositions attempting to satisfy both concerns at once .writing a good human - oriented proof is the art of creating the correct images in the mind of the reader .such a proof is amenable to the traditional social process of debugging mathematical proofs .granted , mathematicians make mistakes and computer - aided verification may be desirable , especially in safety - critical applications . in this connectionwe note that a human - oriented proof can be a starting point for mechanical verification .let us stress also that a human - oriented proof need not be less precise than a machine - oriented proof ; it simply addresses a different audience .[ [ revisiting - lamports - thesis ] ] revisiting lamport s thesis + + + + + + + + + + + + + + + + + + + + + + + + + + + these disagreements do not mean that our position on `` the insubstantiality of processes '' is the direct opposite of lamport s .we simply point out that `` the insubstantiality of processes '' may itself be in the eye of the beholder .the same two programs can be equivalent with respect to some reasonable definitions of equivalence and inequivalent with respect to others .evolving algebras were introduced in ; a more detailed definition has appeared in . since its introduction , this methodology has been used for a wide variety of applications : programming language semantics , hardware specification , protocol verification , _etc._. it has been used to show equivalences of various kinds , including equivalences across a variety of abstraction levels for various real - world systems , _e.g. _ .see for numerous other examples .we recall here only as much of evolving algebra definitions as needed in this paper . evolving algebras ( often abbreviated _ ealgebras _ or _ ea _ ) have many other capabilities not shown here : for example , creating or destroying agents during the evolution . 
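before the formal definitions, the following python fragment previews the view of a state as a memory of locations and of rules as producers of update sets, using the two-token ring example introduced below. the encoding of a location as a (function name, argument tuple) pair and the concrete three-node ring are only an illustration of the definitions, not part of the formalism.

# a state as a finite map from locations to values, where a location is a
# pair (function name, argument tuple); the ring below is the two-token ring
# example used in the definitions that follow
state = {
    ("next", ("a",)): "b", ("next", ("b",)): "c", ("next", ("c",)): "a",
    ("token1", ()): "a", ("token2", ()): "c",
    ("colored", ("a",)): False, ("colored", ("b",)): False,
    ("colored", ("c",)): False,
}

def consistent(updates):
    # no two updates may write different values to the same location
    seen = {}
    for loc, val in updates:
        if loc in seen and seen[loc] != val:
            return False
        seen[loc] = val
    return True

def fire(state, updates):
    # fire a consistent update set; an inconsistent set changes nothing
    if consistent(updates):
        for loc, val in updates:
            state[loc] = val

def color_if_tokens_meet(state):
    # the conditional rule "if token1 = token2 then colored(token1) := true"
    t1, t2 = state[("token1", ())], state[("token2", ())]
    return [(("colored", (t1,)), True)] if t1 == t2 else []

fire(state, color_if_tokens_meet(state))   # no effect: the tokens differ
fire(state, [(("token1", ()), "c")])       # move token1 onto token2
fire(state, color_if_tokens_meet(state))
print(state[("colored", ("c",))])          # -> True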
those already familiar with ealgebras may wish to skip this section .states are essentially logicians structures except that relations are treated as special functions .they are also called _ static algebras _ and indeed they are algebras in the sense of the science of universal algebra .a _ vocabulary _ is a finite collection of function names , each of fixed arity .every vocabulary contains the following _ logic symbols _ : nullary function names _ true , false , undef _ , the equality sign , ( the names of ) the usual boolean operations and ( for convenience ) a unary function name bool .some function symbols are tagged as relation symbols ( or predicates ) ; for example , bool and the equality sign are predicates . a _ state _ _ of vocabulary _ is a non - empty set ( the _ basic set _ or _ superuniverse _ of ) , together with interpretations of all function symbols in over ( the _ basic functions _ of ) .a function symbol of arity is interpreted as an -ary operation over ( if , it is interpreted as an element of ) .the interpretations of predicates ( the _ basic relations _ ) and the logic symbols satisfy the following obvious requirements .the elements ( more exactly , the interpretations of ) _ true _ and _ false _ are distinct .these two elements are the only possible values of any basic relation and the only arguments where bool produces _true_. they are operated upon in the usual way by the boolean operations .the interpretation of _ undef _ is distinct from those of _true _ and _ false_. the equality sign is interpreted as the equality relation .we denote the value of a term in state by . domains .let be a basic function of arity and range over -tuples of elements of . if is a basic relation then the _ domain of _ at is .otherwise the _ domain of _ at is . universes .a basic relation may be viewed as the set of tuples where it evaluates to _true_. if is unary it can be viewed as a _universe_. for example , bool is a universe consisting of two elements ( named ) _ true _ and _ false_. universes allow us to view states as many - sorted structures. types .let be a basic function of arity and be universes .we say that is _ of type _ in the given state if the domain of is and for every in the domain of .in particular , a nullary is of type if ( the value of ) belongs to .consider a directed ring of nodes with two tokens ; each node may be colored or uncolored .we formalize this as a state as follows .the superuniverse contains a non - empty universe nodes comprising the nodes of the ring . also presentis the obligatory two - element universe bool , disjoint from nodes .finally , there is an element ( interpreting ) _ undef _ outside of bool and outside of nodes .there is nothing else in the superuniverse .( usually we skip the descriptions of bool and _ undef _ ) .a unary function next indicates the successor to a given node in the ring .nullary functions token1 and token2 give the positions of the two tokens .a unary predicate colored indicates whether the given node is colored .there is a way to view states which is unusual to logicians .view a state as a sort of memory .define a _ location _ of a state to be a pair , where is a function name in the vocabulary of and is a tuple of elements of ( the superuniverse of ) whose length equals the arity of .( if is nullary , is simply . ) in the two - token ring example , let be any node ( that is , any element of the universe nodes ). 
then the pair ( next, ) is a location .an _ update _ of a state is a pair , where is a location of and is an element of . to at , put into the location ; that is , if , redefine to interpret as ; nothing else ( including the superuniverse ) is changed .we say that an update of state is _ trivial _ if is the content of in .in the two - token ring example , let be any node .then the pair ( token1 , ) is an update . to fire this update , move the first token to the position .remark to a curious reader . if = ( next, ) , then ( ) is also an update .to fire this update , redefine the successor of ; the new successor is itself .this update destroys the ring ( unless the ring had only one node ) . to guard from such undesirable changes ,the function next can be declared static ( see ) which will make any update of next illegal .an _ update set _ over a state is a set of updates of .an update set is _ consistent _ at if no two updates in the set have the same location but different values . to fire a consistent set at , fire all its members simultaneously ; to fire an inconsistent set at , do nothing .in the two - token ring example , let be two nodes. then the update set is consistent if and only if .we introduce rules for changing states .the semantics for each rule should be obvious . at a given state whose vocabulary includes that of a rule , gives rise to an update set ; to execute at , one fires .we say that is _ enabled _ at if is consistent and contains a non - trivial update .we suppose below that a state of discourse has a sufficiently rich vocabulary .an _ update instruction _ has the form = = where is a function name of arity and each is a term .( if we write `` '' rather than `` '' . )the update set contains a single element , where is the value of at and with . in other words , to execute at , set to and leave the rest of the state unchanged . in the two - token ring example , `` token1 : = next(token2 ) '' is an update instruction . to execute it , move token 1 to the successor of ( the current position of ) token 2 .a _ block rule _ is a sequence of transition rules . to execute at , execute all the constituent rules at simultaneously .more formally , .( one is supposed to write `` * block * '' and `` * endblock * '' to denote the scope of a block rule ; we often omit them for brevity . ) in the two - token ring example , consider the following block rule : = = token1 : = token2 + token2 : = token1 to execute this rule , exchange the tokens .the new position of token1 is the old position of token2 , and the new position of token2 is the old position of token1 .a _ conditional rule _ has the form = = * if * * then * * endif * where ( the _ guard _ ) is a term and is a rule . if holds ( that is , has the same value as _ true _ ) in then ; otherwise .( a more general form is `` * if * * then * * else * * endif * '' , but we do not use it in this paper . ) in the two - token ring example , consider the following conditional rule : = = * if * token1 = token2 * then * + colored(token1 ) : = true + * endif * its meaning is the following : if the two tokens are at the same node , then color that node .basic rules are sufficient for many purposes , e.g. 
to give operational semantics for the c programming language , but in this paper we need two additional rule constructors .the new rules use variables .formal treatment of variables requires some care but the semantics of the new rules is quite obvious , especially because we do not need to nest constructors with variables here .thus we skip the formalities and refer the reader to . as above is a state of sufficiently rich vocabulary . a _ parallel synchronous rule _( or _ declaration rule _, as in ) has the form : = = = * var * * ranges over * + + * endvar * where is a variable name , is a universe name , and can be viewed as a rule template with free variable . to execute at , execute simultaneously all rules where ranges over . in the two - token ring example , ( the execution of ) the following rule colors all nodes except for the nodes occupied by the tokens .= = = * var * * ranges over * nodes + * if * token1 * and * token2 * then * + colored(x ) : = true + * endif * + * endvar * a _ choice rule _ has the form = = = * choose * * in * + + * endchoose * where , and are as above .it is nondeterministic . to execute the choice rule ,choose arbitrarily one element in and execute the rule .in the two - token ring example , each execution of the following rule either colors an unoccupied node or does nothing .= = = * choose * * in * nodes + * if * token1 * and * token2 * then * + colored(x ) : = true + * endif * + * endchoose * let be a vocabulary that contains the universe _agents _ , the unary function _ mod _ and the nullary function _me_. a _ distributed ea program _ of vocabulary consists of a finite set of _ modules _ , each of which is a transition rule with function names from .each module is assigned a different name ; these names are nullary function names from different from _me_. intuitively , a module is the program to be executed by one or more agents .a ( global ) _ state _ of is a structure of vocabulary \{me } where different module names are interpreted as different elements of and the function _ mod _ assigns( the interpretations of ) module names to elements of _ agents _ ; _ mod _ is undefined ( that is , produces _ undef _ ) otherwise .if _ mod _ maps an element to a module name , we say that is an _ agent _ with program . for each agent , view is the reduct of to the collection of functions mentioned in the module mod( ) , expanded by interpreting _ me _ as . think about view as the local state of agent corresponding to the global state .we say that an agent is _ enabled _ at if mod( ) is enabled at view ; that is , if the update set generated by mod( ) at view is consistent and contains a non - trivial update .this update set is also an update set over . to at , execute that update set . in this paper , agents are not created or destroyed . taking this into account , we give a slightly simplified definition of runs . a _ run _ of a distributed ealgebra program of vocabulary from the initial state is a triple satisfying the following conditions . 1 .: : , the set of _ moves _ of , is a partially ordered set where every is finite .+ intuitively , means that move completes before move begins .if is totally ordered , we say that is a _ sequential _ run .: assigns agents ( of ) to moves in such a way that every non - empty set is linearly ordered .+ intuitively , is the agent performing move ; every agent acts sequentially .3 . 
: : maps finite initial segments of ( including ) to states of .+ intuitively , is the result of performing all moves of ; is the initial state .states are the _ states of . 4 . : : _ coherence_. if is a maximal element of a finite initial segment of , and , then is enabled at and is obtained by firing at . it may be convenient to associate particular states with single moves .we define .the definition of runs above allows no interaction between the agents on the one side and the external world on the other . in such a case ,a distributed evolving algebra is given by a program and the collection of initial states . in a more general case , the environment can influence the evolution .here is a simple way to handle interaction with the environment which suffices for this paper .declare some basic functions ( more precisely , some function names ) _ external_. intuitively , only the outside world can change them .if is a state of let be the reduct of to ( the vocabulary of ) non - external functions .replace the coherence condition with the following : 4 .: : _ coherence_. if is a maximal element of a finite initial segment of , and , then is enabled in and is obtained by firing at and forgetting the external functions . in applications ,external functions usually satisfy certain constraints .for example , a nullary external function input may produce only integers . to reflect such constraints ,we define _ regular runs _ in applications .a distributed evolving algebra is given by a program , the collection of initial states and the collection of regular runs .( of course , regular runs define the initial states , but it may be convenient to specify the initial states separately . )the evolving algebras , our `` official '' representations of and , are given in subsections [ officialrea ] and [ officialcea ] ; see figures [ rea ] and [ cea ] .the reader may proceed there directly and ignore the preceding subsections where we do the following .we first present in subsection [ r1sect ] an elaborate ealgebra r1 that formalizes together with its environment ; r1 expresses our understanding of how works , how it communicates with the environment and what the environment is supposed to do .notice that the environment and the synchronization magic of csp are explicit in r1 . in subsection [ r2sect ], we then transform r1 into another ealgebra r2 that performs synchronization implicitly .we transform r2 into by parallelizing the rules slightly and making the environment implicit ; the result is shown in subsection [ officialrea ] .( in a sense , r1 , r2 , and are all equivalent to another another , but we will not formalize this . 
)we performed a similar analysis and transformation to create ; we omit the intermediate stages and present directly in subsection [ officialcea ] .the program for r1 , given in figure [ r1 ] , contains six modules .the names of the modules reflect the intended meanings .in particular , modules bufffrontend and buffbackend correspond to the two processes receiver and sender of .= = = ' '' '' + module inputenvironment + * if * mode(me ) = work * then * + * choose * * in * data + inputdatum : = + * endchoose * + mode(me ) : = ready + * endif * + ' '' '' + module outputenvironment + * if * mode(me ) = work * then * mode(me ) : = ready * endif * + ' '' '' + module inputchannel + * if * mode(sender(me ) ) = ready * and * mode(receiver(me ) ) = ready * then * + buffer( ) : = inputdatum + mode(sender(me ) ) : = work + mode(receiver(me ) ) : = work + * endif * + ' '' '' + module outputchannel + * if * mode(sender(me ) ) = ready * and * mode(receiver(me ) ) = ready * then * + outputdatum : = buffer( ) + mode(sender(me ) ) : = work + mode(receiver(me ) ) : = work + * endif * + ' '' '' + module bufffrontend + rule frontwait + * if* mode(me ) = wait * and * * then * mode(me ) : = ready * endif * + rule frontwork + * if * mode(me ) = work * then * : = , mode(me ) : = wait * endif * + ' '' '' + module buffbackend + rule backwait + * if * mode(me ) = wait * and * * then * mode(me ) : = ready * endif * + rule backwork + * if * mode(me ) = work * then * : = , mode(me ) : = wait * endif * + ' '' '' comment for ealgebraists . in terms of ,the inputchannel agent is a two - member team comprising the inputenvironment and the bufffrontend agents ; functions sender and receiver are similar to functions member and member .similarly the outputchannel agent is a team .this case is very simple and one can get rid of unary functions sender and receiver by introducing names for the sending and receiving agents .comment for csp experts .synchronization is implicit in csp .it is a built - in magic of csp .we have doers of synchronization .( in this connection , the reader may want to see the ea treatment of occam in . )nevertheless , synchronization remains abstract . in a sensethe abstraction level is even higher : similar agents can synchronize more than two processes .the nondeterministic formalizations of the input and output environments are abstract and may be refined in many ways . [ [ initial - states . ] ] initial states .+ + + + + + + + + + + + + + + + in addition to the function names mentioned in the program ( and the logic names ) , the vocabulary of r1 contains universe names data , integers , , , modes and a subuniverse senders - and - receivers of agents .initial states of r1 satisfy the following requirements . 1 .the universe integers and the arithmetical function names mentioned in the program have their usual meanings .the universe consists of integers modulo identified with the integers .the universe is similar .buffer is of type data ; inputdatum and outputdatum take values in data .2 . the universe agents contains six elements to which mod assigns different module names .we could have special nullary functions to name the six agents but we do nt ; we will call them with respect to their programs : the input environment , the output environment , the input channel , the output channel , buffer s front end and buffer s back end respectively . 
sender(the input channel ) = the input environment , receiver(the input channel ) = buffer s front end , sender(the output channel ) = buffer s back end , and receiver(the output channel ) = the output environment .the universe senders - and - receivers consists of the two buffer agents and the two environment agents .nullary functions ready , wait and work are distinct elements of the universe modes .the function mode is defined only over senders - and - receivers . for the sake of simplicity of exposition, we assign particular initial values to mode : it assigns wait to either buffer agent , work to the input environment agent , and ready to the output environment agent . [ [ analysis ] ] analysis + + + + + + + + in the rest of this subsection , we prove that r1 has the intended properties . in every state of any run of r1 ,the dynamic functions have the following ( intended ) types . 1 .mode : senders - and - receivers modes . 2 .inputdatum , outputdatum : data .3 . : integers .buffer : data . by induction over states .let be an arbitrary run of r1 . in every state of , .furthermore , if then mode(buffer s back end ) = wait , and if then mode(buffer s front end ) = wait .an obvious induction .see lemma [ lem1 ] in this regard .[ orderingr1 ] in any run of r1 , we have the following . 1 .if is a move of the input channel and is a move of buffer s front end then either or .if is a move of the output channel and is a move of buffer s back end then either or .3 . for any buffer slot ,if is a move of the input channel involving slot and is a move of the output channel involving slot then either or .let be a run of r1 . 1 .[ orderingpt1 ] suppose by contradiction that and are incomparable and let so that , by the coherence requirements on the run , both agents are enabled at , which is impossible because their guards are contradictory .+ since the input channel is enabled , the mode of buffer s front end is ready at . but then buffer s front end is disabled at , which gives the desired contradiction .similar to part ( [ orderingpt1 ] ) .3 . suppose by contradiction that and are incomparable and let so that both agents are enabled at . since involves , mod in .similarly , mod in .hence mod in . by the p and g lemma , either or in . in the first case ,the mode of buffer s back end is wait and therefore the output channel is disabled . in the second case ,the mode of buffer s front end is wait and therefore the input channel is disabled .in either case , we have a contradiction .recall that the state of move is . by the coherence requirement ,the agent is enabled in .consider a run of r1 .let ( respectively , ) be the move of the input channel ( respectively , the output channel ) .the value of inputdatum in ( that , is the datum to be transmitted during ) is the _ input datum _ , and the sequence is the _ input data sequence_. ( it is convenient to start counting from rather than . ) similarly , the value of outputdatum in is the _ output datum of _ and the sequence is the _ output data sequence_. 
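to make the interplay of modes and counters in r1 easier to experiment with, here is a deliberately simplified python simulation of its six agents. it is sequential (agents are fired one at a time under a fixed round-robin schedule rather than as a partially ordered run), the input environment takes its data from a fixed list instead of choosing nondeterministically, and updates are applied in place rather than collected into update sets; the buffer size and the input stream are arbitrary. it is an executable illustration of the module texts of figure [ r1 ], not a faithful rendering of the distributed semantics.

N = 3                                  # buffer size, chosen arbitrarily
DATA = list("abcde")                   # stream offered by the input environment

# a toy global state; a shallow imitation of an evolving-algebra state
state = {"p": 0, "g": 0, "buffer": [None] * N,
         "inputdatum": None, "outputdatum": None, "outputs": [],
         "mode": {"in_env": "work", "out_env": "ready",
                  "front": "wait", "back": "wait"}}

def input_environment(s):              # module inputenvironment
    if s["mode"]["in_env"] == "work" and DATA:
        s["inputdatum"] = DATA.pop(0)  # deterministic stand-in for "choose"
        s["mode"]["in_env"] = "ready"

def front_end(s):                      # rules frontwait and frontwork
    if s["mode"]["front"] == "wait" and s["p"] - s["g"] < N:
        s["mode"]["front"] = "ready"
    elif s["mode"]["front"] == "work":
        s["p"] += 1
        s["mode"]["front"] = "wait"

def input_channel(s):                  # module inputchannel
    if s["mode"]["in_env"] == "ready" and s["mode"]["front"] == "ready":
        s["buffer"][s["p"] % N] = s["inputdatum"]
        s["mode"]["in_env"] = "work"
        s["mode"]["front"] = "work"

def output_environment(s):             # module outputenvironment
    if s["mode"]["out_env"] == "work":
        s["mode"]["out_env"] = "ready"

def back_end(s):                       # rules backwait and backwork
    if s["mode"]["back"] == "wait" and s["g"] < s["p"]:
        s["mode"]["back"] = "ready"
    elif s["mode"]["back"] == "work":
        s["g"] += 1
        s["mode"]["back"] = "wait"

def output_channel(s):                 # module outputchannel
    if s["mode"]["back"] == "ready" and s["mode"]["out_env"] == "ready":
        s["outputdatum"] = s["buffer"][s["g"] % N]
        s["outputs"].append(s["outputdatum"])
        s["mode"]["back"] = "work"
        s["mode"]["out_env"] = "work"

agents = [input_environment, front_end, input_channel,
          output_environment, back_end, output_channel]

for _ in range(12):                    # a fixed round-robin schedule
    for agent in agents:
        agent(state)
print("".join(state["outputs"]))       # -> "abcde": the inputs, in fifo order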
lamport writes : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ to make the example more interesting , we assume no liveness properties for sending values on the _ in _ channel , but we require that every value received in the buffer be eventually sent on the _ out _ channel . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ with this in mind , we call a run _ regular _ if the output sequence is exactly as long as the input sequence . for a regular run ,the output sequence is identical with the input sequence .let be the moves of the input channel and be the moves of the output channel .a simple induction shows that stores the input datum at slot and at .similarly , sends out the output datum from slot and at . if , then . we show that , for all , . by the and lemma , in for any , and in for any . 1 .suppose . taking into account the monotonicity of , we have the following at : , and therefore which is impossible .2 . suppose .taking into account the monotonicity of , we have the following at : , , and therefore which is impossible . by the ordering lemma , is order - comparable with both and .it follows that .one obvious difference between and r1 is the following : r1 explicitly manages the communication channels between the buffer and the environment , while does not . by playing with the modes of senders and receivers , the channel modules of r1provide explicit synchronization between the environment and the buffers .this synchronization is implicit in the `` ? '' and `` ! ''operators of csp . to remedy this, we transform r1 into an ealgebra r2 in which communication occurs implicitly .r2 must somehow ensure synchronization .there are several options . 1 .allow bufffrontend ( respectively , buffbackend ) to modify the mode of the input environment ( respectively , the output environment ) to ensure synchronization .+ this approach is feasible but undesirable .it is unfair ; the buffer acts as a receiver on the input channel and a sender on the output channel but exerts complete control over the actions of both channels .imagine that the output environment represents another buffer , which operates as our buffer does ; in such a case both agents would try to exert complete control over the common channel .2 . assume that bufffrontend ( respectively , buffbackend ) does not execute until the input environment ( respectively , the output environment ) is ready .+ this semantical approach reflects the synchronization magic of csp .it is quite feasible .moreover , it is common in the ea literature to make assumptions about the environment when necessary .it is not necessary in this case because there are very easy programming solutions ( see the next two items ) to the problem .3 . 
use an additional bit for either channel which tells us whether the channel is ready for communication or not .+ in fact , a state of a channel comprises a datum and an additional bit in the tla part of lamport s paper .one can avoid dealing with states of the channel by requiring that each sender and receiver across a channel maintains its own bit ( a well - known trick ) which brings us to the following option .4 . use a bookkeeping bit for every sender and every receiver .it does not really matter , technically speaking , which of the four routes is chosen . to an extent ,the choice is a matter of taste .we choose the fourth approach .the resulting ealgebra r2 is shown in figure [ r2 ] .= = = ' '' '' + module inputenvironment + * if * insendbit = inreceivebit then + * choose * * in * data + inputdatum : = + * endchoose * + insendbit : = 1 insendbit + * endif * + ' '' '' + module outputenvironment + * if * outsendbit outreceivebit * then * + outreceivebit : = 1 outreceivebit + * endif * + ' '' '' + module bufffrontend + rule frontwait + * if * mode(me ) = wait * and * * then * mode(me ) : = ready * endif * + + rule frontcommunicate + * if * mode(me ) = ready * and * insendbit inreceivebit * then * + buffer( ) : = inputdatum + mode(me ) : = work + inreceivebit : = 1 inreceivebit + * endif * + + rule frontwork + * if * mode(me ) = work * then * : = , mode(me ) : = wait * endif * + ' '' '' + module buffbackend + rule backwait + * if * mode(me ) = wait * and * * then * mode(me ) : = ready * endif * + + rule backcommunicate + * if * mode(me ) = ready * and * outsendbit = outreceivebit * then * + outputdatum : = buffer( ) + mode(me ) : = work + outsendbit : = 1 outsendbit + * endif * + + rule backwork + * if * mode(me ) = work * then * : = , mode(me ) : = wait * endif * + ' '' '' notice that the sender can place data into a channel only when the synchronization bits match , and the receiver can read the data in a channel only when the synchronization bits do not match .the initial states of r2 satisfy the first condition on the initial states of r1 .the universe agents contains four elements to which mod assigns different module names ; we will call them with respect to their programs : the input environment , the output environment , buffer s front end , and buffer s back end , respectively . the universe bufferagents contains the buffer s front end and buffer s back end agents .nullary functions insendbit , inreceivebit , outsendbit , outreceivebit are all equal to .nullary functions ready , wait and work are distinct elements of the universe modes .the function mode is defined only over bufferagents ; it assigns wait to each buffer agent .inputdatum and outputdatum take values in data .define the input and output sequences and regular runs as in r1 .let be the vocabulary of r1 and be the vocabulary of r2 .every run of r1 induces a run of r2 where : 1 . if and is not a channel agent , then . if = the input channel , then = buffer s front end . if = the output channel , then = buffer s back end .2 . let be a finite initial segment of . is the unique state satisfying the following conditions : 1 . 
2 .inreceivebit = if the mode of buffer s front end is wait or ready , and otherwise .outsendbit = if the mode of buffer s back end is wait or ready , and otherwise .insendbit = inreceivebit if the mode of the input environment is work , and inreceivebit otherwise .outreceivebit = outsendbit if the mode of the output environment is ready , and outsendbit otherwise .we check that is indeed a run of r2 . by the ordering lemma for r1 ,the moves of every agent of r2 are linearly ordered .it remains to check only the coherence condition ; the other conditions are obvious .suppose that is a finite initial segment of with a maximal element and .using the facts that is enabled in and is the result of executing in , it is easy to check that is enabled in and is the result of executing at .conversely , every run of r2 is induced ( in the sense of the preceding lemma ) by a unique run of r1 .the proof is easy and we skip it . after establishing that and before executing the frontcommunicate rule , buffer s front end goes to mode ready .this corresponds to nothing in which calls for merging the frontwait and frontcommunicate rules . on the other hand, augments _ after _ performing an act of communication .there is no logical necessity to delay the augmentation of . for aesthetic reasons we merge the frontwork rule with the other two rules of bufffrontend .then we do a similar parallelization for buffbackend .finally we simplify the names bufffrontend and buffbackend to frontend and backend respectively .a certain disaccord still remains because the environment is implicit in . to remedy this ,we remove the environment modules , asserting that the functions inputdatum , insendbit , and outreceivebit which were updated by the environment modules are now external functions .the result is our official ealgebra , shown in figure [ rea ] .= = = ' '' '' + module frontend + * if * * and * insendbit inreceivebit * then * + buffer( ) : = inputdatum + inreceivebit : = 1 - inreceivebit + : = + * endif * + ' '' '' + module backend + * if * * and * outsendbit outreceivebit * then * + outputdatum : = buffer( ) + outsendbit : = 1 - outsendbit + : = + * endif * + ' '' '' the initial states of satisfy the first condition on the initial states of r1 : the universe integers and the arithmetical function names mentioned in the program have their usual meanings ; the universe consists of integers modulo identified with the integers ; the universe is similar ; ; buffer is of type data ; inputdatum and outputdatum take values in data . additionally , the universe agents contains two elements to which mod assigns different module names .insendbit , inreceivebit , outsendbit , and outreceivebit are all equal to .inputdatum and outputdatum take values in data .the definition of regular runs of is slightly more complicated , due to the presence of the external functions inputdatum , insendbit , and outreceivebit .we require that the output sequence is at least as long as the input sequence , inputdatum is of type data , and insendbit and outreceivebit are both of type .we skip the proof that is faithful to r2 . the evolving algebra is shown in figure [ cea ] below .it can be obtained from in the same way that can be obtained from ; for brevity , we omit the intermediate stages . 
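before turning to the per-slot program of figure [ cea ], the two modules of figure [ rea ] can be sketched in python as follows. where a comparison operator has been lost in the extracted figure (for instance in "insendbit inreceivebit"), it is reconstructed as "differs from", since the "=" comparisons survive elsewhere and the r2 rules require the input bits to differ, and the output bits to agree, before a communication can take place. the small environment functions driving the external functions inputdatum, insendbit and outreceivebit are illustrative stand-ins, not part of the algebra.

N = 3

def make_state():
    return {"p": 0, "g": 0, "buffer": [None] * N,
            "inputdatum": None, "outputdatum": None,
            "insendbit": 0, "inreceivebit": 0,
            "outsendbit": 0, "outreceivebit": 0}

def front_end(s):
    # module frontend: buffer not full and the input bits differ
    if s["p"] - s["g"] < N and s["insendbit"] != s["inreceivebit"]:
        s["buffer"][s["p"] % N] = s["inputdatum"]
        s["inreceivebit"] = 1 - s["inreceivebit"]
        s["p"] += 1

def back_end(s):
    # module backend: buffer not empty and the output bits agree
    # (as in rule backcommunicate of r2)
    if s["g"] < s["p"] and s["outsendbit"] == s["outreceivebit"]:
        s["outputdatum"] = s["buffer"][s["g"] % N]
        s["outsendbit"] = 1 - s["outsendbit"]
        s["g"] += 1

# illustrative stand-ins for the environment driving the external functions
def environment_put(s, datum):
    if s["insendbit"] == s["inreceivebit"]:      # input channel is free
        s["inputdatum"] = datum
        s["insendbit"] = 1 - s["insendbit"]

def environment_take(s):
    if s["outsendbit"] != s["outreceivebit"]:    # an output is available
        s["outreceivebit"] = 1 - s["outreceivebit"]
        return s["outputdatum"]
    return None

s = make_state()
environment_put(s, "x")      # the environment offers a datum
front_end(s)                 # the front end stores it in slot 0
back_end(s)                  # the back end copies it to outputdatum
print(environment_take(s))   # -> x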
= = = ' '' '' + module slot + + rule get + * if * mode(me)=get * and * inputturn(me ) + * and * insendbit inreceivebit * then * + buffer(me ) : = inputdatum + inreceivebit : = 1 - inreceivebit + : = + mode(me ) : = put + * endif * + + rule put + * if * mode(me)=put * and * outputturn(me ) + * and * outsendbit = outreceivebit * then * + outputdatum : = buffer(me ) + outsendbit : = 1 - outsendbit + : = + mode(me ) : = get + * endif * + + inputturn(x ) abbreviates + * or * [ * and * + outputturn(x ) abbreviates + * or * [ * and * + ' '' '' [ [ initial - states ] ] initial states + + + + + + + + + + + + + + the initial states of satisfy the following conditions . 1 . the first condition for the initial states of r1 is satisfied except we do nt have functions and now .instead we have dynamic functions and with domain and for all in .the universe agents consists of the elements of , which are mapped by mod to the module name slot .nullary functions get and put are distinct elements of the universe modes .the dynamic function mode is defined over agents ; mode=get for every in .inputdatum and outputdatum are elements of data .nullary functions insendbit , inreceivebit , outsendbit , outreceivebit are all equal to .regular runs are defined similarly to ; we require that the output sequence is at least as long as the input sequence , inputdatum is of type data , and insendbit and outreceivebit take values in .we define a strong version of lock - step equivalence for ealgebras which for brevity we call _ lock - step equivalence_. we then prove that and are lock - step equivalent .we start with an even stronger version of lock - step equivalence which we call _ strict lock - step equivalence_. for simplicity , we restrict attention to ealgebras with a fixed superuniverse . in other words , we suppose that all initial states have the same superuniverse. this assumption does not reduce generality because the superuniverse can be always chosen to be sufficiently large .let and be ealgebras with the same superuniverse and suppose that is a one - to - one mapping from the states of onto the states of such that if then and have identical interpretations of the function names common to and . call a run of _ strictly -similar _ to a partially ordered run of if there is an isomorphism such that for every finite initial segment of , , where .call and _ strictly -similar _ if every run of is strictly -similar to a run of , and every run of is -similar to a run of . finally call and _ strictly lock - step equivalent _ if there exists an such that they are strictly -similar .ideally we would like to prove that and are strictly lock - step equivalent .unfortunately this is false , which is especially easy to see if the universe data is finite . in this case ,any run of has only finitely many different states ; this is not true for because and may take arbitrarily large integer values .one can rewrite either or to make them strictly lock - step equivalent .for example , can be modified to perform math on and over integers instead of .we will not change either ealgebra ; instead we will slightly weaken the notion of strict lock - step equivalence .if an agent of an ealgebra is enabled at a state , let result be the result of firing at ; otherwise let result .say that an equivalence relation on the states of _ respects _ a function name of if has the same interpretation in equivalent states .the equivalence classes of will be denoted ] , then . 
call a partially ordered run of _ -similar _ to a partially ordered run of if there is an isomorphism such that , for every finite initial segment of , ) = [ \tau(y)] ] is the state of such that and for all common function names , .thus , relates the counters used in and the counters used in .( notice that by lemma [ lem0 ] , is well - defined . ) we have not said anything about _ mode _ because _ mode _ is uniquely defined by the rest of the state ( see lemma [ modelem ] in section [ proofs ] ) and is redundant .we now prove that and are -similar .we say that is a state of a run if for some finite initial segment of .[ lem1 ] for any state of any run of , . by induction .initially , .let be a run of .let be a finite initial segment of with maximal element , such that holds in .let . *if is the front end agent and is enabled in , then .the front end agent increments but does not alter ; thus , . * if is the back end agent and is enabled in , then .the back end agent increments but does not alter ; thus , .[ lemk ] fix a non - negative integer . for any run of , the k - slot moves of ( that is , the moves of which involve buffer( ) ) are linearly ordered .similar to lemma [ orderingr1 ] .[ lem2 ] for any run of , there is a mapping in from states of to such that if , then : * inputturn(me ) is true for agent and for no other agent .* for all , .* for all , . by inductioninitially , agent ( and no other ) satisfies _inputturn(me ) _ and holds for every agent .thus , if is an initial state , .let be a run of .let be a finite initial segment of with maximal element , such that the requirements hold in .let .if executes rule put , is not modified and .otherwise , if rule get is enabled for , executing rule get increments ; the desired .this is obvious if .if , then all values of are equal in and satisfies the requirements .[ lem3 ] for any run of , there is a mapping out from states of to such that if , then : * outputturn(me ) is true for agent and no other agent .* for all , .* for all , .parallel to that of the last lemma .it is easy to see that every move of involves an execution of rule get or rule put but not both .( more precisely , consider finite initial segments of moves where is a maximal element of .any such is obtained from either by executing get in state , or executing put in state . ) in the first case , call a get move . in the second case , call a put move .[ lem4 ] in any run of , all get moves are linearly ordered and all put moves are linearly ordered .we prove the claim for rule get ; the proof for rule put is similar . by contradiction , suppose that are two incomparable get moves and . by the coherence condition for runs , both rules are enabled in state . by lemma [ lem2 ] , a( ) = a( ) .but all moves of the same agent are ordered ; this gives the desired contradiction .[ lem5 ] [ modelem ] in any state of any run of , for any agent k , we fix a and do induction over runs .initially , and for every agent .let be a finite initial segment of a run with maximal element such that ( by the induction hypothesis ) the required condition holds in .let .if , none of , , and are affected by executing in , so the condition holds in . if , we have two cases . *if agent executes rule get in state , we must have ( from rule get ) and ( by the induction hypothesis ) . firing rule get yields and . *if agent executes rule put in state , we must have ( from rule put ) and ( by the induction hypothesis ) .firing rule get yields and . 
remark .this lemma shows that function _is indeed redundant .[ leminout ] if ) = c ] of for agent .let , so that _ inputturn(k) _ holds .both frontend and get have _ insendbit inreceivebit _ in their guards .it thus suffices to show that iff = get . by lemma [ lem5 ] , it suffices to show that iff .. there exist non - negative integers such that , , and .( note that by lemma [ leminout ] , . ) by lemma [ lem1 ] , .there are two cases . * and . by definition of , we have that , modulo 2 , and for all , .since , we have that , modulo 2 , , as desired . * and . by definition of , we have that , modulo 2 , and for all , = 1 - . since , we have that , modulo 2 , , as desired .on the other hand , suppose .then and differ by 1 . by definition of , for all , including .[ lem12 ] module backend is enabled in state iff rule put is enabled in state ) ] of for agent .let and .then ) ] .* both agents execute _ inreceivebit : = 1 inreceivebit_. * the front end agent executes _buffer( mod n ) : = inputdatum_. agent executes _buffer(in(c ) ) : = inputdatum_. by lemma [ leminout ] , _in(c ) = _ , so these updates are identical . *the front end agent executes .agent executes .the definition of and the fact that )} ] .* agent executes _mode(in(c ) ) : = put_. by lemma [ lem5 ] , this update is redundant and need not have a corresponding update by the front end agent . [ lem14 ] suppose that module backend is enabled in a state of for the back end agent and rule put is enabled in a state ) ] .parallel to that of the last theorem .[ isequiv ] is lock - step equivalent to .let and .we begin by showing that any run of is -similar to a run of , using the definition of given earlier .construct a run of , where ) ] .then if is the front end agent , and if is the back end agent .we check that satisfies the four requirements for a run of stated in section [ runs ] . 1 .trivial , since is a run .2 . by lemma[ lemk ] , it suffices to show that for any , if , then is a -slot move . by the construction above and lemma [ leminout ], we have modulo n that if is the front end agent and if is the back end agent . in either case, is a -slot move .3 . since , maps finite initial segments of to states of .coherence_. let be a finite initial segment of with a maximal element , and .result(a(),(x ) ) = (y)_. by lemma [ lem11 ] or [ lem12 ] , is enabled in . by lemma [ lem13 ] or [ lem14 ] , _result( ) = . continuing , we must also show that for any run of , there is a run of is -similar to it .we define as follows .consider the action of agent at state .if executes rule get , set to be the front end agent .if executes rule put , set to be the back end agent .we check that the moves of the front end agent are linearly ordered . by lemma [ lem4 ], it suffices to show that if is the front end agent , then executes get in state which is true by construction of .a similar argument shows that the moves of the back end agent are linearly ordered .we define inductively over finite initial segments of . is the unique initial state in .let be a finite initial segment with a maximal element such that is defined at .choose from such that .is it possible to select such a ?yes . by lemma [ lem11 ] or [ lem12 ] , is enabled in iff is enabled in . by lemma [ lem13 ] or [ lem14 ] , _result( ) (result())_. it is easy to check that is a run of which is -similar to .we have proven that our formalizations and of and are lock - step equivalent .nevertheless , and are inequivalent in various other ways . 
in the following discussionwe exhibit some of these inequivalences .the discussion is informal , but it is not difficult to prove these inequivalences using appropriate formalizations of and .let and . uses unrestricted integers as its counters ; in contrast , uses only single bits for the same purpose .we have already used this phenomenon to show that are not strictly lock - step equivalent .one can put the same argument in a more practical way .imagine that the universe data is finite and small , and that a computer with limited memory is used to execute and . s counters may eventually exceed the memory capacity of the computer . would have no such problem . shares access to the buffer between both processes ; in contrast , each process in has exclusive access to its portion of the buffer .conversely , processes in share access to both the input and output channels , while each process in has exclusive access to one channel .imagine an architecture in which processes pay in one way or another for acquiring a channel . would be more expensive to use on such a system . how many internal locations used by each algorithm must be shared between processes ? shares access to locations : the locations of the buffer and counter variables . shares access to locations : the counter variables .sharing locations may not be without cost ; some provision must be made for handling conflicts ( _ e.g. _ read / write conflicts ) at a given location .imagine that a user must pay for each shared location ( but not for private variables , regardless of size ) .in such a scenario , would be more expensive than to run .these contrasts can be made a little more dramatic .for example , one could construct another version of the ring buffer algorithm which uses processes , each of which is responsible for an input or output action ( but not both ) to a particular buffer position .all of the locations it uses will be shared .it is lock - step equivalent to and ; yet , few people would choose to use this version because it exacerbates the disadvantages of .alternatively , one could write a single processor ( sequential ) algorithm which is equivalent in a different sense to and ; it would produce the same output as and when given the same input but would have the disadvantage of not allowing all orderings of actions possible for and .e. brger and d. rosenzweig , `` the wam - definition and compiler correctness , '' in l.c .beierle and l. pluemer , eds . , _ logic programming : formal methods and practical applications _ , north - holland series in computer science and artificial intelligence , 1994 .y. gurevich , `` evolving algebras : an attempt to discover semantics '' , _ current trends in theoretical computer science _ , eds . g. rozenberg and a. salomaa , world scientific , 1993 , 266292 .( first published in bull .eatcs 57 ( 1991 ) , 264284 ; an updated version appears in . )
|
in a recent provocative paper , lamport points out `` the insubstantiality of processes '' by proving the equivalence of two different decompositions of the same intuitive algorithm by means of temporal formulas . we point out that the correct equivalence of algorithms is itself in the eye of the beholder . we discuss a number of related issues and , in particular , whether algorithms can be proved equivalent directly .
|
the integration of millimeter - wave ( mmwave ) and massive multiple - input multiple - output ( mimo ) has been considered as a key technique for future 5 g wireless communications , since it can achieve significant increase in data rates due to its wider bandwidth and higher spectral efficiency . however , realizing mmwave massive mimo in practice is not a trivial task .one key challenging problem is that each antenna in mimo systems usually requires one dedicated radio - frequency ( rf ) chain ( including digital - to - analog converter , up converter , etc . ) .this results in unaffordable hardware complexity and energy consumption in mmwave massive mimo systems , as the number of antennas becomes huge and the energy consumption of rf chain is high at mmwave frequencies . to reduce the number of required rf chains , the concept of beamspace mimo has been recently proposed in the pioneering work . by employing the lens antenna array instead of the conventional electromagnetic antenna array, beamspace mimo can transform the conventional spatial channel into beamspace channel by concentrating the signals from different directions ( beams ) on different antennas . since the scattering in mmwave communications is not rich ,the number of effective prorogation paths is quite limited , occupying only a small number of beams . as a result ,the mmwave beamspace channel is sparse , and we can select a small number of dominant beams to significantly reduce the dimension of mimo system and the number of required rf chains without obvious performance loss . nevertheless , beam selection requires the base station ( bs ) to acquire the information of beamspace channel of large size , which is challenging , especially when the number of rf chains is limited . to solve this problem, some advanced schemes based on compressive sensing ( cs ) have been proposed very recently .the key idea of these schemes is to utilize the sparsity of mmwave channels in the angle domain to efficiently estimate the mmwave massive mimo channel of large size .however , these schemes are designed for hybrid precoding systems , where the phase shifter network can generate beams with sufficiently high angle resolution to improve the channel estimation accuracy . by contrast , in beamspace mimo systems , although the phase shifter network can be replaced by lens antenna array to further reduce the hardware cost and energy consumption , the generated beams are predefined with a fixed yet limited angle resolution .if we directly apply the existing channel estimation schemes to beamspace mimo systems with lens antenna array , the performance will be not very satisfying .to the best of our knowledge , the channel estimation problem for beamspace mimo systems has not been well addressed in the literature . in this paper , by fully utilizing the structural characteristics of mmwave beamspace channel , we propose a reliable support detection ( sd)-based channel estimation scheme .the basic idea is to decompose the total beamspace channel estimation problem into a series of sub - problems , each of which only considers one sparse channel component ( a vector containing the information of a specific propagation direction ) . for each channel component, we first detect its support ( i.e. 
, the index set of nonzero elements in a sparse vector ) according to the estimated position of the strongest element .then , the influence of this channel component is removed from the total beamspace channel estimation problem , and the support of the next channel component is detected in a similar method .after the supports of all channel components have been detected , the nonzero elements of the sparse beamspace channel can be estimated with low pilot overhead .simulation results show that the proposed sd - based channel estimation outperforms conventional schemes , especially in the low signal - to - noise ratio ( snr ) region , which is more attractive for mmwave massive mimo systems where low snr is the typical case before beamforming ._ notation _ : lower - case and upper - case boldface letters denote vectors and matrices , respectively ; , , and denote the conjugate transpose , inversion , and trace of a matrix , respectively ; denotes the frobenius norm of a matrix ; denotes the amplitude of a scalar ; denotes the cardinality of a set ; finally , is the identity matrix .in this paper , we consider a typical mmwave massive mimo system working in time division duplexing ( tdd ) model , where the bs employs antennas and rf chains to simultaneously serve single - antenna users . as shown in fig .1 ( a ) , for conventional mimo systems in the spatial domain , the received signal vector for all users in the downlink can be presented by where is the downlink channel matrix , } ] according to the definitions in ( [ eq3 ] ) .therefore , as long as , which can be guaranteed by , we have based on ( [ eq18 ] ) , we can conclude that which verifies the conclusion ( [ eq16 ] ) .* lemma 1 * implies that we can decompose the total beamspace channel estimation problem into a series of independent sub - problems , each of which only considers one specific channel component approximately orthogonal to the others .specifically , we can first estimate the strongest channel component .after that , we can remove the influence of this component from the total estimation problem , and then the channel component with the second strongest power can be estimated .such procedure will be repeated until all channel components have been estimated .next , in the following * lemma 2 * , we will prove another special structural characteristic of mmwave beamspace channel to show how to estimate each channel component in the beamspace .* lemma 2*. _ consider the channel component in the beamspace , and assume is an even integer without loss of generality .the ratio between the power of strongest elements of and the total power of can be lower - bounded by moreover , once the position of the strongest element of is determined , the other strongest elements will uniformly locate around it with the interval . _ . ] _ proof : _ based on ( [ eq2])-([eq4 ] ) , the channel component in the beamspace can be presented as }^{h}}.\ ] ] fig .3 shows the normalized amplitude distribution of the elements in , where the set of red dash lines ( or blue dot dash lines ) presents the set of spatial directions for in ( [ eq4 ] ) pre - defined by lens antenna array . from fig .3 , we can observe that when the practical spatial direction exactly equals one pre - defined spatial direction , there is only one strongest element containing all the power of , which is the best case . in contrast , the worst case will happen when the distance between and one pre - defined spatial direction is equal to . 
in this case ,the power of strongest elements of is besides , according to ( [ eq21 ] ) , the total power of can be calculated as .therefore , we can conclude that is lower - bounded by ( [ eq20 ] ) .moreover , as shown in fig .3 , once the position of the strongest element of is determined , the other strongest elements will uniformly locate around it with the interval . from * lemma 2* , we can derive two important conclusions .the first one is that can be considered as a sparse vector , since the most power of is focused on a small number of dominant elements .for example , when and , the lower - bound of is about 95% .this means that we can retain only a small number ( e.g. , ) of elements of with strong power and regard other elements as zero without obvious performance loss .the second one is that the support of sparse vector can be uniquely determined by as is odd , the support of should be . ] where , and is the modulo operation with respect to , which guarantees that all indices in belong to .after the support of has been detected , we can extract columns from ( [ eq15 ] ) according to , and use the classical ls algorithm to estimate the nonzero elements of .based on the discussion so far , the pseudo - code of the proposed sd - based channel estimation can be summarized in * * algorithm 1 * * , which can be explained as follows . during the iteration, we first detect the position of the strongest element of in step 1 .then in step 2 , utilizing the structural characteristics of beamspace channel as analyzed above , we can directly obtain according to ( [ eq24 ] ) .after that , the nonzero elements of are estimated by ls algorithm in step 3 , and the influence of this channel component is removed in steps 4 and 5 . such procedure will be repeated ( in step 6 ) until the last channel component is considered . note that for the proposed sd - based channel estimation , we do not directly estimate the beamspace channel as .this is because that most of the elements with small power are regarded as zero , which will lead to error propagation in the influence removal , especially when is large . as a result, will be more and more inaccurate to estimate the nonzero elements in step 3 . to this end, we only utilize to estimate the position in step 1 , which can still guarantee a high recovery probability even if is inaccurate .then , after the iterative procedure , we can obtain the total support of in step 7 . using and , we can alleviate the impact of error propagation and estimate the beamspace channel more accurately in steps 8 and 9 .* initialization * : for , .+ * for * + 1 .detect the position of the strongest element in as + , is the row of ; + 2 .detect according to ( [ eq24 ] ) ; + 3 .ls estimation of the nonzero elements of as + , ; + 4 . form the estimated as ; + 5 . remove the influence of as + 6 . ; + * end for * + 7 . ; + 8 . , ; + 9 . , ; + the key difference between * algorithm 1 * and classical cs algorithms is the step of support detection . 
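to make the structure of algorithm 1 concrete , a compact numerical sketch is given below . the correlation - based detection of the strongest beam , the window of v beams taken modulo n around it , the per - component least - squares step with influence removal , and the final joint re - estimation on the union support follow the description above ; matrix shapes , normalizations and the adaptive selecting network itself are treated as assumptions of the sketch rather than as the exact design of this letter .

```python
import numpy as np

def sd_channel_estimate(Psi, z, L, V):
    """
    sketch of the support-detection (sd) estimator described in algorithm 1.

    Psi : (M, N) complex measurement matrix (pilot / selecting-network response, assumed given),
    z   : (M,)  received pilot measurements for one user,
    L   : assumed number of resolvable path components,
    V   : number of strong beams retained per component.

    returns an estimate of the sparse beamspace channel (length N).
    """
    M, N = Psi.shape
    residual = z.copy()
    support = set()
    for _ in range(L):
        # step 1: position of the strongest element of the correlation with the residual
        corr = Psi.conj().T @ residual
        n_star = int(np.argmax(np.abs(corr)))
        # step 2: V beams centred (modulo N) on the strongest one
        T = ((n_star + np.arange(V) - V // 2) % N).astype(int)
        support.update(T.tolist())
        # step 3: least-squares estimate of the nonzero entries of this component
        A = Psi[:, T]
        coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
        # steps 4-5: remove the influence of this component before the next iteration
        residual = residual - A @ coef
    # steps 7-9: joint least-squares re-estimation on the union support, using the
    # original measurements to limit error propagation
    T_all = np.array(sorted(support))
    coef, *_ = np.linalg.lstsq(Psi[:, T_all], z, rcond=None)
    h_hat = np.zeros(N, dtype=complex)
    h_hat[T_all] = coef
    return h_hat
```

this sketch is only intended to convey the structure of the estimator ; the nmse and sum - rate figures discussed later depend on the actual selecting network , pilot design and channel model .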
in classical cs algorithms ,all the positions of nonzero elements are estimated in an iterative procedure , which may be inaccurate , especially for the element whose power is not strong enough .by contrast , in our algorithm , we only estimate the position of the strongest element .then , by utilizing the structural characteristics of mmwave beamspace channel , we can directly obtain the accurate support with higher probability as illustrated in fig .moreover , we can also observe that the most complicated part of the proposed sd - based channel estimation is the ls algorithm , i.e. , step 3 and step 8 .therefore , the computational complexity of sd - based channel estimation is ( ) , which is comparable with that of ls algorithm , since and are usually small as discussed above .in this section , we consider a typical mmwave massive mimo system , where the bs equips a lens antenna array with antennas and rf chains to simultaneously serve users . for the user , the spatial channel is generated as follows : 1 ) one los component and nlos components ; 2 ) , and for ; 3 ) and follow the i.i.d .uniform distribution within }$ ] . fig .5 shows the normalized mean square error ( nmse ) performance comparison between the proposed sd - based channel estimation and the conventional omp - based channel estimation ( i.e. , using omp to solve ( [ eq15 ] ) ) , where the total number of instants for pilot transmission is ( i.e. , blocks ) . for sd - based channel estimation , we retain strongest elements as analyzed above for each channel component , while for omp , we assume that the sparsity level of the beamspace channel is equal to . from fig . 5 , we can observe that sd - based channel estimation enjoys much better nmse performance than omp - based channel estimation , especially when the uplink snr is low ( e.g. , less than 15 db ) . since low snr is the typical case in mmwave communications before beamforming , we can conclude that the proposed sd - based channel estimation is more attractive for mmwave massive mimo systems .next , we evaluate the impact of different beamspace channel estimation schemes on beam selection .we adopt the interference - aware ( ia ) beam selection proposed in as it can support the scenario , and the dimension - reduced digital precoder in ( [ eq5 ] ) is selected as the zero - forcing ( zf ) precoder . fig .6 provides the sum - rate performance of ia beam selection with different channels .we can observe that by utilizing the proposed sd - based channel estimation , ia beam selection can achieve better performance , especially when the uplink snr is low .more importantly , when the uplink snr is moderate ( e.g. 
, 10 db ) , ia beam selection with sd - based channel estimation , which only requires 16 rf chains , can achieve the sum - rate performance not far away from the fully digital zf precoder with 256 rf chains and perfect channel state information ( csi ) .this paper investigates the beamspace channel estimation problem for mmwave massive mimo systems with lens antenna array .specifically , we first propose an adaptive selecting network with low cost to obtain the efficient measurements of beamspace channel .then , we propose a sd - based channel estimation , where the key idea is to utilize the structural characteristics of mmwave beamspace channel to reliably detect the channel support .analysis shows that the computational complexity of the proposed scheme is comparable with the classical ls algorithm .simulation results verify that the proposed sd - based channel estimation can achieve much better nmse performance than the conventional omp - based channel estimation , especially in the low snr region .s. han , c .-i , z. xu , and c. rowell , large - scale antenna systems with hybrid precoding analog and digital beamforming for millimeter wave 5 g , " _ ieee commun . mag .186 - 194 , jan . 2015 .j. brady , n. behdad , and a. sayeed , beamspace mimo for millimeterwave communications : system architecture , modeling , analysis , and measurements , " _ ieee trans . ant . and propag .61 , no . 7 , pp . 3814 - 3827 , jul .2013 .a. alkhateeb , o. el ayach , g. leus , and r. w. heath , channel estimation and hybrid precoding for millimeter wave cellular systems , " _ ieee j. sel .top . signal process ._ , vol . 8 , no .831 - 846 , oct . 2014 .x. gao , l. dai , s. han , c .-i , and r. w. heath , energy - efficient hybrid analog and digital precoding for mmwave mimo systems with large antenna arrays , " _ ieee j. sel .areas commun .998 - 1009 , apr .
|
by employing the lens antenna array , beamspace mimo can utilize beam selection to reduce the number of required rf chains in mmwave massive mimo systems without obvious performance loss . however , to achieve capacity - approaching performance , beam selection requires accurate information of the beamspace channel of large size , which is challenging , especially when the number of rf chains is limited . to solve this problem , in this paper we propose a reliable support detection ( sd)-based channel estimation scheme . specifically , we propose to decompose the total beamspace channel estimation problem into a series of sub - problems , each of which only considers one sparse channel component . for each channel component , we first reliably detect its support by utilizing the structural characteristics of the mmwave beamspace channel . then , the influence of this channel component is removed from the total beamspace channel estimation problem . after the supports of all channel components have been detected , the nonzero elements of the sparse beamspace channel can be estimated with low pilot overhead . simulation results show that the proposed sd - based channel estimation outperforms conventional schemes and achieves satisfactory accuracy , even in the low snr region .
|
one of the challenges in the analysis of 1d spectra , 2d images or 3d volumes in astrophysics and cosmology is to overcome the problem of separating a localized signal from a background . in particular , we are interested in localized sources with central symmetry and a background that we shall assume with the properties of homogeneity and isotropy and it will be characterized by a power spectrum .typical cases in the 1d case include : a ) the spectra of qsos , how to detect / extract absorption lines from a noisy spectrum once a profile is assumed , b ) time series analysis where a localized emission is superposed on a background , c ) point and extended sources to be detected in time - ordered data where the scanning strategy ( for satellites like planck ) is affected by the well - known noise . in the 2d case , we mention as typical cases : a ) cleaning of images to detect typical astrophysical sources on a white noise background , b ) the detection / extraction of point sources and extended sources ( clusters of galaxies ) on microwave / ir maps where the background is dominated by white noise or intrinsic cmb signal or galactic emission . in the 3d case , we remark as an example :a ) the detection of clusters and large - scale structure in 3d catalogs . the classical treatment to remove the background has been _ filtering_. low and high - pass filters reduced the contribution from the high and low frequencies present in the spectrum or image . in general , this process is extremely inefficient in dealing with very localized sources .the reason for that is that a very localized source can be decomposed in fourier space but many waves are needed ( infinite for a delta distribution ! ) , so if low / high - pass filters are applied then at the end many artefacts ( rings in 2d images ) usually appear surrounding the sources .a very important application of these principles is the detection of sources in two - dimensional astronomical images .several packages are commonly used for this task , such as daofind ( stetson 1992 ) , used to find stellar objects in astronomical images , and sextractor ( bertin & arnouts 1996 ) . when trying to detect sources , two different problems have to be solved : first , it is necessary to take account for the background variations across the image .in addition , if instrumental noise ( i. e. white noise ) appears , it should be removed as far as possible in order to increase the snr ratio .sextractor estimates the local background on a grid of size set by the user and then approximates it by a low - order polynomial .once such a background is defined the detection of sources at a certain level can be established through sets of connected pixels above certain threshold .daofind implicitly assumes that the background is smooth and its characteristic scale is very much larger than the scale of the stars .but in the case where the characteristic scale of variation of the background is approximately the scale of the structures the previous schemes obviously fail .an incorrect estimation of the background leads to a biased estimation of the amplitude of the sources .an example is a typical image of the cosmic microwave background radiation ( cmb ) at a resolution of several arcmin .if the intrinsic signal is due to a cold dark matter model ( i. e. 
the characteristic scale of the background is of the order of arcmin ) , then the separation of point sources with the same characteristic scale becomes a very difficult task .to deal with the instrumental noise the traditional procedure is filtering ( e. g. gaussian window ) .daofind filters with an elliptical gaussian that mimics the stellar psf and then it performs the detection looking for peaks above certain threshold .sextractor includes the possibility of filtering with several kind of filters ( top hat , gaussian , mexican hat and even self - made filters for every particular situation ) .an obvious advantage of this procedures ( background estimation plus filtering ) is that no a priori information on the structures is needed .a serious drawback is that the choice of the filter will have a great influence on the final result .the choice of filter depends on many factors , including in most cases personal preferences . in this context , it is necessary to find a systematic way to determine the optimal filter for every case .other methods have been used to separate different components given several microwave maps : wiener filtering ( wf ) and maximum entropy methods ( mem ) . regarding point sources, wf assumes a mean spectral flux dependence together with other assumptions for the other components ( tegmark & efstathiou , 1996 ; bouchet et al .1999 ) whereas mem assumes that the point sources are distributed like noise ( hobson et al .these methods are powerful regarding extended sources like clusters of galaxies because they use the concrete spectral dependence for the sunyaev - zeldovich effect .it is clear that the unknown spectral dependence for point sources remark the inefficiency of the previous methods . a possible solution to overcome this problem came with the usage of _ wavelets_. these are localized bases that allow a representation of a local object due , in general , to their basic properties : spatial and frequency localization ( as opposed to the trigonometric functions appearing in fourier decomposition ) .we remark at this point the success of such a technique dealing with simulated microwave maps : the `` mexican hat '' wavelet can be used in a nice way ( no extra assumptions about the background ) to detect / extract point sources and create a simulated catalog ( cayn et al .2000 , vielva et al . 2000 ) .two advantages emerge : on the one hand , one localizes the structures associated to the maxima of the wavelet coefficients and , what is more remarkable , we gain in the detection ( as compared to real space ) due to the amplification effect because at the scale of the source the background is not contributing to the dispersion .one relevant question concerns the possibility to find _ optimal _ filters .tegmark & oliveira - costa ( 1998 ) introduced a filter that minimizes the variance in the map . with this methodone can identify a big number of point sources in cmb maps .however , they failed to introduce the appropriate constraints in the minimization problem , i. e. the fact that we have a maximum at the source position at the scale defined by the source in order not to have spurious identifications .thus , this type of analysis lead us to the following questions : is there an optimal filter ( or better pseudo - filter ) given the source profile and the power spectrum of the background ?, is the `` mexican hat '' wavelet the optimal pseudo - filter dealing with point sources ? 
in order to answer these questions , we will assume that the sources can be approximated by structures with central symmetry given by a profile , with a characteristic scale ( e.g. a single maximum at its center and rapid decay at large distances ) .if the nd image contains different types of sources , a number of pseudo - filters adapted to each profile must be used to detect them . a possible generalization to include more general profiles is under study .the background will be modelled by a homogeneous and isotropic random field given in terms of the power spectrum , . in particular, we shall explore a scale - free spectrum that includes the cases of white noise ( ) , noise ( ) , etc .moreover , any spectrum of physical interest often can be locally approximated by a power - law .if the characteristics of the noise are not known a priori it can be always estimated directly from the nd image .we consider the n - dimensional case and make special emphasis on the analysis of spectra ( ) , 2d images ( ) and 3d volumes ( ) .in all the calculations we assume that the overlapping by nearby sources is negligible and also that their contribution to the total power spectrum is also negligible .all of this is a very good approximation at least above a certain flux level .an analytical approach to get the optimal pseudo - filter is presented in section [ optfilter ] .section [ realspace ] deals with properties of the optimal pseudo - filters on real space .section [ detection ] introduces the concepts of detection level and gain .sections [ gaussian ] and [ exponential ] are dedicated to the important cases of sources with profiles described by a gaussian and an exponential , respectively .an example of the performance of optimal pseudo - filters applied to simulated one - dimensional data is presented in section [ simulations ] . section [ extraction ] deals with the extraction of the sources .conclusions are summarized in section [ conclusions ] .let us consider an n - dimensional ( ) image with data values defined by where is the spatial coordinate ( in the 1d case can be also time , when we are dealing with time - ordered data sets ) and , represents a source with central symmetry placed at the origin with a characteristic scale ( e. g. a single maximum at its center and rapid decay at large distances ) and a homogeneous & isotropic background ( random field ) with mean value and characterized by the power spectrum ( this can represent instrumental noise and/or a real background ) , where is the nd fourier transform ( ) , symbol represents the complex conjugate of , is the wave vector and is the nd dirac distribution .let us introduce a spherical ( centrally symmetric ) pseudo - filter , , dependent on parameters . where defines a translation whereas defines a scaling .then , we define the pseudo - filtered field we do not assume a priori the positiveness of .the previous convolution can be written as a product in fourier space , in the form where and are the fourier transforms of and , respectively .because of the central symmetry assumed for the pseudo - filter , depends only on the modulus of . 
a simple calculation -taking into account eqs .( 1 ) and ( 2)- gives the average at the origin , , and the variance , , of the pseudo - filtered field where , for , respectively , ( for n - dimensions ) and the limits in the integrals go from to .now , we are going to express the conditions in order to obtain an optimal pseudo - filter for the detection of the source at the origin .one basic idea is to find a pseudo - filter such that when the original image is filtered with a scale -being the characteristic scale of the source- one obtains the maximum _ detection level _ taking into account the fact that the source is characterized by a single scale , other basic idea is to generate a filter giving the maximum contribution at the center of the source at a filtering scale . finally , we would like to estimate directly the amplitude of the source by the previous number . therefore , taking into account these basic ideas we will introduce from the mathematical point of view the optimal pseudo - filters . by definition a pseudo - filter will be called _optimal _ if the following conditions are satisfied : \i ) there exists a scale such that has a maximum at that scale , ii ) , i. e. is an unbiased estimator of the amplitude of the source and iii ) the variance of has a minimum at the scale , i. e. we have an efficient estimator . as a by - product , the ratio given by eq .( 7 ) will be maximum .we remark that no other information about the source profile is assumed , so `` optimal '' must be understood in the previous sense . by introducing the profile of the source , ,the condition ii ) and the equation ( 6 ) give the constraint whereas the condition i ) gives the constraint = 0.\ ] ] so , the problem is reduced to the functional minimization ( with respect to ) of given by equation ( 6 ) with the constrains given by equations ( 8) and ( 9 ) .this minimization incorporate these constraints through a couple of lagrangian multipliers .the solution ( optimal pseudo - filter ) is found to be , \ \ \\delta = ac - b^2,\ ] ] }^2.\ ] ] therefore , we have obtained analytically the functional form of the pseudo - filter ( its shape and characteristic scale are associated to the source profile and power spectrum ) .it is clear that assuming an adimensional dependence , where is the characteristic scale of the source , then such scale will appear explicitly in the form . obviously , we assume all the differentiable and regularity conditions at for and in order to have finite expressions for .generically , is a pseudo - filter , i. e. it is not positive ( filter ). let us remark that if we assume the behavior , for then and is a `` compensated '' filter , i. e. .strictly speaking there is another condition to be satisfied to get the reconstruction of the image and thus to have a wavelet : ( the admissibility condition ) .taking into account eq .( 5 ) the amplitude will be estimated as where is given by eq.(10 ) .the equation ( 10 ) can be written on real space as follows ,\ ] ] where and are the inverse fourier transform of and , respectively . for a flat background , i. e. , and assuming the behaviour when , one obtains .\ ] ] if we also assume a gaussian profile , i. e. , one finds the pseudo - filter ,\ ] ] that is a useful formula to be used for the detection of nd - gaussian structures on nd - images ( e.g. spectra , 2d images or 3d volumes ) on a flat background . 
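the full solution of eq . ( 10 ) couples the unbiasedness constraint with the requirement that the filtered field peak at the scale of the source , and its coefficients are not reproduced here . as a numerical baseline that enforces only conditions ii ) and iii ) , the classical matched filter , proportional in fourier space to the source profile divided by the background power spectrum , can be built and applied as follows ; the grid , the gaussian profile width and the discrete normalization conventions are assumptions of the sketch , not the formula just given for the flat - background case .

```python
import numpy as np

def matched_filter_1d(tau, power):
    """
    tau   : (N,) real source profile centred at pixel 0 (circularly),
    power : (N,) strictly positive noise power spectrum P(q) on the np.fft.fft frequency grid.

    returns the real-space filter psi, normalized so that filtering a noiseless source of
    amplitude A returns A at the source position (the unbiasedness condition ii).  only
    conditions ii) and iii) are enforced; the scale condition i) of the optimal
    pseudo-filter is omitted in this simplified baseline.
    """
    TAU = np.fft.fft(tau)
    norm = len(tau) / np.sum(np.abs(TAU) ** 2 / power)
    return np.real(np.fft.ifft(norm * TAU / power))

def filter_map(data, psi):
    """circular cross-correlation of the data with the filter, i.e. the product in fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(data) * np.conj(np.fft.fft(psi))))

# quick unbiasedness check: gaussian profile of assumed width theta over white noise (P = const)
N, theta, A = 1024, 4.0, 1.0
x = np.arange(N)
r = np.minimum(x, N - x)                       # circular distance to pixel 0
tau = np.exp(-r ** 2 / (2 * theta ** 2))
psi = matched_filter_1d(tau, np.ones(N))
w = filter_map(A * tau, psi)
print(w[0])                                    # ~ 1.0 = A, an unbiased amplitude estimate
```

the printed value is close to the injected amplitude , illustrating the unbiasedness condition ; the pseudo - filter of eq . ( 10 ) additionally guarantees that the filtered field has a maximum at the filtering scale , which is what distinguishes it from this baseline .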
on the other hand , if one assumes a gaussian profile but a non - flat spectrum one can easily find ,\ ] ] being the fourier transform of .taking into account the previous expression ( 10 ) , one can calculate the detection level ( see equation(7 ) ) }^{1/2}.\ ] ] on the other hand , we can calculate the dispersion of the field that allows to define a detection level on real space as the _ gain _ going from real space to pseudo - filter space is defined by if the background has a characteristic scale different from the scale of the structures ( sources ) to be detected , it is obvious that , so that we have a real gain going from real space to pseudo - filter space .the identification of sources as peaks above a high threshold ( e. g. ) in pseudo - filter space gives a low probability of false detections ( reliability ) because if the background has a characteristic scale different from the sources then everything detected with our method is real , but if both scales are comparable one can give an estimate based on the fluctuations of the background .for instance , in the case of a gaussian background , false detections above , due to the gaussian background , appear with a probabilty based on the formula to select the false detections from the real ones one can study the pseudo - filter profile nearby any real source ( see the last paragraph of section 8) . regarding the completeness ( i. e. how many real sources we miss with our method ) , this is a complicated topic because the background can slightly modify the location of the peaks and their amplitude .we will address this problem via numerical simulations ( see section 7 ) .in many physical applications the standard response of the instruments can be approximated by a gaussian function .in particular , the point spread function ( psf ) for many telescopes is of gaussian type .other more specific applications are related to the cosmic microwave background ( cmb ) , where the antennas used are well approximated by a gaussian .dealing with absorption systems associated to qsos , if the absorption line is not saturated and is dominated by thermal motions , then the line is usually approximated by an inverted gaussian inserted in a continuum plus noise .let us assume that the source and the background can be represented by the structure to be detected could have an intrinsic gaussian profile or it could be a point source in n - dimensions observed with an instrument that can be modelled through a gaussian pattern with a beam size .the background can be described by a scale - free spectrum . in this case : , and equations ( 11 ) give and the pseudo - filter is .\ \ \\ ] ] taking into account the behaviour in this formula , one obtains a compensated filter ( i. e. ) if or . in figure[ fig1 ] appear the optimal pseudo - filters for the 1d , 2d and 3d cases , respectively , and scale - free power spectrum with indexes .there is a degeneration in the case , where the line overlaps with the line , and in the case , where the line overlaps with the one .this degeneration can be deduced directly from equation ( 23 ) . 
on the other hand ,the detection level in pseudo - filter space is given by equation ( 18 ) }^{1/2}{\theta}^{\frac{n - \gamma } { 2}}.\ ] ] finally , it is interesting to remark that the cases and give the same pseudo - filter therefore , in the cases the mexican hat is found to be the optimal pseudo - filter .this justify the use of this wavelet to detect point sources in cmb maps ( cayn 2000 , vielva 2000 ) .\a ) gaussian source and white noise : this subcase corresponds to or , and the pseudo - filter is ,\ \ \\ ] ] that gives a pseudo - filter for the analysis in the different dimensions except for that gives the mexican hat wavelet ( ) .the detection level is given by equation ( 24 ) : }^{1/2}{\theta}^{n/2}.\ ] ] we have calculated the contribution of sources to the power spectrum in order to estimate their influence in calculating the pseudo - filter .we arrive to the conclusion that if the signal / noise ratio ( i. e. dispersion associated to the sources over dispersion associated to the background ) is , being the pixel scale and the width of the source , then the extra contribution to the pseudo - filter coefficients is less than a .\b ) gaussian source and noise : let us assume a source with a gaussian profile and a background represented by noise , i.e. or . in this case : , and equation ( 22 ) gives the pseudo - filter ,\ \ \\ ] ] for instance , in the case one has the wavelet , that is the optimal pseudo - filter to be used to detect a gaussian signal on 1d spectra . in this casethe detection level is ( see equation(24 ) ) .typical example in astrophysics is the exponential disk associated to spiral galaxies. another interesting application could be in some areas of physics where the profile expected for the signal associated to the detection of some particles could be of exponential type .let us assume that the source and background can be represented by in this case : }^{-\frac{n + 1}{2}},\ ] ] where for , respectively , and equations ( 11 ) give and the pseudo - filter is }^{-\frac{n + 1}{2 } } [ 1 + \frac{\gamma - n}{2}(n + 1 ) + m\frac{{(q\lambda)}^2}{1 + { ( q\lambda)}^2}],\ \ \\ ] ] in figure [ fig2 ] appear the optimal pseudo - filters for the 1d , 2d and 3d cases , respectively , and power spectrum with indexes .the filter profiles are more extended than in the gaussian source , as one can expect from the more gentle fall of the exponential source .an interesting case is , then the pseudo - filter is }^{-\frac{n + 1}{2}}.\ ] ] \a ) exponential source and white noise : for this subcase , then equation ( 36 ) leads to the pseudo - filter }^{-\frac{n + 1}{2 } } [ 1 + \frac{n}{2}(n + 1 ) + \frac{(n + 1)(n + 2){(q\lambda)}^2}{1 + { ( q\lambda)}^2}].\ \ \\ ] ] for instance , for an exponential structure to be optimally detected in a 1d spectrum , we must use }^2}.\ ] ] \b ) exponential source and noise : an interesting case is , equation ( 36 ) gives order to test some of the ideas proposed in previous sections , we simulated the case of one - dimensional gaussian sources on a background .the kind of background simulated is the well - known noise .this kind of noise appears very often in many devices in experimental physics .further simulations with 2-dimensional data and realistic realizations of noise will be carried on in future work . for the sake of simplicity , all the simulated sources have the same amplitude and that is set to be 1 ( in arbitrary units ) .100 of these sources were deployed over a 32768 pixel field. 
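the synthetic data of this section can be generated along the following lines . the source width in pixels , the normalization of the background and the way the signal - to - noise ratio is fixed are assumptions of the sketch , so the detection counts of tables 1 and 2 are not reproduced exactly .

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_src, theta, A = 32768, 100, 4.0, 1.0      # theta (source width in pixels) is an assumed value

# 1/f background: shape white gaussian noise in fourier space so that P(q) ~ 1/q
q = np.fft.fftfreq(N) * N
shape = np.zeros(N)
shape[1:] = 1.0 / np.sqrt(np.abs(q[1:]))        # amplitude ~ sqrt(P(q)), dc mode removed
noise = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * shape))
noise *= (A / 3.0) / noise.std()                # fix one of the simulated ratios, A / sigma = 3

# unit-amplitude gaussian sources at random, well-separated positions
positions = rng.choice(np.arange(0, N, N // n_src), size=n_src, replace=False)
x = np.arange(N)
signal = np.zeros(N)
for p in positions:
    r = np.minimum(np.abs(x - p), N - np.abs(x - p))
    signal += A * np.exp(-r ** 2 / (2 * theta ** 2))

data = signal + noise

# naive detection in real space: source positions exceeding 5 sigma of the raw map
sigma = data.std()
detected = [p for p in positions if data[p] > 5 * sigma]
print(f"{len(detected)} of {n_src} sources exceed 5 sigma before filtering")
```

filtering the array `data` with the matched - type kernel sketched earlier , or with the optimal pseudo - filter , and thresholding the filtered map instead of the raw one is precisely the comparison summarized in tables 1 and 2 .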
the number of sources and the size of the field were selected in order to have enough sources for statistical studies , to avoid ( as far as possible ) the overlapping of the sources and to minimize the contribution of the sources to the total dispersion of the simulations .the width of the gaussian profiles was chosen to be : this is the case for a pixel of and a gaussian with a fwhm of .noise was added so that the signal - to - noise ratio of the sources , defined as the ratio ( where is the amplitude of the source and is the standard deviation of the noise ) , assumes values of and . finally , the optimal filter , given by eq .( 29 ) with , was applied to the image . to compare with a more traditional filtering scheme, we filtered the images also with a gaussian of width equal to and a mexican hat wavelet of width equal to .this is a rather naive usage of the mexican hat wavelet and the gaussian source because the optimal width for these filters in the general case is not the source scale ( cayn 2000 , vielva 2000 ) , but it serves us well because what we intend is to compare how do filters work when we have no further information about the data ( i.e. , the optimal scale , which is different for each background ) .the result of these simulations is shown in tables [ tb1 ] and [ tb2 ] .table [ tb1 ] refers to the original simulations .it shows the original signal - to - noise ratio of each simulation as well as statistical quantities of interest such as the dispersion of the map and the mean amplitude of the sources in it .finally , it shows the number of sources directly detected from the simulations above and thresholds and the number of spurious detections above these tresholds . as expected ,only a few sources are detected , except for the most favorable cases ( high original signal - to - noise rato and low detection threshold ) .the small bias in the mean measured amplitude is due to pixelization effects .table [ tb2 ] refers to the simulations in table [ tb1 ] after filtering with a gaussian of width , a mexican hat wavelet of width and the optimal pseudo - filter .each row in table [ tb2 ] corresponds to the same row in table [ tb1 ] .a gaussian filter smoothes the image , removing small - scale noise .it also smoothes the source peaks , thus lowering the amplitude of detected sources . for the case of noisethe dominant fluctuations appear at large scales and are not affected by the gaussian filter .the large - scale features may contribute to contamination in two different ways : they can conceal sources in large valleys and can produce spurious peaks. none of these effects can be avoided with a gaussian filter . 
on the other hand ,the smoothing effect of the gaussian filter takes place normally and lowers the amplitude of the sources .therefore , the number of true detections is smaller than in the non - filtered image , and the spurious detections are not removed even in the highest case .the gains , indicated in column 6 , clearly reflects this situation ( ) .the mexican hat wavelet has a better performance under conditions .the mexican hat removes large - scale fluctuations , allowing the hidden sources to arise above the detection threshold .for example , in the case of original there were 47 sources above the level and only 1 above the level .after filtering with the mexican hat , there are 93 detections above level and 64 above the level , a significant improvement .the number of spurious sources remains almost untouched .the optimal pseudo - filter also deals with the large - scale structure .it is constructed to enhance all fluctuations in the source scale , while removing fluctuations that arise in other scales .in addition , it is required to be unbiased with respect to the amplitude . in practice, the amplitude is slightly underestimated due to the propagation of pixelization effects .this small bias is lower than a and can be calibrated in any case . in the casethe number of true detections is higher than in the mexican hat case and the number of spurious sources is comparable or slightly reduced . only in the case of low initial signal - to - noise ratiothe number of spurious detections is greater .this is due to the fact that this pseudo - filter enhances all fluctuations in the source scale .future work will take care of this weakness of the method including more information about the shape of the sources . for the case of initial signal - to - noise ratio of 2.95we find 94 sources ( of 100 ) and 8 spurious detections ( a reliability close to ) above the level , a result very similar to the obtained with the mexican hat . above the detection levelthe optimal pseudo - filter finds 79 sources where the mexican hat found only 64 .the number of spurious sources have not increased significantly ( from 4 to 5 ) . for higher initial signal - to - noise ratiosthe completeness and reliability quickly improve .the gain obtained with the optimal pseudo - filter is greater than the one obtained with the mexican hat .it can be analytically calculated , using eqs .( 6,17 ) for the pseudo - filter and its equivalents for the mexican hat , leading to : ^{1/2}\ ] ] for the one - dimensional case .this formula holds while . according to equation ( 40 ) ,the ratio is 1.41 in the case and 1.13 in the case .the mean observed ratio in the simulations is 1.31 and 1.08 respectively , and fits well with our expectations .as a conclusion we have that optimal filter gives higher gains than the classical mexican hat filter . in figure [ fig4 ]an example of the simulations is shown . in the top panelthere is a 500 pixel wide subsection of the , simulation .this subsection corresponds to a region in which the large - scale noise has a positive value .four sources are present in this area , all of them arising above the level ( indicated with a dotted line ) .the position of the sources are marked with an asterisk in the lower panel . 
additionally , there are three peaks , corresponding to background fluctuations , that arise above the level .the second panel from the top shows the image after filtering with the optimal filter .the large - scale features have been removed and also the small - scale noise is reduced .the sources have been amplified with respect to the original map and now all of them reach the level but the spurious peaks have been removed .the amplitudes of the sources remain unbiased and close to the true value of 1 . in the third panel from the topthere is the image after filtering with a gaussian .the whole image has been smoothed and now one of the sources barely reaches the level .the large - scale fluctuations remain untouched and all the spurious peaks remain in the filtered image . in the bottom panelwe see the image after filtering with the mexican hat .the large - scale fluctuations are also removed as well as the small - scale noise , as in the case of the optimal pseudo - filter .nonetheless , the gain is lower and only three of the sources reach the level .additionally , it is found that the small - scale noise removal is less efficient in the case of the mexican hat .the optimal pseudo - filter gives the position and an unbiased estimator of the amplitude of the source .we propose to make the extraction of the source on real space , i. e. one subtracts the function , being the given profile and the estimated amplitude , at the position of the source . from the practical point of view , in order to select the appropriate sources ( with a given scale and avoiding to select spurious detections if the background and/or noise are manifest at scales comparable to the sources ) we can operate with the optimal pseudo - filter at other different scales as given by equation ( 10 ) but with . if the scale that gives the maximum do not correspond to the scale we are looking forthen this is a spurious source ( or another type of source with a different scale ) . as an additional check, we can calculate the source profile in the pseudo - filter space nearby any real source , e. g. for a gaussian profile the behaviour around the maximum must be ,\ \ x\equiv \frac{r}{r_o},\ \ \m\equiv \frac{n + \gamma } { 2},\ ] ] and an analogous behaviour can be found for the exponential profile .if a detected source do not follows such a behaviour then it would be consider as a false detection and must be deleted from the initial catalog .we have introduced for the first time ( as far as the authors know ) the concept of _ optimal _ pseudo - filter to detect / extract spherical sources on a background modelled by a ( homogeneous & isotropic ) random field characterized by its power spectrum .we have obtained a generic analytical formula that allows to calculate such a pseudo - filter either in fourier or real space as a function of the source profile and power spectrum of the background .the psesudofilter is an unbiased an efficient estimator of the amplitude of the sources .we have applied the previous formula to the cases of a gaussian and an exponential profile and studied scale - free spectra . in particular , we have remarked the interesting cases of white noise and noise .we have calculated the detection level for the physically interesting cases of spectra , images and volumes .for some particular cases , the optimal pseudo - filters are wavelets ( e. g. a gaussian source embedded in white noise in the 2d case ). 
we have simulated gaussian sources embedded in a noise in order to see the performance of the optimal filter against the mexican hat wavelet . in the last casethe gain is lower , the noise removal is less efficient and the number of real detections is smaller .we also remark that filtering with a gaussian window is not the optimal procedure .the extraction of the sources identified at a certain scale is proposed to be done directly on real space . at the location of the source one subtracts the function , being the given profile .all the calculations assume that the overlapping of nearby sources is negligible and the contribution of the sources to the background is also negligible .this is a very good approximation in many cases of interest at least above a certain flux level .we remark the advantages of our method over traditional ones ( daofind , sextractor ) : we do not need to assume a smooth background and/or some filters ( e.g. gaussian ) in order to detect the sources . for some astrophysical cases ( cmb ) the background can be complex so a smooth surface could be not a reasonable assumption .however , we need to assume the profile of the source and statistical properties of the background in order to get the optimal filter .the main advantage of our method is the amplification effect ( gain ) going to pseudo - filter space .generalization of these studies , considering different kind of sources ( including non - centrally symmetric ones ) , are now being undertaken .the applications of this type of methodology is without any doubt of interest not only for astrophysics / cosmology but for other sciences .this work has been supported by the comision conjunta hispano - norteamericana de cooperacin cientfica y tecnolgica ref. 98138 , spanish dgesic project no .pb98 - 0531-c02 - 01 , feder project no .1fd97 - 1769-c04 - 01 and the eec project intas - open-97 - 1992 for partial financial support .j. l. s. acknowledges partial financial support from spanish mec and thanks cfpa and berkeley astronomy dept .hospitality during year 1999 .d. h. acknowledges a spanish m.e.c .phd . scholarship .c c c c c c c c c c & & & & & & & & & + 1 & 2 & 0.5085 & 1.0670 & 2.0983 & 0.5075 & 19 & 12 & 0 & 0 + 2 & 3 & 0.3454 & 1.0195 & 2.9514 & 0.3432 & 47 & 9 & 1 & 0 + 3 & 4 & 0.2656 & 0.9994 & 3.7629 & 0.2582 & 72 & 6 & 10 & 0 + 4 & 5 & 0.2189 & 0.9890 & 4.5174 & 0.2070 & 90 & 9 & 36 & 1 + c c c c c c c c c c & & & & & & & & & + + 1 & 0.4816 & 0.7466 & 1.5504 & 0.4822 & 0.7389 & 8 & 3 & 0 & 0 + 2 & 0.3260 & 0.7237 & 2.2204 & 0.3216 & 0.7523 & 27 & 4 & 0 & 0 + 3 & 0.2494 & 0.7134 & 2.8602 & 0.2411 & 0.7601 & 46 & 3 & 0 & 0 + 4 & 0.2045 & 0.7076 & 3.4604 & 0.1929 & 0.7660 & 67 & 2 & 5 & 0 + + 1 & 0.2507 & 1.0070 & 4.0169 & 0.2475 & 1.9144 & 77 & 31 & 17 & 2 + 2 & 0.1798 & 0.9762 & 5.4298 & 0.1665 & 1.8397 & 93 & 16 & 64 & 4 + 3 & 0.1471 & 0.9642 & 6.5548 & 0.1250 & 1.7419 & 96 & 5 & 94 & 4 + 4 & 0.1292 & 0.9587 & 7.4214 & 0.1008 & 1.6429 & 98 & 2 & 98 & 2 + + 1 & 0.2288 & 1.0159 & 4.4408 & 0.2217 & 2.1164 & 87 & 13 & 26 & 2 + 2 & 0.1673 & 0.9919 & 5.9305 & 0.1496 & 2.0094 & 94 & 8 & 79 & 5 + 3 & 0.1394 & 0.9825 & 7.0474 & 0.1131 & 1.8728 & 97 & 3 & 97 & 3 + 4 & 0.1244 & 0.9778 & 7.8592 & 0.0918 & 1.7398 & 99 & 1 & 99 & 1 +
|
this paper introduces the use of pseudo - filters that optimize the detection / extraction of sources on a background . we assume as a first approach that such sources are described by a spherical ( central ) profile and that the background is represented by a homogeneous & isotropic random field . we give an n - dimensional treatment with emphasis on astrophysical applications to spectra , images and volumes , for the cases of exponential and gaussian source profiles and scale - free power spectra representing the background .
|
evolutionary games in complex networks have recently attracted attention in evolutionary biology , behavioral science and statistical physics .one of the most important questions in these fields is how network structure affects the evolution of cooperative behavior .nowak and may noted that the lattice structure enhances cooperative behavior in the prisoner s dilemma game .currently lattice structure is considered one of the mechanisms that support cooperation .however , hauert and doebeli found that lattice structure often inhibits cooperative behavior in the snowdrift game .thus , it is not clear how lattice structure affects the evolution of cooperation in general situation .lattice structure are characteristically predisposed to high clustering .the purpose of this study was to establish a theoretical formula that describes the influence of the clustering coefficient in evolutionary games .moreover , the effects of the lattice structure . have been clarified the pair approximation technique was applied to obtain an analytical solution .the clustering coefficient is used to measure the tendency of nodes in a network to cluster together .the clustering coefficient of a single node is defined as the probability that two randomly selected neighbors are connected to each other .the clustering coefficient of the entire network is determined by averaging the clustering coefficients of all nodes . for many social networks , such as file actor collaborations ,telephone calls , e - mails , sexual relationships , and citation networks , the clustering coefficients are greater than those of randomly established networks .although several studies have examined the effects of clustering on the organization of cooperation , there is little agreement as to whether clustering promotes or inhibits the evolution of cooperation .this study considers models with four different strategy - updating rules and presents analytical predictions .consider a static network with nodes .an individual occupies each node .individuals play games with all neighbors and their reproduction depends on the average payoff of a sequence of games .the snowdrift game is considered as an example .an individual chooses one of the two strategies : cooperation ( c ) or defection ( d ) .the payoff matrix is given by where the positive parameters and represent the benefit and cost of cooperation , respectively .the cost - to - benefit ratio of mutual cooperation is defined by . when ( i.e. , ) , the snowdrift game has an inner nash equilibrium , where the cooperator frequency is . 
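assuming the standard snowdrift entries implied by the definitions of the benefit and cost above ( b - c/2 for mutual cooperation , b - c for cooperating against a defector , b for defecting against a cooperator , and 0 for mutual defection ) , the inner equilibrium can be checked directly ; the payoff values used below are an assumption of the sketch .

```python
# mixed equilibrium of the snowdrift game: the payoff to c equals the payoff to d
# at cooperator frequency x, giving x* = 1 - r with r = c / (2b - c).
def snowdrift_equilibrium(b, c):
    assert 0 < c < b                         # ensures 0 < r < 1, i.e. an inner equilibrium exists
    r = c / (2 * b - c)
    x = 1 - r                                # solves x*(b - c/2) + (1 - x)*(b - c) = x*b
    pi_c = x * (b - c / 2) + (1 - x) * (b - c)
    pi_d = x * b
    assert abs(pi_c - pi_d) < 1e-12          # indifference condition at the equilibrium
    return x

print(snowdrift_equilibrium(b=1.0, c=0.8))   # 0.333..., cooperators and defectors coexist
```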
in this case, the two strategies coexist in a well-mixed population. this type of game is also known as the hawk-dove or chicken game. next, the networks on which this evolutionary game is performed are defined. all nodes were assumed to have the same degree ( the number of neighbors ) in order to focus on the effects of the network clustering coefficient. we used a random regular graph with a high clustering coefficient. the graph was constructed by an edge-exchange method that repeatedly selects two links at random; the links were rewired only when the new network configuration remained connected and had a larger clustering coefficient. in addition, three types of regular lattices with periodic boundary conditions were used: the square lattice with von neumann neighborhood ( ), the hexagonal lattice ( ), and the square lattice with moore neighborhood ( ). the clustering coefficient is calculated as zero for the von neumann lattice, although it is highly clustered. the strategy was assumed to be updated stochastically and asynchronously. these are natural assumptions because strategy selection is not deterministic and does not occur simultaneously across the population. four different strategy-updating rules were selected. 1. birth-death ( bd ). choose an individual with probability proportional to its fitness. then, choose another individual among the neighbors of individual . individual adopts the strategy of individual . 2. death-birth ( db ). choose an individual at random. then, choose another individual among the neighbors of individual with probability proportional to fitness. individual adopts the strategy of individual . 3. imitation ( im ). choose an individual at random. then, choose another individual among individual and its neighbors with probability proportional to its fitness. individual adopts the strategy of individual . 4. local competition ( lc ). choose an individual at random and then choose another individual among its neighbors randomly. individual adopts the strategy of individual with probability . the fitness of individual is given by , where is the average payoff over all its neighbors. the parameter is the intensity of selection. we assumed weak selection with small . this weak-selection assumption allows the pair approximation to be carried out analytically while remaining a reasonable description of what occurs in the biological world. a pair approximation was used to calculate the equilibrium state. let and be the densities of cooperators ( c ) and defectors ( d ), respectively. the pair densities , and represent the frequencies with which two neighboring nodes form a cc, cd or dd pair. pairs cd and dc were not distinguished from each other. thus, . the conditional probabilities and are given by . considering and , the densities , , , and are represented as functions of and . a triplet, which includes three nodes, can be in one of two different configurations. the triad has a node connected with two other nodes that do not connect with each other. the triangle configuration is when all three nodes are connected. the standard pair approximation for a triad leads to . however, the use of an extended pair approximation for a triangle provides , where the first approximate equality follows from the kirkwood superposition approximation. therefore, we obtained , where represents the clustering coefficient, is the probability that a neighbor of the end cooperator of a cd pair is a cooperator, and is the probability that a neighbor of the end defector of a cd pair is a defector.
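to make the update rules concrete, the following python sketch performs asynchronous birth-death ( bd ) updates on a random regular graph, using the snowdrift payoffs from the previous sketch and a linear fitness 1 + w * payoff. the linear fitness form, the networkx graph construction and all parameter values are illustrative assumptions rather than the exact choices of the study.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
b_, c_ = 1.0, 0.6
payoff = np.array([[b_ - c_ / 2.0, b_ - c_], [b_, 0.0]])  # 0 = cooperate, 1 = defect
w = 0.01                                                   # weak selection intensity

G = nx.random_regular_graph(d=4, n=200, seed=1)
strategy = rng.integers(0, 2, size=G.number_of_nodes())

def avg_payoff(i):
    nbrs = list(G.neighbors(i))
    return np.mean([payoff[strategy[i], strategy[j]] for j in nbrs])

def fitness(i):
    return 1.0 + w * avg_payoff(i)       # assumed weak-selection fitness

def bd_update():
    """one asynchronous birth-death step: pick a 'parent' proportional to
    fitness, then a random neighbor adopts the parent's strategy."""
    f = np.array([fitness(i) for i in G.nodes()])
    parent = rng.choice(G.number_of_nodes(), p=f / f.sum())
    child = rng.choice(list(G.neighbors(parent)))
    strategy[child] = strategy[parent]

for _ in range(1000):
    bd_update()
print("cooperator fraction after 1000 bd updates:", np.mean(strategy == 0))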
a cooperator can become a defector only when at least one defector exists in its neighborhood, and vice versa, for all four strategy-updating rules. thus, a strategy can be replaced only in cd pairs. in the strategy-updating cases of bd and lc, the probability to choose c among a cd pair is proportional to the average fitness of the cooperator ( [ ec1c ] ), while the probability to choose d is proportional to that of the defector ( [ ec1d ] ). the necessary condition for equilibrium is that ( [ ec1c ] ) equals ( [ ec1d ] ). this condition simplifies to ( [ ec1 ] ). if eq. ( [ ec1 ] ) holds, the strategy-changing rates coincide with each other ( [ ec1all ] ). in addition, in the equilibrium state, the rate at which cd pairs become cc needs to equal the rate at which cc pairs become cd. the rate at which cd pairs become cc is given by an analogous expression proportional to the switching rate p_{d\to c}, while the rate at which cc pairs become cd is
p_{cc\to cd} = (z-1)\, p_{c|cd}\, p_{c\to d} .
combining this with eq. ( [ ec1all ] ), another condition, ( [ ec2 ] ), follows. solving the system of eqs. ( [ ec1 ] ) and ( [ ec2 ] ) yields the equilibrium solutions of and . using ( [ eqx ] ) and ( [ epa ] ), the cooperator equilibrium density was calculated as ( [ sol_bd ] ); the result ( [ sol_bd ] ) is valid for bd and lc.
[ fig. 1 : equilibrium density of cooperators plotted as a function of the cost-to-benefit ratio for four different updating rules. the clustering coefficient was set to , , and for fixed and . the system size was 10,000. in all simulations, was obtained by averaging the last 10,000 time steps after the first 10,000 ones, and each point resulted from 10 different realizations. the lines represent the predictions ( [ sol_bd ] ) for bd and lc, ( [ sol_db ] ) for db, and ( [ sol_im ] ) for im. ]
it is more complicated to obtain a pair approximation that involves the effect of triangles in the case of db and im updating. the condition ( [ ec2 ] ) is also valid in these cases. the following are conjectured expressions, written in terms of a prefactor a(c,z) \propto (z-1) / \{ [\, z - 2 - c(z-1) \,](z+1) \} :
x^{*} = \frac{1}{2} - a(c,z)\left( r - \frac{1}{2} - \frac{1}{z-1} \right) \quad ( [ sol_db ] )
for db updating, and
x^{*} = \frac{1}{2} - a(c,z)\left[ r - \frac{1}{2} - \frac{z}{(z+2)(z-1)} \right] \quad ( [ sol_im ] )
for im updating. although eqs. ( [ sol_db ] ) and ( [ sol_im ] ) can be written by analogy with ( [ sol_bd ] ), proper derivations do not exist yet.
[ fig. 2 : equilibrium density of cooperators plotted for four different updating rules. the degree was set to , and for fixed and . lines represent predictions ( [ sol_bd ] ) for bd and lc, ( [ sol_db ] ) for db, and ( [ sol_im ] ) for im. other parameter values are the same as in fig. [ fig_1 ]. ]
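the pair quantities used in this approximation are easy to measure directly in a simulation; the python sketch below counts pair densities and conditional probabilities on an arbitrary graph with a given strategy assignment, so the analytical conditions above can be checked numerically. variable names and the networkx graph are illustrative choices, not those of the original study.

import numpy as np
import networkx as nx

def pair_statistics(G, strategy):
    """return cooperator density x, pair densities (p_cc, p_cd, p_dd) with
    cd and dc counted together, and the conditional probabilities
    q(c|c) and q(d|d)."""
    n_edges = G.number_of_edges()
    cc = cd = dd = 0
    for i, j in G.edges():
        s = strategy[i] + strategy[j]        # 0 = cooperate, 1 = defect
        if s == 0:
            cc += 1
        elif s == 1:
            cd += 1
        else:
            dd += 1
    x = np.mean(np.asarray(strategy) == 0)
    p_cc, p_cd, p_dd = cc / n_edges, cd / n_edges, dd / n_edges
    q_cc = 2 * cc / (2 * cc + cd) if (2 * cc + cd) else 0.0
    q_dd = 2 * dd / (2 * dd + cd) if (2 * dd + cd) else 0.0
    return x, (p_cc, p_cd, p_dd), (q_cc, q_dd)

rng = np.random.default_rng(0)
G = nx.random_regular_graph(4, 1000, seed=0)
strategy = rng.integers(0, 2, size=1000)
x, pairs, conds = pair_statistics(G, strategy)
print("x =", round(x, 3), " pair densities =", [round(p, 3) for p in pairs])
print("q(c|c), q(d|d) =", [round(q, 3) for q in conds])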
[ fig. 3 : equilibrium density of cooperators plotted for four different updating rules. the simulations were performed for the von neumann square lattice ( ), the hexagonal lattice ( ) and the moore lattice ( ). reference lines ( [ sol_bd ] ) for bd and lc, ( [ sol_db ] ) for db, and ( [ sol_im ] ) for im are superimposed, where the clustering coefficient was set to for the von neumann square lattice, for the hexagonal lattice, and for the moore lattice. other parameter values are the same as in fig. [ fig_1 ]. ]
to confirm the predictions presented in the previous section, numerical simulations were performed for random networks with high clustering coefficients ( figs. [ fig_1 ] and [ fig_2 ] ). the predictions agree well with the numerical results. figure [ fig_1 ] shows that when the clustering coefficient increases, the frequency of the major strategy increases for all four updating rules. predictions ( [ sol_bd ] ), ( [ sol_db ] ), and ( [ sol_im ] ) give the width of the coexistence region, which shrinks as the clustering coefficient increases; for db and im the corresponding expression contains the factor 1/[ z - c(z-1) ]. thus, for a sufficiently large clustering coefficient only one strategy can survive. figure [ fig_2 ] shows that the frequency of the majority increases when the degree decreases for the bd and lc updating rules. cooperation is enhanced for small degree for the db and im updating rules. figure [ fig_3 ] shows numerical results for three types of two-dimensional lattices. the predictions ( [ sol_bd ] ) for bd and lc, ( [ sol_db ] ) for db, and ( [ sol_im ] ) for im were superimposed for reference, where the parameters were set as and for the von neumann lattice, and for the hexagonal lattice, and and for the moore lattice in fig. [ fig_3 ]. this result suggests that the ' effective ' clustering coefficients are approximately 0.5, 0.65, and 0.7 rather than the `` nominal '' values 0, 0.4, and 0.43. the clustering coefficient measures the density of triangles in a network. this deviation is because of the effect of loops of length four and above. cooperator density increases when the degree decreases for the db and im updating rules. in conclusion, the frequency of the majority increases with the clustering coefficient. in situations where cooperators and defectors coexist and cooperators are the majority, clustering enhances cooperative behavior. when cooperators are the minority where cooperators and defectors coexist, clustering inhibits cooperative behavior. these results are independent of the strategy-updating rule. we can explain this tendency intuitively by using the heterophilicity as follows. from eqs. ( [ eqx ] ), ( [ epa ] ) and ( [ ec2 ] ), the heterophilicity takes the same form for all four updating rules. it is clearly larger than the random expectation, meaning that c have more connections to d than expected at random. when the clustering coefficient increases, heterophilicity decreases.
in this case, the population is more exclusive, and it is more difficult for the strategies to coexist. consequently, the parameter region where two strategies can coexist becomes narrow. to confirm the generality of this result, we performed numerical simulations for a small-world network and a scale-free network on geographical space ( not shown ); essentially the same results were obtained. unfortunately, a rigorous derivation of ( [ sol_db ] ) and ( [ sol_im ] ) is not provided, and it remains an open problem. lastly, we considered the prisoner s dilemma game, where the payoff matrix is . in this case, mutual defection is the only strong nash equilibrium, regardless of the values of the parameters and . thus, only defectors can survive in a well-mixed population. in addition, cooperators can not survive for the bd and lc updating rules. the standard pair approximation for the db updating rule shows that if , only cooperators can survive; conversely, if , only defectors can survive. the result is the same in the case of im updating, except that the threshold is . in any case, there is no parameter region where the two strategies can coexist. thus, the clustering coefficient has no influence on the density of cooperators in the prisoner s dilemma game. in conclusion, the assertion that lattice structure enhances cooperative behavior is misleading.
|
this study investigates the influence of lattice structure in evolutionary games. the snowdrift game is considered in networks with high clustering coefficients, using four different strategy-updating rules. analytical conjectures obtained with the pair approximation were compared with numerical results. the results indicate that general statements asserting that the lattice structure enhances cooperation are misleading.
|
energy efficiency is an important issue in wbans, because energy dissipated by sensor nodes can damage human body tissue. more importantly, sensor nodes attached to the body are battery operated devices with a limited lifetime. so, mac protocols for wbans need to be energy efficient and support medical applications. a wban allows the integration of low power intelligent sensor nodes. they are used to stream biological information from the human body and transmit it to a coordinator. this procedure is very helpful for monitoring the health of a person and, in case of emergency, for providing proper medication. the mac protocol plays an important role in determining the energy efficiency of a wban. traditional mac protocols focus on improving throughput and bandwidth efficiency. however, the most important problem is that they lack energy conserving mechanisms. the main sources of energy wastage are idle listening, overhearing and packet overhead. controlling these energy waste sources maximizes network lifetime. + wbans have many advantages, like patient mobility and independent monitoring of the patient. a wban can work over wireless local area networks ( wlans ), worldwide interoperability for microwave access ( wimax ) or the internet to reliably transmit data to a server which monitors health issues. there are some requirements for a mac protocol to be used in wbans: it must provide high qos ( quality of service ), it must be reliable, and it needs to support different medical applications. + by using different medium access techniques, different low power and energy efficient mac protocols have been proposed. the most important attributes of wbans are low power consumption and low delay. different techniques are used with different protocols to control the delay and to improve the efficiency of the mac protocol. techniques like the energy efficient low duty cycle mac protocol [ 1 ], the traffic adaptive mac protocol [ 3 ], and the energy efficient tdma based mac protocol [ 4 ] are used to improve energy efficiency and to control delay. + the important techniques for mac protocols in wbans are time division multiple access ( tdma ) and carrier sense multiple access with collision avoidance ( csma / ca ). frequency division multiple access ( fdma ) is very close to tdma. pure aloha and slotted aloha are not used due to collision problems and high packet drop rates, as well as low energy efficiency. there are several challenges in the realization of the perfect multiple access technique for mac protocol design. the authors in [ 1 ] state that the ieee 802.15.4 standard is designed as a low power and low data rate protocol with high reliability. they analyze the unslotted version of the protocol in terms of maximum throughput and minimum delay. the main purpose of 802.15.4 is to give low power, low cost and reliability. this standard defines a physical layer and a mac sub-layer. it operates in either beacon enabled or non-beacon mode. the physical layer specifies three different frequency ranges: the 2.4 ghz band with 16 channels, 915 mhz with 10 channels and 868 mhz with 1 channel. calculations are performed by considering only the beacon enabled mode and with only one sender and receiver. however, it is a high power consuming standard.
as the number of senders increases, the efficiency of 802.15.4 decreases. the throughput of 802.15.4 declines and the delay increases when multiple radios are used, because of the increase in the number of collisions. + an energy efficient tdma based mac protocol is described in [ 2 ]. the protocol in this paper minimizes the amount of idle listening by using a sleep mode, which reduces the extra cost of synchronization. a node listens for synchronization messages only after a number of time frames, which results in extremely low communication power. however, this protocol lacks a wake-up radio mechanism for on-demand and emergency traffic. + in paper [ 3 ], the authors propose a mac protocol with a wake-up radio mechanism for wireless body area networks. a comparison of tdma with csma / ca is also done in that paper. the proposed mac protocol saves energy by letting a node go to sleep when there is no data; the node can be woken up on demand by the wake-up radio mechanism. this protocol works on the principle of on-demand data. it reduces the idle time consumption of a node to a great extent. however, emergency traffic is not discussed in that paper, which is a major issue in wbans. + an ultra low power and traffic adaptive protocol designed for wbans is discussed in [ 4 ]. they used a traffic adaptive mechanism to accommodate on-demand and emergency traffic through a wake-up radio. the wake-up radio is a low power technique because it uses a control channel separate from the data channel. a comparison of the power consumption and delay of ta-mac with ieee 802.15.4, wise mac and smac is done in that paper. + the authors describe an energy efficient low duty cycle mac protocol for wbans in paper [ 5 ]. tdma is compared with csma / ca. the tdma based protocol outperforms csma / ca in all areas. collision free transfer, robustness to communication errors, energy efficiency and real-time patient monitoring are the issues addressed in that paper. however, synchronization is required when using the tdma technique. with an increase in data, tdma energy efficiency decreases due to queuing. as the network topology changes, tdma experiences a degradation in performance. + in paper [ 6 ], the authors introduce a context aware mac protocol which switches between a normal state and an emergency state, resulting in a dynamic change of the data rate and duty cycle of a sensor node to meet the latency and traffic-load requirements. they also use a tdma frame structure to save power. additionally, a novel optional synchronization scheme is proposed to decrease the overhead caused by the traditional tdma synchronization scheme. however, throughput is not addressed in that paper. + in paper [ 7 ], the authors propose a low power mechanism for wbans that defines the traffic patterns of sensor nodes to ensure power efficient and reliable communication. they classify the traffic into three different patterns ( normal traffic, on-demand traffic and emergency traffic ) for both on-body and in-body sensor networks. however, they have not considered delay and throughput.
also, a complete implementation of their proposed protocol is still to be done. + the phy and mac layers of the ieee 802.15.6 standard are discussed in paper [ 8 ]. the authors state the specifications and identify key aspects of both layers. moreover, the bandwidth efficiency with increasing payload size is also analyzed. they also discuss the different modes of security in the standard. however, the bandwidth efficiency of the standard is only investigated for csma / ca. also, they do not discuss throughput and delay. + in paper [ 9 ], the author proposes a modified mac protocol for wbans which focuses on simplicity, dependability and power efficiency. it is used in the contention access period, and csma / ca is used in the contention free period. data is transmitted in the contention free period, whereas the cap is only used for command packets and best effort data packets. however, propagation delay is not negligible, which we consider in our comparison, and interference from other wban nodes is also not taken into account in those calculations. the technique used by the author has a high delay compared to tdma and fdma. + the authors of [ 10 ] evaluate the performance of the ieee 802.15.4 mac, wise mac and smac protocols for a non-invasive wban in terms of energy consumption and delay. the ieee 802.15.4 mac protocol is improved for low-rate applications by controlling the beacon rate. in addition, beacons are sent according to the wakeup table maintained by the coordinator. however, the authors have not discussed delay and offered load in their paper. + in [ 11 ], the authors propose a new protocol, medmac, and elaborate a novel synchronization mechanism which facilitates contention free tdma channels without a prohibitive synchronization overhead. they focus on the power efficiency of medmac. they also show that medmac performs better than ieee 802.15.4 for very low data rate applications, like pulse and temperature sensors ( less than 20 bps ). however, while they discuss collisions, they do not focus on delay in these applications. + the authors in [ 12 ] present an implementation of an energy efficient real time on demand mac protocol for medical wireless body sensor networks. they introduce a secondary channel for the slave sensor node for channel listening in the idle state. this secondary channel brings the benefit of drawing zero power from the slave sensor node battery when listening, to achieve very low power operation. nevertheless, a problem arises because of the tradeoff between low power and real time wake up. moreover, they have not discussed time critical applications which require higher throughput and priority. + in paper [ 13 ], the authors introduce a tdma-based energy efficient mac protocol for in-vivo communications between mobile nodes in bsns, using an uplink / downlink asymmetric network architecture. they also propose a tdma scheduling scheme and changeable frame formats. the latency optimization is discussed, and the performance is improved by reducing the data slot duration. however, they do not elaborate on throughput and delay sensitive applications. channel access mechanisms provided by the medium access control ( mac ) layer are also referred to as multiple access techniques. they make it possible for several stations connected to the same physical medium to share it. multiple access techniques have been used in different types of networks, each according to its requirements.
in this paper, we compare the behavior of different multiple access techniques in terms of throughput, delay and offered load. we discuss them considering three scenarios. + ( 1 ) offered load as a function of delay. + ( 2 ) offered load as a function of throughput. + ( 3 ) throughput as a function of delay. tdma works on the principle of dividing the time frame into dedicated time slots; each node sends data in rapid succession, one after the other, in its own time slot. synchronization is one of the key factors when applying tdma. it uses the full channel width, dividing it into two alternating time slots. tdma uses less energy than the others due to fewer collisions and no idle listening. tdma protocols are more power efficient than other multiple access protocols because nodes transmit only in their allocated time slots and remain inactive the rest of the time. a packet generated by a node suffers three types of delay as it travels to the receiver. + ( 1 ) transmission delay. + ( 2 ) queuing delay. + ( 3 ) propagation delay. + the equations which we have used to plot tdma in the three scenarios are given below: + relation of d and t + relation of g and t + relation of d and g. [ table : description of the parameters used in the equations. ] fdma is a basic technology in the analog advanced mobile phone service ( amps ), the most widely installed cellular phone system in north america. with fdma, each channel can be assigned to only one user at a time. each node shares the medium simultaneously, though it transmits at a single frequency. fdma is used with both analog and digital signals. it requires high-performing filters in the radio hardware, in contrast to tdma and csma. as each node is separated by its frequency, interference between nodes is minimized by sharp filters. in fdma a full frame of the frequency band is available for communication and a continuous flow of data is used, which improves the efficiency of sending data. the division of frequency bands among users is shown in fig 2. + relation of d and g + relation of d and t + relation of t and g + csma / ca is an extended version of csma. collision avoidance is used to enhance the performance of csma by not allowing a node to send data if other nodes are transmitting. in normal csma, nodes sense the medium and, if they find it free, transmit the packet without noticing that another node is already sending a packet; this results in a collision. to reduce the probability of collision, csma / ca was proposed; csma / ca indeed lowers the collision probability. + it works on the principle of a node sensing the medium: if it finds the medium to be free, then it sends the packet to the receiver. if the medium is busy, then the node goes into a back-off time slot for a random period of time and waits for the medium to become free. with the improved csma / ca rts / cts exchange technique, a node sends a request to send ( rts ) to the receiver after sensing the medium and finding it free. after sending the rts, the node waits for a clear to send ( cts ) message from the receiver. after the message is received, it starts the transmission of data; if the node does not receive the cts message, then it goes into back-off time and waits for the medium to become free. csma / ca is a layer 2 access method. it is used in 802.11 wireless lan and other wireless communications.
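the three delay contributions listed above can be combined in a few lines. the python sketch below adds transmission, queuing and propagation delay for a tdma node and derives a throughput value as payload bits divided by total delay; the frame length, data rate and distance are made-up illustrative numbers, and the simple queuing term ( waiting on average half a frame for the node's own slot ) is an assumption rather than this paper's exact expression.

# illustrative TDMA delay budget (all numbers are assumptions)
payload_bits = 1024          # bits carried in one packet
data_rate    = 250e3         # bit/s (typical low-power radio)
slots        = 10            # slots per TDMA frame
slot_time    = payload_bits / data_rate
distance     = 2.0           # metres from sensor to coordinator
c            = 3e8           # propagation speed, m/s

transmission_delay = payload_bits / data_rate
queuing_delay      = 0.5 * slots * slot_time     # wait half a frame on average
propagation_delay  = distance / c

total_delay = transmission_delay + queuing_delay + propagation_delay
throughput  = payload_bits / total_delay         # bits per second actually delivered

print(f"delay = {total_delay*1e3:.3f} ms, throughput = {throughput/1e3:.1f} kbit/s")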
+ the equations which we have used for plotting csma / ca in the three scenarios are given below + relation of d and t + relation of d and g + relation of g and t + pure aloha is the first random access technique introduced, and it is so simple that its implementation is straightforward. it belongs to the family of contention-based protocols, which do not guarantee successful transmission in advance. in this scheme, whenever a packet is generated, it is transmitted immediately without any further delay. successful reception of a packet depends only on whether or not it collides with other packets. in case of collision, the collided packets are not received properly. at the end of the packet transmission, each user knows whether its transmission was successful or not. + if a collision occurs, the user schedules its re-transmission at a random time. the randomness is to ensure that the same packets do not collide repeatedly. an example of pure aloha is depicted in fig 6. each packet belongs to a separate user, due to the fact that the population is large. + relation of t and g + relation of t and d + relation of d and g + slotted aloha is a variant of pure aloha in which the channel is divided into slots. a restriction is imposed on users to start transmission on slot boundaries only. whenever packets collide, they overlap completely instead of partially, so only the fraction of slots in which packets collide is scheduled for re-transmission. this almost doubles the efficiency of slotted aloha compared to pure aloha. the functionality of slotted aloha is shown in fig 5. successful transmission depends on the condition that only one packet is transmitted in each frame. if no packet is transmitted in a slot, then the slot is idle. slotted aloha requires synchronization between nodes, which is its disadvantage. relation of t and g + relation of t and d + relation of d and g + in this section we calculate the throughput of the different multiple access techniques. data is transferred from sender to receiver using one of the techniques, and the throughput due to these techniques has been calculated. due to the small distance between sender and receiver, there are no packet losses due to collision, and no packets are lost due to buffer overflow. for the calculation of throughput we assume a perfect channel. throughput is calculated for all access techniques through the following equation. in equation 16, d is the delay, t is the throughput and is the number of bits passing through the frame. the throughput of csma / ca is calculated by the formula given in equation 16. the delay in equation 17 is calculated by adding the delays of all elements of the frame as it travels to the receiver. + the following notations are used: , , , , , , , . + now we calculate the delay time given in equation 17 + + + the following notations are used: , , , , .
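for the two aloha variants the classical throughput-versus-offered-load relations are well known: s = g e^{-2g} for pure aloha and s = g e^{-g} for slotted aloha, with maxima of about 0.184 and 0.368 respectively. the python sketch below evaluates these textbook curves so they can be compared against the tdma and fdma behavior discussed above; it uses the standard formulas, not the specific equations of this paper.

import numpy as np

G = np.linspace(0.01, 5.0, 500)          # offered load (packets per packet time / slot)
S_pure    = G * np.exp(-2.0 * G)         # pure aloha
S_slotted = G * np.exp(-G)               # slotted aloha

print("pure aloha    : max throughput %.3f at G = %.2f"
      % (S_pure.max(), G[S_pure.argmax()]))
print("slotted aloha : max throughput %.3f at G = %.2f"
      % (S_slotted.max(), G[S_slotted.argmax()]))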
if there is no acknowledgement, then the turnaround time and are equal to zero. + the throughput is calculated by using equation 16. the delay which a packet experiences as it travels from sender to destination is calculated as follows + the different time delays given in equation 22 can be calculated by the following equations + the following notations are used: , , , , , , , , , , . the throughput of fdma is very close to that of tdma. there is very little difference between the throughput of the two multiple access techniques. the throughput of fdma is calculated by the formula given in equation 16, and the delay which it experiences is calculated below + the different time delays given in equation 27 can be calculated by the following equations + the following notations are used + + + + + + + + + + the throughput of aloha is calculated by the formula given in equation 16, and the delay which it experiences is calculated below + the following notations are used: + + the throughput of s-aloha is calculated by the formula given in equation 16, and the delay which it experiences is calculated below + the different time delays given in equation 32 can be calculated by the following equations + the following notations are used: , , , , , , . in this paper, different multiple access techniques of the mac protocol which are used in wireless body area networks have been compared. the techniques are tdma, fdma, csma / ca, aloha and s-aloha. algorithms for all these techniques are given in this paper, showing how they work. mathematical equations for the calculation of throughput for all these techniques have been shown. the performance metrics for the comparison of these techniques are throughput, delay and offered load. the comparison has been done between the performance metrics throughput and delay, delay and offered load, and offered load and throughput. tdma is the best technique to use in a wban as the load increases, because it has the highest throughput and minimum delay, which is the most important requirement of wireless body area networks. 1. b. latre, p. de mil, i. moerman, p. demeester, ``throughput and delay analysis of unslotted ieee 802.15.4'', journal of networks, vol 1, no 1, pp 20-28, may 2006. 2. s. marinkovic, c. spagnol, e. popovici, ``energy efficient tdma based mac protocol for wireless body area networks'', third international conference on sensor technologies and applications 2009, doi 10.1109/sensorcomm.2009.99. 3. m. al ameen, n. ullah, k. kwak, ``design and analysis of a mac protocol for wireless body area network using wakeup radio'', the 11th international symposium on communications and information technologies ( iscit 2011 ), 978-1-4577-1295-1/11. 4. s. ullah, k. s. kwak, ``an ultra-low power and traffic-adaptive medium access control protocol for wireless body area network'', j med syst, doi 10.1007/s10916-010-9564-2. 5. s. j. marinkovic, e. m. popovici, c. spagnol, s. faul, w. p. marnane, ``energy efficient low duty cycle mac protocol for wireless body area networks'', ieee transactions on information technology in biomedicine, vol 13, no 6, pp 915-925, nov. 2009. 6. z. yan, b. liu, ``a context aware mac protocol for medical wireless body area network'', 978-1-4577-9538-2/11. 7. s. ullah, p. khan, k. sup kwak, ``on the development of low-power mac protocol for wbans'', proceedings of the international multiconference of engineers and computer scientists, 2009, vol i. 8. k. s. kwak, s. ullah and n.
ullah, ``an overview of ieee 802.15.6 standard'', 978-1-4244-8132-3/10. 9. c. li, l. wang, j. li, ``scalable and robust medium access control protocol in wireless body area networks'', 978-1-4244-5213-4/09. 10. s. ullah and k. s. kwak, ``performance study of low-power mac protocols for wireless body area networks'', 2010 ieee 21st international symposium on personal, indoor and mobile radio communications workshops, 978-1-4244-9116-2/10. 11. n. f. timmons and w. g. scanlon, ``an adaptive energy efficient mac protocol for the medical body area network'', 978-1-4244-4067-2/09. 12. x. zhang, h. jiang, x. chen, l. zhang, z. wang, ``energy efficient implementation of on demand mac protocol in medical wireless body sensors'', 978-1-4244-3828-0/09. 13. l. lin, k. juan wong, a. kumar, s. lim tan, s. jay phee, ``an energy efficient mac protocol for mobile in vivo body sensor networks'', 978-1-4577-1177-0/11.
|
this paper presents a comparison of access techniques used in medium access control ( mac ) protocols for wireless body area networks ( wbans ). the comparison is performed between time division multiple access ( tdma ), frequency division multiple access ( fdma ), carrier sense multiple access with collision avoidance ( csma / ca ), pure aloha and slotted aloha ( s-aloha ). the performance metrics used for the comparison are throughput ( t ), delay ( d ) and offered load ( g ). the main goal of the comparison is to show which technique gives the highest throughput and the lowest delay as the load increases. energy efficiency is a major issue in wbans, which is why it is important to know which technique performs best for energy conservation while also giving minimum delay. pure aloha, slotted aloha, csma / ca, tdma, fdma, wireless body area networks, throughput, delay, offered load
|
recent technological developments in electrochemical deposition have made possible experimental studies of atomic - scale dynamics .it is therefore now both timely and important to develop new computational methods for the analysis of experimental adsorption dynamics . in this paperwe apply one such analysis technique , the first - order reversal curve ( forc ) method , to analyze model systems with continuous and discontinuous phase transitions .we propose that the method can be a useful new experimental tool in surface electrochemistry .the forc method was originally conceived in connection with the preisach model of magnetic hysteresis .it has since been applied to a variety of magnetic systems , ranging from magnetic recording media and nanostructures to geomagnetic compounds , undergoing _ rate - independent _( i.e. , very slow ) magnetization reversal .recently , there have also been several forc studies of _ rate - dependent _ reversal .here we introduce and apply the forc method in an electrochemical context . for completeness, a brief translation to magnetic language is found in the appendix .we apply forc analysis to rate - dependent adsorption in two - dimensional lattice - gas models of electrochemical deposition .specifically , we study a lattice - gas model with attractive nearest - neighbor interactions ( a simple model of underpotential deposition , upd ) , being driven across its discontinuous phase transition by a time - varying electrochemical potential .in addition , we consider a lattice - gas model with repulsive lateral interactions and nearest - neighbor exclusion ( similar to the model of halide adsorption on ag(100 ) , described in refs . ) , being similarly driven across its continuous phase transition .the rest of this paper is organized as follows . in sec .[ sec : forc ] the forc method is explained .the model used for both systems with continuous and discontinuous transitions is briefly discussed in sec .[ sec : m ] . in sec .[ sec : s ] the dynamics of systems with a discontinuous phase transition are studied using kinetic monte carlo ( kmc ) simulations , as well as a mean - field model .the dynamics of systems with a continuous phase transition are studied in sec .[ sec : c ] .finally , a comparison between the two kinds of phase transitions and our conclusions are presented in sec .[ sec : conc ] .for an electrochemical adsorption system , the forc method consists of saturating the adsorbate coverage in a strong positive ( for anions ; negative for cations ) electrochemical potential ( proportional to the electrode potential ) and , in each case starting from saturation , decreasing the potential to a series of progressively more negative `` reversal potentials '' ( fig .[ fig : loop ] ) .subsequently , the potential is increased back to the saturating value .it is thus a simple generalization of the standard cyclic voltammetry ( cv ) method , in which the negative return potential is decreased for each cycle .this produces a family of forcs , , where is the adsorbate coverage , and where is the instantaneous potential during the increase back toward saturation . although we shall not discuss this further here , it is of course also possible to fix the negative limiting electrode potential and change the positive return potential from cycle to cycle .it is further useful to calculate the forc distribution , which measures the sensitivity of the dynamics to the progress of reversal along the major loop . ( where a forc attaches to the major loop an additional term must be added to eq . ( [ forc.definition ] ) ;
here we consider the distribution only away from that line , and the additional term could be found from the major loop . ) the forc distribution is usually displayed as a contour plot called a ` forc diagram ' . a positive value of indicates that the corresponding reversal curves are converging with increasing , while a negative value indicates divergence . some preliminary results of this work have been submitted for publication elsewhere . kmc simulations of lattice - gas models , where a monte carlo ( mc ) step corresponds to an attempt to cross a free - energy barrier , have been used to simulate the kinetics of electrochemical systems with discontinuous or continuous phase transitions in two dimensions . the energy associated with a lattice - gas configuration is described by the grand - canonical effective hamiltonian for an square system of adsorption sites , where is a sum over all pairs of sites , are the lateral interaction energies between particles on the and sites measured in mev / pair , and is the electrochemical potential measured in mev / atom . the local occupation variables can take the values 1 or 0 , depending on whether site is occupied by an ion ( 1 ) or solvated ( 0 ) . the sign convention is chosen such that favors adsorption , and negative values of denote repulsion while positive values denote attraction between adsorbate particles on the surface . in addition to adsorption / desorption steps , we include diffusion steps with a comparable free - energy barrier . in each time step of the kmc simulation , an adsorption site is chosen at random and the transition rates from the present configuration to a set of new configurations ( desorption , diffusion ) are calculated . a weighted list for accepting each of these moves is constructed using eq . ( [ eq : p ] ) below , to calculate the probabilities of the individual moves between the initial state and final state . the probability for the system to stay in the initial configuration is consequently .
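returning to the forc distribution introduced above: since it is essentially a mixed second derivative of the coverage with respect to the reversal potential and the instantaneous potential, it can be estimated from a measured or simulated family of curves by finite differences. the python sketch below does this for coverage data stored on a regular grid; the overall sign and the factor of 1/2 follow the usual magnetic convention and should be adjusted to whatever normalization is adopted in eq. ( [ forc.definition ] ). the toy family of relaxing curves used for the demonstration is an assumption, not output of the model described here.

import numpy as np

def forc_distribution(theta, mu_r, mu):
    """theta[i, j] = coverage on the j-th point of the forc reversed at mu_r[i];
    mu is the common grid of instantaneous potentials.
    returns rho = -(1/2) d^2 theta / (d mu_r d mu), the usual convention."""
    dtheta_dmu = np.gradient(theta, mu, axis=1)      # derivative along each curve
    d2 = np.gradient(dtheta_dmu, mu_r, axis=0)       # derivative across reversal values
    return -0.5 * d2

# toy family of curves: relaxation toward a smooth "equilibrium isotherm"
mu = np.linspace(-4.0, 4.0, 201)
mu_r = np.linspace(-4.0, 0.0, 41)
eq = 0.5 * (1.0 + np.tanh(mu))                       # stand-in equilibrium isotherm
lag, tau = 0.8, 0.7
theta = np.array([eq - (eq - 0.5 * (1.0 + np.tanh(m0 + lag)))
                  * np.exp(-np.clip(mu - m0, 0.0, None) / tau)
                  for m0 in mu_r])
rho = forc_distribution(theta, mu_r, mu)
print("rho shape:", rho.shape, " max |rho| =", float(np.abs(rho).max()))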
using a thermally activated , stochastic barrier - hopping picture , the energy of the transition state for a microscopic change from an initial state to a final state is approximated by the symmetric butler - volmer formula . here and are the energies of the initial and final states , respectively , is the transition state for process , and is a `` bare '' barrier associated with process . this process can here be either nearest - neighbor diffusion ( ) , next - nearest - neighbor diffusion ( ) , or adsorption / desorption ( ) . the probability for a particle to make a transition from state to state is approximated by the one - step arrhenius rate , where is the attempt frequency , which sets the overall timescale for the simulation . the electrochemical potential , which is proportional to the electrode potential , is increased monotonically , preventing the system from reaching equilibrium at the instantaneous value of . independent of the diffusional degree of freedom , attractive interactions ( ) produce a discontinuous phase transition between a low - coverage phase at low , and a high - coverage phase at high . in contrast , repulsive interactions ( ) produce a continuous phase transition between a low - coverage disordered phase for low , and a high - coverage , ordered phase for high . examples of systems with a discontinuous phase transition include underpotential deposition , while the adsorption of halides on ag(100 ) is an example of a system with a continuous phase transition . a two - dimensional lattice gas with attractive adsorbate - adsorbate lateral interactions that cause a discontinuous phase transition is a simple model of electrochemical underpotential deposition . using a lattice - gas model with attractive interactions on an lattice with , a family of forcs were simulated , averaging over ten realizations for each reversal curve at room temperature . the lateral interaction energy ( restricted to nearest - neighbor ) was taken to be , where the positive value indicates nearest - neighbor attraction . for this value of , room temperature corresponds to , where is the critical temperature . the barriers for adsorption / desorption and diffusion ( nearest - neighbor only ) were , corresponding to relatively slow diffusion . simulation runs with faster diffusion ( ) and the same adsorption / desorption barrier showed little difference from fig . [ fig : fig1 ] , indicating that diffusion effects are not significant for this model . the reversal electrochemical potentials associated with the reversal curves were separated by increments in the interval . as in ref . , the repulsive interactions , with nearest - neighbor exclusion and , are calculated with exact contributions for , and using a mean - field approximation for . the barriers for adsorption / desorption and nearest- and next - nearest - neighbor diffusion , are , , and , respectively . larger values of the diffusion barrier were also used to study the effect of diffusion on the dynamics . a continuous phase transition occurs between a disordered state at low coverage and an ordered state at high coverage . the forcs and the forc diagram are shown in fig . [ fig : fig2 ] .
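the rate construction described above can be sketched compactly: for a proposed adsorption / desorption move one evaluates the lattice-gas energies of the initial and final configurations, places the transition state symmetrically between them plus a bare barrier, and converts the resulting barrier into an arrhenius probability. the python sketch below does this for a single site on a square lattice; the hamiltonian h = -sum phi n_i n_j - mu sum n_i, the symmetric form of the transition state and all parameter values are assumptions consistent with the description above, not the exact implementation used for the simulations reported here.

import numpy as np

kB_T  = 25.7     # meV, roughly room temperature
phi   = 100.0    # nearest-neighbor interaction, meV/pair (>0 attractive)
mu    = -150.0   # electrochemical potential, meV/atom
delta = 300.0    # assumed "bare" adsorption/desorption barrier, meV
nu    = 1.0      # attempt frequency sets the overall time scale

L = 32
occ = np.zeros((L, L), dtype=int)

def energy_of_site(c, i, j, value):
    """lattice-gas energy contribution of site (i, j) if its occupation were 'value'."""
    nn = c[(i + 1) % L, j] + c[(i - 1) % L, j] + c[i, (j + 1) % L] + c[i, (j - 1) % L]
    return -phi * value * nn - mu * value

def flip_probability(c, i, j):
    """arrhenius probability of changing the occupation of site (i, j), with the
    transition state taken symmetrically between the initial and final energies
    plus the bare barrier delta."""
    e_i = energy_of_site(c, i, j, c[i, j])
    e_f = energy_of_site(c, i, j, 1 - c[i, j])
    e_ts = 0.5 * (e_i + e_f) + delta
    return nu * np.exp(-(e_ts - e_i) / kB_T)

print("adsorption probability at an empty site:", flip_probability(occ, 3, 3))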
also indicated in fig .[ fig : fig2](*a * ) are the forc minima and the equilibrium isotherm .note that the forc minima in fig .[ fig : fig2](*a * ) lie directly on the equilibrium isotherm .this is because such a system has one stable state for any given value of the potential , as defined by the continuous equilibrium curve .the uniformly positive value of the forc distribution in fig .[ fig : fig2](*b * ) reflects the convergence of the family of forcs with increasing .this convergence results from relaxation toward the equilibrium isotherm , at a rate which increases with the distance from equilibrium .it is interesting to note that , while it is difficult to see at this slow scan rate , the rate of approach to equilibrium decreases greatly along the first forc that dips below the critical coverage ( shown in bold in fig .[ fig : fig2](*a * ) ) .the forcs that lie completely in the range never enter into the disordered phase , and thus their approach to equilibrium is not hindered by jamming .this is a phenomenon that occurs when further adsorption in a disordered adlayer is hindered by the nearest - neighbor exclusion . as a result ,extra diffusion steps are needed to make room for the new adsorbates , and the system follows different dynamics than a system with an ordered adlayer .the forcs that dip below enter into the disordered phase , and thus their approach to equilibrium is delayed by jamming .this is reflected in the forc diagram by the florida - shaped `` peninsula '' centered around this forc in fig .[ fig : fig2](*b * ) .the effect of jamming is more pronounced at higher scan rates , or with a higher diffusion barrier , where the rate of adsorption is much faster than the rate of diffusion .the family of forcs and forc diagram at a higher scan rate , / mcss , are shown in fig .[ fig : highscan ] , and the forcs and forc diagram with a larger diffusion barrier are shown in fig .[ fig : lowdiff ] . in fig .[ fig : highscan ] , two distinct groups of forcs undergoing jammed and unjammed dynamics can be clearly seen .this is reflected in the forc diagram as a splitting of the `` peninsula '' into two `` islands '' of high values .a similar effect is seen in fig .[ fig : lowdiff ] , since also there the rate of adsorption is much faster than the rate of diffusion ( larger diffusion barrier ) . 
however , fig . [ fig : lowdiff ] (*a*) shows a slight difference between the forc minima and the equilibrium curve around the critical coverage . notice also in fig . [ fig : highscan ] (*a*) that even at a much higher scan rate than in fig . [ fig : fig2 ] ( nearly two orders of magnitude ) , the forc minima still follow the equilibrium curve very accurately . thus , the forc method should be useful to obtain the equilibrium adsorption isotherm quite accurately in experimental systems with slow equilibration rates .
[ fig . [ fig : highscan ] : (*a*) family of forcs at the higher scan rate ( in mev / mcss ) . the black curve in the middle shows the equilibrium isotherm , and the minima of each forc are also shown ( black filled circles ) . (*b*) forc distribution generated from the forcs shown in (*a*) , with the positions of the forc minima superimposed ( black filled circles ) ; the straight line corresponds to the forc whose minimum lies closest to the critical coverage . ]
[ fig . [ fig : lowdiff ] : same layout as the previous figure , but for the larger diffusion barrier ( in mev ) : (*a*) the family of forcs with the equilibrium isotherm and forc minima , and (*b*) the corresponding forc distribution with the forc minima and the curve closest to the critical coverage indicated . ]
since experimental implementation of the forc method should only require simple reprogramming of a potentiostat designed to carry out a standard cv experiment , we believe the method can be of significant use in obtaining additional dynamic as well as equilibrium information from such experiments for systems that exhibit electrochemical adsorption with related phase transitions . this research was supported by u.s . nsf grant no . dmr-0240078 , and by florida state university through the school of computational science , the center for materials research and technology , and the national high magnetic field laboratory . in this appendix we present a mapping between lattice - gas models of adsorption and discrete spin models of magnetic systems , and then introduce the forc method in the original magnetic language . the occupation variable in the lattice - gas model , is a binary variable , just like the magnetization variables : in the classical preisach model ( cpm ) . we therefore have the mappings and . as a result , the forc method can be applied to electrochemical adsorption , as well as to magnetic hysteresis . the cpm is based on the idea that a material consists of a number of elementary interacting `` particles '' or `` domains , '' called hysterons . the hysterons are assumed to have rectangular hysteresis loops between two states that have the same magnetization values , and , for all hysterons . a typical hysteresis loop for a hysteron is shown in fig . [ fig : hysteron ] . and are the up and down switching magnetic fields respectively . it is also assumed that the different hysterons have a distribution of reversal fields . in the cpm , the total magnetization can be written as a double integral over the hysteron switching fields , m = \int\!\!\int p(h_u , h_d)\, \hat\gamma_{h_u h_d}\, {\rm d}h_u\, {\rm d}h_d , where p(h_u , h_d) is the distribution of switching fields and the operator \hat\gamma_{h_u h_d} applied to the field history gives one hysteron state if the particle is switched up and the other if it is switched down . note that this depends on the field history and not only on the instantaneous value , which enables the cpm to model irreversible hysteresis behavior . in a typical forc analysis of a magnetic system , the magnetization is saturated in a positive applied magnetic field , and then the applied magnetic field is decreased continuously to a reversal field . the magnetic field is then increased back to saturation . a first - order reversal curve is the response of the magnetization to the increasing magnetic field ( ) . this is done for different values of , and a set of curves , , is collected . the forc distribution is defined as , where the tilde denotes a trivially different normalization from the one used here . thus , using the mapping given above , one arrives at eq . ( [ forc.definition ] ) as the definition of the forc distribution in an electrochemical system .
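the hysteron picture lends itself to a very small simulation: each hysteron carries an up-switching and a down-switching field, its state follows the applied field history, and the magnetization is the average over hysterons. the python sketch below implements this minimal classical preisach model; the gaussian choice of switching-field distribution and equal hysteron weights are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
h_u = rng.normal(1.0, 0.5, n)                   # up-switching fields
h_d = h_u - np.abs(rng.normal(1.5, 0.5, n))     # down-switching fields, h_d < h_u
state = -np.ones(n)                             # start from negative saturation

def apply_field(h):
    """update every hysteron for an applied field h and return the magnetization."""
    state[h >= h_u] = +1.0                      # switch up
    state[h <= h_d] = -1.0                      # switch down
    return state.mean()

# a reversal protocol: saturate up, come down to a reversal field, go back up
history = np.concatenate([np.linspace(-4, 4, 100),    # initial saturation
                          np.linspace(4, -0.5, 60),   # down to the reversal field
                          np.linspace(-0.5, 4, 60)])  # first-order reversal curve
m = [apply_field(h) for h in history]
print("magnetization at the reversal point:", round(m[159], 3))
print("final magnetization:", round(m[-1], 3))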
|
the first - order reversal curve ( forc ) method for analysis of systems undergoing hysteresis is applied to dynamical models of electrochemical adsorption . in this setting , the method can not only differentiate between discontinuous and continuous phase transitions , but can also quite accurately recover equilibrium behavior from dynamic analysis for systems with a continuous phase transition . discontinuous and continuous phase transitions in a two - dimensional lattice - gas model are compared using the forc method . the forc diagram for a discontinuous phase transition is characterized by a negative ( unstable ) region separating two positive ( stable ) regions , while such a negative region does not exist for continuous phase transitions . experimental data for forc analysis could easily be obtained by simple reprogramming of a potentiostat designed for cyclic - voltammetry experiments . _ * keywords : * _ first - order reversal curve ; hysteresis ; continuous phase transition ; discontinuous phase transition ; lattice - gas model ; monte carlo simulation ; cyclic - voltammetry experiments .
|
the first announcements of successful binary black hole simulations marked an important break - through in numerical relativity and triggered a burst of activity in the field .while most current simulations adopt some variation of the bssn formulation together with what have become standard coordinates " ( namely 1+log slicing and the gamma - driver " condition ) , different implementations differ in many details .most current , three - dimensional numerical relativity codes share one feature , though , namely cartesian coordinates .while cartesian coordinates have many desirable properties , there are applications , for example gravitational collapse and supernova calculations , for which spherical polar coordinates would be better suited .implementing a numerical relativity code in spherical polar coordinates poses several challenges .the first challenge lies in the equations themselves .the original version of the bssn formulation , for example , explicitly assumes cartesian coordinates ( by assuming that the determinant of the conformally related metric be one ) .this issue has been resolved by brown , who introduced a covariant formulation of the bssn equations that is well - suited for curvilinear coordinate systems ( compare ) .another challenge is introduced by the coordinate singularities at the origin and the axis , which introduce singular terms into the equations .although the regularity of the metric ensures that , analytically , these terms cancel exactly , this is not necessarily the case in numerical applications , and special care has to be taken in order to avoid numerical instabilities .several methods have been proposed to enforce regularity in curvilinear coordinates .one possible approach is to rely on a specific gauge , e.g. polar - areal gauge , together with a suitable choice of the dynamical variables .numerous different such methods have been implemented in spherical symmetry ( see , e.g. , for an overview ) ; examples in axisymmetry include .this approach has some clear limitations .it is not obvious how to generalize these methods to relax the assumption of axisymmetry ; moreover the restriction of the gauge freedom prevents adoption of the standard gauge " that proved to be successful in evolutions with the bssn formulation .an alternative method is to apply a regularization procedure , by which both the appropriate parity regularity conditions and local flatness are enforced in order to achieve the desired regularity of the evolution equations ( see for examples ) .typically , these schemes involve the introduction of auxiliary variables as well as finding evolution equations for these variables .the resulting schemes are quite cumbersome , which may explain why , to the best of our knowledge , no such scheme has been implemented without any symmetry assumptions . inyet an alternative approach , cordero - carrin _ et.al ._ recently adopted a partially implicit runge - kutta ( pirk ) method ( see also ) to evolve the hyperbolic , wave - like equations in the fully constrained formulation of the einstein equations ( see ) .essentially , pirk methods evolve regular terms in the evolution equations explicitly , and then use these updated values to evolve singular terms implicitly ( see and section [ sec : pirk ] below for details ) . 
following this success , montero & cordero - carrin , assuming spherical symmetry ,applied a second - order pirk method to the full set of the bssn einstein equations in curvilinear coordinates , and produced the first successful numerical simulations of vacuum and non - vacuum spacetimes using the covariant bssn formulation in spherical coordinates without the need for a regularization algorithm at the origin ( or without performing a spherical reduction of the equations , compare ) . in this paperwe present a new numerical code that solves the bssn equations in three - dimensional spherical polar coordinates without any symmetry assumptions .the code uses a second - order pirk method to integrate the evolution equations in time .this approach has the additional advantage that it imposes no restriction on the gauge choice .we consider a number of test cases to demonstrate that it is possible to obtain stable and robust evolutions of axisymmetric and non - axisymmetric spacetimes without any special treatment at the origin or the axis . the paper is organized as follows . in section [ basic_equations ]we present the basic equations ; we will review the covariant formulation of the bssn equations , and will then specialize to spherical polar coordinates . in section [ sec : numerics ]we will briefly review pirk methods and will then describe other specifics of our numerical implementation .in section [ sec : numerical_examples ] we present numerical examples , namely weak gravitational waves , `` hydro - without - hydro '' simulations of static and rotating relativistic stars , and single black holes . finally we summarize and discuss our findings in section [ sec : discussion ] .we also include two appendices ; in appendix [ appendixa ] we describe an analytical form of the flat metric in spherical polar coordinates that provides a useful test of the numerical implementation of curvature quantities , while in appendix [ appendixb ] we list the specific source terms for our pirk method applied to the bssn equations .throughout this paper we use geometrized units in which .indices denote spacetime indices , while represent spatial indices .we adopt brown s covariant form of the bssn formulation . in particular , we write the conformally related spatial metric as where is the physical spatial metric , and a conformal factor . in the original bssn formulation the determinant of the conformally related metric is fixed to unity , which completely determines the conformal factor .this approach is suitable when cartesian coordinates are used , but not in more general coordinate systems .we will pose a different condition on below , but note already that the advantage of this approach is that all quantities in this formalism may be treated as tensors of weight zero ( see also ) .we also denote as the conformally rescaled extrinsic curvature .slightly departing from brown s approach we assume this quantity to be trace - free , while brown allows to have a non - zero trace . in the above expression the physical extrinsic curvature and its trace . introducing a background connection ( compare ) we now define which, unlike the two connections themselves , transform as a tensor field .we also define the trace of these variables as it is not necessary for the background connection to be associated with any metric . 
in section [ sec : implementation ] below we will specialize to applications in spherical polar coordinates and hence will assume that the are associated with the flat metric in spherical polar coordinates . this assumption affects the equations in the remainder of this section in only one way , namely , we will assume that the riemann tensor associated with the connection vanishes , as is appropriate when the background metric is flat . finally , we define the connection vector as a new set of independent variables that are equal to the when the constraint holds . the vector plays the role of the `` conformal connection functions '' in the original bssn formulation , but , unlike the , the transform as a rank-1 tensor of weight zero ( compare exercise 11.3 in ) . in the following we will evolve the variables as independent variables , satisfying their own evolution equation . in order to determine the conformal factor we specify the time evolution of the determinant of the conformal metric . in this paper we adopt brown s `` lagrangian '' choice ( [ dgammadt ] ) . defining \partial_n \equiv \partial_t - \mathcal{L}_\beta , where \mathcal{L}_\beta denotes the lie derivative along the shift vector , we then obtain the following set of evolution equations
\begin{aligned}
\partial_n \bar\gamma_{ij} &= -\frac{2}{3}\,\bar\gamma_{ij}\,\bar D_k \beta^k - 2\alpha \bar A_{ij} , \\
\partial_n \bar A_{ij} &= -\frac{2}{3}\,\bar A_{ij}\,\bar D_k \beta^k - 2\alpha \bar A_{ik}\bar A^k{}_j + \alpha \bar A_{ij} K \\
&\quad + e^{-4\phi}\Big[ -2\alpha \bar D_i \bar D_j \phi + 4\alpha \bar D_i\phi\,\bar D_j\phi + 4 \bar D_{(i}\alpha\,\bar D_{j)}\phi - \bar D_i\bar D_j\alpha + \alpha \bar R_{ij} - 8\pi\alpha S_{ij} \Big]^{\rm TF} , \\
\partial_n \phi &= \frac{1}{6}\,\bar D_k\beta^k - \frac{1}{6}\,\alpha K , \\
\partial_n K &= \frac{\alpha}{3}K^2 + \alpha\bar A_{ij}\bar A^{ij} - e^{-4\phi}\big(\bar D^2\alpha + 2\bar D^i\alpha\,\bar D_i\phi\big) + 4\pi\alpha(\rho + S) , \\
\partial_n \bar\Lambda^i &= \bar\gamma^{jk}\hat D_j\hat D_k\beta^i + \frac{2}{3}\Delta\Gamma^i\,\bar D_j\beta^j + \frac{1}{3}\bar D^i\bar D_j\beta^j \\
&\quad - 2\bar A^{jk}\big(\delta^i{}_j\partial_k\alpha - 6\alpha\,\delta^i{}_j\partial_k\phi - \alpha\,\Delta\Gamma^i_{jk}\big) - \frac{4}{3}\alpha\bar\gamma^{ij}\partial_j K - 16\pi\alpha\bar\gamma^{ij}S_j ,
\end{aligned}
( compare equations ( 21 ) in ) . in the above equations , is the lapse function , denotes a covariant derivative that is built from the background connection ( and hence , in our implementation , associated with the flat metric in spherical polar coordinates ) and the superscript tf denotes the trace - free part . the matter sources , , and denote the density , momentum density , stress , and the trace of the stress as observed by a normal observer , respectively , and are defined by . here is the normal one - form on a spatial slice , and is the stress - energy tensor . we compute the ricci tensor associated with from ( [ ricci ] ) . in all of the above expressions we have omitted terms that include the riemann tensor associated with the connection , since these terms vanish for our case of a flat background . the hamiltonian constraint takes the form , while the momentum constraints can be written as ( see equations ( 16 ) and ( 17 ) in ) . we note that when and , which is suitable for cartesian coordinates , the above equations reduce to the traditional bssn equations . in the following , however , we will evaluate these equations in spherical polar coordinates . before the above equations can be integrated , we have to specify coordinate conditions for the lapse and the shift . unless noted otherwise we will adopt a non - advective version of what has become the `` standard gauge '' in numerical relativity . specifically , we use the `` 1+log '' condition for the lapse in the form and the `` gamma - driver '' condition for the shift in the form ( [ gammadriver ] ) ( compare ) .
these ( or similar ) conditions play a key role in the moving - puncture " approach to handling black hole singularities in numerical simulations .we now focus on spherical polar coordinates , and will assume that the are associated with the flat metric in spherical polar coordinates , , and , accordingly , the only non - vanishing components of the background connection are when implementing the above equations in spherical polar coordinates , care has to be taken that coordinate singularities do not spoil the numerical simulation .these singularities appear both at the origin , where , and on the axis where .even for a simple scalar wave , appearances of inverse factors of and in the laplace operator can pose a challenge for a numerical implementation . in section [ sec : numerics ] below we discuss a pirk method ( see also ) that handles these singularities very effectively .an additional challenge in general relativity is that these inverse factors of and appear through the dynamical variables themselves .components of the spatial metric , for example , scale with powers of and , the inverse metric then scales with inverse powers of these quantities , and numerical error affecting these terms may easily spoil the numerical evolution .it is therefore important to treat these appearances of and analytically .we therefore factor out suitable powers of and from components of all tensorial objects . and are absorbed in the unit vectors . ]we start by writing the conformally related metric as the sum of the flat background metric and a correction ( which is not assumed to be small ) , the flat metric is given by eq .( [ flatmetric ] ) , and we write the correction in the form we similarly rescale the extrinsic curvature as and the connection vector as we treat the shift and similarly , and finally rewrite the evolution equations ( [ evolution ] ) for the coefficients , and etc. we can compute the connection coefficients ( [ deltagamma ] ) from since we can compute the derivatives of the spatial metric in terms of the coefficients .direct calculation using the flat connection ( [ flatconnection ] ) yields the ( flat ) covariant derivative of the connection vector can similarly be expressed in terms of the as \dflat_\theta \bar \lambda^\theta & = & \displaystyle \frac{1}{r } \left ( \partial_\theta \lambda^\theta + \lambda^r \right ) \\[3 mm ] \dflat_\phi \bar \lambda^\theta & = & \displaystyle \frac{1}{r } \left ( \partial_\phi \lambda^\theta - \cos \theta \lambda^\phi \right ) \\ \\[3 mm ] \dflat_r \bar \lambda^\phi & = & \displaystyle \frac{1}{r \sin \theta }\partial_r \lambda^\phi \\[3 mm ] \dflat_\theta \bar \lambda^\phi & = & \displaystyle \frac{1}{r\sin\theta } \partial_\theta \lambda^\phi \\[3 mm ] \dflat_\phi \bar \lambda^\phi & = & \displaystyle \frac{1}{r \sin\theta } \left ( \partial_\phi \lambda^\phi + \sin \theta \lambda^r + \cos \theta \lambda^\theta \right ) \end{array}\ ] ] using the above expressions , we can compute the ricci tensor ( [ ricci ] ) as follows . 
in the first term on the right - hand side of ( [ ricci ] )we write the second covariant derivative of as a sum of first partial derivatives of the quantities and ( flat ) connection terms multiplying the , we then insert the expressions ( [ metric_derivs ] ) into the first term on the right - hand side and evaluate all derivatives explicitly , so that these terms can be written in terms of second partial derivatives of the coefficients .once this step has been completed , we add those remaining terms for which the flat background connection ( [ flatconnection ] ) is nonzero .the resulting equations are rather cumbersome , and it is easy to introduce typos in the numerical code .the numerical examples of section [ sec : numerical_examples ] are excellent tests of the code . in appendix [ appendixa ]we describe another analytical test that we have found very useful to check our implementation of curvature quantities . as a final comment we note that the condition ( [ dgammadt ] ) determines the time evolution of the determinant of the conformally related metric , but not its initial value .the latter can be chosen freely in this scheme , in particular it does not need to be chosen equal to that of the background metric ( unlike in the original bssn formulation ) .for some of our numerical simulations , however , in particular for the rotating star simulations of section [ sec : rot_star ] , we found that rescaling the conformally related metric so that its determinant becomes improved the stability of the simulation , so that it required a smaller coefficient in the kreiss - oliger dissipation term ( [ ko ] ) below .the origin of the numerical instabilities in curvilinear coordinate systems are related to the presence of stiff source terms in the equations , e.g. factors of or that become arbitrary large close to the origin or the axis . in the followingwe will refer to these terms as `` singular terms '' .pirk methods evolve all other , i.e. regular , terms in the evolution equations explicitly , and then use these updated values to evolve the singular terms implicitly .this strategy implies that the computational costs of pirk methods are comparable to those of explicit methods .the resulting numerical scheme does not need any analytical or numerical inversion , but is able to provide stable evolutions due to its partially implicit component .we refer to for a detailed derivation of pirk methods ( up to third order ) , and limit our discussion here to a simple description of the second - order pirk method that is implemented in our code .consider a system of partial differential equations u_t = _ 1 ( u , v ) , + v_t = _ 2 ( u ) + _ 3 ( u , v ) , [ e : system ] where , and are general non - linear differential operators .we will denote the corresponding discretized operators by , and , respectively .we will further assume that and contain only regular terms , and hence will update these terms explicitly , whereas the operator contains the singular terms and will therefore be treated partially implicitly .note that is assumed to depend on only . in the case of the bssn equations this holds for almost all variables ;the one exception can be treated as discussed in the paragraph below equation ( [ l3lambda ] ) in appendix [ appendixb ] , where we provide the exact form of the source terms . in our second - order pirk schemewe update the variables and from an old timestep to a new timestep in two stages . 
in each of these two stages ,we first evolve the variable explicitly , and then evolve the variable taking into account the updated values of for the evaluation of the singular operator . for the system of equations ( [ e : system ] ) , the first stage u^(1 ) = u^n + t l_1 ( u^n , v^n ) , + v^(1 ) = v^n + t ,is followed by the second stage u^n+1 = , + v^n+1 = v^n + . in the first stage , is evolved explicitly ; the updated value is used in the evaluation of the operator for the computation of . in the second stage , again evolved explicitly , and the updated value is used in the evaluation of the operator for the computation of the updated values . our pirk method is stable as long as the timestep is limited by a courant condition ; see eq .( [ courant ] ) below .we include all singular terms appearing in the sources of the equations in the operator .firstly , the conformal metric components , , the conformal factor , , the lapse function , , and the shift vector , , are evolved explicitly ( as is evolved in the previous pirk scheme ) ; secondly , the traceless part of the extrinsic curvature , , and the trace of the extrinsic curvature , , are evolved partially implicitly , using updated values of , , and ; then , the are evolved partially implicitly , using the updated values of , , , , and .finally , is evolved partially implicitly , using the updated values of the previous quantities .lie derivative terms and matter source terms are always included in the explicitly treated parts . in appendix [ appendixb ] , we give the exact form of the source terms included in each operator .we adopt a centered , fourth - order finite differencing representation of most spatial derivatives . for each grid point, the finite - differencing stencil therefore involves the two nearest neighbors in each direction ( see fig .[ fig1 ] ) .an exception from our fourth - order differencing are advective derivatives along the shift , for which we use a second - order ( one - sided ) upwind scheme . because of the second - order time evolution , and the second - order advective terms, our scheme is overall second - order accurate , even though for some cases with vanishing shift we have found that the error appears to be dominated by the fourth - order terms . .grid points , marked by the crosses , are placed at the center of grid cells , so that no grid point ends up at the center ( ) or on the axes ( or ) .our interior grid , bordered by solid lines in the figure , covers the region and ( as well as ) . as suggested by the two highlighted stencils ,our fourth - order differencing scheme requires two levels of ghost zones outside of the interior grid , indicated by the dotted lines.,width=326 ] we adopt a cell - centered grid , as shown schematically in fig .specifically , we divide the physical domain covered by our grid , , and into cells with uniform coordinate size because of our fourth - order finite differencing scheme we need to pad the interior grid with two layers of ghost zones . except at the outer boundary, each ghost zone corresponds to some other zone in the interior of the grid ( with some other value of and ) , so that these ghosts zones can be filled by copying the corresponding values from interior grid points . as a concrete example , consider a grid point with angular coordinates and , say , in the innermost radial zone ( highlighted by a ( blue ) filled circle in fig . [ fig1 ] ) . 
evaluating the partial derivative with respect to at this pointrequires two grid points that , formally , have negative radii .we can fill these two required ghost points by finding the corresponding points in the interior of the grid , which have angular coordinates and .similarly , evaluating a derivative with respect to for a point with angular coordinates next to the axis ( see the ( red ) filled square in fig .[ fig1 ] ) requires ghost points that can be filled by finding the corresponding grid points with azimuthal angle in the interior of the grid ..parity conditions for components of vectors and tensors as implemented in our coordinate - based code . components of vectors and tensors have to be multiplied with the corresponding sign when they are copied into ghost zones at the center or the axis . [ cols="^,^,^,^",options="header " , ] for scalar functions the corresponding function values can be copied immediately , but for components of vectors or tensors , expressed in spherical polar coordinates , a possible relative sign has to be taken into account .essentially , this occurs because , in spherical polar coordinates , the unit vectors may point into the opposite physical direction when we identify a ghost zone with an interior point , i.e. when we go from to or .we list these relative sign changes , as implemented in our coordinate - based code , in table [ tab1 ] .we also require two sets of two ghost zones for , which can be filled directly using periodicity . at the outer boundarywe also require two ghost zones , as suggested by the ( red ) squared stencil in fig .we impose a sommerfeld boundary condition , which is an approximate implementation of an outgoing wave boundary condition , to fill these ghost zones . in our coordinate - based codewe implement this condition by tracing an outgoing radial characteristic from each of the outer boundary grid points back to the previous time level .we then interpolate the corresponding function to the intersection of that characteristic and the previous time level , and copy that interpolated value , multiplied by a suitable fall - off in , into the boundary grid point .we assume a fall - off with for all metric variables ( i.e. , , and ) as well as the lapse , but a fall - off for the shift as well as .the pirk method of section [ sec : pirk ] is stable as long as the time step is limited by a courant - friedrichs - lewy condition . in order to evaluate this condition we first find the smallest coordinate distance between any two grid - points in our cell - centered , spherical polar grid .this minimum distance is approximately we then set where we have chosen a courant factor for all simulations in this paper .it is a well - known disadvantage of spherical polar coordinates that the accumulation of gridpoints in the vicinity of the origin leads to a very severe limit on the timestep .we will discuss this issue in greater detail in section [ sec : discussion ] .we use kreiss - oliger dissipation to suppress the appearance of high frequency noise at late times . specifically , we add a term of the form to the right hand side of the evolution equation for each dynamical variable . here is a dimensionless coefficient which we have chosen between 0 ( for some of our short - time evolutions ) and 0.001 for the rotating neutron star simulation in sect .[ sec : rot_star ] .as a first test of our codes we consider small - amplitude gravitational waves on a flat minkowski background . 
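before turning to these tests , the two - stage time update described above can be summarized in code ; the stage weights below correspond to one common second - order pirk scheme , read off the verbal description in this section , and should be checked against the original pirk references before being relied upon .

```python
# Sketch of a two-stage, second-order PIRK update for the system
#   u_t = L1(u, v),   v_t = L2(u) + L3(u, v),
# with L1 and L3 treated explicitly and L2 (the singular terms, depending
# on u only) partially implicitly.  The weights are an assumed standard
# second-order choice, not copied from the paper.
def pirk2_step(u, v, dt, L1, L2, L3):
    # first stage: u explicitly, then v using the updated u inside L2
    u1 = u + dt * L1(u, v)
    v1 = v + dt * (0.5 * (L2(u) + L2(u1)) + L3(u, v))

    # second stage: u explicitly again, then v using the newest u inside L2
    u_new = 0.5 * (u + u1 + dt * L1(u1, v1))
    v_new = v + 0.5 * dt * (L2(u) + L2(u_new) + L3(u, v) + L3(u1, v1))
    return u_new, v_new


# toy usage: a scalar pair with a stiff 1/r^2 coefficient near the origin
if __name__ == "__main__":
    r = 1.0e-3                                   # "close to the origin"
    L1 = lambda u, v: v
    L2 = lambda u: -u / r**2                     # singular term, u only
    L3 = lambda u, v: 0.0 * v
    u, v = 1.0, 0.0
    for _ in range(1000):
        u, v = pirk2_step(u, v, dt=1.0e-4, L1=L1, L2=L2, L3=L3)
    print(u, v)                                  # solution stays bounded
```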
following teukolsky we construct an analytical , linear solution for quadrupolar ( ) waves from a function where the constant is related to the amplitude of the wave and to its wavelength ( see also section 9.1 in ) .we set , by which all length scales become dimensionless .we will consider axisymmetric ( ) and non - axisymmetric ( ) modes separately .for an axisymmetric small - amplitude gravitational wave at different instances of time . for this simulationwe used a grid of size and imposed the outer boundary at .we show data as a function of in the ( arbitrary ) direction and .differences between the numerical results ( marked by crosses ) and the analytical solution ( solid lines ) are smaller than the width of the lines in this graph.,width=326 ] as a function of time for a small - amplitude , axisymmetric gravitational wave .we show results for simulations with a grid of size , for , and , with the outer boundary imposed at . at these early times, the error appears to be dominated by the fourth - order differencing of the spatial derivatives.,width=326 ] we first consider axisymmetric waves .since these solutions are independent of the coordinate , we may choose as small as possible ( which is in our code ) without loss of accuracy .we also choose a small amplitude of , so that deviations from the analytic solution , which is accurate only to linear order in , are dominated by our finite - difference error , and not by terms that are higher - order in .in the following we show results for a numerical grid with grid points , where , or , and imposing the outer boundary at . for these simulations we used the 1+log lapse condition ( [ 1+log ] ) , but chose a vanishing shift instead of the gamma - driver condition ( [ gammadriver ] ) . in fig .[ fig2 ] we show snapshots of the metric function at different instances of time for our highest - resolution simulation with . for each time, we include the numerical results as crosses , as well as the analytical solution as a solid line .the differences between the numerical results and analytical solution are well below the resolution limit of this graph , so that the two can not be distinguished in this figure . in fig .[ fig3 ] we show a convergence test for these waves . specifically , we compute the -norm of the difference between the analytical solution and the analytical solution, is the coordinate volume of the numerical grid . in fig .[ fig3 ] we show these norms as a function of time for , and .the norms are rescaled with a factor ; the convergence of the resulting error curves indicates that , at these early times , the error is dominated by the fourth - order differencing of the spatial derivatives . in spherical polar coordinates , the courant condition ( [ courant ] ) limits the time step to such small values that the second - order errors associated with our pirk method are smaller than the fourth - order error of our spatial derivatives ( for vanishing shift ) . for a non - axisymmetric small - amplitude gravitational wave at different instances of time . for this simulation we used a grid of size and imposed the outer boundary at ; we show data as a function of in the direction and .numerical results are marked by the crosses , while the analytical solution is shown as the solid line.,width=326 ] non - axisymmetric gravitational waves represent a rare example of an analytical , time - dependent , three - dimensional , albeit weak - field solution to the einstein equations . 
clearly , this solution represents a stringent test for our code . in fig .[ fig4 ] we show results for an wave , again for an amplitude . as in fig .[ fig2 ] , we graph solutions for as functions of at different instances of time . again , our numerical solution ( marked by crosses ) can hardly be distinguished from the analytical solution ( shown as solid lines ) . as a test of strong - field , butregular solutions we consider spacetimes containing relativistic stars . in general , this requires evolving the stellar matter self - consistently with the gravitational fields , for example by solving the equations of relativistic hydrodynamics . since this is beyond the scope of this paper , we here adopt the `` hydro - without - hydro '' approach suggested by . in this approach , which can also be described as an `` inverse - cowling approximation '', we leave the matter sources fixed , and evolve only the gravitational fields . in this way, it is possible to assess the stability of a spacetime evolution code , and its capability of accurately evolving strong but regular gravitational fields in spacetimes with static matter , without having to worry about the hydrodynamical evolution .these simulations serve as both a testbed and a preliminary step towards fully relativistic hydrodynamical simulations of stars . in this sectionwe consider static and uniformaly rotating stars separately .we first consider non - rotating relativistic stars , described by the tolman - oppenheimer - volkoff ( tov ) solution .we focus on a polytropic tov star with polytropic index , and with a gravitational mass of about 85% of the maximum - allowed mass . for this model ,the central density is about 40% of that of the maximum mass model .we evolved this star with the 1+log slicing condition for the lapse ( [ 1+log ] ) , but kept the shift fixed to zero . because the spacetime is spherically symmetric, we could choose both and as small as possible ( ) without loss of accuracy .even for very modest grid resolutions in the radial direction ( e.g. , with the outer boundary imposed at four times the stellar radius ) , we found that the gravitational fields settle down into an equilibrium that is similar to the initial data .after this initial transition , which is caused by the finite - difference error , the stellar surface as well as the outer boundaries ( see ) , the solution remains stable . 
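the fixed matter source for such a `` hydro - without - hydro '' run has to be generated only once , by solving the tov equations for the chosen polytrope ; the following sketch does this with scipy ( the polytropic constant , index and central density below are illustrative placeholders , not necessarily the stellar model used in the paper ) .

```python
# Sketch of a TOV integration providing the fixed matter profile for a
# "hydro-without-hydro" run (geometrized units G = c = 1).  EOS parameters
# and the central density are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

K, Gamma = 100.0, 2.0          # polytropic EOS  P = K rho0^Gamma (assumed)
rho0_c = 1.28e-3               # central rest-mass density (assumed)

def tov_rhs(r, y):
    P, m = y
    if P <= 0.0:
        return [0.0, 0.0]
    rho0 = (P / K) ** (1.0 / Gamma)
    e = rho0 + P / (Gamma - 1.0)                 # total energy density
    dPdr = -(e + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * e
    return [dPdr, dmdr]

def surface(r, y):             # stop once the pressure has dropped to ~zero
    return y[0] - 1.0e-12
surface.terminal = True

P_c = K * rho0_c ** Gamma
sol = solve_ivp(tov_rhs, [1.0e-6, 100.0], [P_c, 0.0],
                events=surface, max_step=0.01, rtol=1e-10)

R, M = sol.t[-1], sol.y[1, -1]
print(f"areal radius R = {R:.3f}, gravitational mass M = {M:.4f}, M/R = {M/R:.4f}")
```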
and the lapse for a rapidly rotating star ( see text for details ) .we show both functions both at the initial time , and at a late time .we also show both functions along rays in two different directions , one very close to the equator , the other pointing close to the pole .both profiles remain very similar to their initial data throughout the evolution.,width=326 ] the evolution of the spacetime of a rapidly rotating relativistic star is a more demanding test than the previous one , as it breaks spherical symmetry and instead involves axisymmetric non - vacuum initial data in the strong gravity regime .the initial data used for this test are the numerical solution of a stationary and axisymmetric equilibrium model of a rapidly and uniformly rotating relativistic star , which is computed using the lorene code .we consider a uniformly rotating star with the same polytropic equation of state as the non - rotating model of sect .[ sec : tov ] .our particular model has the same central rest - mass density as that non - rotating model , but rotates at of the allowed mass - shedding limit ( for a star of that central density ) ; expressed in terms of the gravitational mass , the corresponding spin period is approximately 157 .the ratio of the polar to equatorial coordinate radii for this model is . for this simulationwe adopted both the 1+log condition for the lapse ( [ 1+log ] ) and the gamma - driver condition for the shift ( [ gammadriver ] ) .for this test we adopted a grid of size , and imposed the outer boundary at , which equals four times the equatorial radius . in fig .[ fig5 ] we show the initial and late - time profiles of the conformal exponent and the lapse , both in a direction close to the equator and close to the axis .evidently , both functions remain very close to their initial values throughout the evolution , as they should . in this section we present results for two different simulations involving schwarzschild black holes . in section [ sec : trumpet ] we evolve a schwarzschild black hole in a trumpet " geometry , which , in the limit of infinite resolution , is a time - independent solution to the einstein equations given our slicing conditions ( [ 1+log ] ) . in section [ sec : wormhole ] we adopt wormhole initial data and follow the coordinate transition to a trumpet geometry . , and its initial value , as a function of time . for these simulations we used a grid of size for , 2 , 4 and 8 , and imposed the outer boundary at .we rescale all differences with , so that the convergence of these lines demonstrates second - order convergence.,width=326 ] maximally sliced trumpet data represent a time - independent slicing of the schwarzschild spacetime that satisfies our slicing condition ( [ 1+log ] ) .the solution can be expressed analytically in isotropic coordinates , albeit only in parametrized form . in this sectionwe adopt these trumpet data as initial data , so that , in the continuum limit , the solution should remain independent of time . for trumpet data the conformal factor diverges at .while , on our cell - centered grid , functions are never evaluated directly at the origin , derivatives in the neighborhood of the singularity at the origin are clearly affected by the singular behavior of the conformal factor . however , the great virtue of the moving - puncture " gauge conditions ( [ 1+log ] ) and ( [ gammadriver ] ) is that these errors only affect the neighborhood of the puncture , and do not spoil the evolution globally . 
in the followingwe will demonstrate these properties in our code using spherical polar coordinates . for the simulations presented in this section we adopted a numerical grid of size size for , 2 , 4 and 8 , with the outer boundary imposed at . in fig .[ fig6 ] we show results for the maximum of the radial component of the shift vector as a function of time . specifically , we show the difference between these maximum values and their initial values .since our trumpet data represent a time - independent solution to the einstein equations and our slicing and gauge conditions ( [ 1+log ] ) and ( [ gammadriver ] ) , these differences should converge to zero as the grid resolution is increased . in fig .[ fig6 ] we multiply the differences with ; the convergence of the resulting lines therefore demonstrates second - order convergence of the simulation .apparently the error in these simulations is dominated by the second - order advective terms .we also note that the outer boundary introduces error terms that depend on both the grid resolution and the location of the outer boundary .since the latter does not decrease when we increase the grid resolution , the code converges more slowly in regions that have come into causal contact with the outer boundary .we therefore include in fig .[ fig6 ] only sufficiently early times , before the location of the shift s maximum is affected by the outer boundary . , the lapse function , and the shift , showing the coordinate transition from wormhole initial data to time - independent trumpet data .the ( blue ) long - dashed lines represent the initial data at , the ( red ) dashed lines show our numerical results at time , and the ( black ) solid lines show the analytical trumpet solution .the initial data appear double - valued because we graph this functions as a function of the areal radius ( see text for details ) . for these simulations we adopted a grid size with the outer boundary imposed at .( in these graphs we did not include the innermost two grid points , which are affected by the singular behavior of the puncture.),width=326 ] as a function of time .we show results for different grid sizes for , 2 , 4 and 8 , with the outer boundary imposed at . after a brief transition from the initial data , the shift settles down into a new equilibrium .for relatively coarse grid resolutions the shift experiences a slow drift , but this drift disappears as the grid resolution is increased.,width=326 ] ) at time . as in fig .[ fig8 ] we show results for grid sizes for , 2 , 4 and 8 , with the outer boundary imposed at .all results are rescaled with ; the convergence of the resulting lines demonstrates second - order convergence of our code.,width=326 ] we now turn to evolutions of wormhole initial data , representing a horizontal slice through the penrose diagram of a schwarzschild black hole .for these data , the conformal factor is given by the conformally related metric is flat , , and the extrinsic curvature vanishes , .instead of choosing the killing lapse and killing shift , which would leave these data time - independent , we choose , at , a pre - collapsed " lapse and a vanishing shift , .we then evolve the lapse and the shift with the 1+log condition ( [ 1+log ] ) and the gamma - driver condition ( [ gammadriver ] ) . 
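for reference , wormhole initial data with a `` pre - collapsed '' lapse can be set up in a few lines on the cell - centred grid ; the specific conformal factor ( the schwarzschild solution in isotropic coordinates , $\psi = 1 + M/(2r)$ ) and the choice $\alpha = \psi^{-2}$ for the initial lapse are standard puncture - style choices and are stated here as assumptions .

```python
# Sketch: wormhole (puncture) initial data for a single Schwarzschild black
# hole on a cell-centred spherical polar grid.  The conformal factor
# psi = 1 + M/(2 r) and the "pre-collapsed" lapse alpha = psi**(-2) are
# standard choices and assumptions on the editor's part.
import numpy as np

def puncture_initial_data(r, M=1.0):
    """r : array of cell-centred radii (never exactly zero on this grid)."""
    psi = 1.0 + M / (2.0 * r)          # conformal factor, singular at r = 0
    phi = np.log(psi)                  # conformal exponent, e^{phi} = psi
    alpha = psi**(-2)                  # pre-collapsed initial lapse
    beta = np.zeros((3,) + r.shape)    # vanishing initial shift
    # conformally related metric = flat metric, extrinsic curvature = 0
    return phi, alpha, beta

# cell-centred radial grid: r_j = (j + 1/2) * dr, so r = 0 is never hit
N_r, r_out = 128, 16.0
dr = r_out / N_r
r = (np.arange(N_r) + 0.5) * dr
phi, alpha, beta = puncture_initial_data(r)
```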
since these initial data do not represent a time - independent solution to the einstein equations together with our gauge conditions , we observe a non - trivial time evolution that represents a coordinate evolution .for the non - advective " 1+log condition ( [ 1+log ] ) , this coordinate transition results in the maximally sliced trumpet geometry of section [ sec : wormhole ] . in fig .[ fig7 ] we show this coordinate transition for the conformal exponent , the lapse and the shift .we note that some care has to be taken when the numerical and analytical results are compared .the analytical solution of assumes .we also choose in our initial data , but this relation is not necessarily maintained during the time evolution , so that the numerical and analytical solutions may be represented in different spatial coordinate systems ( but on the same spatial slice ) . in order to compare the two solutions we therefore graph all quantities as a function of the gauge - invariant areal radius .since for wormhole data each value of corresponds to two values of the isotropic radius , the initial data in fig .[ fig7 ] appear double - valued . for these comparisons with the analytical solution we also graph the orthonormal component of the shift rather thanthe coordinate component itself .[ fig7 ] clearly shows the coordinate transition from wormhole initial data to the trumpet equilibrium solution . in fig .[ fig8 ] we show the maximum of the radial shift as a function of time . after a brief period of a coordinate transitionthe shift settles down into a new equilibrium .we show results for grid sizes for , 2 , 4 and 8 , with the outer boundary imposed at .the graph shows that differences between the different results decrease rapidly as the grid resolution is increased .for our coarser grid resolutions the shift still experiences a slow drift after the initial transition , but this drift decreases as the grid resolution is increased . finally , in fig .[ fig9 ] , we show profiles of the violations of the hamiltonian constraint ( [ ham ] ) at time . in this graphall results are rescaled with ; the convergence of the resulting lines demonstrates that the numerical error in these simulations is again dominated by the second - order implementations of the advective shift terms , and possibly the time evolution .in this paper we demonstrate that a pirk method can be used to solve the einstein equations in spherical polar coordinates without any need for any regularization at the origin or on the axis .specifically , we integrate a covariant version of the bssn equations in three spatial dimensions without any symmetry assumptions . to the best of our knowledge ,these calculations represent the first successful three - dimensional numerical relativity simulations using spherical polar coordinates .we consider several test cases to assess the stability , accuracy and convergence of the code , namely weak - field teukolsky " gravitational waves , hydro - without - hydro " simulations of static and rotating relativistic stars , and single black holes .spherical polar coordinates have several advantages and disadvantages over cartesian coordinates . 
at least in single - grid calculations ,spherical polar coordinates allow for a more effective allocation of the numerical grid points for applications that involve one center of mass , for example gravitational collapse of single stars or supernovae .this is true even for uniform grids , which we adopt in this paper , but curvilinear coordinate systems also facilitate the use of non - uniform grids ( e.g. a logarithmic radial coordinate ) to achieve a high resolution near the origin while keeping the outer boundary sufficiently far .spherical polar coordinates have another strong advantage over cartesian coordinates . in simulations of supernovae or gravitational collapse , for example , the shape of the stellar objects is not well represented by cartesian grids .this mismatch between the symmetry of the object and the grid creates direction - dependent numerical errors , which are observed to trigger modes that grow in time . since spherical polar coordinates mimic the symmetry of collapsing stars more accurately , we expect that this problem can at least be reduced with these coordinates .however , spherical polar coordinates also have disadvantages .one of these disadvantages is of practical nature : the equations in spherical polar coordinates include many more terms than those in cartesian coordinates , which makes the numerical implementation more cumbersome and error prone .spherical polar coordinates also introduce coordinate singularities that traditionally have created many numerical problems ; but these problems can be avoided when using a pirk method .perhaps the most severe disadvantage of spherical polar coordinates is caused by the courant limitation on the time step .as shown in eq .( [ courant ] ) , the close proximity of grid points close to the origin limits the size of the time steps to increasingly small values as the resolution is increased . in three - dimensional simulations, decreases approximately with the product .this is a severe disadvantage compared to cartesian coordinates where typically .however , this problem is not unique to numerical relativity , and instead is well - known from dynamical simulations in spherical polar coordinates in any field .accordingly , several different approaches to either solving or reducing this problem have been suggested .one possible approach is to reduce the grid resolution in the angular directions , and , close to the origin .however , for many applications the angular dependence of the solution may be independent of the radius , so that this approach might severely limit the accuracy of the results . it may also be possible to replace the pirk method in a sphere around the origin with a completely implicit scheme , so that the time step there is no longer limited by the courant condition ( [ courant ] ) .similar implicit / explicit ( imex ) `` split - by - region '' schemes have been suggested , for example , in in the context of spectral schemes .finally , the yin - yang " method suggested in mitigates the restrictions imposed by the courant condition ( [ courant ] ) as follows .note that the smallest physical distance between grid points , which in turn limits the time step , occurs next to the axis . 
in the yin - yang method ,the unit sphere is therefore covered by two different grids that are rotated by an angle of 90 degrees with respect to each other .each one covers only a region around its equator , thereby avoiding the most severe limitation on the time step next to the axis , but combined both grids cover the entire unit sphere . despite the small time step ,however , we have been able to complete all simulations presented in this paper even with a serial code in fact , some of our simulations were performed on a laptop computer .twb and icc gratefully acknowledge support from the alexander - von - humboldt foundation , twb would also like to thank the max - planck - institut fr astrophysik for its hospitality .this work was supported in part by the deutsche forschungsgemeinschaft ( dfg ) through its transregional center sfb / tr7 `` gravitational wave astronomy '' , and by nsf grant phy-1063240 to bowdoin college .in spherical polar coordinates , in particular in the absence of any symmetry assumptions , the numerical implementation of curvature quantities involves a significant number of terms that can easily introduce mistakes ( see section [ sec : implementation ] ) .one way of testing this part of the numerical code is to compare with known analytical solutions , for example for the schwarzschild metric .however , most analytical solutions feature symmetries ( e.g. spherical symmetry for schwarzschild ) that simplify the problem in the spherical polar coordinates of our code . as a consequence ,many terms vanish identically for these solutions , so that not all terms in the code are tested . in this appendixwe describe a simple test that is also analytical , but is neither spherically nor axially symmetric , and hence a very stringent test . starting with the flat metric in cartesiancoordinates we introduce a coordinate transformation of each coordinate that only depends on that coordinate itself ; the resulting metric then takes the form where , and are arbitrary functions . transforming this metric into spherical polar coordinates leads to a metric for which , in general , all coefficients are non - zero and depend on the coordinates in potentially complicated ways. in cartesian coordinates , the only non - vanishing christoffel symbols are where the prime denotes a derivative with respect to the argument .given that the transform like tensors , we can obtain the corresponding coefficients in spherical polar coordinates with a simple coordinate transformation . for sufficiently general functions , and ,all 18 components of in spherical polar coordinates will be non - zero .this yields analytical expressions for the connection coefficients ( [ connection ] ) that can then be compared with numerical results ., for the flat metric ( [ flat_metric ] ) with functions ( [ flat_functions ] ) , evaluated using grid sizes for , 2 , 4 and 8 .all values are rescaled with , so that the convergence of these results indicates fourth - order convergence of our implementation.,width=326 ] similarly , the connection functions are given by in cartesian coordinates , and can be transformed into spherical polar coordinates with a simple coordinate transformation .finally , all components of the ricci tensor in spherical polar coordinates should converge to zero , since the metric ( [ flat_metric ] ) is still flat . in fig .[ fig10 ] we show numerical examples for all components of are non - zero , but converge to zero as the grid resolution is increased . 
in the graphwe rescale all results with , so that the convergence of the resulting quantities indicates fourth - order convergence of our implementation of the ricci tensor , as expected .we evolve the evolution eqs .( [ evolution ] ) , ( [ 1+log])-([gammadriver ] ) using a second - order pirk method . in this appendixwe provide details on how we split the right - hand sides of these equations into the explicit and partially implicit operators .we start each time step by evolving the conformal metric components , , the conformal factor , the lapse function , , and the shift vector , , explicitly , i.e. , all the source terms of the evolution equations of these variables are included in the operator of the second - order pirk method .we then evolve the traceless part of the extrinsic curvature , , and the trace of the extrinsic curvature , , partially implicitly . more specifically , the corresponding and operators associated with the evolution equations for and in terms of the original bssn variable , related to through eq . ( [ acap ] ) , are ^{\rm tf } , \label{l2a } \\ l_{3(\bar a_{ij } ) } & = - \frac{2}{3 } \bar a_{ij } \bar d_k \beta^k - 2 \alpha \bar a_{ik } \bar a^k { } _ j + \alpha \bar a_{ij } k , \\l_{2(k ) } & = - e^{- 4 \phi } ( \bar d^2 \alpha + 2 \bar d^i \alpha \bar d_i \phi ) + \alpha \bar a_{ij } \bar a^{ij } , \\l_{3(k ) } & = \frac{\alpha}{3 } k^2.\end{aligned}\ ] ] the are evolved partially implicitly , using the updated values of , , , , and . in terms of the original bssn variable , related to through eq .( [ lambda ] ) , the operators are we note that the evaluation of the ricci tensor in eq .( [ l2a ] ) requires updated values of before they become available .it is possible to either replace these updated values with old values , or to update the provisionally in a purely explicit step , use these values in eq .( [ l2a ] ) , but then overwrite these values after the are updated partially implicitly .we have used the latter approach in the simulations presented in this paper .finally , the are evolved partially implicitly , using the updated values of the previous quantities , matter source terms and lie derivative terms are always included in the explicitly treated parts .
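as a concrete illustration of the analytical test described in appendix [ appendixa ] , the following sympy sketch stretches each cartesian coordinate by its own function , transforms the ( still flat ) metric to spherical polar coordinates , and verifies that its ricci tensor vanishes ; the particular stretching functions are arbitrary examples , and the symbolic simplification can take a little while .

```python
# Sketch of the flat-metric test of appendix A: stretch each Cartesian
# coordinate by its own function, pull the flat metric back to spherical
# polar coordinates, and check that the Ricci tensor vanishes identically.
# The stretching functions f, g, h are arbitrary illustrative choices.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
coords = (r, th, ph)
x = r * sp.sin(th) * sp.cos(ph)
y = r * sp.sin(th) * sp.sin(ph)
z = r * sp.cos(th)

f = lambda u: u + sp.Rational(1, 10) * u**3      # arbitrary stretchings
g = lambda u: u + sp.Rational(1, 10) * u**2
h = lambda u: u + sp.Rational(1, 20) * u**3

X, Y, Z = f(x), g(y), h(z)                       # stretched Cartesian coords

# pull ds^2 = dX^2 + dY^2 + dZ^2 back to (r, theta, phi): g = J^T J
J = sp.Matrix([[sp.diff(c, q) for q in coords] for c in (X, Y, Z)])
gmet = sp.simplify(J.T * J)                      # generally all 6 components nonzero

ginv = gmet.inv()
Gam = [[[sum(sp.Rational(1, 2) * ginv[i, l] * (sp.diff(gmet[l, j], coords[k])
             + sp.diff(gmet[l, k], coords[j]) - sp.diff(gmet[j, k], coords[l]))
             for l in range(3)) for k in range(3)] for j in range(3)]
       for i in range(3)]

def ricci(j, k):
    expr = sum(sp.diff(Gam[i][j][k], coords[i]) - sp.diff(Gam[i][j][i], coords[k])
               + sum(Gam[i][i][l] * Gam[l][j][k] - Gam[i][k][l] * Gam[l][j][i]
                     for l in range(3)) for i in range(3))
    return sp.simplify(expr)

print([ricci(j, k) for j in range(3) for k in range(j, 3)])   # all zeros
```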
|
in the absence of symmetry assumptions most numerical relativity simulations adopt cartesian coordinates . while cartesian coordinates have some desirable properties , spherical polar coordinates appear better suited for certain applications , including gravitational collapse and supernova simulations . development of numerical relativity codes in spherical polar coordinates has been hampered by the need to handle the coordinate singularities at the origin and on the axis , for example by careful regularization of the appropriate variables . assuming spherical symmetry and adopting a covariant version of the bssn equations , montero and cordero - carrión recently demonstrated that such a regularization is not necessary when a partially implicit runge - kutta ( pirk ) method is used for the time evolution of the gravitational fields . here we report on an implementation of the bssn equations in spherical polar coordinates without any symmetry assumptions . using a pirk method we obtain stable simulations in three spatial dimensions without the need to regularize the origin or the axis . we perform and discuss a number of tests to assess the stability , accuracy and convergence of the code , namely weak gravitational waves , `` hydro - without - hydro '' evolutions of spherical and rotating relativistic stars in equilibrium , and single black holes .
|
conway s game of life is the best known two - dimensional cellular automaton , due to the complex behaviour it generates from a simple set of rules .the game of life , introduced to the world at large by martin gardner in 1970 , has provided throughout the years challenging problems to many enthusiasts .a large number of these are documented in stephen silver s comprehensive life lexicon .detailed accounts of the game of life have been given by poundstone in , which uses the game of life as an illustration of complexity , and by sigmund in , which uses the game of life in the context of artificial life and the ability of a cellular automaton to self - replicate .an interesting extension of the game of life to three dimensions was suggested by bays .the game of life is played on a square lattice with interactions to nearest and to next - nearest neighbours , where each cell can be either empty or occupied by a token and is surrounded by eight neighbouring cells .the evolution of the game is governed by the following simple rules .if a cell is empty it gives _ birth _ to a token if exactly three of its neighbours are occupied , and if it is occupied it _ survives _ if either two or three of its neighbours are occupied . in all other cases either the cell remains empty or it _ dies _, i.e. it becomes empty .the game evolves by repeated applications of these rules to produce further configurations .the single player of the game decides what the initial configuration of the lattice will be and then watches the game evolve .one of the questions that researchers have investigated is the asymptotic behaviour of this evolutionary process .it transpires that when the initial configuration is random and its density is high enough , then the game eventually stabilises to a density of about 0.0287 ; see also .one aspect that is missing from conway s game of life is the competitiveness element of two - player games , as gardner noted `` attempts have also been made to invent competitive games based on `` life '' , for two or more players , but so far without memorable results . ''this is the challenge that we take up in this paper . an interesting version of the game of life for two players is known as _ black and white _ or _ , where cells are either white or black and when a birth occurs the colour of the token is decided according to the majority of neighbouring cells .although this variation is interesting in its own right , survival does not involve the two colours and remains non - competitive as in the single player version .we propose a new two - player version of the game of life where both birth and survival are competitive , and provide a preliminary analysis of its behaviour .the rest of the paper is organised as follows . in section[ sec : rules ] we present the rule set for our two - player version of the game of life and in section [ sec : mean - field ] we provide a mean - field analysis of the game . 
in section [ sec : density ] we present results of simulations to ascertain the asymptotic density of the game and , finally , in section [ sec : conc ] we give our concluding remarks and provide a web link to an implementation of the game .in the two - player variation of the game of life , which we call _ p2life _ , the players , white and black , are competing for space .conway s game of life is considered to be `` interesting '' since its simple set of rules lead to complex and unpredictable behaviour .( the notion that from simple rules complex behaviour can emerge , which may help us understand the diversity of natural phenomena , is discussed in great detail by wolfram in but can already be learned from the 130 year old thesis of van der waals on liquid - vapour equilibria . ) p2life maintains the `` interesting '' behaviour of the game of life by preserving the essence of conway s game and adding to its rules a competitive element to decide who will give birth and who will survive .the rules of p2life , from white s point of view ( the rules from black s point of view are symmetric ) , are as follows : birth .: : if a cell is empty , then we consider two cases : + 1 .the cell has exactly three white neighbours and the number of black neighbours is different from three . in this case a white token is born in the cell . 2 .the cell has exactly three white and three black neighbours . in this casean unbiased coin determines whether a white or black token is born in the cell .: : if a cell is occupied by a white token , then we consider two cases : + 1 . if the difference between the number of white and black neighbours is two or three , then the white token survives . 2 .if the difference between the number of white and black neighbours is one and the number of white neighbours is at least two , then the white token survives .it is clear that if there is only one colour on the lattice then p2life reduces to the standard one - player version of the game .we note that having non - symmetric rules for white and black would allow us to investigate how different behaviour sets interact , but finding such a set of rules which would be `` interesting '' is an open problem . during the process of deciding the rule set for p2life we considered several variations , which we now briefly discuss : 1 . in the first part of the birth rule , insisting that the number of black tokens should be less than three , which is similar to birth in the black and white variant .this decreases the birth rate but still seems to be `` interesting '' .2 . omitting the second part of the birth rule , i.e. the random choice when there are exactly three white and three black neighbours .again , this decreases the birth rate but still seems to be `` interesting '' .omitting the second part of the survival rule , i.e. survival when the difference between the number of white and black tokens is one and the number of white tokens is at least two .this decreases the survival rate and the conflict between the two players but , once again seems `` interesting '' 4 . in the second part of the survival rule omitting the side condition that the number of white neighbours must be at least two , or in the first part of the survival rule omitting the condition that the difference between white and black is at most three . due to the increased survival rates ,these modifications lead to steady growth with spatial boundaries between the two players. 
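to make the rule set above concrete , the following sketch performs one complete synchronous p2life update on a periodic lattice ; the encoding ( +1 for white , -1 for black , 0 for empty ) , the helper names and the lattice sizes are illustrative choices rather than anything prescribed by the paper .

```python
# Sketch of one synchronous p2life update with periodic boundaries.
# Cells hold +1 (white), -1 (black) or 0 (empty).
import numpy as np

def neighbour_counts(grid):
    """Number of white and black tokens among the 8 neighbours of each cell."""
    white = (grid == 1).astype(int)
    black = (grid == -1).astype(int)
    W = np.zeros_like(white)
    B = np.zeros_like(black)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            W += np.roll(np.roll(white, dx, axis=0), dy, axis=1)
            B += np.roll(np.roll(black, dx, axis=0), dy, axis=1)
    return W, B

def p2life_step(grid, rng):
    W, B = neighbour_counts(grid)
    new = np.zeros_like(grid)

    # birth rules on empty cells, with a fair coin for the 3-3 tie
    empty = (grid == 0)
    coin = rng.integers(0, 2, size=grid.shape)
    new[empty & (W == 3) & (B != 3)] = 1
    new[empty & (B == 3) & (W != 3)] = -1
    tie = empty & (W == 3) & (B == 3)
    new[tie] = np.where(coin[tie] == 0, 1, -1)

    # survival rules: difference of 2 or 3, or difference of 1 with >= 2 allies
    white = (grid == 1)
    d = W - B
    new[white & (((d == 2) | (d == 3)) | ((d == 1) & (W >= 2)))] = 1
    black = (grid == -1)
    d = B - W
    new[black & (((d == 2) | (d == 3)) | ((d == 1) & (B >= 2)))] = -1
    return new

# random initial configuration of density p: occupy with probability p,
# then an unbiased coin chooses the colour (the procedure used later on)
def random_grid(n, p, rng):
    occ = rng.random((n, n)) < p
    colour = np.where(rng.random((n, n)) < 0.5, 1, -1)
    return np.where(occ, colour, 0)

rng = np.random.default_rng(0)
g = random_grid(120, 0.3, rng)
for _ in range(10):
    g = p2life_step(g, rng)
print("density after 10 steps:", np.mean(g != 0))
```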
it would be interesting to compare these rules to schelling s models of segregation or the ising model .we illustrate two configurations which lead to interesting confrontations between white and black .the configuration shown on the left - hand side of table [ table : gol1 ] leads to a black block and two white gliders as shown on the right - hand side of the table . in the one - player game of lifethis initial configuration annihilates itself into empty space . on the other hand ,the configuration shown on the left - hand side of table [ table : gol2 ] leads to two black blocks as shown on the right - hand side of the table .thus black wins from this position . in the one - player game of lifethis initial configuration leads to six blinkers . & & & + & & & & + & & & & & & & & & & & & & & & & + & & & & & & & & & & & & & & & & + & & & & & & & & & & & & & & & & + & & & + & & & + & & & & & & & & & & & & & + & & & & & & & & & & & & & & + & & & & & & & +to compute the initial configuration given a specified initial density of , we use the following procedure for each cell in the lattice .firstly we determine whether it is occupied or empty according to the given probability and then , if it should be occupied we toss an unbiased coin to decide whether it will be occupied by a white or black token . in a similar fashion to the game of life , we now use mean - field theory to determine the density of the tokens after applying the rules to the initial configuration .it can be verified by the rules of p2life that the density , after a single application of the rules to a square lattice having initial density , is given the mean - field equation , from ( [ eq : mean - field ] ) we can compute the maximum density of , which is at an initial density of .this can also be seen from the plot of the mean - field equation shown in figure [ fig : mean - field ] .as we would expect , simulations show a very close match with this plot , although obviously due to correlations we can not use the mean - field equation to predict the long - term behaviour of the game . an interesting point to note about the plotis that when , , which is different from the original game of life ( and the black and white variant ) where .the reason is that although dense regions of a single colour die off immediately , mixed spaces between the two colours allow for survival of tokens .we also note that from figure [ fig : mean - field ] we can verify that the only fixed - point of the mean - field equation for p2life is zero . to improve the prediction power for repeated applications of the rules we could extend the mean - field analysis using the local structure theory of gutowitz and victor .plot of the mean - field density against the initial density ,width=453,height=352 ]we have investigated the properties of p2life with particular emphasis on the estimation of its asymptotic density via simulations using matlab .this approach offers flexibility and reasonable performance ( 0.7 million updates per second ) for a particularly computationally intensive task .we have performed simulations on both periodic ( toroidal ) and cutoff boundary conditions on square lattices of sizes from to . in particular , we have investigated the dependence of the asymptotic density on the initial density of a random , uniformly and independently distributed , initial configuration .each configuration is iterated until the game configuration reaches a stable or oscillatory state . 
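the mean - field prediction for the density after a single update can be checked directly by monte carlo simulation ; the snippet below reuses the p2life_step and random_grid helpers from the sketch above ( so it is not self - contained on its own ) and sweeps the initial density .

```python
# Monte Carlo check of the one-step density rho(p): apply a single p2life
# update to random initial configurations of density p and average the
# resulting occupation fraction.  Reuses p2life_step and random_grid from
# the earlier sketch; lattice size and trial count are arbitrary.
import numpy as np

def one_step_density(p, n=200, trials=20, seed=1):
    rng = np.random.default_rng(seed)
    dens = []
    for _ in range(trials):
        g = random_grid(n, p, rng)
        dens.append(np.mean(p2life_step(g, rng) != 0))
    return float(np.mean(dens))

for p in np.linspace(0.1, 1.0, 10):
    print(f"p = {p:.1f}   density after one step = {one_step_density(p):.4f}")
# the resulting curve can be compared against the mean-field plot; note in
# particular that the density at p = 1 stays well above zero, unlike in the
# one-player game of life.
```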
in general , it is straightforward to detect when such conditions occur , with the notable exception of the case of periodic boundary conditions when in the final configuration there exists a glider which travels around the torus without colliding with other tokens in occupied cells .the results for a square lattice of size are shown in figure [ fig : asymptotic ] . each experimentally computed point plotted in figure [ fig : asymptotic ]is the average over one hundred independently selected initial configurations .it is immediately evident that in direct contrast to the one - player version of the game of life , the asymptotic density of p2life is non - zero for non - zero initial density . in the one - player version of the game ,if the initial density increases above approximately the asymptotic density is zero due to the annihilation of all tokens during the first iteration due to overcrowding .in fact , for we estimate that p2life has asymptotic density .this is due to the fact that in p2life the survival rules allow members of both token populations to carry over to the next iteration with , which is consistent with the value obtained via the mean - field approach .we make several other interesting observations on the asymptotic behaviour of p2life .firstly , we have estimated that the maximum asymptotic density of is reached at about and remains fairly constant until .this is different to the behaviour of the one - player version of the game , where a fairly constant asymptotic density is reached at about and remains at that level until approximately .secondly , when periodic boundary conditions are used , the asymptotic density increases by approximately 5% to compared to the situation of cutoff boundary conditions .thirdly , it appears that the size of the lattice does not seem to affect the asymptotic density , although , as in the one - player version there may be small finite - size effects .finally , we estimated the ratio of the loser population over the winner population at the final state .a histogram of the results aggregated over 400 runs for a square lattice of size , where 100 runs were carried out for initial densities of 0.25 , 0.5 , 0.75 and 1.00 , is shown in figure [ fig : ratio ] .we observe that over 69% of the runs resulted in the ratio being over 0.5 , i.e. the loser having more that one third of the final population .plot of asymptotic density against the initial density ,width=453,height=352 ] histogram for the ratio of the loser population over the winner population , width=453,height=352 ]our main contribution is to have shown that , by injecting competitive elements into conway s game of life , `` life '' can be `` interesting '' when more than one player participates in the game .an applet demonstrating p2life can be accessed at a problem that we are now investigating is how to convert p2life into a `` real '' game , i.e. where players are allowed to make moves between generations , which change the configuration of the tokens , and devise strategies to overpower or live side - by - side with their opponent .
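a minimal driver for such experiments , again reusing the helpers defined earlier , detects stable or oscillatory final states by remembering a window of recent configurations ( travelling gliders on the torus , as noted above , evade this check ) and reports the asymptotic density together with the loser - to - winner population ratio .

```python
# Sketch of a simulation driver: iterate until the configuration repeats
# (stable or oscillatory state) or a step limit is reached, then report the
# asymptotic density and the loser/winner population ratio.  Reuses
# p2life_step and random_grid from the earlier sketch.
import numpy as np

def run_to_steady_state(n=100, p=0.5, max_steps=20000, memory=64, seed=0):
    rng = np.random.default_rng(seed)
    g = random_grid(n, p, rng)
    seen = {}
    for step in range(max_steps):
        key = hash(g.tobytes())
        if key in seen:
            break                       # configuration repeated: stop
        seen[key] = step
        if len(seen) > memory:          # keep only a sliding window of states
            seen.pop(next(iter(seen)))
        g = p2life_step(g, rng)
    white, black = np.sum(g == 1), np.sum(g == -1)
    density = (white + black) / g.size
    ratio = min(white, black) / max(white, black, 1)
    return density, ratio

print(run_to_steady_state())
```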
|
we present a new extension of conway 's game of life for two players , which we call _ p2life_. p2life allows one of two types of token , black or white , to inhabit a cell , and adds competitive elements into the birth and survival rules of the original game . we solve the mean - field equation for p2life and determine by simulation that the asymptotic density of p2life approaches a fixed , non - zero value . _ keywords : _ two - player game of life ; cellular automata ; mean - field theory ; asymptotic density .
|
security is an important issue in wireless systems due to the broadcast nature of wireless transmissions . in a pioneering work ,wyner in addressed the security problem from an information - theoretic point of view and considered a wire - tap channel model .he proved that secure transmission of confidential messages to a destination in the presence of a degraded wire - tapper can be achieved , and he established the secrecy capacity which is defined as the highest rate of reliable communication from the transmitter to the legitimate receiver while keeping the wire - tapper completely ignorant of the transmitted messages .recently , there has been numerous studies addressing information theoretic security .for instance , the impact of fading has been investigated in , where it has been shown that a non - zero secrecy capacity can be achieved even when the eavesdropper channel is better than the main channel on average .the secrecy capacity region of the fading broadcast channel with confidential messages and associated optimal power control policies have been identified in , where it is shown that the transmitter allocates more power as the strength of the main channel increases with respect to that of the eavesdropper channel .in addition to security issues , providing acceptable performance and quality is vital to many applications .for instance , voice over ip ( voip ) and interactive - video ( e.g , .videoconferencing ) systems are required to satisfy certain buffer or delay constraints . in this paper, we consider statistical qos constraints in the form of limitations on the buffer length , and incorporate the concept of effective capacity , which can be seen as the maximum constant arrival rate that a given time - varying service process can support while satisfying statistical qos guarantees .the analysis and application of effective capacity in various settings have attracted much interest recently ( see e.g. , and references therein ) .we define the _ effective secrecy throughput region _ as the maximum constant arrival rate pairs that can be supported while the service rate is confined by the secrecy capacity region .we assume that the channel side information is known at both the transmitter and receivers .then , following a similar analysis as shown in , we obtain the optimal power allocation policies that achieve points on the boundary of the effective secrecy throughput region .the rest of the paper is organized as follows .section ii briefly describes the system model and the necessary preliminaries on statistical qos constraints and effective capacity . 
in section iii , we present our main results on the optimal power control policies . finally , section iv concludes the paper . we consider a scenario in which a single transmitter broadcasts messages to two receivers . the transmitter wishes to send receiver 1 confidential messages that need to be kept secret from receiver 2 , and also at the same time send common messages to both receivers . a depiction of the system model is given in figure [ fig : systemmodel ] . it is assumed that the transmitter generates data sequences which are divided into frames of duration $T$ . these data frames are initially stored in the buffer before they are transmitted over the wireless channel . the channel input - output relationships are given by $y_1[i]=h_1[i]x[i]+z_1[i]$ and $y_2[i]=h_2[i]x[i]+z_2[i]$ , where $i$ is the frame index , $x[i]$ is the channel input , and $y_1[i]$ and $y_2[i]$ are the signals received at receivers 1 and 2 , respectively . the fading coefficients $h_1[i]$ and $h_2[i]$ are jointly stationary and ergodic discrete - time processes , and we denote the magnitude - squares of the fading coefficients by $z_m[i]=|h_1[i]|^2$ and $z_e[i]=|h_2[i]|^2$ . we assume that the bandwidth available for the system is $B$ . above , $z_1[i]$ and $z_2[i]$ are the additive , zero - mean gaussian noise samples at receivers 1 and 2 , and $N_1$ and $N_2$ denote the corresponding noise power spectral densities . we denote $P[i]$ as the instantaneous transmit power in the $i$th frame . now , the instantaneous transmitted snr level for receiver 1 becomes $\mu_1[i]=\frac{P[i]}{N_1 B}$ . if we denote the ratio between the noise powers of the two channels as $\gamma=\frac{N_1}{N_2}$ , the instantaneous transmitted snr level for receiver 2 becomes $\mu_2[i]=\gamma\mu_1[i]$ . the statistical qos constraint is specified by the qos exponent $\theta>0$ , which characterizes the exponential decay rate of the tail distribution of the buffer occupancy . for a given $\theta$ , the effective capacity is defined as $-\lim_{t\to\infty}\frac{1}{\theta t}\log_e{\mathbb{E}}\{e^{-\theta S[t]}\}$ ( [ eq : effectivedefi ] ) , where $S[t]=\sum_{i=1}^{t}R[i]$ is the time - accumulated service process and $\{R[i] , i=1,2,\ldots\}$ denotes the discrete - time , stationary and ergodic stochastic service process , with $R[i]$ the service rate in the $i$th frame . then , ( [ eq : effectivedefi ] ) can be written as $-\frac{1}{\theta T}\log_e{\mathbb{E}}\{e^{-\theta T R[i]}\}$ bits / s ( [ eq : effectivedefirate ] ) , where we have assumed that the fading changes independently from one frame to the next , so that the service rates $R[i]$ vary independently . the _ effective secrecy throughput _ normalized by bandwidth is ( [ eq : effectivedefirate ] ) divided by $B$ , in bits / s / hz . in this section , we investigate the fading broadcast channel with confidential message ( bcc ) by incorporating the statistical qos constraints .
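as a quick sanity check on this formulation , the effective capacity can be estimated by monte carlo for a simple single - user rayleigh - fading channel ; treating the service rate as the instantaneous shannon rate , together with the rayleigh assumption and the parameter values below , is an illustrative assumption and not the broadcast - channel rate region studied next .

```python
# Sketch: Monte Carlo estimate of the effective capacity
#     E_C(theta) = -(1/(theta*T)) * log E{ exp(-theta*T*R[i]) }   [bits/s]
# for a block-fading channel whose per-frame service rate is taken to be the
# Shannon rate R[i] = B*log2(1 + SNR*z[i]) with Rayleigh fading (z exponential).
# All parameter values are illustrative assumptions.
import numpy as np

def effective_capacity(theta, T=2e-3, B=1e5, snr=10.0, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.exponential(1.0, size=n)              # |h|^2 for Rayleigh fading
    R = B * np.log2(1.0 + snr * z)                # service rate per frame, bits/s
    return -np.log(np.mean(np.exp(-theta * T * R))) / (theta * T)

for theta in (1e-5, 1e-4, 1e-3, 1e-2):
    ec = effective_capacity(theta)
    print(f"theta = {theta:8.0e}   E_C = {ec/1e3:8.2f} kbits/s")
# as theta -> 0 the estimate approaches the ergodic capacity, and it
# decreases monotonically as the qos constraint (theta) becomes stricter.
```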
according the above optimal power allocation policy of , in contrast to * case i *, the condition ( [ eq : case2cond ] ) indicates that the power allocated to is small such that the interference from sending confidential messages can be ignored in , i.e. , smaller effective secrecy throughput .* case iii : * the first two sub - cases and are trivial because , other than the condition , there is no difference in the power allocation policies from what we have derived in * case i * and * case ii*. we are more interested in the case in which there is decided by the following condition we will first derive the optimal power control policies for any given , and then determine .the lagrangian is given by \nonumber\\ & -\frac{\lambda_1}{\beta \log_e 2}\log_e\bigg(\int_{\mathbf{z}\in\mathcal{z}}\left(\frac{1+\mu_1(\mathbf{z})z_m}{1+\gamma\mu_1(\mathbf{z } ) z_m}\right)^{-\beta}p_{\mathbf{z}}(z_m , z_e)d\mathbf{z}\nonumber\\ & \hspace{1.5cm}+\int_{\mathbf{z}\in\mathcal{z}^c}p_{\mathbf{z}}(z_m , z_e)d\mathbf{z}\bigg)\nonumber\\ & \hspace{1cm}-\kappa\left({\mathbb{e}}_{\mathbf{z}\in\mathcal{z } } \{\mu_0(\mathbf{z})+\mu_1(\mathbf{z})\}+{\mathbb{e}}_{\mathbf{z}\in\mathcal{z}^c}\{\mu_0(\mathbf{z})\}\right)\end{aligned}\ ] ] are the solutions to the following where ( [ eq:2optcond1])-([eq:2optcond3 ] ) are obtained by taking the derivative of with respect to for , for , and for , respectively .similarly as before , whenever or have negative values through these equations , they are set to 0 . considering ( [ eq:3optcond2 ] ) , we see that when , needs to satisfy and is given by ( [ eq:3optcond3 ] ) when , is given by now , for , we need to have the following where is computed from ( [ eq:3subcond3 ] ) . for any , we need to find the associated power control policy satisfying ( [ eq:3optcond1])-([eq:3optcond3 ] ) .then , we need to further search over for that satisfies we obtain the following algorithm to determine the optimal power control policies .given , obtain ; denote , ; compute from ( [ eq:3subcond2 ] ) ; ( [ eq:3subcond1 ] ) holds or ; ;([eq:3subcond4 ] ) holds compute and from ( [ eq:3optcond2 ] ) and ( [ eq:3optcond3]); , is given by ( [ eq:3subcond3 ] ) ; , is given by ( [ eq:3subcond3 ] ) ; where can be numerically computed to satisfy the average power constraint .based on the previous results , we have the following algorithm to find the optimal power control policies .find given in * case i * ; ; and ;find given in * case ii * ; ; and ;for a given , find given in * case iii - c*;search over to find that satisfies and . in fig .[ fig : region1 ] , we plot the achievable effective secrecy throughput region in rayleigh fading channel . we assume that , i.e. , the noise variances at both receivers are equal . in the figure ,the circles fall into case i or case iii - a , and the pluses fall into case ii or case iii - b , and case iii - c is shown as line only .ms , hz , and db . ]in this paper , we have investigated the fading broadcast channels with confidential message under statistical qos constraints .we have first defined the effective secrecy throughput region , which was later proved to be convex .then , the problem of finding points on the boundary of the throughput region is shown to be equivalent to solving a series of optimization problem .we have extended the approach used in previous studies to the scenario considered in this paper .following similar steps , we have determined the conditions satisfied by the optimal power control policies . 
in particular, we have identified the algorithms for computing the power allocated to each fading state from the optimality conditions. numerical results are provided as well.

j. tang and x. zhang, ``cross-layer-model based adaptive resource allocation for statistical qos guarantees in mobile wireless networks,'' _ieee trans. wireless commun._, vol. 7, no. 6, pp. 2318-2328, june 2008.

l. liu, p. parag, and j.-f. chamberland, ``quality of service analysis for wireless user-cooperation networks,'' _ieee trans. inform. theory_, pp. 3833-3842, oct.
|
in this paper , the fading broadcast channel with confidential messages is studied in the presence of statistical quality of service ( qos ) constraints in the form of limitations on the buffer length . we employ the effective capacity formulation to measure the throughput of the confidential and common messages . we assume that the channel side information ( csi ) is available at both the transmitter and the receivers . assuming average power constraints at the transmitter side , we first define the _ effective secure throughput region _ , and prove that the throughput region is convex . then , we obtain the optimal power control policies that achieve the boundary points of the _ effective secure throughput region_.
|
the idea that quantum entanglement and quantum interactions with a part of a composite system allow faster than light communication has been entertained for quite a long time .all existing proposals have been shown to be unviable . for a general overviewwe refer the reader to papers by herbert , selleri , eberhard , ghirardi & weber , ghirardi , rimini & weber , herbert , ghirardi ( who has derived the no - cloning theorem just to reject the challenging proposal [ 6 ] by herbert - see the document attached to ref [ 7 ] ) , and , more recently , by greenberger and kalamidas . a detailed analysis of the problem and the explicit refutation of all proposals excluding the one of kalamidas appear in the recent work by ghirardi .in view of the interest of the subject and of the fact that a lively debate on the topic is still going on we consider our duty to make rigorously clear that the proposal [ 9 ] is basically flawed .we will not go into details concerning the precise suggestion and will simply present a very sketchy description of the experimental set - up .the main point can be grasped by the following picture , taken from the paper by kalamidas , depicting a source s of entangled photons in precise modes which impinge on appropriate beam splitters and , the first one with equal transmittivity and reflectivity , the other two with ( real ) parameters and characterizing such properties .finally , in the region at right , one can ( or not at his free will ) inject coherent photon states characterized by the indicated modes :kalamidas mechanism for superluminal signaling rests on the possibility of injecting or to avoid to do so the coherent states at the extreme right .correspondingly , one has , as his initial state either : where and are coherent states of modes and , or , alternatively , the state : one has fixed the initial state the process starts and the state evolves in time .the evolution implies the passage of photons through the indicated beam splitters .it has to be mentioned that the recent debate on ref.[9 ] has seen disagreeing positions concerning the functioning of these devices .we will not enter into technical details , we simply describe the effect of crossing a beam splitter in terms of appropriate unitary operations which account for its functioning .the result is the one considered by kalamidas .let me stress , due to its importance , that this move to simply consider the unitary nature of the transformations overcomes any specific debate .actually , the quite general and legitimate assumption that * any unitary transformation of the hilbert space can actually be implemented * makes useless entering into the details of the functioning of the beam splitters , a move that we consider important since , apparently , different people make different claims concerning such a functioning .i simply consider the evolution of the initial statevector induced by the unitary transformation , with : using such expressions one easily evaluates the evolved of each of the two initial states going through all the beam splitters with their particular characteristics.the computation is quite easy and the final state , when the coherent states are present at right , turns to have the following form : \nonumber \\ & \times & d_{a3}(t\alpha)d_{a2}(-r\alpha)d_{b3}(t\alpha)d_{b2}(-r\alpha)|0>.\end{aligned}\ ] ] alternatively , when the second initial state is considered , the evolution leads to : |0\rangle.\end{aligned}\ ] ]i must confess that the original paper by kalamidas as well as many of 
the comments which followed are not sufficiently clear concerning what one does at right on the photons appearing there .one finds statements of the type when there is one photon in mode and one photon in mode " then there is a coherent superposition of single photon occupation possibilities between modes and " . herei can not avoid stressing that such statements , as they stand , are meaningless because they take into account one of the possible outcomes and not the complete unfoldng of the measurement process . if one is advancing a precise proposal for an experiment , he must clearly specify which actions are actually performed . and here comes the crucial point : the alleged important consequences of an action performed at right on the outcomes at left must be deduced from the analysis of the outcomes of possible observations in the region at left ( we want to have a signal there ) .it seems to me that the proponent of the new mechanism for superluminal communication has not taken into account a fundamental fact which has been repeatedly stressed precisely in the literature on the subject .what we have to investigate are the implications of precise actions at right for the physics of the systems in the region at left . in turn, all what is physically relevant at left , as well known , is exhaustively accounted by the reduced statistical operator referring to the systems which are there , i.e. the one obtained from the full statistical operator by taking the partial trace on the right degrees of freedom : $ ] , with obvious meaning of the symbols .now , the operator is unaffected by all conceivable legitimate actions made at right .the game is the usual one .one can consider : * unitary evolutions involving the systems at right : * projective measurement of an observable with spectral family * nonideal measurements associated to a family . in all these cases ( which exhaust all legitimate quantum possibilities ) ,due to the cyclic property of the trace , to the unitarity of and to the fact that the projection operators as well as the quantities sum to the identity operator ( obviously the one referring to the hilbert space of the systems at right ) , the reduced statistical operator does not change in any way whatsoever as a consequence of the action at right . in brief , for investigating the physics at left one can ignore completely possible evolutions or measurements of any kind done at right .obviously the same does not hold if one performs a selective measurement at right .but in this case the changes at left induced by the measurement depend on the outcome which one gets , so that , to take advantage of the change , the receiver at left must be informed concerning the outcome at right , and this requires a luminal communication . in accordance with these remarks , sentences like those i have mentioned above and appearing in ref.[8 ] , must be made much more precise .if at right one performs a measurement identifying the occupation numbers of the various states , one has to describe it appropriately taking into account all possible outcomes .concentrating the attention on a specific outcome one is actually considering a selective measurement , an inappropriate procedure , as just discussed . 
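as a concrete illustration of this standard argument, the sketch below builds a random bipartite state, applies an arbitrary unitary and a non-selective projective measurement to the right factor only, and checks that the reduced statistical operator of the left factor is unchanged. the dimensions, the state and the operations are toy choices and do not model the photonic modes of the actual set-up.

```python
import numpy as np

rng = np.random.default_rng(0)
dL, dR = 4, 5                      # dimensions of the "left" and "right" factors

def partial_trace_right(rho, dL, dR):
    """Reduced statistical operator of the left subsystem."""
    return np.trace(rho.reshape(dL, dR, dL, dR), axis1=1, axis2=3)

def random_unitary(d):
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# random bipartite pure state on H_L (x) H_R
psi = rng.normal(size=dL * dR) + 1j * rng.normal(size=dL * dR)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
rho_L = partial_trace_right(rho, dL, dR)

# 1) an arbitrary unitary acting only on the right factor
U = np.kron(np.eye(dL), random_unitary(dR))
rho_U = U @ rho @ U.conj().T

# 2) a non-selective projective measurement on the right factor
projs = [np.kron(np.eye(dL), np.outer(e, e.conj())) for e in np.eye(dR)]
rho_M = sum(P @ rho @ P for P in projs)

print(np.allclose(rho_L, partial_trace_right(rho_U, dL, dR)))   # True
print(np.allclose(rho_L, partial_trace_right(rho_M, dL, dR)))   # True
```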
concluding this part : to compare the situation at left in the case in which at right coherent states are injected or not , one can plainly work with the evolved states ( 4 ) and ( 5 ) .the fundamental question concerning the possibility of superluminal communication becomes then : does it exist an observable for the particles at left ( i.e. involving modes and ) which has a different mean value or spread or probability for individual outcomes when the state is the one of eq.(4 ) or the one of eq.(5 ) ?in accordance with the previous analysis , to answer the just raised question we consider the most general self - adjoint operator of the hilbert space of the modes at left which we will simply denote as , and we will evaluate its mean value in the two states ( 4 ) and ( 5 ) . in the case of state( 5 ) , we have : h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})\nonumber \\ & & [ ( a_{1}^{\dag}+b_{1}^{\dag})(t a_{2}^{\dag}+r a_{3}^{\dag})+e^{i\phi}(-a_{1}^{\dag}+b_{1}^{\dag})(t b_{2}^{\dag}+r b_{3}^{\dag})]d_{a2}(-r\alpha)d_{a3}(t\alpha)d_{b2}(-r\alpha)d_{b3}(t\alpha)|0\rangle.\end{aligned}\ ] ] one has now to take into account that the vacuum is the product of the vacua for all modes , .the previous equation becomes : \cdot _ { 1 } \langle0|(a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & & \frac{1}{4}e^{i\phi } [ _ { 2}\langle0|_{3}\langle 0|d_{b3}^{\dag}(t\alpha ) d_{b2}^{\dag}(-r\alpha)d_{a3}^{\dag}(t\alpha)d_{a2}^{\dag}(-r\alpha)(t a_{2}+r a_{3})(t b_{2}^{\dag}+r b_{3}^{\dag } ) \nonumber \\ & & d_{a2}(-r\alpha)d_{a3}(t\alpha)d_{b2}(-r\alpha)d_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _ { 1 } \langle0|(a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & & \frac{1}{4}e^{-i\phi } [ _ { 2}\langle0|_{3}\langle 0|d_{b3}^{\dag}(t\alpha ) d_{b2}^{\dag}(-r\alpha)d_{a3}^{\dag}(t\alpha)d_{a2}^{\dag}(-r\alpha)(t b_{2}+r b_{3})(t a_{2}^{\dag}+r a_{3}^{\dag } ) \nonumber \\ & & d_{a2}(-r\alpha)d_{a3}(t\alpha)d_{b2}(-r\alpha)d_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _ { 1 } \langle0|(-a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}+\nonumber \\ & & \frac{1}{4 } [ _ { 2}\langle0|_{3}\langle 0|d_{b3}^{\dag}(t\alpha ) d_{b2}^{\dag}(-r\alpha)d_{a3}^{\dag}(t\alpha)d_{a2}^{\dag}(-r\alpha)(t b_{2}+r b_{3})(t b_{2}^{\dag}+r b_{3}^{\dag } ) \nonumber \\ & & d_{a2}(-r\alpha)d_{a3}(t\alpha)d_{b2}(-r\alpha)d_{b3}(t\alpha)|0\rangle_{2}|0\rangle_{3}]\cdot _ { 1 } \langle0|(-a_{1}+b_{1})h(a_{1},a_{1}^{\dag},b_{1},b_{1}^{\dag})(-a_{1}^{\dag}+b_{1}^{\dag})|0\rangle_{1}.\end{aligned}\ ] ] let us take now into consideration the expression in square brackets of the first term ( the one which contains the coherent states and the vacua for modes 2 and 3 ) .if one keeps in mind that the coherent states are eigenstates of the annihilation operators one can apply the four terms arising from the expression to the coherent states . obviously , before doing this one has to commute the operators and in the expression and the similar one for mode 3. 
in so doing the expression reduces to 1 .just for the same reason and with the same trick one shows that one can replace with 1 the expression in the last term .the same calculation shows also that the corresponding expressions in the secon and third terms reduce to 0 .the final step consist therefore in evaluating , for the first and fourth terms the expressions : .\ ] ] taking into account that one gets the final expression for the expectation value of the arbitrary hermitian operator when one starts with the initial state containing the coherent states : .\end{aligned}\ ] ] it is now an easy game to repeat the calculation for the much simpler case in which the initial state is .one simply has precisely the expression ( 7 ) with all the coherent states missing .taking into account that the operators of modes 2 and 3 act now on the vacuum state one immediately realizes that one gets once more the result ( 9 ) .we have proved , with complete rigour that the expectation value of any conceivable self adjoint operator of the space of the modes 1 at left remains the same when one injects or does not inject the coherent states at right .note that the result is completely independent from the choice of the phase characterizing the two terms of the entangled initial state and from the parameters and of the beam splitters and it does not involve any approximate procedure . a last remark . during the alive debate which took place recently in connexion with kalamidas proposal ,other authors have reached the same conclusion .however the reasons for claiming this were not always crystal clear and a lot of discussion had to do with the approximations introduced by kalamidas . for these reasons we have decided to be extremely general and we have been pedantic in discussing even well known facts and properties of an ensemble of photons . our aim has been to refute in a completely clean and logically consistent way the idea that the device consents faster than light signaling .
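for reference, the reduction of the bracketed factors to 1 (first and fourth terms) and to 0 (second and third terms) rests on textbook identities for coherent states and displacement operators; the short check below assumes, as in the set-up above, real beam-splitter parameters satisfying t^2 + r^2 = 1.
\[
a\,|\beta\rangle=\beta\,|\beta\rangle,\qquad
D^{\dagger}(\beta)\,a\,D(\beta)=a+\beta,\qquad
[a,a^{\dagger}]=1 .
\]
since each displaced vacuum is the product of coherent states \(|{-r\alpha}\rangle_{2}\,|{t\alpha}\rangle_{3}\) for the corresponding mode family, the diagonal bracket gives
\[
\bigl\langle (t a_{2}+r a_{3})(t a_{2}^{\dagger}+r a_{3}^{\dagger})\bigr\rangle
= t^{2}\bigl(1+r^{2}|\alpha|^{2}\bigr)-2\,t^{2}r^{2}|\alpha|^{2}
+ r^{2}\bigl(1+t^{2}|\alpha|^{2}\bigr)=t^{2}+r^{2}=1 ,
\]
while each cross bracket factorizes over the two independent mode families and vanishes because
\[
\bigl\langle t a_{2}+r a_{3}\bigr\rangle = t(-r\alpha)+r(t\alpha)=0 .
\]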
|
in a recent paper, kalamidas has advanced a new proposal for faster-than-light communication which has not yet been proved invalid. in this paper, by strictly sticking to the standard quantum formalism, we prove that, like all previous proposals, it does not work. + key words: faster-than-light signalling.
|
polar amplification posits that if the average global temperature increases , the relative change in the polar regions will be larger , and hence the observed decline of the arctic sea ice cover during the satellite era has been a key focus of research .the arctic oscillation ( ao ) is an indicator of how atmospheric circulation can be related to observed changes in the sea ice cover .however , because it captures only approximately 50% of the variability of the sea level pressure , it has been argued that the characteristics of the ao may have changed over time in a manner that the ao index is less predictive .of particular relevance here is the prevalence of free drift , and hence how sea level pressure and ice velocity are correlated .first , as the ice cover has thinned , modeling studies indicate that the mechanical and dynamical properties will change , and predict free drift becomes increasingly prevalent .second , the wind - driven circulation has oscillated between cyclonic and anticyclonic ( at 5 to 7 year intervals ) from 1948 to 1996 , after which the anticyclonic pattern has prevailed .thus , a central question concerns the coexistence of the changes in circulation patterns with the persistence of correlations between the wind and ice velocity fields .one of a number of the feedbacks often posited to drive polar amplification is the ice - albedo feedback ( e.g. , * ? ? ?due to the seasonality of the solar insolation at high latitudes , one can distill two key processes regulating the stability of the ice cover on the seasonal time scale ; the ice - albedo feedback during the summer , and the loss of heat from the surface by long - wave radiation in the winter , and these are modulated by stochastic variability . on multiple time scales( from weather to decadal ) we have extracted this variability quantitatively from satellite data on both the ice albedo itself and the equivalent ice extent ( eie ) , ; two key quantities reflecting the ice - albedo feedback .this analysis shows that the eie and the ice albedo are multi - fractal in time rather than an ar-1 process , which is commonly used to characterize arctic sea ice in climate models .indeed , one can show that an ar-1 process is inappropriate for two key reasons ; the existence of multiple time scales in the data can not be treated in a quantitatively consistent manner with a single decay time for the autocorrelation , and the strength of the seasonal cycle is such that , if not appropriately removed , model output or satellite retrievals will always have a single characteristic time of approximately 1 year ; a time scale at which all moments of the multi - fractal analysis are forced to converge . 
here , we find that the velocity field of sea ice is also a multi - fractal in time exhibiting points and described above .moreover , we find ( 1 ) a three and a half decade stationarity in the spatial correlations of the horizontal velocity components and the shear in the geostrophic wind field , yielding ostensibly the same results for 1978 - 2012 as found by over a two year time window ( 1979 - 1980 ) , and ( 2 ) a robust white noise structure present in the velocity fields on annual to bi - annual time scales , which we argue underlies the white noise characteristics of the eie on these time scales .finally , whereas previous analyses have shown the correlation between ice motion and geostrophic wind from days to months , we find this to extend up to years .we use the buoy derived pressure fields computed on a regular latitude - longitude grid of for the period january 1 , 1979 - december 31 , 2006 .the ice motion velocity vectors are obtained in a gridded format from the national snow and ice data center ( nsidc ) .these vectors are derived from multiple sensors that include amsr - e , avhrr , iabp buoys , smmr , ssm / i , ssmis and ncep / ncar , and their coverage extends from october 25 , 1978 - december 31 , 2012 . the raw ice velocity vectors from each source are processed to form the daily gridded ice velocity fields with a spatial resolution of 25 km . to minimize the effect of the coastline on ice motion , we discard all the grid points in the fields that are within a distance of 100 km of the coast ( e.g. , see * ? ? ?* ; * ? ? ?* ) ( figure [ fig : arctic_map ] ) .the gridded ice motion fields have the -component referenced to east longitude and the -component referenced to east longitude .we calculate the mean velocity in both the - and -directions for each day .we then analyze these time series using the multi - fractal temporally weighted detrended fluctuation analysis ( mf - twdfa ) methodology , described in the next sub - section , to extract the time scales in these time series and relate them to the time scales obtained from the analysis of eie .it is interesting to note that the ice motion data reflects both the shorter synoptic time scales and a strong seasonal cycle .importantly , this also demonstrates that by solely looking at the bare time series , one can not necessarily extract information regarding the process leading to such multiple time scale fields , which emboldens us in the use of multi - fractal methods . in geophysical time series analysis , the two - point autocorrelation functionis typically used to estimate the correlation time scale .this estimation has major drawbacks since ( ) it assumes that there is only a single correlation structure in the data , , where is the two point autocorrelation function , ( ) long term trends ( linear or non - linear ) and periodicities present in the data may obscure the estimated time scales .thus , in order to characterize the dynamics of the system , one may need multiple exponents , .namely , it is possible that for some there is an exponent , for some there is an exponent , and so forth .a two - point autocorrelation function will give a 1 year time scale for any signal with a sufficiently strong seasonal cycle and thereby mask any other time scales .this also serves as a motivation for using the multi - fractal methodology for the sea ice velocity fields .there are four stages in the implementation of mf - twdfa , which we summarize in turn .+ ( * 1 . 
* ) one constructs a non - stationary _ profile _ of the original time series , which is the cumulative sum ( * 2 . * ) one divides the profile into segments of equal length that do not overlap . excepting rare circumstances, the original time series is not an exact multiple of leaving excess segments of .these are dealt with by repeating the procedure from the end of the profile and returning to the beginning and hence creating segments .+ ( * 3 . * ) in the standard procedure an estimate is made of _ within a fixed window _ using order polynomial functions s . here, however , a moving window that is smaller than , but determined by the distance between points , is used to construct a point by point approximation to the profile , .we then compute the variance up ( ) and down ( ) the profile as + i ) - { \hat{y}}([\nu-1]s + i ) \}^2 \nonumber \\& \hspace{-10 mm } \text{for , and } \nonumber \\ \nonumber \\ \text{var}(\nu , s ) \equiv & \frac{1}{s } \sum_{i=1}^{s } \ { y(n-[\nu - n_s]s + i ) - \nonumber \\ & \hspace{15mm}{\hat{y}}(n-[\nu - n_s]s + i)\}^2\nonumber \\ & \hspace{-10 mm } \text{for . } \label{eq : vartw}\end{aligned}\ ] ] therefore we replace the global linear regression of fitting the polynomial to the data , with a weighted local estimate determined by the proximity of points to the point in the time series such that .a larger ( or smaller ) weight is given to according to whether is small ( large ) .+ ( * 4 . * ) the generalized fluctuation function is formed as ^{1/q}. \label{eq : fluct}\ ] ] the principal tool of the approach is to examine how depends on the choice of time segment for a given order of the moment taken .the scaling of is characterized by a generalized hurst exponent viz ., a few characteristics are worth pointing out at this juncture ( e.g. , ref . * ? ? ? * and refs . therein ) .the dominant time scales are the points where the fluctuation function changes slope , i.e. shifts from one dynamical behavior to another . for a monofractal timeseries , the generalized hurst exponents are independent of . if there is long term persistence in the data then , for which also serves as a check for the two - point autocorrelation function. one can relate to the slope of the power spectrum as follows .if , with frequency , then ( e.g. , * ? ? ?for a white noise process and hence . for a red noise process and hence .therefore , the slope of the fluctuation function curves as a function of reveal the different dynamical processes that operate on different time scales .this then implies that if the data is only short term correlated ( ) , its asymptotic behavior will be given by .( clearly , `` short '' and `` long '' depend in the details of the particular time series . ) other advantages of using temporally weighted fitting with moving windows over the regular mf - dfa are that the approximated profile is continuous across windows , reducing spurious slope changes at longer time scales , and while mf - dfa can only produce time scales up to , mf - twdfa extends this to . 
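a compact sketch of the four steps is given below. it implements plain mf-dfa, i.e. ordinary least-squares polynomial fits over each fixed window, rather than the temporally weighted moving-window fits of mf-twdfa, so it is only meant to show the structure of the computation; the scale range, the detrending order and the white-noise test series are arbitrary choices.

```python
import numpy as np

def mfdfa(x, scales, q_list=(-2, 2), order=1):
    """Simplified multifractal DFA: global polynomial detrending per window
    (the temporally weighted local fits of MF-TWDFA are replaced here by
    ordinary least-squares fits of degree `order`)."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())               # step 1: profile Y(i)
    N = len(profile)
    F = {q: [] for q in q_list}
    for s in scales:
        n_seg = N // s
        # step 2: non-overlapping segments taken from both ends of the profile
        segs = [profile[v * s:(v + 1) * s] for v in range(n_seg)]
        segs += [profile[N - (v + 1) * s:N - v * s] for v in range(n_seg)]
        t = np.arange(s)
        var = []
        for seg in segs:                             # step 3: detrended variance
            coef = np.polyfit(t, seg, order)
            var.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        var = np.asarray(var)
        for q in q_list:                             # step 4: fluctuation function
            if q == 0:
                F[q].append(np.exp(0.5 * np.mean(np.log(var))))
            else:
                F[q].append(np.mean(var ** (q / 2)) ** (1.0 / q))
    # generalized Hurst exponent h(q): slope of log F_q(s) versus log s
    h = {q: np.polyfit(np.log(scales), np.log(F[q]), 1)[0] for q in q_list}
    return F, h

rng = np.random.default_rng(0)
white = rng.normal(size=20_000)                      # uncorrelated test series
scales = np.unique(np.logspace(1, 3, 20).astype(int))
_, h = mfdfa(white, scales)
print(h)   # all h(q) close to 0.5 for white noise; h ~ 1.5 would indicate red noise
```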
outlines the many time scales in the air / sea / ice system , ranging from days to several months , associated with the interaction of the ice with the atmosphere and the ocean mixed layer , whereas time scales ranging from decades to much longer , are ascribed to the deep ocean component of the system , generally understood to be uncoupled from ice drift and surface currents .for the central basin pack ice , the principal forces balance air and water stresses , sea surface tilt , and the coriolis effect . based on 2 years of data concluded that on time scales of days to months , more than 70% of the variance of the ice motion can be explained by the geostrophic winds . in figure[ fig : arctic_pressure_ds ] we plot the fluctuation function for arctic sea ice and geostrophic wind velocities in _x- _ and _ y- _ directions without the seasonal cycle , for the period 1978 - 2012 and 1978 - 2006 respectively .the match of the fluctuation functions for the sea ice velocity and the geostrophic wind is excellent up to characteristic ( crossover ) time scales of several years .this demonstrates that the high correlation between the winds and the ice velocity concluded by is extended from months to years , and that this correlation persists over climatological time scales .previously we showed that when the seasonal cycle is removed from the eie over the 32 year period ( 1978 - 2010 ) , it exhibits a white noise dynamical behavior on annual to bi - annual time scales . here , we extend the analysis of the eie without the seasonal cycle and examine it in progressive periods ; 1978 - 1980 , 1978 - 1981 , and 1978 - 2014 .this analysis confirms the presence of white noise structure as a robust signal on annual to bi - annual time scales . finally , the data from a hybridized data set from 1901 - 2014 compiled by walsh and chapman also shows white noise structure on these time scales .thus , for such a robust signal to exist , the physical mechanism responsible for it must be stationary . to further pursue the robustness of the statistics , we calculate the correlation between the components of sea ice velocity , parallel or perpendicular to a line joining two points separated by a distance ( see e.g. , * ? ? ?* ) . the spatial autocorrelation in both the parallel and perpendicular directions for 2 years of data ( 1979 - 80 ) ( from * ?* ) and for 34 years of data ( 1978 - 2012 ) are shown in figure [ fig : arctic_velocitycorr ] . in figure[ fig : ice_shear ] we compare the spatial autocorrelation functions in the shear for the ice velocity and the geostrophic wind field for the period 1979 - 1980 ( from * ? ? ?* ) with that for 1979 - 2006 .these demonstrate a striking three decade stationarity in the key correlations underlying the structure of the velocity field of pack ice .finally , we use the entire 34 year record to calculate the mean speed of each pixel every day of the year .this mean is computed using a specified threshold , , i.e. 
, if a pixel has contained sea ice for years , where the maximum is 34 years .thus , thresholds specify a minimum time for which a pixel contained ice .these are then used to produce a histogram for each day of the year for the sea ice speed , with each bin representing the number of pixels having the corresponding speed .these histograms are then normalized in order to compare between different thresholds .figure [ fig : arctic_velocitythreshold](a ) shows the normalized histograms for january 1 .the left histogram is for year , and the right histogram is for .next , for each day of the year we compute the difference in the two histograms , calculate the area under the curve , and plot this in figure [ fig : arctic_velocitythreshold](b ) for different thresholds from , with respect to . the change in the area as the threshold is increasedis negligible , with the maximum variability appearing during the summer , as expected due to the typical seasonality of free drift .moreover , when we subtract the running mean with a window size of 7 days from all the curves , and plot the residuals , the curves for all thresholds collapse ( lower curve in fig .[ fig : arctic_velocitythreshold]b ) .finally , when we analyze the residuals with the mf - twdfa methodology , we extract the weather time scale of 10 days and an approximately 47 day time scale , associated with the high variability during the summer as seen in fig .[ fig : arctic_velocitythreshold](b ) .these results further demonstrate the stationarity of the arctic sea ice velocity fields .we therefore ascribe the white noise structure of the eie to that of the velocity field .the multiple time scales of arctic sea ice motion vectors extracted from the multifractal analysis have been described in [ sec : disc ] . in all cases, we can attribute the 5 day time scale to the relaxation time scale for synoptic fluctuations in the sea ice motion .when the seasonal cycle is not removed , the only time scales are due to synoptic scale weather and the seasonal cycle itself . when the seasonal cycle is removed , the slope of the fluctuation curves demonstrate that a white noise process operates on time scales 60 days .the data motivate a stochastic treatment of the ice motion , which for simplicity of illustration we write for the -component of the velocity vector as where is the daily average ice velocity component , is the relaxation time scale , is the frequency of the seasonal cycle , and and are the strengths of the respective forcings .figure [ fig : lngvncompare ] shows the model to be in excellent agreement with the observations for time periods of days to decades . from periods of months to decades the dynamics is ostensibly white and thus the variance of the ice velocity is quasi - stationary .this is the key point , as the comparison is robust for reasonable changes in the model parameters ( e.g. , ranging from 2 to 6 days ) .however , the model is a minimal one and by considering a variety of other effects , such as multiplicative noise in combination with the periodic forcing , the potential for a stochastic resonance arises , which might produce behavior not found in the observational record .however , it is the observations themselves , with their long term stationarity , that motivate the simplicity of the model and thus its utility .using a variety of stochastic analysis approaches we examined approximately three decades of data and demonstrate the stationary structure of the correlation between sea ice motion and geostrophic winds . 
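a minimal simulation of this model can be written in a few lines, assuming the form du/dt = -u/tau + a sin(omega t) + sigma xi(t) with white noise xi, which is how we read the displayed equation; the parameter values are purely illustrative, with tau taken inside the 2 to 6 day range quoted above. removing a day-of-year climatology and inspecting the autocorrelation of the residual reproduces the qualitative behaviour described here: correlation over a few days and an essentially white signal beyond roughly two months.

```python
import numpy as np

# Euler-Maruyama integration of  du/dt = -u/tau + a*sin(omega*t) + sigma*xi(t)
rng = np.random.default_rng(2)
dt, n_years = 1.0, 40                   # daily steps, ~40 years
n = n_years * 365
tau, omega = 4.0, 2 * np.pi / 365.0     # relaxation time (days), seasonal frequency
a, sigma = 0.5, 1.0                     # forcing strengths (arbitrary units)

u = np.zeros(n)
for i in range(1, n):
    drift = -u[i - 1] / tau + a * np.sin(omega * i * dt)
    u[i] = u[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

# remove the seasonal cycle as a day-of-year climatology
u_ds = (u.reshape(n_years, 365) - u.reshape(n_years, 365).mean(axis=0)).ravel()

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print(f"acf at  5 days: {acf(u_ds, 5):.3f}")    # short-time (weather) correlation
print(f"acf at 60 days: {acf(u_ds, 60):.3f}")   # essentially zero: white on these scales
```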
with two years of data , showed that on time scales of days to months , more than 70% of the variance of the ice motion can be explained solely by the geostrophic winds . over climatological time scales ,we find a striking robustness of this conclusion , which extends from days to years , and thus is most likely associated with the prevalence of free drift as the ice cover has declined ( e.g. , * ? ? ?we find that the ice motion field exhibits a white noise structure that explains that found in the equivalent ice extent over the same three and a half decade period .this is due to the long - term stationarity of the spatial correlation structure of the velocity fields . finally , using a periodically forced stochastic model, we can explain the combination of time scales that underlie the observed persistent structure of the velocity field and the forcing that produces it .these results can act as a test bed for the statistical structure of model results .the authors acknowledge nasa grant nnh13zda001n - cryo for support .jsw acknowledges swedish research council grant no .638 - 2013 - 9243 and a royal society wolfson research merit award for support .to obtain the spatial correlation functions , optimal interpolation is performed to interpolate the data on a rectangular grid using the following algorithm : * calculate the background field by first performing cubic interpolation on the raw data to analyze grid points for all days .then take the mean field .+ note : using a constant or a non - climatological field would be unsuitable as the results would be poorly constrained .* calculate the background error correlation matrix as , \label{eq : bg}\ ] ] where ^ 2 + [ y(i ) - y(j)]^2}12 & 12#1212_12%12[1][0] \doibase 10.1175/1520 - 0469(1982)039<2229:spotap>2.0.co;2 [ * * , ( ) ] in link:\doibase 10.1007/978 - 1 - 4899 - 5352 - 0{_}8 [ _ _ ] , , vol ., ( , ) chap . , pp . * * , ( ) link:\doibase 10.1029/jc087ic08p05845 [ * * , ( ) ] * * , ( ) * * , ( ) * * , ( ) link:\doibase 10.1029/2012gl053545 [ * * ( ) , 10.1029/2012gl053545 ] link:\doibase 10.1098/rsta.2014.0160 [ * * ( ) , 10.1098/rsta.2014.0160 ] * * , ( ) * * , ( ) * * , ( ) `` , '' ( ) , `` , '' ( ) , link:\doibase 10.1029/2008jc005227 [ * * ( ) , 10.1029/2008jc005227 ] * * , ( ) in link:\doibase 10.1007/978 - 1 - 4899 - 5352 - 0{_}14 [ _ _ ] , , vol . , ( , ) chap ., pp . `` , '' ( ) , * * , ( ) link:\doibase 10.1029/jc091ic06p07691 [ * * , ( ) ] _ _ ( , ) _ _ , ( , ) * * , ( )
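the optimal-interpolation step outlined in the appendix can be sketched as follows. the gaussian distance-decay correlation, the error variances and the flat background field are illustrative assumptions (the appendix instead builds the background from the data themselves, precisely because a constant field would be unsuitable), and the analysis update is the standard oi formula x_a = x_b + b h^t (h b h^t + r)^(-1) (y - h x_b).

```python
import numpy as np

def gaussian_corr(xy_a, xy_b, L):
    """Distance-based background-error correlation between two point sets."""
    d2 = ((xy_a[:, None, :] - xy_b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * L**2))

def oi_analysis(xy_grid, xy_obs, y_obs, xb_grid, xb_obs, L, sigma_b, sigma_o):
    """Standard OI update  x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b)."""
    B_go = sigma_b**2 * gaussian_corr(xy_grid, xy_obs, L)   # grid-to-obs covariance
    B_oo = sigma_b**2 * gaussian_corr(xy_obs, xy_obs, L)    # obs-to-obs covariance
    R = sigma_o**2 * np.eye(len(y_obs))                     # observation error covariance
    w = np.linalg.solve(B_oo + R, y_obs - xb_obs)
    return xb_grid + B_go @ w

def truth(p):                              # a smooth toy field standing in for pressure
    return np.sin(p[:, 0]) + np.cos(0.5 * p[:, 1])

rng = np.random.default_rng(3)
xy_obs = rng.uniform(0, 10, size=(40, 2))                   # scattered "buoy" locations
y_obs = truth(xy_obs) + 0.1 * rng.normal(size=len(xy_obs))

gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
xy_grid = np.column_stack([gx.ravel(), gy.ravel()])

# flat zero background for brevity; the appendix instead builds it from the data
xa = oi_analysis(xy_grid, xy_obs, y_obs,
                 xb_grid=np.zeros(len(xy_grid)), xb_obs=np.zeros(len(xy_obs)),
                 L=1.5, sigma_b=1.0, sigma_o=0.1)
print(np.abs(xa - truth(xy_grid)).mean())   # much smaller than the O(1) background error
```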
|
argued that the surface pressure field over the arctic ocean can be treated as an isotropic , stationary , homogeneous , gaussian random field and thereby estimated a number of covariance functions from two years ( 1979 and 1980 ) of data . given the active interest in changes of general circulation quantities and indices in the polar regions during the recent few decades , the spatial correlations in sea ice velocity fields are of particular interest . we ask how persistent are these correlations ? to this end , we develop a stochastic model for arctic sea ice velocity fields based on a multi - fractal analysis of observed sea ice velocity fields from satellites and buoys for the period 1978 - 2012 . having previously found that the arctic equivalent ice extent ( eie ) has a white noise structure on annual to bi - annual time scales , we assess the connection between eie and ice motion . we demonstrate the long - term stationarity of the spatial correlation structure of the velocity fields , and the robustness of their white noise structure on multiple time scales , which ( a ) combine to explain the white noise characteristics of the eie on annual to biannual time scales , and ( b ) explain why the fluctuations in the ice velocity are proportional to fluctuations in the geostrophic winds on time scales of days to months , as described by . moreover , we find that the statistical structure of these two quantities is commensurate from days up to years , which may be related to the increasing prevalence of free drift in the ice pack .
|
the study of the critical properties of magnetic systems plays an important role in statistical mechanics and as a consequence also in thermodynamics . for equilibrium ,the extensitivity of the entropy is a question of principle for most physicists .nevertheless , an important issue may be raised . while many physicists believe that statistical mechanics generalizations with an extra parameter are suitable to study the optimization combinatorial process as for example the simulated annealing ( see e.g. , ) , or areas such as econophysics , population dynamics and growth models , bibliometry and others . in this paper, we generate the critical dynamics of ising systems using a new master equation .this master equation leads to a generalized metropolis prescription , which depends only on the spin interaction energy variations with respect to its neighborhood .furthermore , it satisfies the detailed energy balance condition and it converges asymptotically to the generalized boltzmann - gibbs weights . in refs . generalized prescriptions have been treated as local . here , we demonstrate that they are instead non - local . however , a non - local prescription such as the one of ref . is numerically more expensive and destroys the phase transition .another possibility is to recover locality . using a special deformation of the master equation, we show how to recover locality for a generalized prescription and additionally recovering the detailed energy balance in equilibrium spin systems , maintaining the system phase transition .to apply our metropolis prescription , we have simulated a two dimensional ising system in two different ways : using equilibrium monte carlo ( mc ) simulations we estimate critical temperatures for different -values and performing time - dependent simulations . in the second part, we also calculate the critical exponents set corresponding to each critical temperature.finally , we have developed an alternative methodology to refine the determination of the critical temperature .our approach is based on the optimization of the magnetization power laws in log scale via of maximization of determination coefficient ( ) of the linear fits .our presentation is organized as follows . in sec .[ sec : critical_dynamics ] , we briefly review the results of the critical dynamics for spins systems . in this review ,we calculate the critical exponents for the several spin phases , that emerge from different initial conditions . in sec .sec : generalized_master_equation , we propose a new master equation that leads to a metropolis algorithm , which preserves locality and detailed energy balance , also for . in sec .[ sec : results ] , we simulate an equilibrium ising spin system in a square lattice and show the differences between the results of our approach and of refs .next , we evolve a ising spin system in a square lattice , from ordered and disordered initial conditions in the context of time dependent simulations . from such non equilibrium monte carlo simulations ,also called short time simulations , we are able to calculate the dynamic and static critical exponents ones .finally , the conclusions are presented in sec .[ sec : conclusions ] .here , we briefly review finite size scaling in the dynamics relaxation of spin systems . we present our alternative deduction of the some expected power laws in the short time dynamics context . 
readers , which want a more complete review about this topic , may want to read .this topic is based on time dependent simulations , and it constitutes an important issue in the context of phase transitions and critical phenomena .such methods can be applied not only to estimate the critical parameters in spin systems , but also to calculate the critical exponents ( static and dynamic ones ) through different scaling relations by setting different initial conditions .the study of the statistical systems dynamical critical properties has become simpler in nonequilibrium physics after the seminal ideas of janssen , schaub and schmittmann and huse .quenching systems from high temperatures to the critical one , they have shown universality and scaling behavior to appear already in the early stages of time evolution , via renormalization group techniques and numerical calculations respectively . hence , using short time dynamics , one can often circumvent the well - known problem of the critical slowing down that plagues investigations of the long - time regime . the dynamic scaling relation obtained by janssen _et al . _ for the magnetization _k_-th moment , extended to finite size systems , is written as the arguments are : the time ; the reduced temperature , with being the critical one , the lattice linear size and initial magnetization .here , the operator denotes averages over different configurations due to different possible time evolution from each initial configuration compatible with a given . on the equation right - hand - side, one has : an arbitrary spatial rescaling factor ; an anomalous dimension related to .the exponents and are the equilibrium critical exponents associated with the order parameter and the correlation length , respectively .the exponent is the dynamic one , which characterizes the time correlations in equilibrium .after the scaling and at the critical temperature , the first ( ) magnetization moment is : . denoting and , one has : .the derivative with respect to is : ] , where ] , meaning that each term in the summation vanishes . in this case, is the boltzmann distribution : , where the summation is over the different energy states and .employing detailed balance requires to find simple prescriptions for spins systems dynamics , as for example the metropolis prescription : =\min \{1,\exp [ -\beta ( e^{(a)}-e^{(b)})]\} ] , where . hereit is important to mention that is a scale temperature that can be used to interpret experimental and computational experiments .there is a heated ongoing discussion whether it is the physical temperature or not .the function is the generalized exponential . for ,one retrieves the standard exponential function .it is this singularity at that brings up interesting effects such the survival / extinction transitions in one - species population dynamical models .the inverse of the generalized exponential function is the generalized logarithmic function , which for leads to the standard logarithm function .notice that the inequality , for fixed produces a limiting value for .this generalized logarithmic function has been introduced first in the context on non - extensive thermostatistics tsallis_1,tsallis_2 and has a clear geometrical interpretation ss the area between 1 and underneath the non - symmetric hyperbole .it is interesting to notice , that in 1984 cressie and read proposed an entropy that would lead to a generalization of the logarithm function given by : . 
in this case, we would gain the limiting value in but lose its geometrical interpretation . to recover the additive property of the argument , when multiplying two generalized exponential functions : [ and [ consider the following algebraic operators : observe that , if , then and if , then .however , in equilibrium , the ising model prescribes an adapted metropolis dynamics that considers a generalized version of exponential function crokidakis2009,boer2011 : = \frac{p_{1-q}[e^{(a)}]}{% p_{1-q}[e^{(b ) } ] } = \left\ { \frac{e_{1-q}[-\beta^{\prime } e^{(a)}]}{% e_{1-q}[-\beta^{\prime } e^{(b)}]}\right\}^{q } \ ; .\label{metropolis_i}\ ] ] from the generalization of the exponential function in the boltzmann - gibbs weight , the transition rate of eq .[ metropolis_i ] can be used to determine the system evolution , as the metropolis algorithm .nevertheless , we stress that in such a choice , the dynamics is not local . because generalized exponential functions are non - additive , a spin flip introduces a change in the system energy that is spread all over the lattice .more precisely , consider the ising model in a square lattice , one can show that : }{e_{1-q}[-\beta ^{\prime } e^{(b)}]}% = e_{1-q}\{-\beta ^{\prime } [ e^{(a)}\ominus _ { 1-q}e^{(b)}]\ } \label{eq_nonlocal0}\]]or:}{e_{1-q}[-\beta ^{\prime } e^{(b)}]}% \neq e_{1-q}\{-\beta ^{\prime } [ e^{(a)}-e^{(b)}]\}\ ; , \label{eq_nonlocal}\]]where is given by eq .[ eq : energy_difference ] , which depends only the spins that directly interact with the flipped spin , violating the detailed energy balance .in refs , the authors consider ( with no explanations ) the equality in eq .[ eq_nonlocal ] , instead of considering eq .[ eq_nonlocal0 ] .thus , the detailed energy balance is violated , since the system is updated following a local calculation of the generalized metropolis algorithm of eq .[ metropolis_i ] . to correct this problem, one must update the spin system using the non - locality of eq .[ metropolis_i ] , which is numerically expensive , since the energy of the whole lattice must be recalculated due to a simple spin flip .the other alternative is to require that the transition rate depends locally in the energy difference of a simple spin flip , which in turn leads us to a modified master equation .since the former is very expensive numerically , we explore only the latter alternative which is numerically faster and is able to produce statistically significant results for fairly large spin systems .based on the operators of eq .[ eq_mais ] to eq .[ eq_dividir ] , we propose the following generalized master equation : }{dt}=\sum\limits_{\sigma _ { i}^{(b ) } } & & w[\sigma _ { i}^{(b)}\rightarrow \sigma _ { i}^{(a)}]\otimes _ { \tilde{q}% /q}p_{q}[e^{(b)}]\;\ominus _ { \tilde{q}/q } \nonumber \\ & & w[\sigma _ { i}^{(a)}\rightarrow \sigma _ { i}^{(b)}]\otimes _ { \tilde{q}% /q}p_{q}[e^{(a)}]\;.\label{generalized_master_equation}\end{aligned}\]]where is given by eq .[ eq : pq ] . here , it is suitable to call and write the generalized exponentials as a function of . 
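the deformed algebra is easy to make concrete in code. the sketch below adopts one common convention, exp_q(x) = [1+(1-q)x]_+^{1/(1-q)} together with its q-sum and q-difference (the operators above are indexed differently, e.g. by 1-q, but the structure is the same up to how the deformation parameter is labelled), and verifies numerically that the ratio of two generalized exponentials equals exp_q of the q-difference of the arguments, not of their ordinary difference, which is exactly the source of the non-locality discussed here.

```python
import numpy as np

def exp_q(x, q):
    """Generalized exponential [1 + (1-q) x]_+^{1/(1-q)}; -> exp(x) as q -> 1.
    The zero cutoff is the form appropriate for q < 1, as used below."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def ln_q(x, q):
    """Inverse of exp_q: (x^(1-q) - 1)/(1-q); -> log(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_sum(x, y, q):     # x (+)_q y
    return x + y + (1.0 - q) * x * y

def q_diff(x, y, q):    # x (-)_q y
    return (x - y) / (1.0 + (1.0 - q) * y)

q, x, y = 0.6, 0.8, -0.3
print(np.isclose(exp_q(x, q) * exp_q(y, q), exp_q(q_sum(x, y, q), q)))    # True
print(np.isclose(exp_q(x, q) / exp_q(y, q), exp_q(q_diff(x, y, q), q)))   # True
print(np.isclose(exp_q(x, q) / exp_q(y, q), exp_q(x - y, q)))             # False unless q = 1
print(np.isclose(ln_q(exp_q(x, q), q), x))                                # True
```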
in equilibrium , and a dynamics governed by eq [ eq : pq ] .the detailed balance ( a sufficient condition to equilibrium ) for the generalized master equation is \oslash _ { \tilde{q}/q } w[\sigma_{i}^{(a)}\rightarrow \sigma_{i}^{(b ) } ] = p_{q}[e^{(a ) } ] \oslash _ { \tilde{q}/q } p_{q}[e^{(b ) } ] \ ; , \ ] ] which leads to a new generalized metropolis algorithm : ^{q}\oslash _ { \tilde{q}/q}\left [ e_{\tilde{q}}(-\beta ^{\prime } e^{(b)})\right]^{q}\right\}% \\ \nonumber = \min \left\ { 1 , \left [ e_{\tilde{q}}(-\beta ^{\prime } ( e^{(a)}-e^{(b)}))\right]^{q}\right\ } \\ & = & \min \left\ { 1,\left [ e_{\tilde{q}}(\beta^{\prime } j\left [ \sigma^{(a)}_{i_{x},i_{y}}-\sigma^{(b)}_{i_{x},i_{y}}\right ] s_{i_{x},i_{y}})\right ] ^{q}\right\ } \label{metropolis_ii}\end{aligned}\ ] ] and now the transition probability depends only on energy between the read site and its neighbors , i.e. , locality is retrieved .we have performed monte carlo simulations of the square lattice ising model in the context of generalized boltzmann - gibbs weights .these simulations are based on two approaches for metropolis dynamics .the first one ( metropolis i ) is described in ref . , where the nonlocal transition rate of eq .( [ metropolis_i ] ) is used to update the spin system . in the second approach ( metropolis ii ) , the local transition rate of eq .metropolis_ii is used .we separate our results in two different subsections : the equilibrium simulations and short time critical dynamics . in this partwe analyze the magnetization , where denotes averages under monte carlo ( mc ) steps .we perfom mc simulations for , and . in the simulations , we have used up to , with periodic boundary conditions and a random initial configuration of the spins with . differently from reported in ref . , where the results have been obtained after mc steps per spin , we have used mc steps per spin , an equilibrium situation consistent with the one reported by newman and barkema .this results in mc steps for the whole lattice of up to spins . , 0.8 and 0.6 . using the dynamics based on metropolis ii , we observe phase transitions for critical values upper to as differently from previous studies , which are based on metropolis i. ] fig . [fig : magnetization_equilibrio ] shows the magnetization curves as function of critical temperature for different -values .the critical temperature increases as decreases .this behavior , using our algorithm ( metropolis ii ) differs from the one obtained using the algorithm of refs . and( metropolis i ) .we stress that both algorithms agree for , the usual boltzmann - gibbs weights , converging to the theoretical value . in table[ table : critical_values_metropolis_i_and_ii ] , we show the critical temperature and error obtained from the extrapolation ( see fig .[ extrapolation ] ) using both algorithms .these results suggest a thorough difference among the processes and critical values found between two the dynamics metropolis i and ii . in fig .fig : magnetization_equilibrio , the curves show phase transitions for critical values upper to as .this differs from previous studies , which are based on prescription metropolis i. fig .[ fig : magnetization_equilibrio ] shows that , differently from the and cases , for the discontinuity in the magnetization curve does not depend on system size .in fact , in this case , the critical temperature does not depend on .this effect occurs due to the cutoff of the escort probability distribution as reported for metropolis i for . 
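a minimal sketch of the resulting local update for the two-dimensional ising model is given below. the acceptance probability is read off eq. (metropolis_ii) as min{1, [exp_q(-beta' delta e)]^q}, and a single deformation parameter is used for both the q-exponential and the outer power, which simplifies the q-tilde versus q bookkeeping of the text; lattice size, temperature and q are illustrative and are not meant to reproduce the critical values reported here.

```python
import numpy as np

def exp_q(x, q):
    # generalized exponential with cutoff (q < 1 here), as in the acceptance rule above
    if np.isclose(q, 1.0):
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def sweep_metropolis_ii(spins, beta, q, rng, J=1.0):
    """One lattice sweep of the *local* generalized Metropolis dynamics:
    the acceptance depends only on the flipped spin and its four neighbours
    (periodic boundary conditions)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn      # E(flipped) - E(current)
        # acceptance min{1, [exp_q(-beta * dE)]^q}, one q used for both roles
        if rng.random() < min(1.0, exp_q(-beta * dE, q) ** q):
            spins[i, j] *= -1

rng = np.random.default_rng(4)
L, q, beta = 32, 0.8, 0.5               # illustrative values, not a T_c estimate
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    sweep_metropolis_ii(spins, beta, q, rng)
print("magnetization per spin:", abs(spins.mean()))
```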
for metropolis ii ,[ extrapolation ] depicts that remains constant for all values of , for . for both cases , ( obviously ) and ,we have verified that and , obtained from the collapse of the curves versus .this data collapse permits the extrapolation of versus , since for both cases according to fig .[ extrapolation ] . in following ,we show using non - equilibrium simulations that , for and , validating the data collapse results ( see table [ table : best_values_critical_temperature_metropolisii ] ) . ) of critical temperatures for different : and for the 2d ising model . ] [ cols="^,^,^",options="header " , ] the magnetization decay obtained by the metropolis ii algorithm depicted in fig .[ fig : decay_magnetization ] . after obtaining these estimates for the critical temperatures ,we perform short time simulations to obtain the critical dynamic exponents and and the static one , using the power laws of sec .[ sec : critical_dynamics ] . here , we calculated from time correlation .tom and de oliveira showed that correlation behaves as , where is exactly the same exponent from initial slope of magnetization from lattices prepared with initial fixed magnetization .the advantage of this method is that we repeat runs , but the lattice does not require a fixed initial magnetization .it is enough to choose the spin with probability 1/2 .i.e. , in average . this method does not require the extrapolation .[ f2 ] and [ correlation ] depict plots of time evolving of of eq .[ z ] and as a function of for the different metropolis algorithm . to obtain the exponentsconsider the following steps .firstly , in simulations that start from the ordered state and , calculate the slope of the linear fit of as a function of .the error bars are obtained , via running simulations for , calculating that each for seed , with runs . versus in log scale .the slope gives which supplies the -value .both prescriptions ( metropolis i e ii ) are studied . ] for two prescriptions : metropolis i and ii . ] once we have calculated , we estimate taking the slope in log - log plot versus .we used different runs starting from random spins configurations with for time series and the same number of runs for time series starting from (ordered state ) .similarly , we repeated the numerical experiment for different seeds to obtain the uncertainties . in two dimensional systems ,the slope is ( see eq .[ z ] ) and so is calculated according to and the uncertainty in is obtained by relation . herethe denotes the amount estimated from different seeds .once is calculated , the exponent is calculated according to , where was estimated via magnetization decay and from cumulant .the exponent was similarly obtained performing different runs to evolve the time series of correlation and estimating directly the slope in this case .tables [ table : best_values_critical_temperature_metropolisi ] and table : best_values_critical_temperature_metropolisii show results for the critical exponents obtained with the two algorithms .we do not observe a monotonic behavior of the critical exponents as function of in either case but on the other hand for both cases we can not assert , for example , that ] ( metropolis ii ) or even other exponents do not change for which implies that we can not simply extrapolate the critical properties from to .in the non - extensive thermostatistics context , we have proposed a generalized master equation leading to a generalized metropolis algorithm . 
this algorithm is local and satisfies the detailed energy balance to calculate the time evolution of spins systems .we calculate the critical temperatures using the generalized metropolis dynamics , via equilibrium and non - equilibrium monte carlo simulations .we have obtained the critical parameters performing monte carlo simulations in two different ways .firstly , we show the phase transitions from curves versus , considering the magnetization averaging , in equilibrium , under different mc steps .next , we use the short time dynamics , via relaxation of magnetization from samples initially prepared of ordered or disordered states , i.e. , time series of magnetization and their moments averaged over initial conditions and over different runs . we have also studied the metropolis algorithm of refs .we show that it does not preserve locality neither the detailed energy balance in equilibrium .while our non - equilibrium simulations corroborate results of refs. when we use their extension of the metropolis algorithm ( metropolis i ) , the exponents and critical temperatures obtained are very different when we use our prescription ( metropolis ii ) .when the extensive case is considered , both methods lead to the same expected values .simultaneously , we have developed a methodology to refine the determination of the best critical temperature .this procedure is based on optimization of the power laws of the magnetization function that relaxes from ordered state in log scale , via of maximization of determination coefficient of the linear fits .this approach can be extended for other spin systems , since their general usefulness . for a more complete elucidation about existence of phase transitions for , we have performed simulations for small systems mc simulations , recalculating the whole lattice energy in each simple spin flip , according to metropolis i algorithm only to check the variations on the critical behavior of the model .notice that this does not apply to metropolis ii algorithm , since it has been designed to work as the standard metropolis one .our numerical results show discontinuities in the magnetization , but no finite size scaling , corroborating the results of ref . , which used the broad histogram technics to show that no phase transition occurs for using metropolis i algorithm .it is important to mention that only metropolis i shows inconsistence on critical phenomena of model since global and local simulation schemes leads to different critical properties .metropolis ii overcomes this problem since local and global prescriptions are the same even for .broad histogram method works with a non - biased random walk that explore the configuration space , leading to a phase transition suppression for .nevertheless this algorithm must also be adapted to deal with the generalized boltzmann weight in the same way the master equation needed to be modified .this is out of the scope of the present paper but this issue will be treated in a near future .the authors are partly supported by the brazilian research council cnpq under grants 308750/2009 - 8 , 476683/2011 - 4 , 305738/2010 - 0 , and 476722/2010 - 1 .authors also thanks prof .u. hansmann for carefully reading this manuscript , as well as cesup ( super computer center of federal university of rio grande do sul ) and prof .leonardo g. 
brunet (if-ufrgs) for the available computational resources and support of clustered computing (ada.if.ufrgs.br). finally, we would like to thank the anonymous referees for the excellent quality of their reviews.
|
the extension of boltzmann-gibbs thermostatistics proposed by tsallis introduces an additional parameter to the inverse temperature. here, we show that a previously introduced generalized metropolis dynamics used to evolve spin models is not local and does not obey detailed energy balance. in this dynamics, locality is only retrieved for , which corresponds to the standard metropolis algorithm. non-locality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated whenever a single spin is flipped. to circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized metropolis dynamics that obeys detailed energy balance. to compare the critical values obtained with other generalized dynamics, we perform equilibrium monte carlo simulations for the ising model. using short-time non-equilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of . even for , we show that suitable time-evolving power laws can be found for each initial condition. our numerical experiments corroborate the literature results when we use the non-local dynamics, showing that short-time parameter determination also works in this case. however, the dynamics governed by the new master equation leads to different critical temperatures and critical exponents, affecting the universality classes. we further propose a simple algorithm that refines the power-law modelling of the time evolution by comparing two successive refinements of the linear fit in a log-log plot.
|
advances in computational methods and numerical simulations have allowed to incorporate efficient models that are capable of describing real problems. introduced by ghitany et .al ( 2011 ) the weight lindley distribution with two parameters , is very flexible model to be fitted by reliability data since this distribution has increasing and bathtub hazard shape .some properties of this model were studied by ghitany et .al ( 2011 ) as well as the parameter estimation based on the maximum likelihood method .mazucheli et al . (2013 ) compare the efficiency of four estimation methods : maximum likelihood , method of moments , ordinary least - squares , and weighted least - squares and conclude that the weighted least - squares method reproduces similar results to those obtained using the maximum likelihood . using a bayesian approach ali ( 2013 ) consider different non - informative and informative prior for the parameters of the wl distribution . however , in studies involving a temporal response , is common the presence of incomplete or partial data , the so called censored data ( lawless , 2002 ) .it is important to point out that even incomplete these data provide important information about the lifetime of the components and the omission of those can result in biased conclusions . in literaturethere are different mechanisms of censorship ( balakrishnan & aggarwala , 2000 ; lawless , 2002 ; balakrishnan & kundu , 2013 ) . due to the large number of applications in medical survival analysis and industrial life testing ,it will be considered censored data with type ii , type i and random censoring mechanism .some referred papers regarding the reliability applications with those types of censoring can be seen in ghitany & al - awadhi ( 2002 ) , goodman et .al . ( 2006 ) , joarder et . al . (2011 ) , iliopoulos & balakrishnan ( 2011 ) , arbuckle et .2014 ) .the main objective of this paper is to estimate the parameters of the weight lindley distribution using the maximum likelihood estimation and considering data with different types of censoring , such as , type ii , type i and random censoring mechanism .the originality of this study comes from the fact that , for the weight lindley distribution , there has been no previous work considering data with censoring mechanisms .the paper is organized as follows . in section 2 , we review some properties of the weight lindley distribution . in section 3 ,we present the maximum likelihood method and its properties . in section 4 ,we carry out inference for this model considering different censoring mechanism . in section 5a simulation study is presented . in section 6the methodology is illustrated in a real data set .some final comments are presented in section 7 .let be a random variable representing a lifetime data , with weighted lindley distribution and denoted by , the probability density function ( p.d.f ) is given by for all , and and is known as gamma function .the wl ( [ fdpwl ] ) distribution can be expressed as a two - component mixture where and has p.d.f distribution , for the mean and variance of the wl distribution can be easily computed by the survival function of with the probability of an observation does not fail until a specified time is where is called upper incomplete gamma .the hazard function quantify the instantaneous risk of failure at a given time .the hazard function of is given by figure [ friswl ] gives examples from the shapes of the hazard function for different values of and . 
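For concreteness, the density, survival and hazard functions above, together with the two-component gamma mixture representation, can be evaluated numerically as in the sketch below. It assumes the usual two-parameter weighted Lindley form of ghitany et al. (2011), written here with shape parameter c and rate parameter lam (a notational assumption), for which the mixture weights are lam/(lam+c) and c/(lam+c).

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammaln

def wl_pdf(x, lam, c):
    # f(x) = lam^(c+1) / ((lam + c) Gamma(c)) * x^(c-1) * (1 + x) * exp(-lam x)
    logf = ((c + 1) * np.log(lam) - np.log(lam + c) - gammaln(c)
            + (c - 1) * np.log(x) + np.log1p(x) - lam * x)
    return np.exp(logf)

def wl_survival(x, lam, c):
    # Mixture of Gamma(c, rate=lam) and Gamma(c+1, rate=lam) survival functions.
    p = lam / (lam + c)
    return (p * gamma.sf(x, c, scale=1.0 / lam)
            + (1 - p) * gamma.sf(x, c + 1, scale=1.0 / lam))

def wl_hazard(x, lam, c):
    return wl_pdf(x, lam, c) / wl_survival(x, lam, c)

def wl_sample(n, lam, c, rng=np.random.default_rng()):
    # Draw from the two-component gamma mixture representation.
    p = lam / (lam + c)
    shapes = np.where(rng.random(n) < p, c, c + 1)
    return rng.gamma(shape=shapes, scale=1.0 / lam)
```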
andusing the classical approach , the maximum likelihood estimators was chosen due to its asymptotic properties .maximum likelihood estimators are obtained from maximizing the likelihood function ( see , casella e berger , 2002 ) .the likelihood function of given , is for a model with parameters , if the likelihood function is differentiable at , the likelihood equations are obtained by solving the equation system the solutions of ( 4 ) provide the maximum likelihood estimators . in many cases , numerical methods such as newton - rapshonare required to find the solution of the nonlinear system .the maximum likelihood estimators of are biased for small sample sizes . for large samples they are not biased and asymptotically efficient .such estimators , under some regularity conditions , have an asymptotically normal joint distribution given by , \mbox { para } n \to \infty , \ ] ] where is the fisher information matrix , and , is the fisher information of in and given by , ,\ i , j=1,2,\ldots , k.\ ] ] in the presence of censored observations , usually , its not possible to compute the fisher information matrix , an alternative is consider the observed information matrix , where the terms is given by for large samples , approximated confidence intervals can be constructed for the individuals parameters , with confidence coefficient , through marginal distributions given by \mbox { para } n \to \infty .\ ] ]in this section , we provide the maximum likelihood estimator for the two parameters of the weight lindley distribution considering type ii , type i and random censored data .other types of censoring such as progressive type ii censoring ( balakrishnan & aggarwala , 2000 ) and hybrid censoring mechanism ( balakrishnan & kundu , 2013 ) can also be obtained to wl distribution . usually in industrial experiments ,the study of some electronic components are finished after a fixed number of failures , in this case components will be censored .this mechanism of censoring is call type ii , see casela & berger ( 2001 ) for more details , and its likelihood function is given by where is the order statistic .let be a random sample of wl distribution , that is , .the likelihood function is given by , the logarithm of the likelihood function ( [ verost2 ] ) is given by , from and , we get the likelihood equations , where , and is known as meijer g - function .the solutions provide the maximum likelihood estimators of and .numerical methods such as newton - rapshon are required to find the solution of the non - linear system . in the presence of typei censored data , a fixed time is predetermined at the end of the experiment .consider patients in a treatment and suppose that died until the time , then are alive and will be censored .the likelihood function for this case is given by where is a random variable and is an indicator function .let be a random sample of wl distribution , that is , .the likelihood function is given by , the logarithm of the likelihood function ( [ eqverodwc1 ] ) is given by , from and , we get the likelihood equations , the solutions provide the maximum likelihood estimators of and . in medical survival analysis and industrial life testing ,random censoring schemes has been receive special attention .suppose that the component could experiment censoring in time , then the data set is , were and .this type of censoring have as special case type i and ii censoring mechanism . 
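Before turning to the random-censoring likelihood below, note that all three mechanisms lead to the same computational recipe: up to constants that do not affect the maximizer, the log-likelihood adds a log-density term for each observed failure and a log-survival term for each censored time, and it is maximized numerically. A minimal sketch, reusing wl_pdf and wl_survival from the earlier snippet; the optimizer used here is illustrative and is not the routine employed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def censored_neg_loglik(params, times, delta):
    """delta[i] = 1 for an observed failure, 0 for a censored time.
    Up to constants, this covers type I, type II and random censoring alike:
    a density term for failures and a survival term for censored observations."""
    lam, c = params
    if lam <= 0 or c <= 0:
        return np.inf
    ll = (delta * np.log(wl_pdf(times, lam, c))
          + (1 - delta) * np.log(wl_survival(times, lam, c))).sum()
    return -ll

def wl_mle(times, delta, start=(1.0, 1.0)):
    res = minimize(censored_neg_loglik, x0=np.asarray(start, dtype=float),
                   args=(times, delta), method="Nelder-Mead")
    return res.x  # (lam_hat, c_hat)
```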
the likelihood function for this case is given by be a random sample of wl distribution , that is , .the likelihood function considering data with random censoring is given by , the logarithm of the likelihood function ( [ eqverodwca ] ) is given by , from and , we get the likelihood equations , the solutions provide the maximum likelihood estimators of and .in this section we develop a simulation study via monte carlo methods . the main goal of these simulations is to study the efficiency of the proposed method .the following procedure was adopted : 1 . set the sample size and the parameter values ; 2 .generate values of the with size ; 3 . using the values obtained in step 2 ,calculate the mle e ; 4 .repeat the steps and times ; 5 . using and ,compute the mean relative estimates ( mre ) , the mean square errors ( mse ) , the bias and coverage probability .it is expected that for this approach the mre s are closer to one with smaller mse .the coverage probability was computed for the confidence intervals . for a large number of experiments , using a confidence level of , the frequencies of intervals that covered the true values of should be close .the type ii censored data were drawn setting the completed data and were censored . to generate type i and random censored data, we utilize the same methods used by goodman et .al . ( 2006 ) and bayoud ( 2012 ) , using these approaches it is expected that the proportions of censoring $ ] are approximately and .the results were computed using the software r ( r core development team ) .the seed used to generate the random values was 2014 .the chosen values to perform this procedure were , and .the values of were selected to allow the increasing and bathtub shape in the hazard function .the maximum likelihood estimates were computed using the log - likelihood functions ( [ logverct2 ] ) , ( [ logverct1 ] ) and ( [ logverctr ] ) and the package maxlik available in r to maximize such functions .the coverage probabilities were also calculated using the numeric observed information matrix obtained from the maxlik package results .tables 1 - 6 shows the mre s , mse s , bias and the coverage probability c with a confidence level equals to from the estimates obtained using mle for simulated samples , considering different values of , and of censored data ..mre , mse , bias and c estimates for samples of sizes , with and of type ii censored data .[ cols="^,^ , > , > , > , > , > , > , > , > " , ] in the figure [ grafico - obscajust2 ] , we have the ttt - plot , survival function adjusted by different distributions and kaplan meier estimator and the hazard function adjusted by wl distribution .similar to first dataset , comparing the empirical survival function with the adjusted distributions and through the aic results it can be observed a good fit for the weight lindley distribution .based on the ttt - plot there is a indication that the hazard function has bathtub shape .the hazard function adjusted by weight lindley distribution confirm those results .therefore , through the proposed methodology the data considering the electrical devices can be described by the weight lindley distribution .in this paper , we derived the maximum likelihood equations for the parameters of the weight lindley distribution considering different types of censoring , such as , type i , type ii and random censoring mechanism .based on simulation studies and on real applications , we demonstrated that using the proposed methodology it was possible to obtain good estimates of 
the parameters of the weight lindley distribution. these results are of great practical interest, since they enable the use of the weight lindley distribution in a wide range of applications. there are a number of possible extensions of the current work. the presence of covariates, as well as of long-term survivors, is very common in practice, and our approach should be investigated in both contexts. a possible route is to consider the regression schemes adopted by achcar & louzada-neto (1992) and perdona & louzada-neto (2011), respectively. the research was partially supported by cnpq, fapesp and capes of brazil. arbuckle, t.e.; davis, k.; marro, l.; fisher, m.; legrand, m.; et al. phthalate and bisphenol a exposure among pregnant women in canada - results from the mirec study. environment international, v. 68, p. 55-65, 2014. ghitany, m.e.; alqallaf, f.; al-mutairi, d.k.; husain, h.a. a two-parameter weighted lindley distribution and its applications to survival data, mathematics and computers in simulation, 81, 1190-1201, 2011.
|
in this paper the maximum likelihood equations for the parameters of the weight lindley distribution are studied considering different types of censoring, namely type i, type ii and random censoring mechanisms. a numerical simulation study is performed to evaluate the maximum likelihood estimates. the proposed methodology is illustrated on a real data set. * keywords *: weight lindley distribution, maximum likelihood estimation, censored data, random censoring.
|
in cellular networks where each base transceiver station ( bts ) independently transmits to mobile stations within its own cell , inter - cell interference is a major limitation on the sum spectral efficiency . rather than treating inter - cell interference as noise, the modern view is that it can be exploited by coordinating transmissions from the btss .it is well - known that coordinated transmissions can potentially increase the spectral efficiency dramatically ( e.g. , see ) .a recent comprehensive review of multicell coordination techniques is given in and references therein .there can be different levels of bts coordination .the basic level is to share channel state information ( csi ) for the direct and interfering channels among the btss . that allows the btss to adapt their transmission strategies to channel conditions jointly , and includes inter - cell joint power control , user scheduling , and beamforming .( see also , which consider power allocation and beamforming for peer - to - peer ( interference ) networks . )these techniques treat the inter - cell interference as noise , but it is mitigated by exploiting the heterogeneity of csi across different users . a higher level of coordination is multicell joint processing , which requires the cooperating btss to exchange message data in addition to csi .interference can be mitigated by using `` virtual '' or `` network '' multiple - input multiple - output ( mimo ) techniques , which view all interfering signals as carrying useful information .although multicell joint processing can potentially provide substantial performance gains , it introduces a number of challenges .in particular , most coordinated transmission schemes in the literature not only require knowledge of codebooks and perfect csi at all transmitters and receivers , but also require the cooperating transmissions to be aligned in phase so that transmissions superpose coherently at the receivers .phase - aligning oscillators at different geographical locations is difficult , since small carrier frequency offsets translate to large baseband phase rotations .this paper presents a _ noncoherent _ scheme for downlink cooperation , which does not require phase alignment at the transmitters . for simplicity, we consider a scenario where two btss cooperatively transmit to two mobiles assigned the same time - frequency resource block , one in each cell as depicted in fig .[ f : twocells ] .it is assumed the two btss partially or fully share their messages via a bi - directional dedicated link .each bts may transmit a superposition of two codewords , one for each receiver .each receiver decodes only its own message , and treats the undesired signals as background noise . 
assuming that gaussian codebooks are used to encode all messages , the proposed scheme is simple : the message intended for each receiver is in general split into two pieces to be transmitted by the two btss , respectively .the rate and power allocations across the messages at each bts are then optimized .that is , for a given set of channel gains the available power at each bts is divided between a signal used to transmit its own message and a signal used to transmit the message from the other bts .this cooperative scheme is motivated by scenarios where each bts has no _ a priori _ information about the phase at the other bts .while the optimal ( capacity - achieving ) cooperative transmission scheme is unknown , and seems to be difficult to determine , the proposed rate - splitting scheme is a likely candidate .furthermore , it serves as a baseline for comparisons with other schemes in which limited phase information may be obtained .we optimize the powers allocated across the data streams and associated beamformers with multiple transmit antennas with cooperative transmissions for both narrowband and wideband scenarios . for narrowband channels with a single transmit antenna the frontier of the achievable rate regionis computed by solving a _ linear - fractional program_. the weighted sum rate can be maximized by comparing at most six extremal rate pairs in the constraint set for transmit power . with multiple transmit antennas ,the achievable rate region can be characterized by maximizing the weighted sum rate over the allocated power and the beamforming vector for each message , and the resulting optimization problem can be solved efficiently by numerical methods .this noncoherent cooperative scheme often achieves a significantly larger rate region and much higher sum rate than non - cooperative schemes . with wideband ( frequency - selective ) channels ,the power is allocated over multiple sub - carriers .maximizing the sum rate is in general a non - convex problem . under mild assumptions ,however , the dual problem can be solved efficiently .moreover , we propose a suboptimal power allocation scheme for the case of a single transmit antenna , which admits a simple analytical solution .this suboptimal scheme performs almost as well as the optimal power allocation when the direct- and cross - channel gains are of the same order .the optimization problems presented can be easily extended to more than two btss and two mobiles. however , the structure of the solution becomes more complicated , necessitating general numerical ( convex programming ) techniques .here we focus on the scenario with two mobiles in adjacent cells since in practice a particular mobile is likely to have only one relatively strong interferer , and coordinating among more than two mobiles across cells becomes quite complicated .( this complication can be compounded by the scheduler , which may reassign nearby mobiles different time - frequency resources over successive frames . )finally , the two - mobile scenario provides significant insight into the potential gains of the cooperative scheme .consider downlink transmission in two adjacent cells each with the same set of narrowband channels .within each cell , the signals from the bts to its associated different mobiles occupy non - overlapping time - frequency resources ; however , in each time - frequency slot , there can be inter - cell interference . herewe consider two mobiles in adjacent cells assigned the same narrowband channel . 
assuming a narrowband system with block fading and single transmit antenna at each bts and single receive antenna at each mobile , the baseband signal received by mobile during the -th symbol interval is where , denotes the positive block fading gain from bts to mobile , , for , denotes the transmitted signal from bts at the -th symbol interval , denotes the phase of the fading channel from bts to mobile , and denotes the noise at mobile , which is a sample from a sequence of independent , unit - variance circularly symmetric complex gaussian ( cscg ) random variables . [ tcsi ] .knowledge of csi at each terminal . [ cols="^,^,^,^,^,^,^,^,^",options="header " , ] [ pr : stationary ] if , then the power allocation , which maximizes satisfies for , and has the form shown in table [ t ] where . the proofs of propositions [ pr : miueq1 ] and [ pr : stationary ] consist of examining the stationary points associated with the two conditions in lemma 1 .( note that there must exist a point on the rate frontier that achieves sum rate . )we first show that if , then achieves its maximum at one of the four corner points listed in table [ t ] . from ( [ eq : rk ] ) and lemma 1, we have +\mu\log\bigg[\frac{1+g_{21}p_1+g_{22}p_2}{1+g_{21}p_{11}+g_{22}p_{12}}\bigg].\ ] ] it is easy to show that for any fixed , is increasing with so that is maximized at an extreme value for .more generally , it is straightforward to show that is increasing with for all and .hence is maximized at one of the extreme points of the power constraint set . to find stationary points on the rate region frontier satisfying , lemma 1 states that we can assume . in general ,the rate region frontier may not contain points satisfying the condition . in that case, the power allocation schemes satisfying must be suboptimal .however , without knowing the rate pairs on the frontier , we can assume this condition is satisfied and characterize the stationary points , which then serve as candidate points for achieving the maximum weighted sum rate .this gives two possible frontiers corresponding to the two types of power allocations in table [ t ] .namely , one candidate frontier is obtained by fixing ( or ) and sweeping the value of ( or ) across the interval ] ) .the other candidate frontier is obtained by fixing ( or ) and sweeping the value of ( or ) over the interval ] ) .the actual rate frontier is then the maximum of the two candidate frontiers .for the first candidate frontier we have examining , the maximizing value is the solution to the quadratic equation , where , , and .since , the smaller root is the solution .it is easy to check that if , , and satisfy and , then achieves the maximum .similarly , we could fix and optimize over .the resulting necessary conditions show that if ( or ) , then maximizing over ( or ) gives a candidate stationary point , as stated in proposition [ pr : stationary ] .a similar argument shows that if ( or ) , then maximizing over ( or ) gives a second candidate stationary point on the frontier .if , then , which implies that ( if it is real ) . it can be similarly verified for the other cases that there are no valid stationary points in the interior of the power constraint set , which establishes proposition [ pr : miueq1 ] .the preceding propositions state that the maximum weighted sum rate can be efficiently determined by searching over the small number of candidate power allocations shown in table [ t ] . 
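The corner-point search implied by the propositions can be written down directly. The sketch below assumes the gain indexing g[k][j] = power gain from bts j to mobile k, which is what the rate expressions (rc2)-(rc3) in the next section suggest, and labels the four full-power assignments as: both btss serve mobile 1, both serve mobile 2, the cross assignment, and the direct (non-cooperative) assignment; the exact layout of table [t] is not reproduced here. Logs are base 2, which does not affect the argmax.

```python
import numpy as np

def corner_rates(g, p1, p2, mu):
    """Weighted sum rates R1 + mu*R2 for the four full-power corner
    assignments.  g[k][j] is the power gain from BTS j to mobile k
    (an assumed indexing convention)."""
    g11, g12 = g[0]          # mobile 1 from BTS 1, BTS 2
    g21, g22 = g[1]          # mobile 2 from BTS 1, BTS 2
    both_to_1 = np.log2(1 + g11 * p1 + g12 * p2)                    # R1^(c)
    both_to_2 = mu * np.log2(1 + g21 * p1 + g22 * p2)               # R2^(c)
    cross = (np.log2(1 + g12 * p2 / (1 + g11 * p1))                 # R3^(c)
             + mu * np.log2(1 + g21 * p1 / (1 + g22 * p2)))
    direct = (np.log2(1 + g11 * p1 / (1 + g12 * p2))                # R^(nc)
              + mu * np.log2(1 + g22 * p2 / (1 + g21 * p1)))
    return {"both_to_1": both_to_1, "both_to_2": both_to_2,
            "cross": cross, "direct": direct}

def best_corner(g, p1_max, p2_max, mu):
    """Both power constraints are binding at the optimum, so evaluate the
    corners at full power and keep the best; the two interior stationary
    candidates of the second proposition could be appended here as well."""
    rates = corner_rates(g, p1_max, p2_max, mu)
    return max(rates.items(), key=lambda kv: kv[1])
```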
this will be used as the basis for optimizing wideband power allocations discussed in the next section .we now consider a wideband system with frequency - selective channels .a wideband channel is modeled as a set of discrete ( parallel ) channels .each sub - channel is modeled similarly as ( [ eq : recsig1 ] ) , with the same csi known at the terminals . instead of one power constraint for each sub - channel ,the sub - channels are subject to a total power constraint at each bts .the problem is to maximize the weighted sum across users of the rates summed across sub - channels : where denotes the sub - channel index , denotes the power allocated to sub - channel , and denotes the total power constraint at bts .rates and are given by ( [ eq : rk ] ) , where the channel gains and powers depend on .this can be viewed as a two - level optimization problem . at the lower levelthe weighted sum rate is maximized for each sub - channel given its allocated power .the upper level then optimizes the power allocation across sub - channels subject to the total power constraints .based on the discussion in the last section , the maximum rate for each sub - channel is achieved by one of the cases in table [ t ] . in general , solving the two - level problem requires iterating between the lower- and upper - levels .the three cooperative power assignments in table [ t ] give the following weighted sum rates for sub - channel : \label{eq : rc1}\\ r_2^{(\text{c } ) } ( l)\triangleq&\mu \log\big[1+g_{21}(l)p_1(l)+g_{22}(l)p_2(l)\big]\label{eq : rc2}\\ r_3^{(\text{c})}(l)\triangleq&\log\bigg[1+\frac{g_{12}(l)p_2(l)}{1+g_{11}(l)p_1(l)}\bigg]+\mu\log\bigg[1+\frac{g_{21}(l)p_1(l)}{1+g_{22}(l)p_2(l)}\bigg],\label{eq : rc3}\end{aligned}\ ] ] whereas the non - cooperative assignment gives +\mu\log\bigg[1+\frac{g_{22}(l)p_2(l)}{1+g_{11}(l)p_1(l)}\bigg ] .% \end{split}\ ] ] the power control problem is then subject to ( [ eq : problem1con ] ) .note that the rate objective includes only the corner points and does not explicitly include the interior points ( stationary points ) .however , the interior points are implicitly included in the rate objective due to the power optimization at the upper level .namely , if the weighted sum rate for sub - channel is maximized at an interior point , e.g. , corresponding to , , , where is the -th sub - channel power constraint at bts , then the rate can be increased by decreasing via the upper - level optimization .problem ( [ eq : problem2 ] ) is non - convex in general because of the non - convexity of the objective function .however , letting the number of sub - carriers within a given band go to infinity , we can assume that converges to a continuous function of frequency , and the corresponding continuous optimization problem can be efficiently solved numerically .the continuous optimization problem can be formulated as [ eq : problem3 ] where the index is replaced by the continuous variable . _ definition : _ consider the general optimization problem : [ eq : timesharing ] where are the optimization variables , each function is not necessarily concave , and each function is not necessarily convex .power constraints are denoted by an -vector . 
here , `` '' is used to denote a component - wise inequality .an optimization problem of the form ( [ eq : timesharing ] ) is said to satisfy the _ time - sharing condition _ if for any , with corresponding optimal solutions and , respectively , and for any , there exists a feasible solution , such that , and .the time - sharing condition essentially states that the optimal value of the objective in ( [ eq : timesharing ] ) is concave in .it is shown in that if the time - sharing condition is satisfied , then the optimization problem has zero duality gap , i.e. , solving the dual problem gives the same optimal value as solving the primal problem even if it is not convex .since is continuous in , and are continuous in , and the integrand of the objective function is continuous in .therefore , we can apply the techniques used in the proof of theorem 2 in to show that this optimization problem satisfies the time - sharing condition .then the optimization problem ( [ eq : problem3 ] ) has zero duality gap and for this problem solving its dual is more efficient .the solution to problem ( [ eq : problem3 ] ) approximates the solution to ( [ eq : problem2 ] ) and it becomes more accurate as . in practical systems with a large number of sub - carriers , the channel gains between consecutive sub - carriers are typically highly correlated .hence we expect that the time - sharing condition is approximately satisfied for ( [ eq : problem2 ] ) , so that the solution to the dual problem will be nearly - optimal .the lagrangian function associated with problem ( [ eq : problem2 ] ) is where is the lagrange multiplier associated with the power constraint for bts and the bold - font denotes the vector version of the corresponding variables .the dual optimization problem associated with problem ( [ eq : problem2 ] ) is where is the dual objective function .compared with numerically solving the primal problem ( [ eq : problem2 ] ) directly , two properties of the dual problem lead to a reduction in computational complexity .one is that for any fixed , the solution to ( [ eq : dual ] ) can be computed per - sub - carrier since ( [ eq : dual ] ) can be decomposed into parallel unconstrained optimization problems .note that for each sub - carrier an exhaustive search for the optimal must be carried out .the other property is the dual problem is convex in the variables , which guarantees the convergence of numerical methods . in the numerical results that follow we solve the dual optimization problem efficiently via a nested bisection search over and .( an outer loop updates and an inner loop updates . )if the required accuracy of each is given by , then the overall computational complexity of the bisection search is , which is linear in . at high signal - to - noise ratios ( snrs ) , we have and for , since and are interference - limited .hence we can simplify problem by maximizing over only , , in the integrand of the rate objective .the corresponding suboptimal power allocation problem can be written as subject to ( [ eq : problem3con ] ) .this is still a two - level optimization problem .the lower level selects the better mac channel from the two options for each sub - carrier , and the upper level distributes the power to each sub - carrier subject to the total power constraint .although and are concave functions of and , is in general not concave in and . 
problem ( [ eq : problem4 ] )can be efficiently solved as described in the last subsection ( via its dual problem ) ; however , it turns out that under some mild conditions it can be transformed into a convex program so that standard efficient numerical techniques can also be applied .specifically , we solve this problem by finding a convex function that upper bounds + , and optimizing this upper bound over the power allocations .it can be shown that substituting the optimized power allocation for the upper bound into the original objective in ( [ eq : problem4 ] ) gives the same value as the optimized upper bound .letting ,\label{eq : ubrate } % \end{split}\ ] ] we observe that serves as a pointwise upper bound for any full - cooperation scheme , i.e. , we now consider the optimization problem subject to ( [ eq : problem3con ] ) .since is concave with respect to and , problem ( [ eq : problem5 ] ) is a convex optimization problem . ) for each branch of the mac channel and each , there are two optional channel gains to be selected .] therefore the necessary karush - kuhn - tucker ( kkt ) conditions for optimality are also sufficient .letting , the kkt conditions for the optimal power allocation , and , can be stated as where , and are non - negative and can be determined by substituting the optimal power allocation into the power constraints . for a given set of channel fading gains ,the optimal power allocation is not unique only for the preceding case c. assuming the joint distribution of the channel gains is continuous ( e.g. , rayleigh fading ) , this happens with probability zero .therefore with probability one the optimal power allocation is unique and is determined by the first two conditions .the optimal power control scheme implies that only one bts is assigned to transmit at any given , i.e. , the bts with relatively stronger direct- or cross - channel gains .the two - level water - filling structure of the power allocation indicates that orthogonal transmission is optimal and the gain of the cooperative scheme as considered here in the wideband channel comes from cell selection for each sub - channel , since for each sub - channel at most one link among the four direct- and cross - links between btss and mobiles is activated .it is straightforward to show that substituting the optimized and in the objective in ( [ eq : problem4 ] ) gives the same result as substituting those functions into the corresponding upper bound .this is because the solution states that only one bts transmits at any given . since and maximize the upper bound , they must also maximize the original objective .we observe that ] into the four rates listed in ( [ eq : rc1])([eq : rnc ] ) , the maximum value among the four rates is equal to the maximum value of the two cooperative rates in ( [ eq : rc1 ] ) and ( [ eq : rc2 ] ) . since ] is a locally optimal power allocation scheme for ( [ eq : problem3 ] ) although it may not be globally optimal .we emphasize that the equivalence between problems ( [ eq : problem4 ] ) and ( [ eq : problem5 ] ) relies on the continuity of the integrand in the objective function . with a finite number of sub - carriers ,the solutions to the two problems may not be the same .for example , with only one sub - carrier , by inspection the optimized objective in ( [ eq : problem5 ] ) is , which can not be achieved by ( [ eq : problem4 ] ) in general . 
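The dual approach described above, i.e. a per-sub-carrier maximization of the Lagrangian for fixed multipliers wrapped in a nested bisection on (lambda_1, lambda_2) so that the two per-bts power constraints are met, can be sketched as follows. The per-sub-carrier step is done here by exhaustive search over a simple power grid, standing in for the search mentioned in the text, and the helper corner_rates from the narrowband sketch supplies the per-case rates; grid resolution, bisection brackets, and the choice of which multiplier sits in the outer loop are illustrative.

```python
import numpy as np

def per_subcarrier_best(g_l, lam1, lam2, p_grid, mu):
    """max over (p1, p2) of [best-case weighted rate] - lam1*p1 - lam2*p2
    for one sub-carrier, by exhaustive search over a power grid."""
    best_val, best_p = -np.inf, (0.0, 0.0)
    for p1 in p_grid:
        for p2 in p_grid:
            val = max(corner_rates(g_l, p1, p2, mu).values()) - lam1 * p1 - lam2 * p2
            if val > best_val:
                best_val, best_p = val, (p1, p2)
    return best_p

def used_power(gains, lam1, lam2, p_grid, mu):
    alloc = [per_subcarrier_best(g_l, lam1, lam2, p_grid, mu) for g_l in gains]
    return sum(a[0] for a in alloc), sum(a[1] for a in alloc), alloc

def dual_power_allocation(gains, P1, P2, p_grid, mu, tol=1e-3):
    """Outer bisection on lam1, inner bisection on lam2: a larger
    multiplier penalizes power more, so less power is spent."""
    lo1, hi1 = 0.0, 100.0
    while hi1 - lo1 > tol:
        lam1 = 0.5 * (lo1 + hi1)
        lo2, hi2 = 0.0, 100.0
        while hi2 - lo2 > tol:
            lam2 = 0.5 * (lo2 + hi2)
            _, p2_tot, _ = used_power(gains, lam1, lam2, p_grid, mu)
            lo2, hi2 = (lam2, hi2) if p2_tot > P2 else (lo2, lam2)
        p1_tot, _, alloc = used_power(gains, lam1, 0.5 * (lo2 + hi2), p_grid, mu)
        lo1, hi1 = (lam1, hi1) if p1_tot > P1 else (lo1, lam1)
    return alloc
```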
in fig .[ f : s_antenna ] we compare the maximum sum rates of the cooperative and non - cooperative schemes with wideband channels and . for this examplethere are sub - carriers .the channels on each sub - carrier across the four direct- and cross - links undergo independent rayleigh fading , and for each link the channels across sub - carriers are assumed to have correlation coefficients of .the figure compares achievable rates for the following scenarios : 1 ) optimized power assignments across sub - carriers and both btss according to ( [ eq : problem3 ] ) ; 2 ) optimized power assignments across sub - carriers and both btss according to ( [ eq : problem4 ] ) ; 3 ) cooperation between btss with equal power assignments across sub - carriers ; 4 ) both btss carry out joint power control but do not assist each other with data transmissions . the achievable downlink sum rate of perfect bts cooperation with phase alignment is included for comparison .[ f : wide ] shows results with equal - variance direct- and cross - channel gains , which corresponds to the scenario where the two mobiles are both located close to the cell boundary , and fig .[ f : wide_3db ] shows results for the case where the cross - channel gains are 3 db weaker than the direct - channel gains . the results in fig .[ f : wide ] show that the cooperative scheme considered offers approximately 5 db gain relative to non - cooperative joint power control scheme in .also , cooperation with wideband power allocation offers about one db gain with respect to equal power allocation and there is negligible difference between the suboptimal and optimal wideband power allocations . the cooperative scheme considered can only provide diversity gain , and therefore achieves only one degree of freedom ( asymptotic slope of rate curves in fig .[ f : s_antenna ] ) . in contrast ,if the btss can cooperate with phase alignment , then two degrees of freedom can be achieved as illustrated in the figure .the performance improvement due to cooperation diminishes if the average cross - channel gains become weaker than the direct gains , as illustrated in fig .[ f : wide_3db ] .we now consider the case where each bts has transmit antennas and each mobile has a single receive antenna . assuming a narrowband system with block fading , the baseband signal received by mobile during the -th symbol interval is where , for , denotes the transmit signal vector of dimension from bts , denotes the complex channel vector consisting of the fading coefficients from the transmit antennas at bts to the receive antenna at mobile , and denotes the corresponding rapidly changing phase caused by the drifting frequency offset of the local oscillator .the same drift is experienced by all antennas .similarly , it is assumed that the complex vectors are known to both btss and mobile and remain constant within one coding block , while the phases are known to mobile and unknown to the other bts . without loss of generality, we can assume for all . 
in analogy with the single transmit antenna case , in the cooperative scheme each bts splits its message into two parts , where one part is transmitted by itself and the other partis shared with and transmitted by the other bts .each bts transmits a superposition of two codewords intended for the two mobiles and each codeword consists of scalar coding followed by beamforming , which can be written as where is a vector with , and denotes the normalized beamforming vector for the scalar symbol .the per - bts power constraints are again given by ( [ eq : powercon ] ) .each mobile decodes its two messages successively , treating signals for the other mobile as background noise .this scheme achieves the rate pair where define and .the optimal beamforming vector must lie in the space spanned by and , because any power spent on the null space of and will not be received by any mobile , and it does not have any impact on the signal - to - interference - plus - noise ratio ( sinr ) at each mobile .[ f : illusbf ] illustrates the beamforming vector geometrically in the plane spanned by and , where denotes the angle between and , and denotes the angle between and .then can be parameterized as where and denote the projection and orthogonal projection onto the column space of , respectively , and the terms and in the corresponding denominators are used to normalize the two orthogonal vectors and ..,scaledwidth=28.0% ] from ( [ eq : beamformer ] ) , we can write . similarly , the other three beamforming vectors can be parameterized by introducing and $ ] , . the achievable rate pair can then be re - stated as ,~j\neq k.\label{eq : beamrk}\\\ ] ] optimizing the beamforming vectors is now equivalent to optimizing the corresponding angles . if then bts transmits to mobile with a maximum - ratio beamformer and if then bts transmits to mobile with a zero - forcing beamformer .in general , the optimal beamforming vectors must strike a balance between these two extremes . at highsnrs the solution should be close to zero - forcing , and at low snrs the solution should be close to maximum - ratio combining . note that with antennas , the interference term in the denominator of ( ) can be nulled out by choosing , therefore two degrees of freedom can be achieved .the achievable rate region can be obtained by maximizing the weighted sum rate over the beamforming vectors and the power allocated to each message for each and then sweeping . to achieve a rate pair on the boundary of the rate region ,the beams and powers must be jointly optimized .the following proposition states that both btss should always transmit with full power .[ propbeam ] for every rate pair on the boundary of the rate region , the corresponding power allocation satisfies .this follows from the observation that each beam contains a component , which is orthogonal to the cross - channel .hence increasing power along that component increases the desired power without increasing interference .specifically , let be the optimal parameters for a rate pair on the boundary of the rate region . from ( [ eq : beamrk ] ) , the useful signal power from bts to mobile is and the corresponding interference power is , .if , then increasing will increase without changing .if , then with fixed interference , i.e. 
, , the desired signal power can be expressed as , which is an increasing function of , implying that should be maximized .therefore the power constraint at bts must be binding .maximizing is a non - convex problem ; however , since there are only six variables to optimize it can be solved by exhaustive search optimally or by an iterative approach , in which the power allocation is optimized with fixed angles , the angles are optimized with fixed power allocation , and these two procedures are iterated until converges . note that convergence is guaranteed since monotonically increases in each step , and is bounded due to the power limitations .the iterative approach can reduce the search complexity ; however , optimality can not be guaranteed although in our simulations global optimality was always observed . , , and randomly generated channels.,scaledwidth=50.0% ] fig .[ f : beam ] compares the rate region frontiers achieved by the cooperative scheme with the non - cooperative scheme presented in , in which the btss carry out joint beamforming and power control but do not share messages .also shown is the rate pair achieved with zero - forcing transmission at each bts without cooperation , i.e. , bts transmits to its own associated mobile with the orthogonal projection .the figure shows that for this example bts cooperation gives substantial gains in when is small . in a wideband system with frequency - selective channels that are modeled as a set of discrete channels ,the cooperative beamforming and power control problem is given by where is given by ( [ eq : beamrk ] ) , the channel gains , angles , and the powers depend on , and the power constraints in ( [ eq : beamcon2 ] ) are satisfied with equality due to proposition [ propbeam ] . as for a single antenna, this is again a two - level optimization problem . the lower level optimizes the beamforming vectors to maximize the weighted sum rate for each sub - channel given the power allocated to each message . the upper level then optimizes the power allocation across sub - channels for each message subject to the total power constraints .similar to the previous case with a single transmit antenna , the dual problem associated with problem ( [ eq : problem1bf ] ) can be formulated where the lagrangian also depends on the angels .( we omit the details due to space limitations . ) as before , letting the number of sub - channels within a given band tend to infinity , we can assume that and converge to continuous functions of frequency .the corresponding optimization problem over and has zero - duality gap and can be efficiently solved numerically .the numerical results in fig .[ f : m_antenna ] were generated by solving the discrete version in ( [ eq : problem1bf ] ) using a nested bisection search for , where the maximization of the lagrangian function in the inner loop is performed over the angles . 
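A possible reading of the angular beam parameterization, and of the rate evaluation for a given set of powers and angles, is sketched below. The construction places each beam in the plane spanned by the desired and the interfering channel, with phi = pi/2 giving the zero-forcing beam and the maximum-ratio beam recovered at an intermediate angle, as stated above; the indexing conventions (H[k][j] for the channel from bts j to mobile k, W[j][k] and P[j][k] for the unit beam and power that bts j devotes to mobile k) are assumptions of the sketch, not notation from the paper. The joint optimization over powers and angles described in the text can be wrapped around this evaluation, e.g. by alternating or grid search.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def beamformer(h_des, h_int, phi):
    """Beam in span{h_des, h_int}: phi = pi/2 is zero-forcing (orthogonal to
    h_int); the maximum-ratio beam along h_des occurs at an intermediate phi."""
    u_int = unit(h_int)
    proj = np.vdot(u_int, h_des) * u_int          # component of h_des along h_int
    e_par, e_perp = unit(proj), unit(h_des - proj)
    return np.cos(phi) * e_par + np.sin(phi) * e_perp

def rate_pair(H, W, P, mu):
    """Each mobile decodes its own two sub-messages and treats the
    signals intended for the other mobile as noise."""
    def received(k, target):
        return sum(P[j][target] * abs(np.vdot(H[k][j], W[j][target])) ** 2
                   for j in range(2))
    rates = []
    for k in range(2):
        sinr = received(k, k) / (1.0 + received(k, 1 - k))
        rates.append(np.log2(1.0 + sinr))
    return rates[0] + mu * rates[1], rates
```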
in fig .[ f : m_antenna ] we compare the maximum sum rates of the cooperative and non - cooperative schemes with wideband channels and .there are sub - carriers .the four channel vectors on each sub - carrier undergo independent rayleigh fading , and for each link the channel vectors across sub - carriers are assumed to have correlation coefficients of .the figure compares achievable rates for the following scenarios : 1 ) optimized power assignments across sub - carriers and both btss according to ( [ eq : problem1bf ] ) ; 2 ) cooperative transmission between btss with equal power assignments across sub - carriers ; 3 ) joint beamforming between btss but without message sharing , in which case each bts transmits to its associated mobile in the null space of the cross - channel to the other mobile . the achievable downlink sum rate of perfect bts cooperation with phase alignment is also included for comparison . fig .[ f : widebeam ] shows results with equal - variance direct- and cross - channel gains , and fig .[ f : widebeam_3db ] shows results for the case where the cross - channel gains are 3 db weaker than the direct - channel gains . the results in fig .[ f : widebeam ] show that the cooperative scheme considered offers approximately 4 db gain relative to the non - cooperative joint beamforming scheme presented in .also , cooperation with wideband power allocation offers one db gain with respect to equal power allocation .the cooperative scheme achieves the same number of degrees of freedom as with phase alignment ( which is two since there are two single - antenna mobiles ) at the expense of adding one more antenna at each bts .the performance improvement due to cooperation diminishes if the average cross - channel gains become weaker than the direct gains , as illustrated in fig .[ f : widebeam_3db ] .we have presented a two - cell cooperation scheme with message sharing between btss , which does not require transmissions to be phase - aligned . with a single antenna at the btss and mobiles ,the rates are maximized by optimizing the power allocation across messages , and also sub - channels in the wideband scenario .the scheme provides large gains with respect to non - cooperative ( single cell ) power optimization , but gains with respect to cooperative power allocation across the two btss without message sharing are relatively modest , although they can be significant , especially at low snrs .the gains are primarily due to cell selection , so that they are most pronounced when the cross - channel gains are comparable with direct - channel gains .we also extended our results to cooperative joint beamforming with message sharing .this can provide more degrees of freedom compared to the single transmit antenna case , but the gains due to message sharing are again relatively modest .finally , the absence of phase alignment , as assumed here , reduces the achievable degrees of freedom in the high - snr regime relative to perfect phase alignment .fundamental limits ( e.g. , achievable rate region or degrees of freedom ) of the broadcast channel considered without transmitter phase alignment , as well as schemes that exploit partial phase information are left for future work .we rewrite ( [ eq : rk ] ) as first , we consider the case .we will verify that and by contradiction .suppose .then we can choose two small positive numbers and that satisfy and let and . from ( [ eq : deltap ] ) , if we replace and in ( [ eq : r1v2 ] ) with and , then the equality still holds , i.e. 
, however , since , combining with ( [ eq : r2v2 ] ) gives defining the achievable rate pair with the new power allocation as , ( [ eq : r1v3 ] ) and ( [ eq : r2v3 ] ) imply this contradicts the assumption that is on the rate region frontier .hence must hold at the optimum .similarly , it can be shown that . if , then the optimal power allocation scheme is not unique .suppose there exists a solution that satisfies , then we can choose and to satisfy ( [ eq : deltap ] ) and then by the preceding argument , the new power allocation and must achieve the same rate as and . as a consequence, there exists an optimal power allocation in which the power constraints at both btss are satisfied with equality .we now consider and show that and by contradiction .suppose and .then we can choose two small positive numbers and that satisfy ( [ eq : deltap ] ) and let and . from ( [ eq : deltap ] ) , if we replace and in ( [ eq : r1v2 ] ) with and respectively , then ( [ eq : r1v3 ] ) still holds .in addition , since , combining with ( [ eq : r2v2 ] ) , we again obtain ( [ eq : r2v3 ] ) .as before , if we define the achievable rate pair with the new power allocation as , ( [ eq : r1v3 ] ) and ( [ eq : r2v3 ] ) imply and , which contradicts the assumption that is on the rate region frontier . therefore , and by similar arguments we have .d. gesbert , s. hanly , h. huang , s. shamai ( shitz ) , o. simeone , and w. yu , `` multi - cell mimo cooperative networks : a new look at interference , '' _ieee j. select .areas commun ._ , vol . 28 , pp .13801408 , dec .a. gjendemsj , d. gesbert , g. e. oien , and s. g. kiani , `` binary power control for sum rate maximization over multiple interfering links , '' _ ieee trans .wireless comm ._ , vol . 7 , pp .31643173 , aug .2008 .g. j. foschini , h. huang , k. karakayali , r. a. valenzuela , and s. venkatesan , `` the value of coherent base station coordination , '' in _ proc .conference on information sciences and systems ( ciss ) _ , john hopkins university , mar . 2005 .g. caire , s. ramprashad , h. papadopoulos , c. pepin , and c .- e .sundberg , `` multiuser mimo downlink with limited inter - cell cooperation : approximate interference alignment in time , frequency and space , '' in _ proc .46th annual allerton conference on communication , control , and computing _ , monticello ,il , sept . 2008 .s. jing , d. n. c. tse , j. b. soriaga , j. hou , j. e. smee , and r. padovani , `` multicell downlink capacity with coordinated processing , '' _ eurasip journal on wireless communications and networking _ , vol .2008 .o. simeone , o. somekh , g. kramer , h. v. poor , and s. shamai ( shitz ) , `` throughput of cellular systems with conferencing mobiles and cooperative base stations , '' _ eurasip journal on wireless communications and networking _ , vol .2008 .m. charafeddine , a. sezgin , and a. paulraj , `` rate region frontiers for interference channel with interference as noise , '' in _ proc .45th annual allerton conference on communication , control , and computing _ , monticello , il , sept .2007 .v. jungnickel , t. wirth , m. schellmann , t. haustein , and w. zirwas , `` synchronization of cooperative base stations , '' in _ proc .ieee international symposium on wireless communication systems _ , pp . 329334 , 2008 .d. n. c. tse and s. v. hanly , `` multiaccess fading channels part i : polymatroid structure , optimal resource allocation and throughput capacities , '' _ ieee trans . inform .theory _ , vol .44 , pp . 27962815 , nov .
|
multicell joint processing can mitigate inter - cell interference and thereby increase the spectral efficiency of cellular systems . most previous work has assumed phase - aligned ( coherent ) transmissions from different base transceiver stations ( btss ) , which is difficult to achieve in practice . in this work , a _ noncoherent _ cooperative transmission scheme for the downlink is studied , which does not require phase alignment . the focus is on jointly serving two users in adjacent cells sharing the same resource block . the two btss partially share their messages through a backhaul link , and each bts transmits a superposition of two codewords , one for each receiver . each receiver decodes its own message , and treats the signals for the other receiver as background noise . with narrowband transmissions the achievable rate region and maximum achievable weighted sum rate are characterized by optimizing the power allocation ( and the beamforming vectors in the case of multiple transmit antennas ) at each bts between its two codewords . for a wideband ( multicarrier ) system , a dual formulation of the optimal power allocation problem across sub - carriers is presented , which can be efficiently solved by numerical methods . results show that the proposed cooperation scheme can improve the sum rate substantially in the low to moderate signal - to - noise ratio ( snr ) range . multicell joint processing , cooperation , interference management , phase alignment , message sharing , wideband , power allocation .
|
in astrophysical applications inertial waves that can exist in rotating bodies may be excited by several different physical mechanisms , most notably through tidal perturbation by a companion ( eg .papaloizou & pringle 1981 , hereafter pp ) or in the case of compact objects through secular instability arising through gravitational wave losses ( eg .chandrasekhar 1970 , friedman & schutz 1978 , andersson 1998 , friedman morsink 1998 ) .they also can play a role in other physical systems .for example , they can also be excited by several mechanisms in the earth s fluid core with possible detection being announced ( aldridge lumb 1987 ) . for rotating planets and stars that have a barotropic equation of statethese wave modes are governed by coriolis forces and so have oscillation periods that are comparable to the rotation period .they are accordingly readily excited by tidal interaction with a perturbing body when the characteristic time associated with the orbit is comparable to the rotation period , which is expected naturally when the rotation period and orbit become tidally coupled .they may then play an important role in governing the secular orbital evolution of the system .inertial modes excited in close binary systems in circular orbit were considered by pp and savonije & papaloizou ( 1997 ) .wu ( 2005)a , b considered the excitation of inertial modes in jupiter as a result of tidal interaction with a satellite and excitation as a result of a parabolic encounter of a planet or star with a central star was studied by papaloizou & ivanov ( 2005 ) , hereafter referred to as pi and ivanov & papaloizou ( 2007 ) , hereafter referred to as ip .the latter work was applied to the problem of circularisation of extrasolar giant planets starting with high eccentricity . in that work the planetwas assumed coreless .ogilvie & lin ( 2004 ) and ogilvie ( 2009 ) have considered the case of a cored planet in circular orbit around a central star and found that inertial waves play an important role .the importance of the role played by inertial waves in the transfer of the rotational energy of a rotating neutron star to gravitational waves via the chandrasekhar - friedman - schutz ( cfs ) instability was pointed out by andersson ( 1998 ) .later studies mainly concentrated on physical mechanisms of dissipation of energy stored in these modes that limit amplitudes of the modes , and , consequently , the strength of the gravitational wave signal . in these studies either numerical methods or simple local estimates of properties of inertial modes were mainly used , see eg . kokkotas ( 2008 ) for a recent review and references .an analytical treatment of problems related to inertial waves , such as eg . finding normal mode spectra and eigenfunctions , and coupling them to other physical fields , etc ., is difficult due to a number of principal complicating technical issues .in particular , the dynamical equations governing the perturbations of a rotating body ( called planet later on ) are , in general , non - separable , for compressible fluids .when such fluids are considered and rotation is assumed to be small , a low frequency anelastic approximation that filters out the high frequency modes is often used ( see eg .this simplifies the problem to finding solutions to leading order in the small parameter , where is the rotation frequency , is the constant of gravity and , are the mass and radius of the planet . 
in this approximation eigenfrequencies of inertial modesare proportional to while the form of the spatial distribution of perturbed quantities does not depend on the rotation rate. however , even when this approximation is adopted , the problem is , in general , non - separable apart from models with a special form of density distribution , see arras et al ( 2003 ) , wu ( 2005)a and below . additionally , the problem of calculating the inertial mode spectrum and its response to tidal forcing is complicated by the fact that in the inviscid case the spectrum is either everywhere dense or continuous in any frequency interval it spans ( papaloizou & pringle , 1982 ) .this is in contrast to the situation of , for example , high frequency modes , which are discrete with well separated eigenvalues .when the anelastic approximation is adopted the singular ill posed nature of the inviscid eigenvalue problem is seen to come from the fact that the governing equation is hyperbolic and the nature of the spectrum is determined by the properties of the characteristics ( eg . wood 1977 ) .a discrete spectrum is believed to occur when there are no such trajectories that define periodic attractors .otherwise the inviscid spectrum is continuous .then , when a small viscosity is introduced the spectrum becomes discrete but normal modes have energy focused onto wave attractors ( see eg .ogilvie & lin 2004 ) . given these complexitiesit is desirable to work with and compare a variety of analytical and numerical approaches .coreless inviscid rotating planets with an assumed spherical or ellipsoidal shape have a discrete but everywhere dense spectrum that makes difficulties for example with mode identification and application of standard perturbation theory .however , numerical work indicates that there are well defined global modes that can be identified and followed through a sequence of models ( eg .lockitch & friedman , 1999 , hereafter lf , and pi ) . in this paperwe investigate the inertial mode spectrum of a uniformly rotating coreless barotropic planet or star and its tidal response by a wkbj approach coupled with first order perturbation theory and compare its eigenvalue predictions with numerical results obtained by a variety of authors and find good agreement apart from some unidentified wkbj modes that are near the limits of the spectrum and for which the perturbation theory appears not to work . for the identified modes we also find remarkably good agreement for the form of the eigenfunctions .this indicates that they can be represented at low resolution with small scale phenomena being unimportant , meaningful mode identification ( in that the modes can be followed from one model to another ) and at least first order perturbation theory works for these modes .this is also confirmed in a following paper ( hereafter referred to as pin ) where we investigate the inertial mode spectrum and its tidal response by numerical solution of an initial value problem _ without the anelastic approximation_. we are able to confirm the validity of the anelastic approximation and the applicability of the first order perturbation theory developed here for demonstrating this as well as estimating eigenvalues .thus a suggestion of goodman & lackner ( 2009 ) that tidal interaction might be seriously overestimated by use of the anelastic approximation is not confirmed .a wkbj approach to the same problem was also considered by arras et al ( 2003 ) and wu ( 2005)a . 
however , in this work only terms of leading order in an expansion in inverse powers of a large wkbj parameter ( see the text below for its definition ) were taken into account and treatment of perturbations near the surface and close to the rotational axis were oversimplified . as a consequence , although their results are correct in the formal limit they can not be used to make a correspondence between wkbj modes and those obtained numerically , or an approximate description of modes with a scale that is not very small . in this paperwe treat the problem in a more extended way , considering terms of the next order together with an accurate treatment of perturbations near the surface and close to the rotation axis .additionally , we consider a frequency correction of the next order , for modes having non - zero azimuthal number , .we checked results obtained with use of the wkbj formalism against practically all numerical data existing in the literature finding good agreement in practically all cases .therefore , we can assume that our formalism may be applied to provide an approximate analytic description of inertial modes , including those with large scale variations , where the wkbj approach might be expected to be invalid .also , different quantities associated with the modes may be described within the framework of our formalism or its natural extension , such as the tidal overlap integrals ( see pi and ip ) , quantities determining the growth rate due to the cfs instability and decay of inertial waves due different processes , eg . by non - linear mode - mode interactions ( see eg .schenk et al 2002 , arras et al 2003 ) .thus , the formalism developed here may provide a basis for the analytic treatment of inertial waves in many different astrophysical applications .the plan of the paper is as follows . in section [ sec2 ]we briefly review the basic equations and their linearised form for a uniformly rotating barotropic planet or star . in section [ anelastic ]we go on to consider these in the anelastic approximation which is appropriate when the rotation frequency of the star is very much less than the critical or break up rotation frequency .we give a simple physical argument why we expect this approximation to be valid in this limit even when the sound speed tends to a small value or possibly zero at the surface of the configuration .in section [ sec2.5 ] we give a brief discussion about when discrete normal modes may be expected to occur such as in the case of a coreless slowly rotating planet with surface boundary assumed to be either spherical or ellipsoidal .we then present a formal first order perturbation theory that can be used to estimate corrections to eigenfrequencies occurring as either a consequence of terms neglected in the wkbj approximation or the anelastic approximation .the latter application is tested by a direct comparison with the results of numerical simulations in pin .section [ sec2.6 ] concludes with a brief account of the form of the anelastic equations in pseudo - spheroidal coordinates in which they become separable for density profiles of the form where is the local radius , is the surface radius and is a constant .( arras et al 2003 , wu 2005a ) . 
in section [ sec3 ]we develop a wkbj approximation for calculating the normal modes which is based on the idea that in the short wavelength limit these modes coincide with those appropriate to separable cases which include the homogeneous incompressible sphere as a well known example .solutions of a general wkbj form appropriate to the interior of the sphere are matched to solutions appropriate to the surface regions where they become separable which is the case when the density vanishes as a power of the distance to the boundary as is expected for a polytropic equation of state .this matching results in an expression for the eigenfrequencies given in section [ sec3.5 ] .in section [ surface ] we go on to develop expressions for the eigenfunctions appropriate to any location in the planet including the rotation axis and the critical latitude region where one of the inertial mode characteristics is tangential to the planet surface .these solutions are then used to obtain corrections to the eigenfrequencies resulting from density gradient terms neglected in the initial wkbj approximation in section [ sec3.9 ] . in section [ sec4 ]we compare the corrected eigenfrequencies obtained from the wkbj approximation with those obtained numerically by several different authors who used differing numerical approaches and find good agreement even for global modes .a similar comparison with the results of numerical simulations for a polytropic model with positive results is reported in pin .we also compare the forms of the eigenfunctions with those obtained in ivanov & papaloizou ( 2007 ) and find a good agreement even for global modes .finally in section [ sec5.1 ] we discuss our results in the context of the evaluation of the overlap integrals that occur in evaluating the response to tidal forcing .we show that these vanish smoothly in the limit that the polytropic index tends to zero and we indicate that they vanish at the lowest wkbj order and are thus expected to vanish rapidly as the order of the mode increases .we go on to summarize our conclusions in section [ conclu ] .in this section we review the formalism and equations we adopt in this paper .as much of this has been presented in previous work ( pi , ip ) only a brief review is given here . in what followswe continue to investigate oscillations of a uniformly rotating fully convective body referred hereafter to as a planet , focusing on the low frequency branch associated with inertial waves .the planet is characterised by its mass , radius and the associated characteristic frequency where is the gravitational constant .we adopt a cylindrical coordinate system and associated spherical coordinate system with origin at the centre of mass of the planet . 
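As a sketch of the set-up just described (the symbols are our own, and the normalisation of the characteristic frequency as the dynamical frequency is an assumption), one may take
\begin{equation}
\Omega_{*} = \sqrt{\frac{GM_{p}}{R_{p}^{3}}}, \qquad \varpi = r\sin\theta , \qquad z = r\cos\theta ,
\end{equation}
with $M_{p}$ and $R_{p}$ the planet's mass and radius, the rotation axis along $z$, $(\varpi,\phi,z)$ the cylindrical and $(r,\theta,\phi)$ the spherical coordinates, sharing the azimuthal angle $\phi$ and the origin at the centre of mass.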
in this paperwe make use of the fourier transform of a general perturbation quantity , say q , with respect to the azimuthal angle and the time in the form where the sum is over and and denotes the complex conjugate of the preceding quantity hereafter .the reality of implies that the fourier transform , indicated by tilde satisfies the inner products of two complex scalars , , that are functions of and are defined as where denotes the complex conjugate .note that the definition of the inner product differs from what is given in ip where the planet s density was used as a weight function .integrals of this type are always taken over the section of the unperturbed planet for which we assume that the planet is rotating with uniform angular velocity .the hydrodynamic equations for the perturbed quantities take the simplest form in the rotating frame with axis along the direction of rotation .since the planet is fully convective , the entropy per unit of mass of the planetary gas remains approximately the same over the volume of the planet , and the pressure can be considered as a function of density only , thus .as the characteristic oscillation periods associated with inertial modes are in general significantly shorter than the global thermal timescale we may adopt the approximation that perturbations of the planet can be assumed to be adiabatic .then the relation holds during perturbation as well leading to a barotropic equation of state . in the barotropic approximationthe linearised euler equations take the form ( see pi ) where is the lagrangian displacement vector , is the density perturbation , is the adiabatic sound speed , is the stellar gravitational potential arising from the perturbations and is an external forcing potential , say , the tidal potential in the problem of excitation of inertial waves by tides , see pi and ip .the linearised continuity equation is note that the centrifugal term is absent in equation being formally incorporated into the potential governing the static equilibrium of the unperturbed star .the convective derivative as there is no unperturbed motion in the rotating frame .although incorporation of the perturbation to the internal gravitational potential presents no principal difficulty , in this paper , for simplicity we neglect it , setting . this procedure known as the cowling approximationcan be formally justified in the case when perturbations of small spatial scale in the wkbj limit are considered .however , it turns out that when low frequency inertial modes are considered the cowling approximation has been found to lead to results which are in qualitative and quantitative agreement with those obtained numerically for global modes obtained with a proper treatment of perturbations to the gravitational potential ( see below ) .therefore , we do not expect that the use of the cowling approximation can significantly influence our main conclusions .provided that the expressions for the density and sound speed are specified for some unperturbed model of the planet , the set of equations is complete .now we express the lagrangian displacement vector and the density perturbation in terms of with help of equations and , and substitute the result into the continuity equation from which we obtain an equation for its fourier transform in the form where , and it is very important to note that the operators , and are self - adjoint when the inner product with denoting the volume of the star . 
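The convention described above can be written out schematically as follows; the range of the sum, the sign of the exponent and the normalisation are assumptions on our part and may not coincide with the paper's own choice:
\begin{equation}
Q(\varpi,z,\phi,t) = \sum_{m\ge 0}\int {\rm d}\sigma \; \tilde{Q}_{\sigma,m}(\varpi,z)\, e^{{\rm i}(\sigma t + m\phi)} + {\rm c.c.} ,
\qquad
\langle Q_{1}, Q_{2}\rangle = \int_{V} Q_{1}^{*}\, Q_{2}\, {\rm d}^{3}x ,
\end{equation}
the complex conjugate term enforcing reality of $Q$, and the inner product being taken over the volume $V$ of the unperturbed planet with no density weight, in contrast to the density-weighted product used in ip.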
herethese operators are assumed to act on well behaved functions and the density is taken to vanish at the surface boundary .also when and are positive definite and is non negative .when remains positive definite if consideration is restricted to the physically acceptable variations that conserve mass , this constraint eliminating the possibility that is constant .when the cowling approximation is adopted equation ( [ eq 4 ] ) fully specifies solutions to the problem of forced linear perturbations of a rotating barotropic planet . in the general case , a complete set of equationsis described in pi . when equation ( [ eq 4 ] ) leads to an eigenvalue problem describing the free oscillations of a rotating star in the form assuming that rotation of the planet is relatively slow such that the angular velocity , these may be classified as or modes with eigenfrequencies such that or inertial modes with eigenfrequencies .the and modes exist in non rotating stars and can be treated in a framework of perturbation theory taking advantage of the small parameter ( see eg . ivanov papaloizou 2004 and references therein ) .on the other hand for inertial waves is of order unity , such a perturbation approach can not be used . since , in general , equation ( [ eq 41 ] ) is rather complicated even for numerical solution , in order to make it more tractable the so - called anelastic approximation has been frequently used ( see eg .pp , lockitch & friedman 1999 and dintrans & ouyed 2001 ) for which the right hand side of ( [ eq 41 ] ) is neglected . in order to justify this approximationwe note that for eigenfunctions that are non singular everywhere in the planet , we can crudely estimate the derivatives entering equation ( [ eq 41 ] ) as and , where the parameter .consider first the interior region of the planet where we approximately have .it follows from equation ( [ eq 41 ] ) that the left hand side and the right hand side can be respectively estimated as the ratio of these is of order this estimate is , however , not valid near the boundary of the planet where and the left hand side of the inequality ( [ eqn3 ] ) diverges .however , in the same limit the terms containing the density gradient on the left hand side of ( [ eq 41 ] ) will dominate terms involving the second derivatives of .thus in this limit the magnitude of the contribution from terms on the left hand side of ( [ eq 4 ] ) may be estimated to be where we remark that it follows from hydrostatic equilibrium that close to the surface accordingly , when the ratio of the terms on the right and left hand sides of equation ( [ eq 41 ] ) can be estimated as from equations ( [ eqn3 ] ) and ( [ eqn5 ] ) it follows that when the terms determining deviation from the anelastic approximation are small compared to the leading terms everywhere in the planet .accordingly , in the slow rotation regime , we can use this approximation to find the leading order solutions for eigenfrequencies and eigenfunctions and then proceed to regard the terms on the right hand side of ( [ eq 41 ] ) as a perturbation .the validity of the anelastic approximation in the context of the tidal excitation of inertial modes has been recently questioned in a recent paper by goodman & lackner ( 2009 ) on account of the divergence of the terms on the right hand side of equation([eq 41 ] ) as although an actual demonstration of its failure was not given .in fact the above discussion , which also applies to equation ( [ eq 4 ] ) as this differs only by the addition of a forcing term , 
indicates that these terms are never important provided is sufficiently small .this is to be expected because as is reduced , the structure of the modes remains unaffected in the anelastic approximation whereas the radial width of the region where terms on the right hand side of equation([eq 41 ] ) might become comparable to any other terms shrinks to zero .we also note that the vanishing of the normal velocity at the boundary in the anelastic approximation is correct in the limit as the ratio of the horizontal to normal components there can be shown using the above arguments to also be on the order of finally in pin , we find by comparing the results of tidal forcing calculations using a spectral approach with the anelastic approximation , to those obtained using direct numerical solution of the initial value problem , that it gives good results even when is not very small .it was shown by pi and ip that both quite generally and also when the anelastic approximation is used equations ( [ eq 4 ] ) and ( [ eq 41 ] ) can be brought to the standard form leading to an eigenvalue problem for a self - adjoint operator . herewe describe the approach , which leads to the self - adjoint formulation of the problem in the anelastic approximation .the self - adjoint and non negative character of the operators , and is made use of to formally introduce their square roots , eg . , defined by condition , etc .as is standard , the requirement of non negativity , makes the definitions of these square roots unique .the positive definiteness of ( see above discussion ) also allows definition of the inverse of , .let us consider a new generalised two dimensional vector with components such that and the straightforward generalisation of the inner product given by equation ( [ eq 7 ] ) .it is now easy to see that equation ( [ eq 4 ] ) is equivalent to where and the vector has the components note that as follows from ( [ eq 12 ] ) the relation between the components of and can be taken to be given by since the off diagonal elements in the matrix ( [ eq 13 ] ) are adjoint of each other and the diagonal elements are self adjoint , it is clear that the operator is self - adjoint .equation ( [ eq 12 ] ) can be formally solved using the spectral decomposition of we now make a few remarks concerning the spectrum .it has been known for many years ( see eg .greenspan 1968 , stewartson rickard 1969 ) that the eigenvalue problem we consider is not well posed in the inertial mode range this is because in this spectral range the eigenvalue equation ( [ eq 41 ] ) becomes a hyperbolic partial differential equation with boundary conditions specified on the planet boundary .the form of the spectrum depends on the behaviour of the characteristics , which correspond to localised inertial waves , under successive reflections from the boundary .note that these reflections maintain a constant angle with the rotation axis rather than the normal to the boundary .the situation was conveniently summarised by wood ( 1977 ) ( see also fokin 1994a , b and references therein ) .there are three types of behaviour of the characteristic paths for frequencies in the inertial mode range .they may all close forming periodic trajectories , they may be ergodic , or there may be a finite number of periodic trajectories that form attractors .the first two types of behaviour are believed to be associated with discrete normal modes while the third type leads to wave attractors and a continuous spectrum . 
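To make the role of the characteristics concrete, it is useful to recall the standard incompressible (Poincare) problem; the relations below are textbook results (eg. greenspan 1968) quoted for orientation rather than the paper's own equations, and the notation is ours. The interior equation reduces to
\begin{equation}
\frac{\partial^{2} W}{\partial\varpi^{2}}+\frac{1}{\varpi}\frac{\partial W}{\partial\varpi}
-\frac{m^{2}}{\varpi^{2}}W+\left(1-\frac{4\Omega^{2}}{\sigma^{2}}\right)\frac{\partial^{2} W}{\partial z^{2}}=0 ,
\end{equation}
which is hyperbolic for $|\sigma|<2\Omega$, the local dispersion relation being $\sigma=\pm 2\Omega\cos\theta_{k}$ with $\theta_{k}$ the angle between the wavevector and the rotation axis. The characteristics are straight lines of slope
\begin{equation}
\frac{{\rm d}z}{{\rm d}\varpi}=\pm\frac{\sqrt{4\Omega^{2}-\sigma^{2}}}{\sigma} ,
\end{equation}
i.e. they make a fixed angle $\arcsin\!\big(|\sigma|/(2\Omega)\big)$ with the rotation axis, and a characteristic is tangent to a spherical surface at the critical colatitude $\theta_{c}$ given by $\cos\theta_{c}=|\sigma|/(2\Omega)$.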
the homogeneous sphere within a spherical or ellipsoidal boundary exhibits the first two kinds of behaviour and has discrete normal modes which form a dense spectrum ( eg .bryan 1889 ) while the same system with a solid core has wave attractors ( eg .ogilvie & lin 2004 , ogilvie 2009 ) .note the characteristics behave in the same way for all spheres or ellipsoids with a continuous density distribution so that these should have normal modes .note too that in the limit of very short wavelength modes only the second derivative terms matter in equation ( [ eq 41 ] ) and the system becomes equivalent to the two dimensional case studied by ralston ( 1973 ) and schaeffer ( 1975 ) . in that casethe normal modes are associated with the frequencies for which all characteristic paths are periodic .they form a dense spectrum and are infinitely degenerate . from this discussionwe expect the modes of a system with a continuous density distribution to approach the same form as those of the homogeneous sphere , an aspect upon which we build our later wkbj approach . from the above discussionwe expect the normal modes for the cases of interest to form a discrete but dense spectrum .the anelastic approximation can be implemented by setting in equation ( [ eqn6 ] ) . in this casewe can look for a solution to ( [ eq 12 ] ) in the form where are the real eigenfunctions of satisfying the associated necessarily real eigenfrequencies being substituting ( [ eq 14 ] ) into ( [ eq 12 ] ) we obtain the operator induces the inner product and associated orthogonality relation for eigenfunctions according to the rule where using ( [ eqn6 ] ) and ( [ eq 15 ] ) we explicitly obtain where is the norm .the decomposition ( [ eq 13a ] ) should be valid for any vector with components , where is any function of the spatial coordinates .the second component of this equality shows that in order for this to be valid an identity must be hold ( ip ) .this identity allows us to represent the relation ( [ eq 16 ] ) in a different form ( pi ) : note that in response problems such as the problem of excitation of the inertial waves during the periastron flyby , in order to take account of causality issues correctly when extending to the complex plane , one should add a small imaginary part in the resonance denominator in ( [ eq 16b ] ) according to the landau prescription : , where is a small real quantity .when external forces are absent and the potential is set to zero , equation ( [ eq 4 ] ) ( or , alternatively , equation ( [ eq 12 ] ) ) defines the full eigenvalue problem . under very general assumptionsit was shown by ip that this problem can be formally solved in an analogous manner . however , it is rather difficult to use the general expressions obtained by ip without making further approximations .here we note that , given that the spectrum is discrete , we may find conditions satisfied by the eigenfunctions and eigenvalues by replacing by in equations ( [ eq 16 ] ) and ( [ eq 16b ] ) .these conditions relate any eigenfunction , now equated to and its associated eigenvalue to the eigenfunctions and eigenvalues of the anelastic problem . 
proceeding in this way we go on to form the quantity where is an anelastic eigenfunction and we have made use of the orthogonality relation ( [ eqn7 ] ) .as argued in section [ anelastic ] , the quantity on the right hand side can be regarded as a perturbation where the small parameter is provided an eigenfunction can be identified as and is non degenerate with it follows from ( [ eqn16c ] ) that in this limit where .the spectrum of inertial modes is dense .this may lead to a potential difficulties in identifying and following modes as parameters change as we discussed above .however , it is possible to argue that this problem can be alleviated for large scale global modes by for example modifying the eigenvalue problem by adding terms that have a very small effect on the global modes but spectrally separate close by short scale modes .dintrans and ouyed ( 2001 ) adopt such a procedure by adding a viscosity and this enables them to identify and follow global modes .note that a similar situation would result if conservative high order derivative terms were added that preserved the self - adjoint form of the problem .numerical work presented below and in pin also confirms that global modes have a clear identity and can be followed as parameters change provided that the angular frequency is sufficiently small .thus we both expect and verify the validity of the expression ( [ eqn16d ] ) in this limit . for larger values of should take into account a possibility of mixing between two neighbouring large scale global modes to explain results of numerical calculations , see pin . in this case expression ( [ eqn16d ] ) should be modified in an appropriate way . in the next sectionwe find solutions of the eigenvalue problem in the wkbj approximation .it will be shown that the corresponding eigenvalues and eigenmodes are independent of the sign of to two leading orders .this is explained by the fact that to that order solutions are determined only by operators containing second and first derivatives in equation ( [ eq 4 ] ) . on the other hand it follows from the same equation that the only dependence on sign of is determined by the operator which does not contain any derivatives of . in order to find the first correction to the wkbj eigenfrequencies that depends on sign of we treat the operator as a perturbation .this leads to a change in the eigenfrequency that can be found by using the same formalism that lead to equation ( [ eqn16d ] ) but then simply replacing in that equation by equation ( [ eqn16d ] ) then gives note that since it follows from equation ( [ eqnn2 ] ) that when the sign of is proportional to the sign of . 
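Schematically, and with our own notation, the non-degenerate first order result referred to above has the familiar structure
\begin{equation}
\sigma \simeq \sigma_{k} + \frac{\langle Z_{k}\, ,\, \delta{\cal H}\, Z_{k}\rangle}{\langle Z_{k}\, ,\, Z_{k}\rangle} ,
\end{equation}
where $Z_{k}$ is the unperturbed (anelastic) eigenfunction with eigenvalue $\sigma_{k}$ and $\delta{\cal H}$ denotes whichever small self-adjoint correction is being treated as a perturbation, be it the terms neglected in the anelastic approximation or the operator responsible for the dependence on the sign of $m$ discussed above. This is only a sketch of the generic first order perturbation-theory form; the precise normalisation factors appearing in the text may differ.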
in what followswe assume that an object experiencing tidal interactions can be approximated as having a spherically symmetric structure .in this case it is appropriate to use another form of ( [ eq 4 ] ) with , which is especially convenient for an analysis of wkbj solutions .we can obtain this from ( [ eq 4 ] ) using the fact that for a spherical star and .we obtain w = { 1\over rh}\left(\left[\sigma^{2 } \varpi { \partial w \over \partial \varpi } -d z { \partial w \over \partial z } \right ] -\left[2m\sigma \omega -{d ( \sigma/ \omega_{k}(r))^{2}}\right]w\right ) , \label{eqn12}\ ] ] where we set , for simplicity , , is the laplace operator , is a characteristic density scale height and , where is the mass enclosed within a radius .note that we use the hydrostatic balance equation to obtain equation ( [ eqn12 ] ) from equation ( [ eq 4 ] ) .the last term in the second square braces on the right hand side describes correction to the anelastic approximation .it is discarded when the wkbj approximation is used .when the density approaches a constant value , tends to infinity and the right hand side of equation ( [ eqn12 ] ) vanishes . in this caseit describes an incompressible fluid , see eg .greenspan ( 1968 ) .it was shown by bryan ( 1889 ) that in this case this equation is separable in special pseudo - spheroidal orthogonal coordinates defined by the relations since the governing equations are invariant to the mapping , without loss of generality we assume from now on that for all modes while can have either sign .also , from equation ( [ eqn12 ] ) it follows that the modes should be either even or odd with respect to the reflection in the equatorial plane .therefore , it is sufficient to consider only the upper hemisphere . in this regionwe can assume that the variables and are contained within the intervals ] , respectively .a detailed description of this coordinate system can be found in eg .arras et al ( 2003 ) , wu ( 2005)a . using the new variables equation ( [ eqn12 ] ) takes the form where and the quantities , , are understood to be functions of the variables and .it is easy to see that the eigenfunctions of the operators are the associated legendre functions , , and we have where .let us stress that as the domains of and are not , ] . for simplicity , in the main textwe are going to consider the modes even with respect to reflection , called hereafter ` the even modes ' .for example , such modes are excited by tidal interactions since tidal potential is an even function of .the case of the modes odd with respect to this reflection ( the odd modes ) can be dealt with in a similar way .this case is considered in appendix a. 
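The separable structure referred to above can be indicated schematically as follows; the normalisation and the precise ranges of the pseudo-spheroidal variables, which we call $x_{1}$ and $x_{2}$ here, are not reproduced:
\begin{equation}
W \;\propto\; P_{l}^{|m|}(x_{1})\, P_{l}^{|m|}(x_{2})\, e^{{\rm i}(\sigma t + m\phi)} ,
\end{equation}
with the same associated Legendre function appearing in each of the two pseudo-spheroidal coordinates, the degree $l$, the azimuthal number $m$ and the eigenfrequency being tied together by the boundary conditions. This is the classical structure of the incompressible (bryan 1889) solutions.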
from equation ( [ eqn13 ] ) it follows that reflection of the coordinate leads to the reflection of the coordinate such that , while the coordinate changes according to the rule we readily find that ( [ eqn24 ] ) is unchanged under this transformation provided the phase ( see also eg .wu 2005a ) .we remark that the same result is obtained by requiring that the derivative of ( [ eqn24 ] ) with respect to vanish on the equator where in the wkbj approximation sufficiently far from the rotational axis all terms proportional to give small corrections to the solution ( [ eqn22 ] ) and are formally discarded .however , when and , accordingly , , it follows from equation ( [ eqn16 ] ) that the term proportional to in the expression for the operator diverges in this limit and should be retained .when this is done the phase can be found from condition of regularity of close to the rotation axis .we begin by using the wkbj solution already found to develop an approximate expression for that is appropriate for small values of and which can be matched at large distances from the rotation axis .an appropriate expression for which can be matched to the correct wkbj limit sufficiently far from the rotational axis is where we take into account that the factor entering ( [ eqn22 ] ) is proportional to the product , see equation ( [ eqn13 ] ) , and the factor is formally incorporated in the definition of which is to be found by imposing the condition of regularity on the rotation axis . in order to do thiswe obtain an equation for from equation ( [ eqn12 ] ) ( or ( [ eqn15 ] ) ) that retains terms containing the derivatives and terms that potentially diverge in the limit while other terms can be discarded . from equations ( [ eqn12 ] ) and ( [ eqn15 ] )it follows that satisfies equation ( [ eqn18 ] ) in the limit of small the solution to ( [ eqn26 ] ) regular at the point can be expressed in terms of the bessel function where we assume from now on that is positive . ] . in the limit of large the asymptotic form of the expression ( [ eqn27 ] )is it is easy to see that when is small .therefore , from equations ( [ eqn25 ] ) and ( [ eqn28 ] ) it follows that the solution has the required form ( [ eqn24 ] ) provided that and we have , accordingly , note that the phase ( [ eqn29 ] ) , which can in fact be verified with reference to the incompressible sphere , differs from that given in arras et al ( 2003 ) and wu ( 2005)a . this disagreement is due to an oversimplified treatment of the wkbj solution close to the rotational axis in these papers .the eigenvalues appropriate to the problem of free oscillations can be found by matching the solution ( [ eqn30 ] ) to approximate solutions valid near the surface of the planet . in pseudo - spheroidal coordinates ( [ eqn13 ] )the equation determining the upper hemispherical surface of the planet has two branches : 1 ) and 2 ) . in order to simultaneously consider solutions to equation ( [ eqn15 ] ) that can be close to either of these branches , we introduce two new coordinates with corresponding to the first branch and corresponding to the second branch , that are defined by the relation where the sign ( ) corresponds to the 1st ( 2nd ) branch , and assume later on that the are small .the form of the solutions close to the surface depends on the density profile . 
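For reference, the standard large-argument form of the Bessel function underlying the near-axis matching described above is
\begin{equation}
J_{|m|}(\lambda y) \simeq \sqrt{\frac{2}{\pi\lambda y}}\,
\cos\!\left(\lambda y-\frac{|m|\pi}{2}-\frac{\pi}{4}\right), \qquad \lambda y \gg 1 ,
\end{equation}
so that requiring regularity on the rotation axis, where the solution must reduce to $J_{|m|}$, fixes the phase of the interior WKBJ solution through the combination $|m|\pi/2+\pi/4$ appearing here. The identification of $\lambda y$ with the paper's own variables is an assumption on our part, guided by the factor $j_{|m|}(\lambda y_{1})$ that appears explicitly further below.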
in what followswe the consider the planet models with a polytropic equation of state for which the density profile close to the surface is given by equation ( [ eqn21 ] ) .the variable entering equation ( [ eqn21 ] ) can be expressed through and as where we assume from now on that the upper ( lower ) sign corresponds to the 1st ( 2nd ) branch and the index takes on the values ( first branch ) and ( 2nd branch ) with we now look for solutions close to the surface that have large but small .this is possible because in the wkbj theory is a large parameter .the domain for which is small for both and is called the critical latitude domain and will be considered separately below .solutions valid in all of these domains must match correctly on to a solution of the form ( [ eqn30 ] ) in order to produce a valid eigenfunction . using equations ( [ eqn21 ] ) and ( [ eqn24 ] ) we can look for a solution close to the surface in the form where we also use equation ( [ eqn13 ] ) in order to express the factor in terms setting there . substituting this expression in equation ( [ eqn15 ] ) and taking the limit obtain where , for simplicity , we omit the index in the quantities and , and we recall that the term proportional to gives the correction to the anelastic approximation . since in the low frequency limit is assumed to be much smaller than unity , this term is small and we approximately have . equation ( [ eqn33 ] ) can be brought into a standard form by the change of variables adopting these we obtain where a prime denotes differentiation with respect to and this is the confluent hyper - geometric equation .its solution that is regular at the surface is expressed in terms of the confluent hyper - geometric function as note that this solution is similar to solutions of the schrodinger equation with the coulomb potential describing wave functions belonging to continuous part of its spectrum , ( see eg .landau lifshitz 1977 ) . in the limit of obtain from ( [ eqn37 ] ) where is the gamma function .since the quantity is assumed to be small we can approximately write where is the psi function . in the same approximation equation ( [ eqn39 ] ) can be rewritten in the form after substituting the result expressed by equation ( [ eqn41 ] ) into ( [ eqn32 ] ) the resulting expression should be of the general form ( given by [ eqn24 ] ) evaluated close to the surface .this , however , can not be realised on account of the presence of the factor in ( [ eqn41 ] ) .this term , having a coordinate dependence of order of after removing a constant phase would formally require terms of that order that are not accounted for in the expressions ( [ eqn22 ] ) and ( [ eqn24 ] ) to enable matching , therefore to the order we are currently working , it is discarded .since only this term depends on the sign of and on the correction to the anelastic approximation , both dependencies are absent in the resulting approximation .another way of obtaining solutions to ( [ eqn33 ] ) compatible with the form ( [ eqn24 ] ) inside the planet is to set to zero the small quantity in equation ( [ eqn37 ] ) . in this casethe solution can be expressed in terms of a bessel function such that ) we use the relations and , see eg .gradshteyn ryzhik 2000 , pp 1013 , 1014 .] note that this expression is equivalent to ( [ eqn37 ] ) when the anelastic approximation is adopted and .when we get where we the index has been restored and we use the explicit expression for . 
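For completeness, the standard (Kummer) form of the confluent hypergeometric equation and one standard reduction to a Bessel function are recalled below; the identification of the parameters $a$, $b$ and the variable $\zeta$ with the quantities in the text is not reproduced here, and the particular relations from gradshteyn & ryzhik invoked in the text may differ from the one quoted:
\begin{equation}
\zeta\frac{{\rm d}^{2}w}{{\rm d}\zeta^{2}}+(b-\zeta)\frac{{\rm d}w}{{\rm d}\zeta}-a\,w=0 ,
\qquad
w_{\rm reg}(\zeta)=M(a,b,\zeta)=\sum_{k=0}^{\infty}\frac{(a)_{k}}{(b)_{k}}\frac{\zeta^{k}}{k!} ,
\end{equation}
with $(a)_{k}$ the Pochhammer symbol, and, for example,
\begin{equation}
M\!\left(\nu+\tfrac{1}{2},\,2\nu+1,\,2{\rm i}\zeta\right)
=\Gamma(1+\nu)\,e^{{\rm i}\zeta}\left(\frac{\zeta}{2}\right)^{-\nu}J_{\nu}(\zeta) ,
\end{equation}
which illustrates the kind of degeneration to a Bessel function that occurs when the small Coulomb-like parameter is set to zero.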
substituting ( [ eqn42 ] ) into equation ( [ eqn32 ] ) , taking into account that the factor , and that close to the surface we have it can now be seen that the expression ( [ eqn32 ] ) has the required form ( [ eqn24 ] ) provided that the phases satisfy appropriate appropriate conditions .however , these phases have already been determined from the requirements of regularity on the rotation axis and symmetry with respect to reflection in the equatorial plane and are accordingly specified through equation ( [ eqn30 ] ) which equation([eqn32 ] ) must match .it is readily seen that the expressions ( [ eqn30 ] ) and ( [ eqn32 ] ) can be compatible only for particular choices of and .these compatibility conditions determine the eigenspectrum of the problem in the wkbj approximation .they are easily found from equations ( [ eqn30 ] ) , ( [ eqn42 ] ) and ( [ eqn43])to be given by here and are positive or negative integers that must be chosen in a way which ensures that the angle belongs to the branch .adding the above relations we obtain where . substituting ( [ eqn45 ] ) into the first expression in ( [ eqn44 ] )we obtain an expression for the eigenfrequency where we set from now on .as shown in appendix a the modes with different symmetry with respect to reflection ( both the even and the ` odd ' modes ) can be described by the same expression ( [ eqn46 ] ) provided that the expression for changes to where the integer is even for the even modes while for those with odd symmetry is odd . for the wkbj approximation to be valid should be large , and , accordingly , .we would like , however , to consider all values of and allowed by our assumption that is positive and belongs to the interval .these conditions imply that is positive and lead to inequality : \le k \le \left[l+{n\over 4}\right ] , \label{eqn47}\ ] ] where ] that is defined for ] decreases monotonically with and is such that \equiv 1 ] and \equiv 0 ] the quantity is a parameter such that an explicit form of ] for which we denote as as {-}+ ( 1-\eta[z_{1}])w_{pole } , \label{eqn62n}\ ] ] where and we recall that in the above is to be obtained from equation ( [ eqn58 ] ) . the expression ( [ eqn62n ] ) can be rewritten in another useful form , which explicitly shows that the solution is separable close to the surface : where \delta_{1}^{n/2}\bar w ( \delta_{1})+(1-\eta [ z_{1 } ] ) \sqrt{\pi \lambda y_{1}\over 2}j_{|m|}(\lambda y_{1 } ) . \label{eqn62nb}\ ] ] in this case we formulate an expression valid close to the surface and for all close to the equatorial plane and away from the critical latitude , and it is convenient to represent the function in terms of an asymptotic series in ascending powers of . substituting the series ( [ eqn61aa ] ) in ( [ eqn58 ] ) for we obtain : where where the coefficients and are given in equation ( [ eqn61bb ] ) and we take into account that , according to equations ( [ eqn44 ] ) and ( [ eqn59 ] ) . 
on the other hand it is convenient to use the expression ( [ eqn58 ] ) directly in the region close to the critical latitude .the expressions ( [ eqn58 ] ) and ( [ eqn62nc ] ) can be combined with help of the function ] defined in section [ dom ] is inconvenient for a numerical implementation we consider instead of it a function , which is zero in the regions ] and represented as a ratio of two polynomials of in the intermediate region , which are chosen in such a way to ensure that several first derivatives are equal to zero in both points and .let us stress that for self - consistency we use the frequency ( or ) as given by equation ( [ eqn46 ] ) in those equations even when the frequency correction is non zero .as above , the numerical results for the polytrope are taken from pi .the wkbj results for are compared with those obtained for a realistic model of a planet of one jupiter mass .this model has a first order phase transition between metallic and molecular hydrogen , which has been discussed , eg ., in ip . over the surface of the star for the polytrope .the upper plots are obtained by numerical methods and the lower plots from the wkbj theory .the values of eigenfrequencies are given in the text . from left to right the integers and determining the wkbj eigenfrequencies are , and respectively.,width=680 ] but for the upper plots represent numerical results for a planet with a realistic equation of state , the lower plots are calculated from the wkbj theory assuming that the eigenfrequencies are given in the text.,width=680 ] for the global odd modes are shown .the azimuthal number from left to right .the upper plots represent the analytical results given by equation ( [ ne1 ] ) , the lower plots show the corresponding wkbj counterparts.,width=680 ] a comparison of the different results is shown in figs .[ ff1 ] and [ ff2 ] .note that in all cases shown in this section the same contour levels are used for the numerical and analytical data . in fig .[ ff1 ] we show the distribution of over the planet s volume for the modes and .numerical results are presented in the upper plots , which are taken from pi .these are for modes with ( upper left plot ) , ( upper middle plot ) and ( upper right plot ) .the modes with and are the so - called two main global modes , according to pi .they mainly determine transfer of energy and angular momentum through dynamic tides induced by a parabolic encounter .the respective wkbj counterparts have the smallest possible wkbj order .the corresponding analytic eigenfrequencies are ( ) and ( ) .the distribution shown on the upper right plot may be identified with a next order mode having and one can see that there is a surprisingly good agreement between the analytical and numerical results .in particular , the retrograde mode represented on the left hand side plots has a spot in distribution at the angle with respect to the rotational axis .this agrees with position of the critical latitude since .the distributions shown on the middle and right plots correspond to prograde modes .they have a well pronounced approximately vertical isolines .the main global mode may be distinguished from the mode corresponding to the next order by the number of nodes in the horizontal direction , this being one in the case of the global mode and two for the next order mode .we have checked that similar agreement exists between the wkbj and numerical results corresponding to since the distributions are quite similar they are not shown here . 
for compare the wkbj results with calculations done by a spectral method for a model of a planet of jupiter size and mass in fig .[ ff2 ] . as in the previous casethe upper plots correspond to the numerical results . from left to right the numerical values of the eigenfrequencies are ( the main global mode ) , and .their analytical counterparts have , ; , and , respectively .note that a more pronounced disagreement in eigenfrequencies corresponding to the mode represented on the right hand side plot is mainly determined by the fact that this mode has a distribution concentrated near the surface of the planet , where the equation of state differs from that of a polytrope .one can see that again there is very good agreement between the results .this is especially good for the main global mode represented in the plots on the left hand side .the agreement gets somewhat worse moving from right to left .this may be explained by a number of factors such as inaccuracies of the numerical and analytical methods as well as the physical effects determined by changes of the equation of state in the outer layers of the planet and the presence of the phase transition .these factors mainly influence modes with a small spatial structure while the large scale main global mode is hardly affected by them .finally we consider the global odd modes and compare the analytic distributions given by equation ( [ ne1 ] ) with the corresponding wkbj distributions for and in fig .although there is a disagreement in position of the spot close to the critical latitude , which is situated on the planet s surface in the case of the exact analytic solutions and slightly interior to the surface of the planet in the case of the wkbj distributions , there is a similarity in the distributions in the planet s interior .this is quite surprising since in this case the analytic distributions do not depend on the planet s structure at all while the wkbj distributions are determined by the density distribution close to the planet s surface .as we pointed out in the introduction , integrals of the form where corresponds to a particular eigenmode and is some smooth function , appear in astrophysical applications of the theory developed in this paper . in particular , as was discussed in pi and ip , integrals of this type enter in expressions for the transfer of energy and angular momentum transferred during the periastron passage of a massive perturber .these apply to the case when the spectrum of normal modes is discrete and they involve integrals of form ( [ eqn84 ] ) , where with being the associated legendre function . assuming that varies on a small spatial scale while the function is smoothly varying such integrals may , in principal , be evaluated using our formalism with help of a theory of asymptotic evaluation of multidimensional integrals , see eg .fedoryuk ( 1987 ) , wong ( 1989 ) . however , some important integrals of form ( [ eqn84 ] ) require an extension of our formalism , which can provide a smooth matching of the solution close to the surface to the wkbj solution in the inner part of the planet that is valid at the next orders in inverse powers of .this is due to cancellations of leading terms in corresponding asymptotic series . 
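For orientation, a commonly used definition of the overlap integrals in question, given here as an assumed form (the weight function and normalisation adopted in pi and ip may differ), is
\begin{equation}
Q_{lm}=\int_{V} \delta\rho\; r^{l}\,Y_{lm}^{*}(\theta,\phi)\,{\rm d}^{3}x
=\int_{V} \rho\,\boldsymbol{\xi}\cdot\nabla\!\left(r^{l}Y_{lm}^{*}\right){\rm d}^{3}x ,
\end{equation}
where the second form follows from $\delta\rho=-\nabla\cdot(\rho\boldsymbol{\xi})$ together with an integration by parts in which the surface term vanishes because the density does, and the tidally dominant case is the quadrupole $l=2$.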
since this problem appears to be a rather generic one we would like to discuss it here in more detail for the important case when .the overlap integral of this type determines excitation of the modes which are the most important for the tidal problem ( eg .pi , ip and see also pin ) .explicitly , we have in this case where .note that this integral must converge to zero in the incompressible limit as in this case it is well known that inertial modes are not excited in the anelastic approximation .this fact , however , is not obvious for the integral written in the form ( [ eqn84a ] ) because close to the surface we have with the constant converging to a nonzero value as therefore , as the eigenfunctions are regular , the integral has a logarithmic divergence at the surface of the planet as this raises the possibility that the overlap integral might converge to a nonzero value or behave pathologically as the incompressible limit is approached . in order to show that , in fact , this is not so and in a smooth manner , let us consider some fiducial models having the property that the quantity is constant . for models in hydrostatic equilibrium under their own gravity ,constancy of implies that ratio , where is the mass interior to the radius is constant .the model must accordingly be incompressible .goodman lackner ( 2009 ) obtained a wider class of models in hydrostatic equilibrium under a fixed quadratic gravitational potential .because the potential is fixed independently of the mass distribution and there are no constraints on the equation of state , such models may be constructed for an arbitrary density distribution .now let use consider the integral for the fiducial models described above this is identical to the overlap integral ( [ eqn84a ] ) where we note that we may adopt natural units such that the constant which should be identified with the surface value of is equal to unity in that case .more generally the integrand in ( [ eqn84b ] ) can be transformed using equation of hydrostatic equilibrium ( [ good1 ] ) such that now let us consider equation ( [ eq 41 ] ) for free normal modes in the anelastic approximation by setting and the right hand side of this equation to zero .then we multiply it by , set , and integrate over . after removing derivatives of by integrating by parts , assuming that the density vanishes at the surface boundary , it is easy to see that it follows from ( [ eq 41 ] ) that in the anelastic approximation provided means , in particular , that inertial waves can not be excited in the goodman lackner ( 2009 ) models in this approximation as was found by these authors when compressibility was fully taken into account ( see pin for additional discussion ) .using the fact that we may rewrite ( [ eqn84a ] ) for models under their own self - gravity quite generally , adopting natural units , as taking into account that the factor in the brackets is proportional to for small , and , accordingly , in this limit the integrand is proportional to we see that now the logarithmic divergence of disappears and , therefore , it is clear from the representation ( [ eqn84d ] ) that the overlap integral indeed smoothly tends to zero in the limit .the theory of asymptotic evaluation of integrals of the form ( [ eqn84d ] ) tells that the values of such integrals are determined either by inner stationary points , where gradient of the wkbj phase vanishes or contributions close to the surface or other parts of the integration domain , where the wkbj approximation is not valid . 
from the expression of the in the wkbj regime ( [ eqn30 ] ) it follows that there are no stationary points in the inner region of the planet .considering the regions close to the surface it appears to be reasonable to assume that the leading contribution is determined by the region close to the critical latitude , where a hot spot is observed in distributions of , see the previous section . in this region the quantities and are small .we can use them as new integration variables in ( [ eqn84d ] ) with help of ( [ eqn66 ] ) , separate the contribution of this region to the integral by introduction of the functions ] , lies within the range of integration and . as was shown by larichev ( 1973 ) the integral for any particular form of the function $ ] .thus , the leading order contribution to the overlap integral from the surface region close to the critical latitude is equal to zero . in principal , one can look for the next order terms .however , in this case our simple approach to the problem seems to be inadequate since eg .the assumption that can be represented as a product of two functions separately depending on the coordinates may be broken at this level , etc .. a more accurate approach is left for a future work .we note , however , that this cancellation means that the overlap integral should decay rapidly with increasing possibly being inversely proportional to a large power of this may qualitatively explain why a small number of relatively large scale modes are significantly excited by dynamic tides , see pi , ip and pin . in this paperwe have developed a wkbj approximation , together with a formal first order perturbation approach for calculating the normal modes of a uniformly rotating coreless planet under the assumption of a spherically symmetric structure .matching of the general wkbj form valid in the interior to separable solutions valid near the surface resulted in expressions for eigenfunctions that were valid at any location within the planet together with an expression for the associated eigenfrequencies given in section [ sec3.5 ] .corrections as a result of density gradient terms neglected in the initial wkbj approach were also obtained from formal first order perturbation theory .the corrected wkbj eigenfrequencies obtained using the wkbj eigenfunctions were compared with results obtained numerically by several different authors and found to be in good agreement , away from the limits of the inertial mode spectrum where identifications could be made , even for modes with a global structure .we also compared the spatial forms of the eigenfunctions with those obtained using the spectral method described in ip finding similar good agreement .this is consistent with the idea that these global modes can be identified and that first order perturbation theory works even though they are embedded in a dense spectrum . 
In further support of this, the formal first order perturbation theory developed here is subsequently used to estimate corrections to the eigenfrequencies as a consequence of the anelastic approximation and is then compared with simulation results for a polytropic model in PIN. These different approaches are found to be in agreement for small enough rotation frequencies, and this also indicates, as implied by the simplified discussion in section [anelastic] of this paper, that corrections as a result of the anelastic approximation are never very significant for the models adopted. Our results show that the problem of finding eigenfrequencies and eigenvalues of inertial modes allows for an approximate analytical treatment, even in the case of modes having a large scale distribution of perturbed quantities. Although we consider only the case of a polytropic planet, our formalism can be applied in a much wider context. Indeed, the approach developed here is mainly determined by the form of the density close to the planet's surface, where we assume that it is proportional to a power of distance from the surface. Thus, we expect that our main results remain unchanged for any density distribution which is approximately power-law close to the surface. In particular, according to our results, all models of this type having approximately the same behaviour of the density distribution close to the surface should have approximately the same eigenspectrum. The formalism developed here can be extended to an approximate analytic evaluation of different quantities associated with inertial modes, such as overlap integrals characterising the interaction of inertial waves with different physical fields, growth rates due to the CFS instability and decay rates due to various viscous interactions and non-linear mode-mode interaction. It may provide a basis for a perturbative analytic analysis of more complicated models, such as realistic models of stars and planets flattened by rotation or models of relativistic stars. As we discussed above, for a given value of WKBJ order, some modes are identified with modes obtained numerically while others remain unidentified. Eigenfrequencies of the unidentified modes are always either situated close to the boundaries of the frequency range allowed for inertial modes, or situated close to the origin. We believe that our theory is not applicable to these modes, and that they develop a small scale contribution controlled by the closeness of their eigenfrequencies to those limits, and thus effectively move to higher order than allowed for. Accordingly we do not consider these modes when comparing our results with results of direct numerical calculations of the excitation of inertial waves due to a tidal encounter reported in PIN.

We are grateful to the referee, Jeremy Goodman, for his comments, which led to improvement of the paper. PBI was supported in part by RFBR grant 08-02-00159-a, by the governmental grant NSh-2469.2008.2 and by the Dynasty Foundation. This paper was prepared for the press when both P. B. I. and J. C. B. P. took part in the Isaac Newton programme ``Dynamics of Discs and Planets''.

Aldridge, K. D., Lumb, L. I., 1987, Nature, 325, 421
Andersson, N., 1998, ApJ, 502, 708
Arras, P., Flanagan, E. E., Morsink, S. M., Schenk, A. K., Teukolsky, S. A., Wasserman, I., 2003, ApJ, 591, 1129
Bryan, G. H., 1889, RSPTA, 180, 187
Chandrasekhar, S., 1939, An Introduction to the Study of Stellar Structure, New York: Dover
Chandrasekhar, S., 1970, Phys. Rev. Lett., 24, 611
Dintrans, B., Ouyed, R., 2001, A&A, 375, L47
Fokin, M. V., 1994, Sib. Adv. in Math., 4, N1, 18
Fokin, M. V., 1994, Sib. Adv. in Math., 4, N2, 16
Fedoryuk, M. V., 1987, Asymptotics: Integrals and Series (in Russian), Nauka, Moscow
Friedman, J. L., Morsink, S. M., 1998, ApJ, 502, 714
Friedman, J. L., Schutz, B. F., 1978, ApJ, 222, 281
Goodman, J., Lackner, G., 2009, ApJ, 696, 2054
Gradshteyn, I. S., Ryzhik, I. M., 2000, Table of Integrals, Series and Products, sixth edition, Academic Press, San Diego, London
Greenspan, H. P., 1968, The Theory of Rotating Fluids, Cambridge University Press
Ivanov, P. B., Papaloizou, J. C. B., 2004, MNRAS, 347, 437
Ivanov, P. B., Papaloizou, J. C. B., 2007, MNRAS, 376, 682 (IP)
Kokkotas, K. D., 2008, Rev. Astr., 20, 140
Landau, L. D., Lifshitz, E. M., 1977, Quantum Mechanics: Non-relativistic Theory, third edition, Elsevier Science Ltd, Oxford
Larichev, V. D., 1973, Journal of Computational Mathematics and Mathematical Physics (in Russian), 13, 1029
Lockitch, K. H., Friedman, J. L., 1999, ApJ, 521, 764 (LF)
Ogilvie, G. I., Lin, D. N. C., 2004, ApJ, 610, 477
Ogilvie, G. I., 2009, MNRAS, 396, 794
Papaloizou, J. C. B., Ivanov, P. B., 2005, MNRAS, 364, L66 (PI)
Papaloizou, J. C. B., Ivanov, P. B., 2009, MNRAS, submitted (PIN)
Papaloizou, J. C. B., Pringle, J. E., 1978, MNRAS, 182, 423
Papaloizou, J. C. B., Pringle, J. E., 1981, MNRAS, 195, 743 (PP)
Papaloizou, J. C. B., Pringle, J. E., 1982, MNRAS, 200, 49
Prudnikov, A. P., Brychkov, Yu. A., Marichev, O., 1986, Integrals and Series, Vol 1: Elementary Functions, New York: Gordon and Breach
Ralston, J., 1973, J. of Math. Anal. and Appl., 44, 366
Savonije, G. J., Papaloizou, J. C. B., 1997, MNRAS, 291, 633
Schaeffer, D. G., 1975, Studies in Applied Mathematics, 54, 269
Schenk, A. K., Arras, P., Flanagan, E. E., Teukolsky, S. A., Wasserman, I., 2002, Phys. Rev. D, 65, 024001
Stewartson, K., Rickard, J. A., 1969, J. Fluid Mech., 35, 759
Wong, R., 1989, Asymptotic Approximations of Integrals, Academic Press, San Diego
Wood, 1977, Proc. A., 358, 17
Wu, Y., 2005a, ApJ, 635, 674
Wu, Y., 2005b, ApJ, 635, 688

In order to find the eigenfrequencies of modes odd with respect to reflection, the phase should be chosen in such a way that the corresponding eigenfunctions are equal to zero at the planet's equatorial plane and, accordingly, . From this condition and equation ([eqn24]) we obtain where is an integer. Analogously to the case of even modes, the phase is determined by equation ([eqn29]) and the form of the solution close to the surface is determined by equations ([eqn42]) and ([eqn43]). From these equations and equation ([aa1]) we get the compatibility conditions analogous to conditions ([eqn44]) and, solving ([aa2]) for and , we find that where , and that the expression for is given by equation ([eqn46]), where ([aa3]) should be used.
comparing equations ( [ eqn45 ] ) and ( [ aa3 ] )we see that both even and odd modes can be described by the same expression for provided that where for the even modes and for the odd ones , respectively .in this appendix we show that the eigenvalues obtained from our wkbj analysis agree with the corresponding values for an incompressible fluid contained in a rigid spherical container in the wkbj limit .it is well known that the spectrum of normal modes for an incompressible fluid in a rotating spherical container can be found analytically , ( see eg .greenspan 1968 , p. 64 ) .the eigenfrequencies are determined from the equation where is a legendre function , is an integer , we recall that and note that although it is inconsequential for the use of ( [ a]1 ) , greenspan s definition of has the opposite sign to that used in this paper .we determine the spectrum in the wkbj limit from ( [ a1 ] ) . in this limit is a large parameter . for our purposesit is important to retain all terms of order of and larger .we use the asymptotic form of legendre functions in the limit of large in the form here we have used the well known properties of gamma functions to transform the asymptotic expression given in gradshteyn ryzhik ( 2000 ) to the form ( [ a2 ] ) .the quantity multiplying in the first term on the right hand side of ( [ a3 ] ) should be close to zero in order that this first term be of the same order as the other terms on the right hand side .this condition gives where is an integer and is a small higher order correction .setting it to zero we have this expression agrees with equation ( [ eqn46 ] ) when we consider the limit of incompressible fluid . to do this we set in ( [ eqn45 ] ) and ( [ eqn46 ] ) , assume that and set comparing ( [ a6 ] ) with ( [ eqn45 ] ) we obtain one can show that the requirement that is an even number determines eigenmodes for which has even symmetry with respect to the reflection , see eg .greenspan 1968 , p. 65 . substituting ( [ a4 ] ) in ( [ a3 ] )we get an expression for the correction now we substitute ( [ a8 ] ) into ( [ a4 ] ) to obtain the last term gives the leading order difference in eigenvalues belonging to eigenfunctions with values of of opposite sign . it is shown below that the expression ( [ a10 ] ) agrees with ( [ eqn79 ] ) provided that ( [ eqn79 ] ) is evaluated in the limit .[ [ b ] ] the limit of the expression ( [ eqn79 ] ) for the frequency correction ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ if is directly set to zero , the integral ( [ eqn68 ] ) diverges at . in order to find a limiting expression for ( [ eqn79 ] )it is necessary to consider to be very small and take the limit in the final expression only after potentially divergent terms have been cancelled .thus we assume that but very small and integrate ( [ eqn68 ] ) by parts to obtain where and we remark that for when .we use the well known relation to eliminate the derivative of the bessel function and obtain . \label{a13}\ ] ] it is easy to see that the expression on the right hand side does not diverge when .however , the integral entering the right hand side of ( [ a13 ] ) diverges at large values of . 
in order to deal with this divergence we use the asymptotic expression for the bessel function , , valid at large values of its argument in the form , from equation ( [ a14 ] ) it follows that the boundary term in ( [ a13 ] ) can be evaluated as here we remark that because where is the dimensionless distance to the surface and is large, may be large when is small corresponding to being close to the surface .thus use of the asymptotic expansions of bessel functions for large values of their arguments can be justified .in addition , setting and ( [ a14 ] ) gives and now , restoring complete precision , we can represent the integral entering ( [ a13 ] ) as where the quantity does not contain any divergences and can be evaluated in the limit and .now we use ( [ a19 ] ) , ( [ a16 ] ) and ( [ a17 ] ) together with equations ( [ a14n ] ) and ( [ a13 ] ) to obtain in the limit of small using this together with ( [ eqn68 ] ) and ( [ eqn70 ] ) we deduce that substituting ( [ a21 ] ) in ( [ eqn79 ] ) and taking the limit we get where we use the fact that as . recalling that see that ( [ a22 ] ) is equivalent to ( [ a10 ] ) .
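the appendix above leans on two classical special-function results: the large-degree asymptotic form of the legendre functions and the large-argument asymptotic form of the bessel function, together with the standard recurrence used to eliminate the bessel derivative. since the explicit formulas are elided in this extraction, the short sketch below checks the textbook versions of these results numerically; the orders, arguments, and the restriction to the m = 0 legendre case are illustrative choices, not the specific values used in the derivation.

```python
# numerical sanity checks of the textbook asymptotics invoked above; the exact
# forms in the (elided) equations may differ by normalization conventions.
import numpy as np
from scipy.special import eval_legendre, jv, jvp

# 1) laplace's large-degree form, p_n(cos t) ~ sqrt(2/(pi*n*sin t)) * cos((n+1/2)t - pi/4)
#    (checked here for order m = 0 only, away from the poles t = 0, pi)
n, t = 200, 1.1
exact = eval_legendre(n, np.cos(t))
approx = np.sqrt(2.0 / (np.pi * n * np.sin(t))) * np.cos((n + 0.5) * t - np.pi / 4)
print("p_n(cos t):", exact, " asymptotic:", approx)

# 2) large-argument bessel form, j_nu(x) ~ sqrt(2/(pi*x)) * cos(x - nu*pi/2 - pi/4)
nu, x = 2.5, 80.0
print("j_nu(x):   ", jv(nu, x),
      " asymptotic:", np.sqrt(2.0 / (np.pi * x)) * np.cos(x - nu * np.pi / 2 - np.pi / 4))

# 3) recurrence eliminating the bessel derivative: j_nu'(x) = j_{nu-1}(x) - (nu/x) j_nu(x)
print("j_nu'(x):  ", jvp(nu, x),
      " via recurrence:", jv(nu - 1, x) - (nu / x) * jv(nu, x))
```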
|
inertial waves governed by coriolis forces may play an important role in several astrophysical settings, such as, e.g., tidal interactions, which may occur in extrasolar planetary systems and close binary systems, or in rotating compact objects emitting gravitational waves. additionally, they are of interest in other research fields, e.g. in geophysics. however, their analysis is complicated by the fact that in the inviscid case the normal mode spectrum is either everywhere dense or continuous in any frequency interval contained within the inertial range. moreover, the equations governing the corresponding eigenproblem are, in general, non-separable. in this paper we develop a consistent wkbj formalism, together with a formal first order perturbation theory, for calculating the properties of the normal modes of a uniformly rotating coreless body (modelled as a polytrope and referred to hereafter as a planet) under the assumption of a spherically symmetric structure. the eigenfrequencies, the spatial form of the associated eigenfunctions, and other properties obtained analytically using the wkbj eigenfunctions are in good agreement with corresponding results obtained by numerical means for a variety of planet models, even for global modes with a large scale distribution of perturbed quantities. this indicates that even though such modes are embedded in a dense spectrum, they can be identified and followed as model parameters are changed, and that first order perturbation theory can be applied. this is used to estimate corrections to the eigenfrequencies arising from the anelastic approximation, which we argue here to be small when the rotation frequency is small. these are compared with simulation results in an accompanying paper, with good agreement between theoretical and numerical results. the results reported here may provide a basis for theoretical investigations of inertial waves in many astrophysical and other applications where a rotating body can be modelled as a uniformly rotating barotropic object for which the density has, close to its surface, an approximately power law dependence on distance from the surface. hydrodynamics; stars: oscillations, binaries, rotation; planetary systems: formation
|
in our current search for a quantum theory of gravity it is widely believed that the final theory should be purely relational. a long-standing thorny issue for a relational theory is the question of how quantities like distances and duration can be defined or emerge in a purely relational manner. this tension first became apparent in the correspondence between clarke and leibniz. we will review that part of the correspondence that is concerned with the nature of space and time and let it be our introduction to the problem of recovering the notions of distance and duration in a relational theory. leibniz stated his position in (third paper, 4):

``i hold space to be something merely relative, as time is; i hold it to be an order of coexistences, as time is an order of successions.''

using his principle of the _identity of indiscernibles_ leibniz then goes on to demonstrate that an absolute view of space and time is untenable and that the relative view is the only sensible one. clarke, not at all convinced, offers the following refutation of leibniz's position (third reply, 4):

``if space was nothing but the order of things coexisting; it would follow, that if god should remove in a straight line the whole world entire, with any swiftness whatsoever; yet it would still always continue in the same place: and that nothing would receive any shock upon the most sudden stopping of that motion. and if time was nothing but the order of succession of created things; it would follow, that if god had created the world millions of ages sooner than he did, yet it would not have been created at all sooner. further: space and time are quantities; which situations and order are not.''

to a modern mind this argument given by clarke looks rather vacuous and leibniz's reply could be given by a physicist trained today (fourth paper, 13):

``to say that god can cause the whole universe to move forward in a straight line, or in any other line, without making otherwise any alteration in it; is another chimerical supposition. for, two states indiscernible from each other, are the same state; and consequently 'tis a change without any change.''

clarke does not acknowledge this argument. instead he concludes that leibniz's position is disproved (fourth reply, 16 and 17):

``that space and time are not the mere order of things, but real quantities has been proven above, and no answer yet given to those proofs. and till an answer be given to those proofs, this learned author's assertion is a contradiction.''

having held to his position for four papers leibniz now commits two grave mistakes within the space of two pages. the first one is the admission that there is an absolute true motion (fifth paper, 53):

``however, i grant there is a difference between an absolute true motion of a body, and a mere relative change of its situation with respect to another body.''

if that was not enough leibniz goes on in the next paragraph to say that distances are fundamental (fifth paper, 54):

``as for the objection that space and time are quantities, or rather things endowed with quantity; and that situation and order are not so: i answer, that order also has its quantity; there is in it, that which goes before and that which follows; there is distance or interval.''

all clarke has to do now is to collect his trophy. with the magnanimity of the victor he points out (fifth reply, 53):

``whether this learned author's being forced here to acknowledge the difference between absolute real motion and relative motion, does not necessarily infer that space is really a quite different thing from the situation or order of bodies; i leave to the judgement of those who shall be pleased to compare what this learned writer here alleges, with what sir isaac newton has said in the principia,''

somewhat more triumphantly he continues in the next paragraph (fifth reply, 54):

``i had alleged that time and space were quantities, which situation and order are not. to this, it is replied; that _order has its quantity; there is that which goes before, and that which follows; there is distance and interval_. i answer: going before, and following, constitutes situation or order: but the distance, interval, or quantity of time or space, wherein one thing follows another, is entirely a distinct thing from the situation or order, and does not constitute any quantity of situation or order: the situation or order may be the same, when the quantity of time or space intervening is very different.''

thus ends the correspondence between leibniz and clarke with a clear defeat for the relativists. if one looks at the arguments that have been presented it is not so much a defeat but more of a self-destruction of leibniz. in this article we will propose to resolve the tension between quantity and relation using a simple model from solid state physics.
in section [ sec : relation ] we use a background - independent formulation of the heisenberg spin chain as a simple model of the universe . in section[ sec : poincare ] we show that observers _ inside _ the system can use the excitations of the model ( without reference to a lattice spacing ) to define distances purely relationally .we also show that the maps between observers are naturally given by poincar transformations .this leads us to interpret this model as a `` quantum minkowski space '' .we briefly discuss consequences of this argument for the problem of quantum gravity , as well as certain observations about the precise relationship of our model to minkowski space in the conclusions .how else could the leibniz - clarke correspondence have gone ? how could the tension between quantity and relation have been resolved ? how is one to obtain the notion of distance in a purely relational manner ?the first thing to realize is that the tension between quantity and relation can not be resolved by relying on kinematics alone . given a dynamical degree of freedom like a traveling mode one can use it to define the notion of distance by defining how much it travels in a certain amount of time , i.e. by defining its velocity . this is just how we define the unit of length today , namely by setting the speed of light ( see ) .what is needed is a distinctive set of traveling degrees of freedom or excitations , that can be used to define the notion of distance in this way . to be concrete we will look at a particular model , the heisenberg spin chain and its higher dimensional generalizations .its hamiltonian is given by where , and the s are the pauli matrices . for nearest neighbor interactions the lowest lying excitations are of the form where the s and s denote eigenvectors of and the occurs at the -th position .the s take the form and where is the number of lattice sites .solving the schrdinger equation gives the eigenvalues of as a function of : we will denote the corresponding eigenvectors by . this is a set of traveling degrees of freedom of the model . in the next sectionwe will use these to give a purely relational definition of distance . note that the excitations we have introduced are perfectly well - defined _ without the introduction of a lattice spacing_.using the excitations of the model described above we can now proceed to define quantities like distance in a completely relational manner . this can be done by picking one particular excitation and assigning a speed to it .a length is then defined to be the amount the excitation has travelled in a certain time interval .a distinctive excitation in our model is given by the fastest wave packets .these excitations are of the form where is peaked around that value of for which is maximal .these wave packets are distinguished by the fact that no other excitation can overtake them .all observers , which can also be thought of as excitations of the system , will agree on that , independent of the way they themselves move .this characterization is thus completely relational .since observers in the spin model have only the above excitations at their disposal to explore their world there is no way for them to tell whether they are moving or resting with respect to the lattice .it is thus consistent and natural for all of them to assign the _same _ speed to these excitations . 
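as a concrete illustration of the excitations discussed above, the following sketch diagonalizes the single-spin-flip (one-magnon) sector of a periodic chain numerically and locates the momentum of the fastest wave packets via the maximum group velocity. the hamiltonian normalization is not recoverable from this extraction, so a ferromagnetic nearest-neighbour coupling written with pauli matrices, h = -j sum_i sigma_i . sigma_{i+1}, is assumed, with units in which the lattice spacing and hbar are set to one; the coupling j and chain length n are arbitrary illustrative values.

```python
# one-magnon sector of a periodic heisenberg chain, assuming (see lead-in)
# h = -j * sum_i sigma_i . sigma_{i+1}; energies are measured relative to the
# all-spins-up ground state, for which the single-flip sector reduces to an
# n x n matrix with 4j on the diagonal (two weakened bonds) and -2j hopping.
import numpy as np

J, N = 1.0, 64
H = np.zeros((N, N))
for x in range(N):
    H[x, x] = 4.0 * J
    H[x, (x + 1) % N] = -2.0 * J
    H[x, (x - 1) % N] = -2.0 * J

numeric = np.sort(np.linalg.eigvalsh(H))

# plane-wave result for this convention: e(k) = 4j(1 - cos k), k = 2*pi*m/n
k = 2.0 * np.pi * np.arange(N) / N
analytic = np.sort(4.0 * J * (1.0 - np.cos(k)))
print("max |numeric - analytic| =", np.max(np.abs(numeric - analytic)))

# fastest wave packets: the group velocity v(k) = de/dk = 4j sin k is maximal
# at k = pi/2, so packets peaked there travel fastest (v_max = 4j in lattice
# units) and no other excitation of this branch can overtake them.
print("maximal group velocity   =", 4.0 * J)
```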
[figure [fig:relativ]: the map between the two coordinate systems is given by the mapping of physical events onto each other; this map has the property that it maps the fastest excitations onto fastest excitations, and we find that it must be a poincaré transformation.]

what will be the map between the coordinate systems of two observers? in the limit that the spin system looks smooth to the observers we can answer this question. the map can be constructed by mapping physical events onto each other. this map will in particular map the fastest excitations in one coordinate system onto the fastest excitations in the other system. since these excitations have the same speed in both systems the map can only be a poincaré transformation (see figure [fig:relativ]). one thus obtains a ``quantum minkowski space''.

as it was pointed out by clarke in his correspondence with leibniz, in a completely relational theory there exists a tension between quantities and relations. we have seen here how this tension can be resolved provided one has access to excitations that can be used to define the notion of distance by defining the speed of these excitations. a consequence of this definition is that the natural mapping between two observers is given by a poincaré transformation. to define a notion of distance in a relational way it was necessary to have access to the dynamics of the theory. a purely kinematic approach is not sufficient. it is here where some of the candidate theories of quantum gravity, like loop quantum gravity or causal set theory, face their greatest problems. the arguments presented here suggest that the dynamics of the theory is required to make progress on important issues like the question of the semi-classical limit of the theory.

we conclude by remarking that the model presented above does deviate from usual minkowski space in two ways. in order for us to find poincaré transformations we assumed that the excitations involved are all well separated from each other. if this is not the case an operational definition of distance and duration cannot be given anymore. another deviation occurs when the observers have access to excitations with very high values of the wavenumber (these excitations are not to be confused with the fastest excitations used above). in this case the observers would notice the spin lattice and would find measurable deviations from poincaré invariance.

h. g. alexander, _the leibniz-clarke correspondence_, manchester university press (1956).
b. n. taylor (ed.), _the international system of units (si)_, national institute of standards and technology, special publication 330 (2001).
t. thiemann, _introduction to modern canonical quantum general relativity_, gr-qc/0110034.
l. bombelli, j. h. lee, d. meyer and r. sorkin, _space-time as a causal set_, phys. rev. lett. *59*, 521 (1987).
|
in a purely relational theory there exists a tension between the relational character of the theory and the existence of quantities like distance and duration . we review this issue in the context of the leibniz - clarke correspondence . we then address this conflict by showing that a purely relational definition of length and time can be given , provided the dynamics of the theory is known . we further show that in such a setting it is natural to expect lorentz transformations to describe the mapping between different observers . we then comment on how these insights can be used to make progress in the search for a theory of quantum gravity .
|
we consider a general problem of adding a budgeted set of new edges to a graph , that each new edge connects an existing node in the graph to a newly introduced _ target node _ , so that the target node can be _ discovered _ easily by existing nodes in the new graph .we refer to this problem as the target node _discoverability optimization problem _ in networks. * motivations . *the problem of optimizing node discoverability in networks appears in a wide range of applications .for example , a youtube video maker may wish his videos to have a large audience and click traffic . in youtube , each video is related to a set of recommended videos , and the majority of youtube videos are discovered and watched by viewers following related videos .hence , if a video maker could make his video related to a set of properly chosen videos , his video may have more chance to be discovered and watched .this task is known as the _ related video optimization problem _ , and in practice , a video maker indeed has some ability to make his video related to some other videos through writing proper descriptions , choosing the right title , adding proper meta - data and keywords . in this application, we can build a video network , where a node represents a video , and a directed edge represents one video relating to another .then , making a target video related to a set of existing videos is equivalent to adding a set of edges from existing nodes to the target node in the video network .therefore , optimizing related videos of a target video can be formulated as a target node discoverability optimization problem in networks . as another application ,let us consider the advertising service provided by many retail websites , such as amazon and taobao .a major concern of product sellers is that whether customers could easily discover their products on these retail websites .one important factor that affects the discoverability of an item in a retail website is _what other items detail pages display this item_. for example , on amazon , a seller s product could be displayed on a related product s detail page in the list `` sponsored products related to this item '' .if an item was displayed on several popular or best selling products detail pages , the item would be easily discovered by many customers , and have good sells .a product seller has some control to decide how strong his item is related to some other items , e.g. , a book writer on amazon can choose proper keywords or features to describe her book , set her interests , other similar books , and cost - per - click bid . in this application, we can build an item network , where a node represents an item , and a directed edge represents one item relating to another .therefore , optimizing the discoverability of an item by relating to other proper items in a retail website can be formulated as the target node discoverability optimization problem in networks . in the third application, we consider the message forwarding processes on a follower network , such as tweet re - tweeting on twitter or status re - posting on douban . in a follower network , a person ( referred to as a _ follower _ ) could follow another person ( referred to as a _ followee _ ) , and then the follower could receive messages posted or re - posted by the followee . in this way, messages diffuse on a follower network through forwarding by users ( with direction from a followee to his followers ) . 
hence , what followees a person chooses to follow determines what messages he could receive and how soon the messages could arrive at the person . the problem of choosing an optimal set of followees for a new user to maximize information coverage and minimize time delay is known as the _ whom - to - follow problem _ on a follower network . on the other hand , if we consider this problem from the perspective of a message , we are actually optimizing the discoverability that a message could `` _ _ discover _ _ '' the new user , through adding new edges in the follower network .therefore , the whom - to - follow problem could also be formulated as the target node discoverability optimization problem in networks .* present work . * in this work , we study the general problem of target node discoverability optimization in networks .we will formally define a node s discoverability , and propose a unified framework that could address this problem efficiently over large networks .* measuring node discoverability by random walks . * to quantify the target node s discoverability , we propose two measures based on random walks .more specifically , we measure discoverability of the target node by analyzing a collection of random walks that start from existing nodes in the network , and we state ( 1 ) the probability that a random walk could finally hit the target node , and ( 2 ) the average number of steps that a random walk finally reaches the target node . intuitively ,if a random walk starting from a node could reach the target node with high probability , and use few steps on average , then we say that the target node can be easily discovered by node .using random walks to measure discoverability is general , because many real world processes are suitable to be modeled as random walks , e.g. , user watching youtube videos by following related videos , people s navigation and searching behaviors on the internet and peer - to - peer networks , and some diffusion processes such as letter forwarding in milgram s small - world experiment .* efficient optimization via estimating - and - refining . * the optimization problem asks us to add a budgeted set of new edges to the graph that each new edge connects an existing node to the target node , to optimize the target node s discoverability with regard to the two random walk measures .the optimization problem is np - hard , which inhibits us to find the optimal solutions for a large network . while the two objectives are proved to be submodular and supermodular , respectively .the optimization problem thus lends itself to be approximately optimized by a greedy algorithm .the computational complexity of the greedy algorithm is dominated by the time cost of an _ oracle call _ ,i.e. , calculating the marginal gain of a given node . to scale up the oracle call over large networks , we propose an efficient _ estimation - and - refinement _ approach that is empirically demonstrated to be hundreds of times faster than a naive approach based on dynamic programming .* contributions . 
*we make following contributions in this work : * we formally define a node s discoverability in networks , and formulate the target node discoverability optimization problem .the problem is general and appears in a wide range of practical applications ( ) .* we prove the objectives satisfying submodular and supermodular properties , respectively .we propose an efficient estimation - and - refinement approach to implement the oracle call when using the greedy algorithm to find quality - guaranteed solutions .our implementation is hundreds of times faster than a naive implementation based on dynamic programming ( ) .* we conduct extensive experiments on real networks to verify our method .the experimental results demonstrate the proposed estimation - and - refinement approach has a good trade off between estimation accuracy and computational efficiency , which enables us to handle large networks with millions of nodes / edges ( ) .in this section , we first formally define a node s discoverability .then , we formulate the target node discoverability optimization problem .finally , we analyze several properties of the proposed discoverability measures ..notations[tab : notations ] [ cols="^,<",options="header " , ] in the first experiment , we evaluate the d - ap and d - ht estimation accuracy by different methods .we set , i.e. , connect every node in the graph to target node .this corresponds to the case that is maximum and is minimum .dp is an exact method which hence allows us to obtain the ground truth on each graph .we quantify the estimation accuracy by normalized rooted mean squared error ( nrmse ) .nrmse of an estimator given ground truth is defined by : nrmse . instead of evaluating the nrmse of or , we propose to use the following form of nrmse : avg - nrmse , and avg - nrmse .avg - nrmse is a stricter metric than nrmse , and can distinguish the performance of different methods more clearly according to our experiments .+ + we conduct experiments on the hepth and gowalla graphs respectively , and show the ratios between estimates and ground truths versus the number of walks per node in figs . [fig : fap_nrmse ] and [ fig : fht_nrmse ] .we observe that both the rw estimation approach and the estimation - and - refinement approach provide good estimates about d - ap and d - ht ; the estimates become more accurate when the number of walks per node increases .these results demonstrate the unbiasedness of the two methods . to compare the estimation accuracy of different methods, we also depict the avg - nrmse in figs .[ fig : fap_nrmse ] and [ fig : fht_nrmse ] .the nrmse curves clearly show the difference of performance of the two methods .first , we observe that when the number of walks per node increases , the estimation error of each method decreases , indicating that the estimates become more accurate .second , the estimation - and - refinement approach can provide more accurate estimates than the rw estimation approach .when the refinement depth increases , we could obtain even better estimates of d - ap and d - ht .these results demonstrate that the estimation - and - refinement approach can provide more accurate estimates than the rw estimation approach . in the second experiment, we evaluate the oracle call accuracy and efficiency implemented by different methods . here, oracle call accuracy is still measured by avg - nrmse , and oracle call efficiency is measured by speedup , i.e. 
, to evaluate the estimation accuracy , we randomly sample nodes as the benchmark nodes from the hepth and gowalla graphs , respectively .we then calculate the marginal gain ground truth of these benchmark nodes using dp , and we set .the avg - nrmse and speedup of different methods are depicted in figs .[ fig : gain_ap ] and [ fig : gain_ht ] .+ + from the avg - nrmse curves , we observe similar results as we estimate d - ap and d - ht in the previous experiment : ( 1 ) when the number of walks per node increases , every method obtains more accurate estimates ; ( 2 ) the estimation - and - refinement approach can obtain more accurate estimates than the rw estimation approach , and the estimation accuracy improves when refinement depth increases . from the speedup curves, we can observe that both the rw estimation approach and the estimation - and - refinement approach are significantly more efficient than dp . on average ,the two estimation approaches are hundreds of times faster than dp .we also observe something interesting : when we increase the refinement depth , the oracle call efficiency decreases in general , as expected ; however , we observe that the estimation - and - refinement approach with is actually more efficient than the rw estimation approach .this is because that when we use the estimation - and - refinement approach , we simulate shorter walks , and this could slightly improve the oracle call efficiency . as we further increase refinement depth to , because we need to expand a large part of a node s neighborhood , the estimation - and - refinement approach becomes slower than the rw estimation .equipped with the verified oracle call implementations , we are now ready to solve the node discoverability optimization problem using the greedy algorithm . in the third experiment , we run the greedy algorithm on each of the four graphs , and choose a subset of connection sources to optimize the target node s discoverability , i.e. , maximizing d - ap , and minimizing d - ht . for each graph , we simulate walks from each node , and we use the estimation - and - refinement approach with to implement the oracle call .we set edge weight if node is connected to target node .to better understand the performance of the greedy algorithm , we compare the results with two baseline methods : ( 1 ) a random approach that randomly pick connection sources from the graph ; and ( 2 ) a top - degree approach that always choose the top- largest degree nodes from the graph as connection sources .we show the results in figs .[ fig : greedy_ap ] and [ fig : greedy_ht ] .+ + we can clearly see that the greedy algorithm indeed performs much better than the two baseline methods on all the four graphs : the greedy algorithm could choose connection sources with larger d - ap , and smaller d - ht . 
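for readers who want to reproduce the flavour of the rw estimation baseline used in these experiments, the sketch below simulates a fixed number of truncated random walks from a source node and averages the hit indicator and the (truncated) hitting step count. it is only a minimal illustration: the truncation length, the number of walks, the unweighted transition rule, and the networkx-based graph handling are assumptions of this sketch, and the refinement step of the paper's estimation-and-refinement approach is omitted.

```python
# monte-carlo sketch of the two random-walk discoverability measures for a
# target node t (illustrative conventions only, not the paper's exact code).
import random
import networkx as nx

def rw_estimate(G, s, t, n_walks=2000, max_steps=50, seed=0):
    """estimate (i) the probability that a walk from s hits t within max_steps
    and (ii) the truncated hitting time min(first-hit step, max_steps)."""
    rng = random.Random(seed)
    hits, steps_sum = 0, 0
    for _ in range(n_walks):
        v, step = s, 0
        while v != t and step < max_steps:
            nbrs = list(G.successors(v))
            if not nbrs:            # dangling node: treat the walk as truncated
                step = max_steps
                break
            v = rng.choice(nbrs)    # unweighted step; weighted sampling is analogous
            step += 1
        if v == t:
            hits += 1               # hit within the truncation window
        steps_sum += step
    return hits / n_walks, steps_sum / n_walks

# toy usage on a small random directed graph with node 0 playing the target role
G = nx.gnp_random_graph(200, 0.03, seed=1, directed=True)
d_ap, d_ht = rw_estimate(G, s=25, t=0)
print(f"estimated d-ap ~ {d_ap:.3f}, estimated truncated d-ht ~ {d_ht:.1f} steps")
```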
in general , the degree approach is better than random approach , however , we also observe an exception on the hepth graph where the random approach is slightly better than the degree approach .this section is devoted to review some related literature .our work is based on the classic results of absorbing markov chains , and the absorbing markov chain is also used in several recent work .the work most related to ours is the optimal tagging problem , where the goal is to choose a subset of tags for an item in an item - tag network , so that the probability of a user reaching the item is maximized , through searching tags or navigating items .this problem is modeled as maximizing the absorbing probability of nodes in a graph with a cardinality constraint .thus , the optimal tagging problem is actually a special case of d - ap maximization problem when , and the knapsack constraint degenerates to a cardinality constraint . in our work , we consider d - ap and d - ht as two measures of node discoverability , and study the node discoverability optimization under a unified framework . proposes the absorbing random walk centrality , which can be used to measure a node s importance in a graph . studies node reachability in communication networks , and proposes several measures based on hitting times .our work is also related to work considering related problems from the perspective of graph algorithms . studies the web discoverability problem , which aims to connect less popular items to popular items so that less popular items can be discovered by customers , and this problem is formulated as a bipartite graph matching problem .we think that it is more practical to model an item graph as a general graph rather than a bipartite graph , and a customer could discover an item by random walks .we thus propose the d - ht minimization problem which uses hitting time to measure how easily one could reach the target node . hitting timeis also used in measuring node similarity , reachability , and finding dominating sets of a graph .this work considers a general problem of node discoverability optimization problem , that appears in a wide range of applications . to measure the discoverability of a node, we propose two measures : d - ap based on absorbing probabilities , and d - ht based on hitting times . while optimizing a target node s discoverability with regard to the two measures is np - hard, we find that the two measures satisfy submodularity and supermodularity respectively .this enables us to use the greedy algorithm to find provably near - optimal solutions to the optimization problem .we propose an efficient estimation - and - refinement implementation of the oracle call .experiments conducted on real graphs demonstrate that our method provides a good trade - off between estimation accuracy and computational efficiency , and our method achieves hundreds of times faster than an exact method using dynamic programming .10 [ 1]#1 url [ 2]#2 [ 2]l@#1=l@#1#2 r. zhou , s. khemmarat , and l. gao , `` the impact of youtube recommendation system on video views , '' in _ imc _, 2010 . `` how to optimize youtube related videos , '' http://tubularinsights.com/optimize-youtube-related-videos , ( accessed feb 2017 ) .`` grow your audience , '' https://creatoracademy.youtube.com/page/course/get-discovered , ( accessed feb 2017 ) .`` amazon , '' https://www.amazon.com , ( accessed feb 2017 ) .`` taobao , '' https://www.taobao.com , ( accessed feb 2017 ) .a. antikacioglu , r. ravi , and s. 
sridhar , `` recommendation subgraphs for web discovery , '' in _ www _ , 2015 .`` amazon marketing services for kdp authors : attract readers , build fans , sell books , '' https://advertising.amazon.com/kindle-select-ads , ( accessed feb 2017 ) .p. gupta , a. goel , j. lin , a. sharma , d. wang , and r. zadeh , `` wtf : the who to follow service at twitter , '' in _ www _ , 2013 .j. zhao , j. c. lui , d. towsley , x. guan , and y. zhou , `` empirical analysis of the evolution of follower network : a case study on douban , '' in _ netscicom _ , 2011 .j. zhao , j. c. lui , d. towsley , and x. guan , `` whom to follow : efficient followee selection for cascading outbreak detection on online social networks , '' _ computer networks _ , vol . 75 , pp . 544559 , 2014 .l. lovsz , `` random walks on graphs : a survey , '' _ combinatorics , paul erds is eighty _ , vol . 2 , pp .353397 , 1993 .r. kumar , a. tomkins , s. vassilvitskii , and e. vee , `` inverting a steady - state , '' in _ wsdm _ , 2015 .a. t. scaria , r. m. philip , r. west , and j. leskovec , `` the last click : why users give up information network navigation , '' in _ wsdm _ , 2014 . c. gkantsidis , m. mihail , and a. saberi ,`` random walks in peer - to - peer networks : algorithms and evaluation , '' _ performance evaluation _ , vol .63 , no . 3 , pp .241263 , 2006 .j. travers and s. milgram , `` an experimental study of the small world problem , '' _ sociometry _ , vol .32 , no . 4 , pp . 425443 , 1969 .g. nemhauser , l. wolsey , and m. fisher , `` an analysis of approximations for maximizing submodular set functions - i , '' _ mathematical programming _ , vol .265294 , 1978 .g. l. nemhauser and l. a. wolsey , `` best algorithms for approximating the maximum of a submodular set function , '' _ mathematics of operations research _ , vol . 3 , no . 3 , pp .177188 , 1978 .m. minoux , `` accelerated greedy algorithms for maximizing submodular set functions , '' _ optimization techniques _ , vol .7 , pp . 234243 , 1978 .p. sarkar and a. w. moore , `` a tractable approach to finding closest truncated - commute - time neighbors in large graphs , '' in _ uai _ , 2007 .p. sarkar , a. w. moore , and a. prakash , `` fast incremental proximity search in large graphs , '' in _ icml _ , 2008 .n. rosenfeld and a. globerson , `` optimal tagging with markov chain optimization , '' in _ nips _ , 2016 .s. khuller , a. moss , and j. s. naor , `` the budgeted maximum coverage problem , '' _ information processing letters _ , vol .70 , no . 1 , pp . 3945 , 1999 .m. sviridenko , `` a note on maximizing a submodular set function subject to a knapsack constraint , '' _ operations research letters _ , vol . 32 , pp . 4143 , 2004 .p. sarkar and a. w. moore , `` fast nearest - neighbor search in disk - resident graphs , '' in _ kdd _ , 2010 .a. kyrola , `` drunkardmob : billions of random walks on just a pc , '' in _ recsys _ , 2013 .p. g. doyle and l. snell , _ random walks and electric networks _, 1st ed .carus mathematical monographs.1em plus 0.5em minus 0.4em mathematical assn of america , 1984 , vol . 22 .`` snap graph repository , '' http://snap.stanford.edu/data , ( accessed feb 2017 ) .k. s. trivedi , _ probability and statistics with reliability , queuing and computer science applications _ , 2nd ed.1em plus 0.5em minus 0.4emwiley , 2016 .c. mavroforakis , m. mathioudakis , and a. gionis , `` absorbing random - walk centrality : theory and algorithms , '' in _ icdm _ , 2015 .g. golnari , y. 
li , and z .-zhang , `` pivotality of nodes in reachability problems using avoidance and transit hitting time metrics , '' in _ simplex _ , 2015 .li , j. x. yu , x. huang , and h. cheng , `` random - walk domination in large graphs : problem definitions and fast solutions , '' in _ icde _ , 2014 .because the d - ap maximization problem generalizes the optimal tagging problem , which has been proven to be np - hard .thus , the d - ap maximization problem is np - hard .next , we prove the np - hardness of d - ht minimization problem .we show the decision problem of d - ht minimization problem is np - complete by a reduction from the vertex cover problem .the decision problem asks : given a graph and some threshold , does there exist a solution such that ?we will prove that , given threshold , there exists a solution for the decision problem iff a vertex cover problem has a cover of size at most .the vertex cover problem is defined on an undirected graph , where , and .let denote a subset of vertices of size .we construct an instance of the d - ht minimization problem on directed graph , where and edge set includes both and for each edge . contains additional edges : for each , we add an edge with proper weight to make the transition probabilities ; we add self - loop edges to vertices and , and thus and become two absorbing vertices , i.e. , transition probabilities .for this particular instance of d - ht minimization problem , we need to choose connection sources from ; once a source is selected , we set transition probability , which is equivalent to set edge weight .assume is a vertex cover on graph .then , for each vertex , a walker starting from hits using one step with probability . for each vertex ,a walker starting from hits and becomes absorbed on with probability ( the corresponding hitting time is ) ; the walker passes a neighbor in , which must be in , and then hits , with probability ( the corresponding hitting time is ) .this achieves the minimum d - ht , denoted by ] , and note that .the hoeffding inequality yields . letting the probability be less than , we get . similarly , to show the bound of in case of d - ht , we can define another random variable ] , and notice that .applying the hoeffding inequality , we obtain this implies applying the union bound , we obtain letting the upper bound be less than , we get .
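the hoeffding argument sketched above fixes the number of simulated walks needed for a prescribed accuracy. because the constants of the bound are elided in this extraction, the helper below just evaluates the generic textbook form for an empirical mean of independent samples bounded in [0, b]; epsilon, delta and the truncation length are placeholder parameters.

```python
# generic hoeffding sample-size rule for an average of independent samples in
# [0, b]:  p(|mean_hat - mean| >= eps) <= 2 exp(-2 r eps^2 / b^2) <= delta
#   =>  r >= b^2 ln(2/delta) / (2 eps^2).
# the constants in the paper's (elided) bound may differ slightly; b = 1 suits
# the hit-probability estimate, while b = t suits a hitting time truncated at t.
import math

def walks_needed(eps, delta, b=1.0):
    return math.ceil(b ** 2 * math.log(2.0 / delta) / (2.0 * eps ** 2))

print("walks for d-ap  (b = 1,  eps = 0.05):", walks_needed(0.05, 0.01))
print("walks for d-ht  (b = 20, eps = 1.0): ", walks_needed(1.0, 0.01, b=20.0))
```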
|
many people dream of becoming famous, youtube video makers also wish their videos to have a large audience, and product retailers always hope to expose their products to as many customers as possible. do these seemingly different phenomena share a common structure? we find that fame, popularity, or exposure could be modeled as a node's _discoverability_ in some properly defined network, and all of the previously mentioned phenomena can be stated as: a target node wants to be discovered easily by existing nodes in the network. in this work, we explicitly define the _node discoverability_ in a network, and formulate a general _node discoverability optimization_ problem. while the optimization problem is np-hard, we find that the defined discoverability measures satisfy good properties that enable us to use a greedy algorithm to find provably near-optimal solutions. the computational complexity of a greedy algorithm is dominated by the time cost of an _oracle call_, i.e., calculating the marginal gain of a given node. to scale up the oracle call over large networks, we propose an _estimation-and-refinement_ approach that provides a good trade-off between estimation accuracy and computational efficiency. experiments conducted on real graphs demonstrate that our method is hundreds of times faster than an exact method using dynamic programming, thereby allowing us to solve the optimization problem on large networks.
|
though our understanding of the core - collapse supernova ( ccsn ) explosion mechanism remains incomplete , recent simulations indicate that it is likely to involve multi - dimensional effects .in fact , in all proposed mechanisms neutrino - driven convection plays an important , if not vital , role .motivated by these results , we present a theoretical framework to investigate the role of turbulence in launching successful explosions .furthermore , we lay down the foundation for this framework by deriving self - consistent steady - state equations for the background and turbulent flows .the fundamental problem of ccsn theory is to determine how the stalled shock transitions into a dynamic explosion . within a few milliseconds after bounce , the core - bounce shock wave stalls into an accretion shock .unchecked , continued accretion through the shock would form a black hole .however , the preponderance of observed neutron stars and supernova ( sn ) explosions dictates that the stalled shock is revived into an explosion most of the time . for more than two decades ,the favored mechanism for core collapse supernovae has been the delayed - neutrino mechanism . in this model ,a neutrino luminosity of several times 10 erg / s cools the protoneutron star and heats the region below the shock . under the correct conditions ,this heating by neutrinos can revive the shock and produce an explosion .unfortunately , except for the least massive stars which undergo core collapse , most of the detailed 1d neutrino - transport - hydrodynamic simulations do not produce solutions containing explosions . while most 1d simulations fail to produce explosive solutions , recent 2d simulationsshow promising trends .these simulations capture multi - dimensional instabilities that aide the neutrino mechanism in driving explosions .these instabilities include neutrino - driven convection and the standing accretion shock instability ( sasi ) .besides these two instabilities , other multi - dimensional processes may revive the stalled shock , including magnetohydrodynamic ( mhd ) jets and acoustic power derived from asymmetric accretion and an oscillating pns . though the prevalence of these last two processes is still debated , two points remain clear : ( 1 ) the solution to the ccsn problem is likely to depend on multi - dimensional effects , and ( 2 ) in all proposed mechanisms , neutrinos and turbulence play an important , if not central , role .hence , whatever the mechanism , it is important to understand the role of neutrinos and turbulence . proposed that if a critical neutrino luminosity is exceeded for a given mass accretion rate , the neutrino mechanism succeeds .these authors developed a steady - state model for an accretion shock in the presence of a parametrized neutrino heating and cooling profile .the two most important parameters of the steady - state model are the neutrino luminosity and mass accretion rate .they found no steady - state solutions for luminosities above a critical curve , and interpreted this curve as separating steady - state ( or failed supernova ) solutions from explosive solutions .however , this work did not prove that the solutions above the critical curve are in fact explosive , nor did they consider multidimensional effects . 
using a similar neutrino parametrization as in the work , showed in 1d and 2d simulations that the solutions above the critical luminosity are in fact explosive .moreover , they found that the critical luminosity in the 2d simulations is % of that in 1d for a given .additional investigations by show that the critical luminosity is even further reduced in 3d .these results suggest that the critical luminosity is a useful theoretical framework for describing the conditions for successful explosions .initial investigations by also suggest that the reduction in the critical luminosity is caused by turbulence .an alternative but related condition for explosion is a comparison of the advection and heating timescales .the heating timescale is the time it takes to significantly heat a parcel of matter , and the advection timescale is the time to advect through this region .if the advection timescale is long compared to the heating timescale , then explosion ensues . in 1d , as matter accretes onto the pns , it is limited to advect through the gain region with one short timescale . in 2d , convective motions increase the dwell time , which leads to more heating for the same neutrino luminosity and a lower critical luminosity . have recently challenged this explanation and suggest that rather than increasing the heating , turbulence acts to reduce the cooling .regardless , the simulations show that the critical luminosity is lower in the presence of convection .these results suggest that a theory for successful explosions requires a theoretical framework for turbulence and its influence on the critical luminosity . in this paper, we develop the foundation for such a framework .recent developments in turbulence theory have led to accurate turbulence models , and in this paper we use similar strategies to develop a turbulence model appropriate for ccsne .such a turbulence model can then be incorporated into steady - state accretion models to derive reduced critical luminosities for explosion , as well as used in 1d radiation - hydrodynamic ( rhd ) simulations to expedite systematic studies of core - collapse physics . 
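the advection-versus-heating timescale criterion discussed above is usually evaluated from radial profiles of the gain region. the sketch below assumes one common convention (the flow-through time of the gain region versus the time needed for the net neutrino heating to deposit an energy comparable to the region's internal energy content); the profile arrays and numbers are placeholders, and the energy scale entering the heating timescale varies between authors, so this is illustrative rather than the criterion of any specific paper.

```python
# illustrative advection/heating timescale comparison (see lead-in for caveats).
import numpy as np

def _integrate(y, x):                      # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def timescale_ratio(r, v_r, e_int, rho, q_net):
    """r [cm], v_r [cm/s] (< 0 for accretion), e_int [erg/g], rho [g/cm^3],
    q_net [erg/g/s] net specific heating; all restricted to the gain region."""
    tau_adv = _integrate(1.0 / np.abs(v_r), r)             # flow-through time
    shell = 4.0 * np.pi * r ** 2 * rho
    tau_heat = _integrate(shell * e_int, r) / _integrate(shell * q_net, r)
    return tau_adv, tau_heat, tau_adv / tau_heat           # ratio > 1 favours explosion

# toy, roughly ccsn-scale placeholder profiles (not from any simulation)
r = np.linspace(1.0e7, 2.0e7, 200)                         # 100-200 km gain region
ones = np.ones_like(r)
tau_adv, tau_heat, ratio = timescale_ratio(
    r, v_r=-1.0e8 * ones, e_int=1.0e19 * ones,
    rho=1.0e8 * np.exp(-(r - r[0]) / 5.0e6), q_net=1.0e20 * ones)
print(f"tau_adv = {tau_adv:.3f} s, tau_heat = {tau_heat:.3f} s, ratio = {ratio:.2f}")
```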
in the present paper , we develop a turbulence model which captures the salient features of 2d core collapse convection , but eventually the model must be calibrated against 3d simulations .to develop a turbulence model appropriate for core - collapse turbulence , we use a general and fully self - consistent approach called reynolds decomposition .the first step in this approach is to decompose the flow variables into averaged and fluctuating components .evolution equations for the mean flow variables are then developed by writing the conservation laws in terms of these mean and fluctuating components and then averaging .the resulting evolution equations for the mean fields contain terms which involve both the mean fields as well as correlations of fluctuating components .these correlations represent the action of turbulence and include the reynolds stress , turbulent kinetic energy , turbulent enthalpy flux , and higher order correlations .these evolution equations are self - consistent and naturally include the effects of a background flow , which is important in core collapse .unfortunately , this procedure always produces evolution equations which depend on a correlation of higher order than the evolution equations .therefore , in order to develop a closed system of equations , the highest order correlations must be modeled in terms of the lower order correlations and mean fields .furthermore , closure depends upon the macroscopic flow itself , so there is no unique closure for turbulence .this is the infamous closure problem of turbulence .fortunately , there is a small class of turbulent flows ( e.g. , shear , buoyancy driven , etc . ) and closure models have been developed that work well for each type under a range of conditions .the general strategy to find closure relations involves an interplay between theory , observations , and numerical simulations .first , terms in the mean - field equations are compared to either observations or numerical simulations .approximations are then proposed for the higher order correlation terms that satisfy the observations / simulations and provide closure .this approach has been used successfully for geophysical flows and is now being applied to stellar structure calculations . following on these successes ,we use this strategy to develop a turbulence model for the core - collapse problem in which buoyancy and a background accretion flow dominate . in [ section : reynoldsdecomp ] , we use reynolds decomposition to formally derive the averaged background and turbulence equations and identify terms that are important for neutrino - driven convection . using 2d simulations , we examine in [ section : compare2d ] the turbulent properties of neutrino - driven convection and show that the turbulence equations which we derive in [ section : reynoldsdecomp ] are consistent with the simulated flows .finding solutions to the mean - field equations requires a closure model .therefore , in [ section : models ] we present several models representative of the literature .however , these fail to reproduce the global profiles of neutrino - driven convection , leading us to develop a novel global model . in [ section : comparemodels ] , we compare the results of the turbulence models ( [ section : models ] ) with the results of 2d simulations and conclude that our global model is the only model to reproduce the global properties of neutrino - driven convection . 
in [ section : conditions ] , using the mean - field equations and 2d simulations , we investigate the effects of turbulence on the conditions for successful explosions . finally , in [ section : conclusion ] we summarize our findings , and motivate the need for a similar analysis using 3d simulation data .the first step in understanding the effects of turbulence is to derive the governing steady - state equations .therefore , in this section , we use reynolds decomposition to derive exact equations for the steady - state background and turbulent flows .historically , turbulence modelers have used two approaches to derive the models . in one ,ad hoc equations are suggested to model the important turbulence physics .these are often of practical use , but the underlying assumptions often make these of limited use . mixing length theory ( mlt ) is one such approach ( see [ section : algebraic ] for its limitations ) . in the second approach , one derives self - consistent equations for turbulence by decomposing the hydrodynamic equations into background and turbulent parts .reynolds decomposition is an example of this approach .though these equations are exact , they are not complete ; they need a model for closure .if the ad hoc approaches accurately represent nature , then one should be able to derive them by making the appropriate assumptions in the exact equations .therefore , regardless of the technique employed , starting with the self - consistent equations enables a better understanding of the assumptions and limitations . in this paper , we pursue both approaches , but in this section , we use reynolds decomposition to derive and explore the self - consistent equations for the background and turbulent flows . in reynolds decomposition , the hydrodynamic equations are decomposed into background and turbulent flows . consider a generic flow variable , , and its decomposition into average ( background ) and fluctuating ( turbulent ) components : .the mean - field background of , , is obtained by coarse spatial and temporal averages .the interval for the averages must be large or long enough to smooth out short term turbulent fluctuations , but they must not be too large or long so that interesting spatial or temporal trends in the mean - field quantities are completely averaged out .choosing the scales of the averaging window is dependent upon the problem , and in this paper , we define as averaging over the solid angle in the spherical coordinate system and over a fraction of the eddy crossing time of the convective region . by definition ,the coarse average of is and the mean - field average of the fluctuation is identically zero , .therefore , first order moments of turbulent fluctuations are identically zero and only higher order terms survive .for example , the average of the velocity fluctuation is zero , , but the mean - field of the second order term , the reynolds stress , is nonzero .the general equations for mass , momentum , and entropy conservation are and in these equations , the density , velocity , pressure , temperature , and specific entropy are , , , , and . 
is the gravitational acceleration , and is the specific local heating and/or cooling rate .after reynolds decomposition , averaging , and assuming steady state , the hydrodynamics equations , eqs .( [ eq : masshydro]-[eq : entropyhydro ] ) , become and for all quantities , the turbulent perturbations are denoted by a superscript , and , with the exception of velocity , the background flow is denoted by subscript .the background velocity is , and the perturbed velocity is .equations [ eq : mass]-[eq : entropy ] are very similar to the usual steady - state equations of hydrodynamics , but the last term in all three equations add new turbulence physics .conservation of mass flux is split between the background and the turbulence , . in the momentum equation ,the extra force due to turbulence is the divergence of the reynolds stress , .the entropy equation has two new terms ; the divergence of the entropy flux , , represents entropy redistribution by turbulence , and represents heat due to turbulence dissipation . for isotropic turbulence , the divergence of can be re - cast as the gradient of turbulent pressure : i.e. , where is the turbulent kinetic energy .using thermodynamic relations , we reduce the number of turbulent correlations in eqs .( [ eq : mass]-[eq : entropy ] ) by noting that , where , is the specific heat at constant pressure , and is the logarithmic derivative of density with respect to temperature at constant pressure . while we consider the convective entropy flux , , traditionally , astrophysicists have considered the enthalpy flux , in turbulence models .the enthalpy flux has units of energy flux .therefore , the enthalpy flux is a natural choice for stellar structure calculations in which the enthalpy flux and radiative flux must add to give the total luminosity of the star . because the convective region is semi - transparent to neutrinos, there is no such constraint in the core - collapse problem .furthermore , since we decompose the entropy equation , the entropy flux is the most natural flux to consider .for the instances that require discussing the enthalpy flux , we express it in terms of the entropy flux , .in expressing ( , we have eliminated one of the turbulent correlations , but there still remain three turbulent correlations ( , , and ) , resulting in more unknowns than equations . to close these equations , we derive the turbulence equations in [ section : turbulenceequations ] ) .using the definitions of and and the conservation equations , we re - derive the evolution equations for the reynolds stress ( ) and the entropy flux ( ) . 
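Before writing down the second-moment equations, it may help to see the averaging procedure spelled out numerically. The Python sketch below builds a synthetic axisymmetric (r, theta) snapshot, forms solid-angle averages, and evaluates two of the correlations that appear above: the radial Reynolds stress and the turbulent entropy flux. The random snapshot, the simple weighting, and the omission of the short time average described earlier are simplifications for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    nr, nth = 128, 64
    r      = np.linspace(5.0e6, 2.0e7, nr)                 # radius [cm]
    theta  = np.linspace(0.0, np.pi, nth)                  # polar angle
    dOmega = 2.0 * np.pi * np.sin(theta)                   # solid-angle weight (axisymmetry)

    # Synthetic snapshot: smooth background plus random "turbulent" fluctuations.
    rho = 1.0e9 * (1.0e7 / r)[:, None] * (1.0 + 0.05 * rng.standard_normal((nr, nth)))
    v_r = -1.0e8 + 3.0e7 * rng.standard_normal((nr, nth))  # cm/s
    s   = 10.0 + 1.0 * rng.standard_normal((nr, nth))      # entropy [k_B / baryon]

    def angle_avg(q):
        # Solid-angle-weighted average over theta at each radius (uniform theta grid).
        return np.average(q, axis=1, weights=dOmega)

    # Mean fields and fluctuations.
    v0, s0 = angle_avg(v_r), angle_avg(s)
    v_p = v_r - v0[:, None]
    s_p = s   - s0[:, None]

    # Second-order correlations used in the mean-field equations.
    R_rr = angle_avg(rho * v_p * v_p)      # radial Reynolds stress, ~ <rho v'_r v'_r>
    F_s  = angle_avg(rho * v_p * s_p)      # turbulent entropy flux, ~ <rho v'_r s'>
    print(R_rr[:3])
    print(F_s[:3])

By construction, the angle average of a first-order fluctuation vanishes, while the second-order products above survive, which is the property exploited throughout the derivation.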
For similar derivations of these evolution equations, see and . The convective Reynolds stress equation is
\[
\begin{array}{rcll}
\dfrac{D}{Dt}\left\langle \rho \mathbf{R} \right\rangle & = & \left\langle \rho^{\prime}\mathbf{v}^{\prime}\right\rangle \otimes \mathbf{g} + \left[\left\langle \rho^{\prime}\mathbf{v}^{\prime}\right\rangle \otimes \mathbf{g}\right]^{\mathsf{T}} & \textrm{buoyant production} \\
& & - \left\langle \rho \mathbf{R} \right\rangle \cdot \nabla \mathbf{v} - \left[\left\langle \rho \mathbf{R} \right\rangle \cdot \nabla \mathbf{v}\right]^{\mathsf{T}} & \textrm{shear production} \\
& & - \left\langle \nabla \otimes \mathbf{F}_p + \left[\nabla \otimes \mathbf{F}_p\right]^{\mathsf{T}} \right\rangle & \textrm{pressure flux} \\
& & + \left\langle p^{\prime}\, \nabla \mathbf{v}^{\prime} + \left[p^{\prime}\, \nabla \mathbf{v}^{\prime}\right]^{\mathsf{T}} \right\rangle & \textrm{pressure strain} \\
& & - \nabla \cdot \left\langle \rho\, \mathbf{v}^{\prime} \otimes \mathbf{R} \right\rangle & \textrm{turbulent transport} \\
& & - \rho_0\, \boldsymbol{\varepsilon} & \textrm{dissipation}\,,
\end{array}
\]
where $\mathbf{F}_p = \langle p^{\prime}\mathbf{v}^{\prime}\rangle$ is the turbulent pressure flux, $\boldsymbol{\varepsilon}$ is the dissipation tensor, and $[\,\cdot\,]^{\mathsf{T}}$ is the transpose operator. In Eq. ([eq:reynoldsstress]), we separate the terms on the right-hand side into rows to better illustrate their physical relevance: they are buoyant and shear production, redistribution by the turbulent pressure flux, the pressure-strain correlation, turbulent Reynolds stress transport, and turbulent dissipation. In the neutrino-driven convection of core collapse, buoyancy is the most important source of turbulent production. In terms of driving turbulence, the shear production term is less important; however, this term and the pressure-strain term are primarily responsible for redistributing stress among the components. For example, gravity acts mostly on the vertical stress components, but the shear production and pressure-strain terms redistribute stress to the horizontal components. Also important in redistributing stress is the turbulent transport term. In fact, in the next paragraph, we show that this term is in effect the divergence of the turbulent kinetic energy flux, which is very important in vertical kinetic energy transport. Taking the trace of the Reynolds stress equation gives the convective kinetic energy equation: where is the trace operator, the turbulent dissipation becomes , and is the turbulent kinetic energy flux. Once again, on the right-hand side we have the familiar terms: buoyancy and shear production, turbulent redistribution by the turbulent kinetic energy flux, the divergence of the pressure flux, work done by turbulent pressure, and turbulent dissipation. The corresponding equation for the convective entropy flux is where is the variance of the entropy perturbation. Once again, we separate the terms in Eq. ([eq:entropyflux]) into rows to highlight their physical significance. The terms in the entropy flux equation are analogous to the terms in the Reynolds stress equation: they are buoyant and gradient production, the pressure covariance, turbulent transport, and heat production.
unlike the reynolds stress equation, we find that buoyant production , gradient production , pressure covariance , and turbulent transport are all equally relevant in determining the entropy flux .the first term on the right hand side of eq .( [ eq : entropyflux ] ) is the buoyant production .this term is an important source in eq .( [ eq : entropyflux ] ) , but it depends upon yet another correlation , the variance of the entropy perturbation .the corresponding equation for the entropy variance is equations ( [ eq : reynoldsstress]-[eq : entropyvariance ] ) are an exact set of evolution equations for the 2 order correlations ( i.e. reynolds stress , entropy flux , and entropy variance ) .while these equations are exact , they are not complete . each equation depends upon 3 order correlations , necessitating further evolution equations for the higher order correlations . however , it is impossible to close the turbulence equations in this way , as each set of evolution equations depends upon yet higher order correlations .the only solution is to develop a closure model to relate higher order moments to lower order moments .this is analogous to the closure problem in deriving the hydrodynamics equations . in the hydrodynamics equations ,the equation of state ( eos ) is a microphysical closure model which relates the pressure ( a higher moment ) to the density and internal energy .because of the vast separation of scale , the eos depends upon microphysical processes only and is independent of the macroscopic hydrodynamical flows .hence , as a closure model , the eos enables the hydrodynamic equations to be relevant for a wide range of macroscopic flows .conversely , turbulence occupies the full range of scales from the microscopic to the largest bulk flows . in some cases , turbulence is the dominant macroscopic flow .consequently , closure is necessarily dependent upon the macroscopic flow , making it impossible to derive a generic closure relation for turbulence . to find solutions to the turbulence equations , we need to construct a turbulence closure model that is appropriate for core collapse .the standard approach is to develop a turbulence closure model for each macroscopic flow .fortunately , this task is not as daunting as it first appears .turbulence can be divided into several classes that are characterized by the driving mechanism ( i.e. shear , buoyancy , magnetic ) and closure models have been constructed that are appropriate for each class . for core collapse , buoyancy is the primary driving force , and the rest of this paper is devoted to finding an appropriate buoyancy closure model for core - collapse turbulence . assuming a spherically symmetric background , the equations for the background flow ( eqs .[ eq : mass]-[eq : entropy ] ) become and where the , , and subscripts refer to the radial and angular components in spherical coordinates . therefore , is the partial derivative with respect to , and is the radial part of the divergence . since we assume steady state , the mass accretion rate , , is a constant . for isotropic turbulence , the last three terms of eq .( [ eq : momentumr ] ) reduce to , the gradient of turbulent pressure .however , note that buoyancy - driven turbulence in spherical stars is not isotropic but is most consistent with and . 
In this work, we adopt this latter assumption where convenient, but we retain the general expression in Eq. ([eq:momentumr]) as a reminder that the relationships among the Reynolds stress components must be determined by theory, simulation, or experiment. The equivalent steady-state and spherically symmetric equations for the Reynolds stress, entropy flux, and entropy variance are and finally, the spherically symmetric kinetic energy equation is . Having derived the mean-field equations and identified the important turbulent correlations, we now characterize the background and turbulent profiles of 2D simulations. Most importantly, we validate that the Reynolds-averaged equations are consistent with the 2D results. Section [section:2dsimulations] briefly describes the 2D simulations, highlighting general qualities that are relevant for the turbulence analysis, such as the location and extent of the turbulence and of the neutrino heating and cooling. Then, in [section:2dcorrelations], we characterize the turbulent correlations in the 2D simulations. Finally, in [section:validateequations], we validate the averaged equations. The 2D results presented here were calculated using BETHE-hydro and are the same simulations that were used in to develop a gravitational wave emission model via turbulent plumes. While that work considered a large suite of simulations, for clarity we focus on one simulation that followed the collapse and explosion of a solar-metallicity, 15 $M_{\odot}$ progenitor model and used a driving neutrino luminosity of erg s$^{-1}$. See for more details on the technique and for the setup of this particular 2D simulation. To demonstrate the evolution of turbulence, most figures of this paper highlight three phases after bounce. These three stages correspond to modest steady-state convection (404 ms), growing convection and SASI (518 ms), and strong convection and SASI (632 ms), and the entropy color maps in Fig. [convectionsasistills] provide visual context for the shock location, the heating and cooling, and the location and extent of the turbulence. Our focus is on the most obvious turbulent region, which extends from km to the shock ( km). This postshock turbulence is driven by neutrino heating, and in Fig. [heating3times] we show neutrino heating (red), cooling (blue), and net heating (heating minus cooling, black lines) profiles for 1D (dashed lines) and 2D (solid lines) simulations. These local heating and cooling rates are calculated using Eqs. (4-5) of and a neutrino luminosity of erg s$^{-1}$. Below the gain radius, km, cooling dominates heating, but above the gain radius, heating dominates cooling. This latter region is called the gain region, and it drives turbulent convection. After matter accretes through the shock, it advects downward through the gain region, producing a negative entropy gradient. In turn, this negative entropy gradient drives buoyant convection. Though the region below the gain radius has a positive entropy gradient and is formally stable to convection, momentum carries plumes well into the cooling region. This is a well-known phenomenon in stellar convection and is called overshoot.
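The local heating and cooling rates referred to above are not reproduced here; instead, the sketch below uses a generic "light-bulb"-style parametrization (heating falling as the inverse square of radius, cooling rising steeply with temperature) to show how a gain radius is located in practice. The coefficients, the temperature profile, and the luminosity are placeholders, not the calibrated expressions cited in the text.

    import numpy as np

    r   = np.linspace(4.0e6, 2.0e7, 400)             # radius [cm]
    T   = 2.0 * (1.0e7 / r)                          # temperature [MeV], toy power law
    L52 = 2.2                                        # driving neutrino luminosity [10^52 erg/s]

    heating = 1.5e20 * L52 * (1.0e7 / r)**2          # erg/g/s, schematic normalization
    cooling = 1.4e20 * (T / 2.0)**6                  # erg/g/s, schematic normalization
    net     = heating - cooling

    # Gain radius: innermost radius at which net heating becomes positive.
    gain_radius = r[np.argmax(net > 0.0)]
    print(f"gain radius ~ {gain_radius / 1.0e5:.0f} km")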
in neutrino - driven convection , the depth of overshoot can be quite large , - 40 km , .figures [ fskinetic3times ] and [ transport3times ] show the radial profiles of the primary correlations in the averaged equations , eqs .( [ eq : mass]-[eq : entropyvariance ] ) .the lowest moment correlations are the turbulent enthalpy flux ( ) , the turbulent kinetic energies ( & ; or reynolds stresses ) , and the entropy variance ( ) , and all three are shown in the top , middle , and bottom panels of fig . [ fskinetic3times ] .other important higher order correlations are the turbulent transport terms , which are the transport of entropy flux , , the turbulent kinetic energy flux , , and the entropy variance flux , .the radial profiles of these higher order correlations are in fig .[ transport3times ] . in general, all turbulent correlations increase over time , and the radial profiles indicate that turbulence is dominated by coherent rising and sinking large - scale buoyant plumes . the enthalpy ( or entropy ) flux has two broad physical interpretations .most obviously , the entropy flux indicates the direction and magnitude of entropy transport due to turbulent motions .naturally , positive and negative entropy fluxes correspond to upward and downward entropy transport .in addition to indicating the direction of entropy transport , the sign of the flux also indicates the direction of the buoyancy forces driving the turbulence . to understand this second interpretation ,consider that the entropy flux is defined using the correlation of the velocity and entropy perturbations ( i.e. ) . in regions where convection is actively driven ,high entropy plumes rise buoyantly and low entropy plumes sink .in other words , the entropy and velocity perturbations are either both positive or negative .hence , the correlation and entropy flux are always positive in regions of actively driven convection . at boundaries , where the plumes are decelerated due to a stable background , the correlation or entropy flux ,is always negative .for example , as a low entropy plume penetrates into the lower stable layer , the sinking plume becomes immersed in a background that has even lower entropy . while the velocity perturbation remains negative , the entropy perturbation changes sign , becoming positive .consequently , the correlation and entropy flux are negative in the bounding stabilizing regions . with these interpretations of the entropy ( enthalpy ) flux , the top panel of fig .[ fskinetic3times ] shows where convection is actively driven , where plumes are decelerated by bounding stable regions , and the magnitude of each as a function of time . at the shock ,the entropy flux is zero .this is in contrast with the results of , who find an appreciable enthalpy flux at the shock . whether the enthalpy flux is zero at the shock has consequences for the stalled - accretion shock solution ( see [ section : model1 ] for further discussion ) .naively , one would expect the gain radius ( km ) to mark the boundary between actively driven convection in the heating region and the stabilizing effects in the cooling layers below .though this is roughly correct , careful inspection of the enthalpy profiles show that the transition from positive to negative entropy flux does not correspond exactly with the gain radius .instead , the gain radius corresponds best with the change in slope of the enthalpy profile . 
above the gain radius , where convection is actively driven, the enthalpy profile has a negative gradient , and below the gain radius the gradient is positive .this profile can be best understood considering a single low entropy plume originating at the shock .first of all , as this plume accelerates downward , both the entropy and velocity perturbations grow in magnitude .this explains the negative entropy flux gradient above the gain radius . below the gain radiusthe background entropy gradient is positive .therefore , the entropy perturbation diminishes as the background entropy reduces to the level of the low entropy plume . at this point , both the entropy perturbation and entropy flux are zero . consequently , enthalpy flux gradient is positive below the gain radius . with a negative gradient above the gain region and a positive gradient below it ,the enthalpy flux is maximum at the gain radius .as the plume s inertia carries it beyond this radius , the enthalpy flux becomes negative in the stabilizing layers .these general characteristics are observed at all times , but with increasing magnitude at later times when convection is more vigorous .the same simple model can explain the radial ( , green lines ) and tangential ( , orange lines ) kinetic energy profiles in the middle panel of fig .[ fskinetic3times ] .in fact , like the enthalpy flux , peaks at the gain radius . because the sinking plumes have higher speeds than the rising plumes and the kinetic energy is weighted by the square of the speed , is dominated by the sinking plumes. again , the radial profile of is consistent with low entropy plumes originating at the shock , accelerating downward to a maximum speed at the gain radius , and decelerating in the stabilizing region below the gain radius .this is also consistent with the results of .the tangential component , , on the other hand , shows a maximum just 10s of km away from the shock .this is where the rising plumes encounter material that has just passed through the shock and both turn their trajectories in the -direction .interpretation of the entropy variance ( and the bottom panel of fig .[ fskinetic3times ] ) is less clear . like the enthalpy flux and kinetic energies, increases with time .the most naive interpretation is that the variation of total heating among the sinking and rising plumes increases over time .however , it is not obvious whether this increase in variance is due to more heating or less cooling in either the rising or sinking plumes ... or both . while it seems clear that the profiles of and are dominated by the sinking plumes , there is circumstantial evidence that might be dominated by the behavior of rising plumes .for one , there is a gradual rise in from the lower convective boundary to the upper boundary , suggesting growth with the rise of buoyant plumes .but the most telling evidence comes from the entropy color maps of fig .[ convectionsasistills ] . 
in these maps, it is obvious that the entropy of the sinking plumes is roughly constant over time , while the entropy of rising plumes increases with time .hence , the variance of entropy increases with time because the maximum entropy of the rising plumes increases .all three transport terms in fig .[ transport3times ] , , , and , are negative nearly everywhere .hence , the flow of core - collapse turbulence acts to transport entropy flux , kinetic energy , and entropy variance downward .this is typical of buoyancy driven convection and is observed in most simulations of convection within stellar interiors . for the entropy flux , and turbulent kinetic energy , this fact further supports the notion that the turbulent correlations are dominated by sinking plumes . on the other hand , at first glance , the negative transport of seems to be at odds with our previous conclusion that rising plumes dominate the character of .however , the moment of the entropy variance , , weights only the variance in the entropy , but the entropy variance flux weights the velocity of the plumes as well . in general , the speed of the sinking plumes is larger than the speed of rising plumes .consequently , while rising plumes provide the most weight to , sinking plumes provide the greatest weight to . in [ section : comparemodels ] , we present simple models for these transport terms that assume the dominance of sinking plumes . having presented the turbulent correlations of 2d core - collapse simulations , we now validate the reynolds - averaged equations . specifically , we validate the spherically symmetric background equations , eqs .( [ eq : massr]-[eq : entropyr ] ) . for the sake of brevity, we do not show a plot of mass conservation but simply report that is indeed satisfied in the 2d simulations .figure [ velocity3times ] validates the form of the momentum equation , eq .( [ eq : momentumr ] ) , including the turbulence terms . in the top panel ,we plot the velocity profile of 1d and 2d simulations as a function of radius , and in the bottom panel , we plot the dominant force terms in the momentum equation for the 2d simulations only .specifically , the bottom panel shows the difference in the gravitational and pressure gradient forces , ( dashed - line ) , and the divergence of the reynolds stress , ( dotted line ) .the solid black line shows from the 2d simulation .this last term is the left - hand side of eq .( [ eq : momentumr ] ) , and in steady state , represents the total force per unit area that a lagrangian parcel of matter experiences . if eq .( [ eq : momentumr ] ) represents the correct derivation of the momentum equation including turbulence terms , then the sum of the right - hand side terms ( dot - dashed red line ) should equal the solid black line .away from the shock , they agree quite well .interestingly , the right - hand side is essentially zero in the heating region where convection is actively driven .this implies that the difference in the gravitational force and the pressure gradient is nearly balanced by the divergence of the reynolds stress .figure [ entropy3times ] validates the reynolds - averaged entropy equation , eq .( [ eq : entropyr ] ) .the solid black line corresponds to the 2d results , and the black dashed line shows the results of 1d simulations . 
for comparison , the red curve is the integration of eq .( [ eq : entropyr ] ) using the 1d density , velocity , and heating profiles .since the 1d simulations are not able to simulate multi - dimensional effects , we omit the turbulent dissipation and the entropy flux terms in this integration .the remarkable agreement between the results of this integration and the 1d simulation bolsters our approach in validating eq .( [ eq : entropyr ] ) .similarly , the solid green line shows the integration of eq .( [ eq : entropyr ] ) using the 2d density , velocity , and heating curves and no convection terms .this curve is similar to the 1d results and clearly under predicts the entropy in the gain region . on the other hand ,including the turbulence terms in the integration of eq .( [ eq : entropyr ] ) ( dot - dashed green curve ) dramatically improves the comparison .therefore , given the right entropy flux and turbulent dissipation , we conclude that eq .( [ eq : entropyr ] ) accurately determines the background flow .additionally , we conclude that the turbulence models of [ section : models ] must , at a minimum , produce accurate entropy flux profiles to accurately describe the effects of convection in 2d simulations .figures [ fskinetic3times]-[entropy3times ] suggest that convection grows monotonically , but these figures sparsely sample convection and its effects at three times . in fig .[ convectiondata ] , we illustrate that convection indeed grows monotonically from core bounce until explosion . to provide some context with shock position , we plot in the top panel shock radii as a function of time after bounce .we show the 1d and average 2d shock radii using dashed and solid black lines , respectively . before the onset of vigorous sasi and convection ( ms ) , both stall at km . afterward, the 2d average shock radius climbs to km , at which point all measures of the shock unambiguously expand in an explosion . to illustrate the asymmetries in the shock we also plot the shock radius at the poles ( =0 and = ) .the shock radii at the poles oscillate about the average shock position until explosion ( ms ) .the middle panel shows the maximum of the total , radial , and transverse kinetic energies .in general , the turbulent kinetic energy steadily grows until explosion .finally , the bottom panel shows the maximum of 1d ( black ) and 2d ( green ) average entropy profiles . for comparison , we plot the maximum enthalpy flux in orange and the maximum entropy resulting from integrating eq .( [ eq : entropyr ] ) in green . as in fig .[ entropy3times ] , the 2d simulation consistently shows higher entropy at all times , and the results of integrating eq .( [ eq : entropyr ] ) are consistent with the simulations . in conclusion , the results of figs .( [ fskinetic3times]-[convectiondata ] ) indicate that reynolds decomposition of the hydrodynamics equations , eqs .( [ eq : mass]-[eq : entropy ] ) , is consistent with 2d simulations .in [ section : reynoldsdecomp ] , we derived the turbulence equations , eqs .( [ eq : mass]-[eq : entropyvariance ] ) and showed that they suffer from a closure problem .therefore , finding solutions to the background and turbulence equations requires a turbulence closure model . 
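Before turning to closure models, the integration check described above can be summarized in a few lines. The sketch below integrates a schematic steady-state entropy equation inward from the shock, once with the neutrino source term alone and once with stand-ins for turbulent dissipation and entropy-flux redistribution added. The profiles and source terms are placeholders (the paper's exact equation is not reproduced); the point is only the structure of the comparison, in which the turbulent terms raise the entropy in the gain region.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Schematic steady state:  v_r ds0/dr = q_nu/T + eps_turb/T - (1/rho) div F_s
    r       = np.linspace(6.0e6, 1.8e7, 500)                    # cm
    v_r     = -2.0e8 * (r / 1.8e7)                              # cm/s (inward)
    x       = (r - r[0]) / (r[-1] - r[0])
    src_nu  = 40.0 * (1.0e7 / r)**2 - 30.0 * (1.0e7 / r)**6     # net heating / T  [k_B/baryon/s]
    src_eps = 10.0 * np.exp(-((r - 1.0e7) / 3.0e6)**2)          # dissipation / T (toy)
    src_Fs  = 8.0 * np.cos(np.pi * x)                           # -(1/rho) div F_s, redistribution (toy)

    dsdr_1d = src_nu / v_r
    dsdr_2d = (src_nu + src_eps + src_Fs) / v_r

    s_shock = 6.0                                               # k_B/baryon at the shock (toy)
    s_1d = s_shock + cumulative_trapezoid(dsdr_1d[::-1], r[::-1], initial=0.0)[::-1]
    s_2d = s_shock + cumulative_trapezoid(dsdr_2d[::-1], r[::-1], initial=0.0)[::-1]
    print("peak entropy, neutrino terms only:      %.2f k_B/baryon" % s_1d.max())
    print("peak entropy, with turbulent stand-ins: %.2f k_B/baryon" % s_2d.max())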
In this section, we present several turbulence models and identify their strengths and weaknesses. In Sections [section:model1]-[section:model4], we present four turbulence models. These models, with the exception of model 1, require a model for turbulent dissipation and transport. Therefore, our discussion of the models begins in Sections [section:dissipationmodel] & [section:transportmodels] with a model for turbulent dissipation and transport. The next four sections, [section:model1]-[section:algebraic], present models for the primary second-order turbulent correlations, , , and . The first model, model 1, is a reproduction of a model presented by and assumes that both and are zero. The resulting model is simple, but it provides no equation for the turbulent kinetic energy. Model 2 is a closure model for the Reynolds stress, entropy flux, and entropy variance and has been designed and calibrated to simulate isolated buoyant plumes. Model 3 is an algebraic model which is akin to MLT. To model the higher-order correlations, models 2 & 3 use expressions involving the local values of second-order correlations. Hence, some of the most important terms in a nonlocal problem are modeled using local approximations. While these local models are adequate in some locations, they can be off by significant factors in other locations. Rather than relying on these local models, we develop in model 4 ([section:model4]) a novel global model that uses global conservation laws to constrain the scale of convection. Later, in [section:comparemodels], we compare the results of these models to the turbulent kinetic energies and entropy flux of the 2D simulations. Starting with Kolmogorov's hypotheses for turbulent dissipation, construct a model for turbulent dissipation which they validate with 2D and 3D stellar evolution calculations. Buoyed by these successes, we construct a similar model for turbulent dissipation. One of the primary hypotheses of Kolmogorov's theory is that turbulent energy is injected at the largest scales and cascades to smaller scales. Consequently, the rate of turbulent energy dissipation is governed by the largest scales and dimensionally is proportional to , where is an appropriate length scale, usually the largest eddy size. Therefore, our model for turbulent dissipation is where is the trace of the Reynolds stress tensor. In most astrophysical calculations, the length scale is assumed to be proportional to the pressure scale height, . On the other hand, found that stellar convection fills the total available space and that this also corresponds to the largest eddy size. For very large convection zones, they found that is at most . Therefore, we set the length scale as , where is the size of the region unstable to convection. Given Kolmogorov's hypothesis, this model for is the most basic model that one can assume. Later, in [section:model4], we propose a model for that satisfies Kolmogorov's hypothesis but apparently represents the dissipation of buoyant plumes via entrainment. To satisfy global and local constraints for convection, we find that is best modeled by a linear function of distance from the shock. In 3D simulations of stellar convection in which negatively buoyant plumes dominate, found a similar result. These results suggest that entrainment between rising and sinking plumes governs dissipation.
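The dissipation closure just described, together with the advective flux closure for the transport terms introduced in the following paragraphs, can be written compactly as below. The order-unity coefficients and the toy profiles are assumptions for illustration, not calibrated values.

    import numpy as np

    def dissipation_rate(K, ell, C=1.0):
        # Kolmogorov-style closure: eps ~ C * K^(3/2) / ell, with K the specific turbulent
        # kinetic energy (of order the trace of the Reynolds stress) and ell a bulk length
        # scale, here taken to be the size of the convective zone.
        return C * K**1.5 / ell

    def advective_flux(q_density, vr_rms, alpha=1.0):
        # Advective transport closure: flux ~ alpha * sqrt(<v'_r v'_r>) * (density of the
        # transported quantity), rather than a gradient-diffusion form.
        return alpha * vr_rms * q_density

    # Toy profiles through a convective zone of size L_cz.
    r      = np.linspace(7.0e6, 1.7e7, 300)                        # cm
    L_cz   = r[-1] - r[0]
    K      = 4.0e16 * np.sin(np.pi * (r - r[0]) / L_cz)**2         # erg/g, turbulent KE (toy)
    vr_rms = np.sqrt(2.0 * K)                                      # cm/s, crude velocity scale
    rho    = 1.0e9 * (1.0e7 / r)**3                                # g/cm^3

    eps = dissipation_rate(K, L_cz)                                # erg/g/s
    F_K = advective_flux(rho * K, vr_rms)                          # erg/cm^2/s
    print("max eps:", eps.max(), " max F_K:", F_K.max())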
The gradient diffusion approximation, in which the turbulent flux of a quantity is taken to be proportional to the gradient of that quantity's density through a diffusion coefficient, has found widespread usage for closing the turbulent transport terms (e.g., ). For example, model 2, which we introduce in [section:model2], uses this assumption. The theoretical basis for this closure model is twofold: first, the transported quantity behaves like a scalar; second, scale separation is a good approximation in the sense that transport is mediated by fluctuations on scales small compared to the largest scales characterizing the turbulent flow. The turbulent transport in a buoyant convection zone, however, is generally recognized to be mediated by large-scale coherent plumes (e.g., and [section:2dcorrelations]), thus challenging the theoretical underpinning of a gradient diffusion approximation. Furthermore, the shortcomings of the gradient diffusion approximation for thermal convection were explicitly illustrated through a series of 3D simulations of turbulent stellar interiors by . Rather than a gradient diffusion approximation, we propose flux models that are proportional to , where is the density of the transported quantity. This model is built on the advective nature of the transport, in which . We propose the following models for turbulent transport of the entropy flux, turbulent kinetic energy, and entropy variance: , , and . Except for the entropy variance flux, the comparison with 2D simulations ([section:comparemodels] and Fig. [transportmodels]) indicates that the constant of proportionality is . The constant of proportionality for the entropy variance flux is found to be /2. The first turbulence model that we consider is a simple model for presented by . They assumed steady state and that the entropy gradient in Eq. ([eq:entropyr]) is zero. From these assumptions, they derived a simple differential equation for the enthalpy flux. Here, we reproduce this equation, expressing it in terms of the entropy flux: A simple integral of this equation with a boundary condition leads to an expression for the entropy flux. They assumed zero flux at the lower boundary and integrated upward, resulting in a nonzero flux at the shock. A major advantage of this model is that it is simple and straightforward to find a solution for . Unfortunately, the assumptions which lead to this simplicity also lead to inaccurate turbulent profiles (see [section:comparemodels]). For one, there is no solution for the kinetic energies. Secondly, this model completely ignores the velocity of the background flow. As we show in [section:comparemodels] and Fig. [compsimmodels_fs], this is inconsistent with the characteristics of neutrino-driven convection in 2D simulations. Thirdly, this model produces flawed entropy flux profiles. For example, it results in a nonzero entropy flux at the shock, while 2D simulations show zero entropy flux at the shock (Fig. [fskinetic3times]). In fact, the entropy flux is zero at both boundaries.
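For concreteness, the sketch below carries out the kind of integration just described for model 1: with a flat entropy gradient, the divergence of the entropy flux must absorb the net neutrino source term, so the flux follows from a single radial integral with zero flux imposed at the base. The differential equation used here is a schematic stand-in for the one in the text, and the profiles are placeholders; the generic outcome, a nonzero flux at the shock, is the feature criticized above.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # Schematic model-1 balance:  (1/r^2) d(r^2 F_s)/dr ~ rho * (net heating)/T
    r   = np.linspace(8.0e6, 1.8e7, 400)                        # cm, gain region (toy)
    rho = 1.0e9 * (1.0e7 / r)**3                                # g/cm^3
    src = 30.0 * (1.0e7 / r)**2 - 10.0 * (1.0e7 / r)**6         # (q/T), k_B/baryon/s (toy)

    F_s = cumulative_trapezoid(r**2 * rho * src, r, initial=0.0) / r**2

    print("entropy flux at the base :", F_s[0])
    print("entropy flux at the shock:", F_s[-1])   # generically nonzero, unlike the 2D data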
to compensate for this nonzero flux at the shock , modified the shock jump conditions .however , the solutions to eqs .[ eq : mass]-[eq : entropyvariance ] are quite sensitive to the form of the boundary conditions at the shock .incorrect boundary conditions will lead to erroneous solutions .model 2 is a reynolds stress and heat flux closure model that has been designed and calibrated to model isolated buoyant plumes .though core - collapse turbulence involves more than just isolated plumes , neutrino - driven convection is in fact buoyancy - driven .therefore , it is plausible that the reynolds stress and heat flux model might be an appropriate closure model . the model for the reynolds stress equation , eq .( [ eq : reynoldsstress ] ) , is where and are the buoyant and shear production terms , is the pressure - strain term , is the diffusion term , and the final term is turbulent dissipation .all but the first two terms require models , and the expressions for the first two production terms are easily read from eq .( [ eq : reynoldsstress ] ) .the first of the modeled terms is the pressure - strain correlation , and it acts to redistribute energy among the reynolds stress components . for buoyant flows , the pressure - strain correlation is generally modeled by three contributions where the first term is proportional turbulent stress , the second term is proportional to the interaction between turbulent stress and the mean strain , and the last term is proportional to buoyancy . explicitly , they are and where the parameters , , and are 3 , 0.3 , and 0.3 respectively . suggest using the gradient - diffusion hypothesis to model the diffusion term .specifically , where the constant has been calibrated by experiments and simulations to be 0.22 .this diffusion term inherently assumes that the reynolds stress ( or kinetic energy ) flux is proportional to the gradient in the reynolds stress .this is most likely relevant in shear dominated flows . however , the buoyancy dominated flows of core - collapse convection are characterized by large scale plumes . as a consequence ,the transport of kinetic energy flux is directly proportional to these bulk turbulent motions ( i.e. ) rather than the gradient .the final term to be modeled is the turbulent dissipation , . another differential equation that includes more production , diffusion , dissipation terms , and constants to be calibrated . to avoid these complications, we simply adopt the turbulence model of [ section : dissipationmodel ] . to derive the model turbulent kinetic energy equation ,we take the trace of the reynolds stress model equation , eq . ([ eq : reynoldsstressmodel ] ) , except for two differences , this model kinetic energy equation is very similar to the full kinetic energy equation , eq .( [ eq : kinetic ] ) .the first difference is that the work done by turbulent pressure perturbations , , is absent in the model equation . 
because the pressure - strain correlation , , is designed to only redistribute energy among the components , the trace of this term is identically zero .hence , the model assumes that this term is zero .because low mach number turbulence typically has negligible pressure perturbations , this is a standard assumption in turbulence modeling .our 2d simulations confirm this assumption .the second difference is that is assumed to be proportional to .this is a consequence of the gradient - diffusion approximation , but in [ section : comparemodels ] , we find that is not proportional to the gradient but is best modeled as . the model for the entropy flux equation , eq .( [ eq : entropyflux ] ) , is where and the pressure - entropy correlation term ( ) is the analog of the pressure - strain correlation and is modeled by three terms in general , these terms act to dissipate and represent pure turbulence , turbulence and mean strain , and buoyancy interactions : and where the constants are .once again , the transport term in eq .( [ eq : entropyfluxmodel ] ) is modeled using the gradient - diffusion approximation .the final equation models the equation for entropy variance , eq .( [ eq : entropyvariance ] ) : where and . equations ( [ eq : reynoldsstressmodel]-[eq : entropyvariancemodel ] ) represent a model for the turbulence equations that is complete and has been extensively tested and calibrated with experiment and simulations .unfortunately , there are several disadvantages to using this turbulence model .for one , this model depends upon a large number of calibrated constants .in addition , these equations were designed and calibrated to calculate the profile of a single isolated buoyant plume .the macroscopic flow of fully developed convection is very different .therefore , it is quite possible that the closure relations presented in this model are not appropriate for core - collapse convection .furthermore , the dissipation terms make eqs .( [ eq : reynoldsstressmodel]-[eq : entropyvariancemodel ] ) a stiff numerical problem , requiring careful numerical treatment .consequently , the solutions to these equations are extremely sensitive to the uncertain dissipation models . in general , there are two strategies to finding turbulent solutions ( in [ section : model4 ] , we present a third ) . in the first , the turbulent correlations are solutions of differential equations. models 1 & 2 are of this type . within the second strategy, a few key assumptions allow the differential equations to be converted into a set of algebraic equations , and the turbulence correlations are solutions to these algebraic equations .an example of a turbulence model which uses a set of algebraic equations is the mixing length theory ( mlt ) ; it is used extensively through out astrophysics , including stellar structure calculations . to transform the differential equations into algebraic equations ,the temporal and spatial derivatives must either vanish or be approximated by algebraic expressions . as an example of the first method, one can assume that the temporal and advective derivatives are zero , ( the l. h. s. of the evolution equations eqs .( [ eq : reynoldsstress]-[eq : entropyvariance ] ) ) .this assumption is valid in most circumstances because even though the terms on r.h.s .are large they often sum to zero .therefore , the primary assumptions of this approach are steady state and local balancing . 
in the second approach ,the spatial derivatives are replaced with ratios of the variable to be differentiated and a length scale .for example , the divergence of the kinetic energy flux is roughly , . in effect, a global boundary value problem is reduced to a set of local algebraic expressions . in some respects, these algebraic equations still retain nonlocal characteristics .the nonlocality is merely hidden in the assumptions .for example , in mlt , local gradients are used to calculate the local buoyancy force , but the eddies are assumed to remain coherent until a mixing length at which point they dissipate their energy .hence , the finite size of the mixing length is an echo of the true nonlocality of turbulence in a local prescription . where local balancing is important , such as the heating region , these approximations can give reasonable solutions .however , in regions where nonlocal transport is important , such as overshoot regions , these algebraic models fail completely . to derive an algebraic model , we assume that transport balances buoyant driving . applying this assumption to the exact 2 moment equations , eqs .( [ eq : kinetic]-[eq : entropyvariance ] ) , and using the flux models , eqs .( [ eq : entropyfluxtransportmodel]-[eq : entropyvariancetransportmodel ] ) , of [ section : transportmodels ] results in approximate kinetic energy , entropy flux and entropy variance equations , the r.h.s . of the approximate entropy flux equation , eq . ( [ eq : transporteqdriving2 ] ) , is a sum of two important terms : buoyancy driving , , and the pressure covariance , . suggest that this term is best modeled by , and our 2d simulations confirm this model . in effect , the sum of these two source terms reduces buoyancy driving by a factor of two .if we approximate the divergence of a generic flux , , by , where is the downward distance from the shock , then algebraic solutions of the approximate equations are proportional to . to obtain the correct proportionality constants , we assume solutions of the form , , and , and substitute these into eqs .( [ eq : transporteqdriving1]-[eq : transporteqdriving3 ] ) .this results in the following algebraic expressions for the second moment correlations and for the entropy gradient in these equations , we assume that , and to get the radial component of reynolds stress we assume if instead of assuming a balance between local driving and transport , we assume a balance between local driving and dissipation , then we obtain dimensionally similar results . however , they differ in the constants of proportionality and the length scale involved . with dissipation ,the natural length scale is , which is a fixed scale and is either the full size of the convective region ( i.e. a constant ) or is proportional to the local scale height . in either case , using a fixed length scale results in algebraic expressions that are inconsistent with our 2d simulations . instead , in [ section :comparemodels ] , we show that the convective profiles in the driving region of 2d simulations are quite consistent with as the length scale .interestingly , most contemporary formulations of mlt use as the appropriate length scale , while in the original formulation of the mlt , prandtl used .in essence , the result that the downward distance , , is the most appropriate length scale suggests that the properties of neutrino - driven convection are best modeled by negatively buoyant plumes that form at the shock and grow due to buoyancy . 
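The structure of the algebraic closure can be illustrated with a mixing-length-flavored sketch in which the mixing length is the downward distance d from the shock, as argued above. The thermodynamic factor beta, the input profiles, and the implied order-unity coefficients are placeholders; only the scalings with d are meant to carry over.

    import numpy as np

    r_shock = 1.8e7                                         # cm
    r       = np.linspace(9.0e6, r_shock, 300)              # cm, actively driven region
    d       = r_shock - r                                   # downward distance from the shock

    g      = 1.5e10 * (1.0e7 / r)**2                        # cm/s^2 (toy)
    ds0dr  = -4.0e-8 * np.ones_like(r)                      # k_B/baryon/cm (toy, unstable gradient)
    rho    = 1.0e9 * (1.0e7 / r)**3                         # g/cm^3
    beta   = 0.2                                            # schematic thermodynamic factor

    omega2 = beta * g * np.abs(ds0dr)                       # buoyancy frequency squared [s^-2]
    K      = omega2 * d**2                                  # v'^2 ~ (omega * d)^2
    s_rms  = np.abs(ds0dr) * d                              # entropy fluctuation ~ |ds0/dr| * d
    F_s    = rho * np.sqrt(K) * s_rms                       # entropy flux ~ <rho v'_r s'>

    print("max convective speed:", np.sqrt(K).max(), "cm/s")
    print("max entropy flux    :", F_s.max())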
while the previous models are adequate in some locations , we show in [ section : comparemodels ] that they fail to reproduce the global properties of neutrino - driven turbulence .motivated by this failure , we propose an original turbulence model which is derived using global considerations .models 2 & 3 employ single - point closure models , which assume that on small scales , the higher order correlations can be modeled using local 2-order correlations .consequently , some of the most important terms in a nonlocal problem are modeled using local approximations .while these local models are adequate in some locations , they can be factors off in other locations . given the stiff nature of the governing equations , these modest errors can lead to significantly flawed turbulent profiles .rather than relying on these local models , we use conservation laws and develop an original global turbulence model . integrating the turbulence equationsleads to global conservation laws for turbulence .for example , integrating the turbulent kinetic energy equation dictates that global buoyant driving equals global dissipation . to satisfy these global constraints ,the turbulent correlations relax into the appropriate profiles .if the same driving , redistribution , and dissipation mechanisms operate in a wide range of conditions , then these global constraints also imply self - similar convection profiles . for model 4 , we adopt characteristic profiles for the convective entropy luminosity and turbulent dissipation .the specific profiles are motivated by the evolution of large scale buoyant plumes and informed by numerical 2d ccsn simulations ( this paper ) and 3d stellar convection .then the scales of these profiles are constrained using the conservation laws . given these scaled profiles, we then calculate the remaining quantities of interest such as the kinetic energy flux and entropy profile .the latter is particularly important in exploring the conditions for successful ccsn explosions .our global model builds upon and generalizes the ideas put forth by . in the context of stellar evolution and no background flow , suggest that the primary role of the entropy ( enthalpy ) flux is to redistribute the entropy such that the entropy gradient is flat and the entropy generation rate is constant throughout . in other words , they found universal profiles for the entropy generation rate and entropy profiles .they then integrated the entropy and turbulent kinetic energy equations to provide integral constraints and solutions for the turbulent scales .similarly , the rate of entropy change in the ccsn context is uniform , in fact during the steady - state phase it is zero everywhere .however , the entropy gradient is nonzero , so the model will not suffice .2d simulations ( see [ section : comparemodels ] ) indicate that the global constraints and redistribution drive the convective entropy luminosity , , and turbulent dissipation to simple self - similar profiles . informed by 2d simulations of ccsne ( [ section : comparemodels ] , fig .[ mdotds_plot ] ) and 3d simulations of stellar convection , we model the convective entropy luminosity with a piecewise linear function , and the turbulent dissipation via where is the distance from the bottom of the convective region , is the total size of the convective region , and is defined by . 
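A minimal construction of the two model profiles just introduced is sketched below: a piecewise-linear convective entropy luminosity, taken here to vanish at the base of the convective zone and at the shock with a single interior peak, and a turbulent dissipation that is zero at the shock and grows linearly downward. The normalizations and the location of the peak are left as free parameters, to be fixed by the integral constraints discussed next; the profiles below are therefore shown in arbitrary units.

    import numpy as np

    def entropy_luminosity_profile(x, x_peak, L_peak):
        # Piecewise-linear convective entropy luminosity: zero at the base (x = 0) and at
        # the shock (x = 1), with a single peak of height L_peak at x = x_peak.
        rise = L_peak * x / x_peak
        fall = L_peak * (1.0 - x) / (1.0 - x_peak)
        return np.where(x <= x_peak, rise, fall)

    def dissipation_profile(x, eps_base):
        # Turbulent dissipation: zero at the shock (x = 1), increasing linearly downward
        # to eps_base at the base of the convective zone.
        return eps_base * (1.0 - x)

    x   = np.linspace(0.0, 1.0, 200)        # position through the convective zone (base -> shock)
    L_s = entropy_luminosity_profile(x, x_peak=0.6, L_peak=1.0)   # placeholder scales
    eps = dissipation_profile(x, eps_base=1.0)                    # placeholder scales
    print(L_s[::40])
    print(eps[::40])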
in these models , the peak of the profile is at , and goes to zero at and the shock .the turbulent dissipation is zero at the shock , and increases linearly downward with a scale of . given the many differences ( i.e dimensionality , accretion , etc . ) between 2d ccsn and 3d stellar convection simulations , the universality of the profiles is remarkable and is a testament to their self - similarity . taken together , this model for and has three parameters , , , and , which set the scale for convection .next , we use the algebraic expression for the entropy flux and global conservation laws to evaluate these scales .operating under the assumption that the growth of negatively buoyant plumes sets the scale , we evaluate the algebraic expression for the entropy flux , eq .( [ eq : entropyfluxalg ] ) , at the peak : we explore two techniques to constrain the position of the peak , . in the first , we assume that the peak of corresponds to where entrainment starts to dominate the evolution of negatively buoyant plumes and that this distance is proportional to the pressure scale height ( i.e. ) . comparing to the 2d simulation , we find that , 2.1 , and 2.0 at 404 ms , 518 ms , and 632 ms after bounce .empirically , with an accuracy of % . while this empirical approach provides a useful diagnostic of , it lacks a physical derivation . in the second technique ,we derive a physically motivated global constraint for .consider the approximate entropy flux and entropy variance equations , eqs .( [ eq : transporteqdriving2 ] & [ eq : transporteqdriving3 ] ) .they represent a set of conservation laws for and in which the source terms are determined by buoyant driving . combining these two equations and integrating over the whole convective region ,we find that . in effect, represents an important buoyant source term for turbulence , where is negative for the growth ( decay ) of negatively ( positively ) buoyant plumes and vice versa . because and at both boundaries , the integral of this buoyant source term must balance outtherefore , we adjust so that satisfies to set , the turbulent dissipation scale , we first rewrite the turbulent kinetic energy equation as an expression for the turbulent flux : where we have neglected the term due to background advection . integrating eq .( [ eq : model4kinetic ] ) over the entire convective region and noting that is zero at the boundaries leads to the second conservation law for the balance of total buoyant work and dissipation : together , eqs .( [ eq : model4alg ] , [ eq : model4constraint2 ] , & [ eq : model4constraint1 ] ) constrain the three scales of our global model .having set the scales of the turbulent profiles , eqs .( [ eq : model4ls ] ) & ( [ eq : model4dissipation ] ) , it is now possible to integrate eq .( [ eq : model4kinetic ] ) to find the turbulent kinetic flux and , most importantly , evaluate the entropy profile including the effects of turbulence . including the time rate of change ,the entropy equation is and integrating this equation over the volume from an arbitrary radius , up to the shock , gives where and . 
assuming a flat gradient , = 0 , and , where is a mass average , leads to the integral equation that use for stellar convection .for the ccsn problem , we assume steady state , and , resulting in if we momentarily neglect the buoyant term in the kinetic energy equation , eq .( [ eq : model4kinetic ] ) , then the profile for the dissipation implies , which is reminiscent of the entrainment hypothesis for the evolution of isolated buoyant plumes . in kolmogorovs hypothesis , dissipation is assumed to be dominated by mechanisms at the largest scale .therefore , eq . ( [ eq : model4dissipation ] ) implies that entrainment of negatively buoyant plumes is _ the _ mechanism at the largest scales that controls dissipation . in summary, we use self - similar profiles of and and global conservation laws in model 4 . assuming that the growth of negatively buoyant plumes sets the scale for , we evaluate the algebraic model , eq .( [ eq : entropyfluxalg ] ) , at the peak ( ) of . to determine the location of the peak, we set such that and satisfy the global constraint in eq .( [ eq : model4constraint2 ] ) .next , the dissipation scale , , is determined by satisfying the balance between total buoyant work and dissipation , eq .( [ eq : model4constraint1 ] ) .having used global constraints to set the parameters of and , we next evaluate the turbulent kinetic luminosity and entropy profiles using eqs .( [ eq : model4kinetic ] ) & ( [ eq : model4entropy ] ) .finally , we find by inverting our plume model for , eq .( [ eq : fkmodel ] ) .in this section , we critically compare the results of 2d simulations with the turbulence closure models presented in [ section : models ] . of all the turbulence models that we explore , model 4 , the global model ( [ section : model4 ] ) , consistently gives the correct scale , profile , and temporal evolution for the reynolds stress , , and entropy flux , . _dissipation. _ in fig .[ convectionintegral ] we present the time history of integrated buoyancy driving and turbulent dissipation .the integrated buoyancy work should balance the integrated turbulent dissipation in steady state . this can be shown by integrating the turbulent kinetic energy equation ( eq . [ [ eq : kinetic ] ] ) over the volume of the turbulent region , and assuming that the work done by turbulent pressure fluctuations and the reynolds stresses are small , and that the flux of kinetic energy is zero out of the turbulent region , all good approximations for the scenario under investigation .the volume integrated turbulent kinetic energy equation then reads or the overall balance between and presented in fig .[ convectionintegral ] shows that the adopted model expression for the dissipation ( eq . 
[ [ eq : dissipation ] ] ) leads to an overall consistency with the evolution equation for the turbulent kinetic energy(eq .[ [ eq : kinetic ] ] ) , and the simulation data .the global balance between buoyancy driving and turbulent kinetic energy dissipation tempts one to conclude that all buoyancy work is dissipated by turbulent dissipation .while this is true in the global sense that the net buoyancy work is balanced by turbulent kinetic energy dissipation , it is important to note that the total buoyancy work is an integral over the turbulent convection zone of , which is positive in the active driving region and negative in the bounding stabilizing layer .therefore , some of the work done by buoyancy in the active _ driving _ region ( ) is mitigated by the _ buoyancy breaking _ ( ) that takes place in the stabilizing layer .it is therefore only the net buoyancy work that is balanced by the total turbulent dissipation . during the earliest stages buoyancy breaking is significant and is about half the magnitude of dissipation .as convection builds in strength after 250 ms , the significance of buoyancy breaking steadily diminishes until it is about 1/10 of dissipation ._ turbulent fluxes . _ in fig .[ transportmodels ] we present profiles of the turbulent flux for three quantities , entropy flux , turbulent kinetic energy , and entropy variance .a comparison to the gradient diffusion approximation confirms earlier results that it is a poor model for transport in thermal convection . instead, we find relatively good agreement with the transport models proposed in [ section : transportmodels ] which are proportional to , where is the density of the transported quantity .the agreement between these models and the data confirm the advective nature of the transport ( i.e. ) . in this subsectionwe critically compare each of the turbulence models introduced in [ section : models ] to the simulation data .this comparison is summarized by figures [ compsimmodels_sp2]-[compsimmodels_fk ] .the first three show the radial profiles of , , and for the 2d simulation data and the turbulence models of [ section : model1]-[section : algebraic ] .all three figures are divided into three panels with each panel showing the 2d simulation profile ( solid line ) and the turbulence models at the three times of fig .[ convectionsasistills ] .the last two figures , figs . [ mdotds_plot ] & [ compsimmodels_fk ] , compare the entropy , entropy flux , and kinetic energy flux profiles of the 2d simulations with the profiles produced by our global model ( model 4 , [ section : model4 ] ) ._ model 1 comparison ( [ section : model1]). _ the zero entropy gradient ( model 1 , [ section : model1 ] ) model is shown as dot - dashed lines in figs .[ compsimmodels_sp2]-[compsimmodels_fs ] . though this model gives the correct order of magnitude for , it fails to reproduce the specific radial profile and temporal evolution .in fact , there appears to be no temporal evolution in this turbulence model , while the 2d results clearly grow with time .furthermore , the zero entropy gradient model ( model 1 ) gives a non - zero entropy flux at the shock .this non - zero entropy flux is clearly not a characteristic of the 2d simulations .[ section : model1 ] for a discussion of how this non - zero entropy flux corrupts the solutions for the steady - state accretion shock ._ model 2 comparison ( [ section : model2]). _ the reynolds stress and heat flux closure model is represented by a dotted - line in each plot . 
of all of the turbulence models presented in this paper , it produces the poorest reproduction of the turbulence profiles . to obtain these profiles we used a shooting method to integrate eqs .( [ eq : reynoldsstressmodel]-[eq : entropyvariancemodel ] ) subject to the background flow and boundary conditions . at presentit is not clear what the specific form for the boundary conditions should be , especially at the shock , so we used the values of , , and at a small distance from the shock .we then integrated eqs .( [ eq : reynoldsstressmodel]-[eq : entropyvariancemodel ] ) inward until at the inner boundary .we adjusted the guess for at the outer boundary so that both and are zero at the lower boundary . for several reasons , we strongly disfavor this model .the most obvious reason is the lack of consistency between the model and the 2d results .in addition , because these equations are stiff , they are quite sensitive to the boundary conditions and the assumptions for dissipation .we adjusted the distance where we sampled the boundary conditions just below shock and found the solutions to be extremely sensitive to this location .others have noted similar convergence problems with boundary conditions to these equations .finally , this model has many parameters that have been calibrated for the solution of isolated buoyant plumes , not for fully developed convection . _ model 3 comparison ( [ section : algebraic ] ) . _ in the region where convection is being actively driven , the algebraic model , eqs .( [ eq : kineticalg]-[eq : entropyvariancealg ] ) , produces reasonably accurate profiles and temporal evolution .of the three turbulent moments shown in figs .[ compsimmodels_sp2]-[compsimmodels_fs ] , and matter most in the background equations , eqs .( [ eq : mass]-[eq : entropy ] ) , and they show the best correlation with 2d simulations . on the other hand , while the algebraic model gives the correct scale for the entropy variance , , the algebraic model does not match exactly the 2d profiles .fortunately , the entropy variance does not directly influence the background equations and so in practice this discrepancy can be ignored .nonetheless , this failure should be a clue to what is missing .furthermore , we note that the profiles for and match the 2d results only in the heating region , where convection is actively driven . below the heating region ( km ) , we set the values to zero because it is not clear how to model this region with the algebraic model where positive buoyancy decelerates the convective plumes. finally , as we discuss in [ section : algebraic ] , the success of this model implies that core - collapse turbulence is best characterized by low entropy plumes that are initiated at the shock , the acceleration of these plumes through the heating region , and the deceleration of these plumes at the lower boundary by stabilizing gradients . _ model 4 comparison ( [ section : model4 ] ) . _ figures [ compsimmodels_rey]-[compsimmodels_fk ] show that the global model ( model 4 ) provides the most accurate turbulent correlations .the reynolds stress , , and enthalpy flux , , ( red solid lines in figs . 
[ compsimmodels_rey ] & [ compsimmodels_fs ] ) derived from model 4 have profiles that match the 2d simulation data in scale , shape , and temporal evolution .figure [ mdotds_plot ] compares the terms in the entropy equation , eq .( [ eq : model4entropy ] ) , of model 4 with 2d simulation data , and once again , this plot shows that model 4 accurately reproduces the 2d data .the solid blue line represents the change in entropy ( in units of ) due to neutrino heating and cooling alone .we find that convection fills the region where this integral is greater than zero .in the convective region , the neutrino heating and cooling curve accounts for only half of the total entropy change at 404 ms and only one third of the entropy change at 632 ms .heating by turbulent dissipation and redistribution by account for the rest .the total entropy change , , from model 4 ( red - dashed line ) is computed by summing the neutrino heating and cooling integral ( blue line ) , the modeled turbulent dissipation integral ( green - dashed line ) , and the modeled convective entropy luminosity ( black - dashed line ) .the modeled entropy difference is only slightly larger than the 2d simulation results ( red - solid line ) and reproduces the general radial profile and temporal evolution . in [ section : model4 ] , we argue that the global constraints of convection and similarity in driving , distribution , and dissipation mechanisms suggest self - similar profiles for . indeed , the correspondence between our modeled ( green dot - dashed line ) and the 2d data ( black dashed line ) confirms this assumption .moreover , this shape is simply modeled as a piecewise linear , pointed hat .the scale of is set by the entropy flux we derive from the algebraic model ( model 3 , [ section : algebraic ] ) at the position of the peak .since the algebraic model describes the growth of negatively buoyant plumes that originate at the shock , the scale of is in turn set by the growth of these negatively buoyant plumes .the position of the peak is determined such that the integral constraint , , eq .( [ eq : model4constraint2 ] ) , is satisfied . in fig .[ compsimmodels_fk ] , we compare the kinetic energy flux of model 4 ( dashed lines ) with the results of 2d simulations ( solid lines ) . qualitatively , the modeled fluxes exhibit the correct scales , radial profiles and temporal evolution .in this section , we suggest that the entropy equation , eq .( [ eq : entropy ] ) , holds the key to understanding the explosion conditions .furthermore , we use this equation to argue that of all the convective terms , the divergence of the convective entropy flux most affects the critical luminosity for successful explosions .it has been suggested that the critical luminosity condition is equivalent to either a ratio of timescales condition or to an ante - sonic condition . in either case , it is the entropy equation , eq .( [ eq : entropy ] ) , that leads to this result . for example , if we ignore the turbulence terms and integrate the entropy equation over the gain region , then we can derive a ratio of advection to heating timescales where is the change in the entropy between the shock ( ) and gain ( ) radii .numerical results have confirmed that explosion ensues when this ratio exceeds . in the simplest interpretation, this result suggests that explosion occurs roughly when exceeds a critical value , . including turbulent terms , the change in entropy as function of radius is where and recall that and are negative . 
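To make the two diagnostics concrete, the short sketch below evaluates the advection-to-heating timescale ratio over the gain region and the entropy change between the shock and gain radii, with and without the turbulent terms. Every radial profile, symbol and normalisation in it is an invented placeholder (the inline mathematics of this passage did not survive extraction), so only the structure of the calculation follows the text.

```python
import numpy as np

# Assumed gain-region geometry and background profiles; all numbers are placeholders.
r_gain, r_shock = 1.0e7, 1.5e7                      # cm
r = np.linspace(r_gain, r_shock, 400)
rho = 1.0e9 * (r / r_gain) ** -3.0                  # g cm^-3, assumed density profile
v_adv = 1.0e8 * (r / r_shock) ** 0.5                # cm s^-1, assumed |advection speed|
q_net = 2.0e21 * np.exp(-(r - r_gain) / 2.0e6)      # erg g^-1 s^-1, assumed net neutrino heating
e_bind = 1.0e19                                     # erg g^-1, assumed binding-energy scale
dV = 4.0 * np.pi * r ** 2                           # shell volume element per unit radius

# Advection-to-heating timescale ratio (explosion favoured roughly when it exceeds order unity).
tau_adv = np.trapz(1.0 / v_adv, r)
tau_heat = np.trapz(rho * dV, r) * e_bind / np.trapz(rho * q_net * dV, r)
print(f"tau_adv / tau_heat = {tau_adv / tau_heat:.2f}")

# Entropy change across the gain region with and without the turbulent terms
# (entropy-flux divergence and dissipation); shapes, signs and units are schematic.
T = 2.0e10                                                              # K, constant placeholder
div_Fs = -3.0e25 * np.sin(np.pi * (r - r_gain) / (r_shock - r_gain))    # erg cm^-3 s^-1
eps_turb = 1.0e25 * np.ones_like(r)                                     # erg cm^-3 s^-1

def delta_s(include_turbulence):
    """Schematic Delta s ~ integral of source terms / (rho * v * T) across the gain region."""
    source = rho * q_net
    if include_turbulence:
        source = source + eps_turb - div_Fs   # dissipation heats, the flux divergence redistributes
    return np.trapz(source / (rho * v_adv * T), r)

print(f"Delta s without turbulence: {delta_s(False):.3e}")
print(f"Delta s with turbulence:    {delta_s(True):.3e}")
```

In this schematic form the two turbulent contributions enter the integrand on the same footing as the neutrino heating, so their relative importance is fixed entirely by the assumed profile shapes.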
in our 2d simulations , we find that , of the two turbulent terms in eq .( [ eq : newtimecondition ] ) , the last term contributes the most entropy change .therefore , if the time condition is a relevant explosion condition , then the entropy flux is the turbulent correlation that most affects the critical luminosity .while the timescale condition has proven to be a useful diagnostic for explosions , have found a more precise explosion condition in that explosions occur when exceeds 0.2 , where and are the local sound speed and escape velocities squared .they call this new condition the ante - sonic condition and note that it varies by only a few percent when the neutrino luminosity is changed by two orders of magnitude . over the same range of neutrino luminosities , the timescale ratio at explosion varies from 0.7 to 1.1 .in other words , the timescale condition is of order 1 , but it varies by .6 . hence , while the timescale ratio is a useful diagnostic relating the most important physical processes , the ante - sonic condition is a more precise ( although more obscure ) condition for explosion . using the integrated form of the entropy equation , eq .( [ eq : newtimecondition ] ) , we show that the timescale diagnostic and ante - sonic condition are intimately related .the difference in the sound speed between the shock and an arbitrary radius , is where and are evaluated at , is evaluated at constant density , and is the specific heat at constant volume . the first term on the r. h. s. of eq .( [ eq : soundspeed ] ) gives the increase in sound speed due to adiabatic compression .the second term represents the change in the sound speed due to changes in entropy given by heating and cooling , and by convection , furthermore , since the second term is proportional to the change in entropy , it is also proportional to the ratio of timescales . through , it is apparent that the ante - sonic condition , , is directly related to the timescale diagnostic . in a forthcoming paper, we will provide a more thorough discussion of these conditions , how they relate to the critical luminosity for explosions , and how convection affects all three conditions .for now , these analytics reaffirm the supposition in [ section : validateequations ] that the convective entropy flux most affects the explosion conditions . to see if the ante - sonic condition is consistent with our 2d simulations , we plot as a function of radius and at four times in fig .[ ratiospeeds3times ] .the first three times correspond to the stages shown in fig .[ convectionsasistills ] and sample a range of convective strength from weakest at the earliest time to strongest at the latest time .the final time , 700 ms after bounce , corresponds to the time of explosion , which we define as the time when all measures of shock radii ( see the top panel in fig .[ convectiondata ] ) expand indefinitely . for comparison, we show of 1d models for these times .the 1d profiles show very little evolution .however , in 2d simulations , the maximum of is strongly correlated with the strength of convection and the shock radius . at explosion ,the peak of is .2 , which is consistent with the explosion condition proposed by .it has been argued that convection increases the dwell time in the gain region , which in turn reduces the critical luminosity . , on the other hand , propose that convection acts to rearrange the flow so that there is less cooling , and this reduced cooling is responsible for lower critical luminosities . 
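The ante-sonic diagnostic is equally simple to evaluate once radial profiles are in hand. The sketch below computes the ratio of the local sound speed squared to the escape speed squared and splits the change of the sound speed relative to its post-shock value into an adiabatic-compression part and a residual attributed to entropy changes; the gamma-law equation of state, the enclosed mass and all profiles are assumptions made only for illustration.

```python
import numpy as np

G, M = 6.674e-8, 1.4 * 1.989e33                   # cgs gravitational constant, assumed enclosed mass
r = np.linspace(5.0e6, 1.5e7, 300)                # cm, assumed radial grid up to the shock
gamma = 4.0 / 3.0                                 # assumed adiabatic index
rho = 1.0e10 * (r / r[0]) ** -3.0                 # g cm^-3, placeholder density profile
P = 1.0e29 * (r / r[0]) ** -4.0                   # erg cm^-3, placeholder pressure profile

cs2 = gamma * P / rho                             # local sound speed squared
vesc2 = 2.0 * G * M / r                           # local escape speed squared
print(f"max(cs^2 / vesc^2) = {(cs2 / vesc2).max():.2f}   (ante-sonic threshold ~0.2)")

# Split the change of cs^2 relative to the post-shock value into an adiabatic part
# (P ~ rho^gamma, so cs^2 ~ rho^(gamma-1)) and a residual attributed to entropy changes.
rho_sh, cs2_sh = rho[-1], cs2[-1]
dcs2_adiabatic = cs2_sh * ((rho / rho_sh) ** (gamma - 1.0) - 1.0)
dcs2_entropy = (cs2 - cs2_sh) - dcs2_adiabatic
```

The residual term is the piece proportional to the entropy change, which is what ties the ante-sonic condition back to the timescale diagnostic.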
the heating and cooling profiles in fig .[ heating3times ] and the entropy profiles in fig .[ entropy3times ] offer a way to investigate the merits of each proposal .unfortunately , interpretation is somewhat complicated by the fact that the average 2d cooling is less than 1d cooling for some radii and times but is higher for other radii and times .below km , 2d cooling is always more than 1d cooling by % . above this radius ,2d cooling is generally a few percent less than 1d cooling .however , at later times ( 518 and 632 ms ) and above km , 2d cooling is a few percent larger than 1d again. however , fig .[ entropy3times ] shows that the differences in the average cooling between 1d and 2d are small and do not greatly affect the entropy profile .when the convective terms are ignored , the 2d ( solid green ) and 1d ( solid red ) entropy profiles are quite similar .though there are small differences , the differences in average cooling profiles are dwarfed by the effects of including the convective terms in the entropy equation ( dot - dashed green curve ) .therefore , it is unlikely that changes in the average cooling between 1d and 2d lead to the reduction in the critical luminosity .rather , as we argue in [ section : conditions ] it is more likely that the divergence of the convective entropy flux is responsible for the extra entropy , higher sound speeds , and a reduction in the critical luminosity .recent simulations of ccsne indicate that turbulence reduces the critical neutrino luminosity for successful explosions .this suggests that a theory for successful explosions requires a theoretical framework for turbulence and its influence on the critical luminosity . in this paper , we develop a foundation for this framework , which is represented by the following results : we derive the exact steady - state equations for the background and turbulent flow .we identify the convective terms that most influence the conditions for successful explosions .we have shown that without turbulence , entropy profiles of 2d simulations would be nearly identical to 1d and that the convective terms entirely make up the difference .this further motivates the need to understand turbulence in the context of ccsne . to this end, we cull the literature for a broad sample of turbulence models , but after a quantitative comparison with 2d simulations , we find that none adequately reproduce the global turbulent profiles .these single - point models fail because they use local closure approximations , even though buoyantly driven turbulence is a global phenomenon . motivated by the necessity for an alternate approach, we propose an original model for turbulence which incorporates global properties of the flow .this global model has no free parameters ; instead the scale ( or parameters ) of convection are constrained by global conservation laws .furthermore , this model accurately reproduces the turbulence profiles and evolution of 2d ccsn simulations . 
using reynolds decomposition , we derive steady - state averaged equations for the background flow and turbulent correlations , eqs .( [ eq : reynoldsstress]-[eq : entropyvariance ] ) .these equations naturally incorporate effects that are important in the ccsn problem such as steady - state accretion , neutrino heating and cooling , non - zero entropy gradients , buoyant driving , turbulent transport , and dissipation .we validate these equations using 2d ccsn simulations .for example , we integrate the entropy equation with and without the convective terms ( see fig . [ entropy3times ] ) .if we neglect the turbulence terms , then we recover the 1d entropy profile .the difference between the 1d and 2d entropy profiles is entirely accounted for by the physics of turbulence .turbulence equations require closure models , but these closure models depend upon the macroscopic properties of the flow . to derive a closure model that is appropriate for ccsne, we compare a representative sample of closure models in the literature with 2d simulations .motivated by the failure of these models , we have developed an original closure model .while the models culled from the literature are single - point closure models and use local closure approximations , our model is distinguished by using global properties of the flow for closure .this global model is further distinguished by reproducing the scale , profile , and evolution of turbulence in 2d simulations .the single - point models use local turbulent correlations to derive closure relations for the higher order correlations .convection is inherently a global phenomenon , and so while it is possible to model the higher - order correlations with local approximations in some locations , these models can be factors off in other locations . given the stiff nature of the reynolds - averaged equations , these errors , even if modest , can lead to significantly flawed global solutions . rather than relying on these local models ,we integrate the turbulence equations and derive global constraints based on conservation laws .we propose that nonlocal turbulent transport relaxes the turbulent profiles to satisfy these global constraints .this relaxation combined with the similarity of buoyant driving , entrainment , and dissipation leads to self - similar profiles for the most important turbulent correlations . in model 4 , we construct a global model in which we define these self - similar profiles and use global conservation laws to determine their scales . locally , we use the differential form of the conservation equations to derive the remaining profiles .our model represents a new approach to turbulence modeling , so we elucidate the assumptions and features that distinguish it from previous models .single point closure models try to employ universal characteristics on the smallest scales to close the problem .we are approaching this from the other direction .the nonlocal nature of plume dominated convection leads us to assume universality on the largest scales and a minimum set of global profiles to close the problem .these two approaches are complimentary . assuming universality on the smallest scales lends itself to dynamic simulations , while the global approach lends itself very well to steady - state problems .the general strategy that we employ is to establish some general characteristic of turbulence and use global conservation laws to constrain the scale . 
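The closure strategy summarised here, prescribe a self-similar shape locally and fix its amplitude with a single global conservation law, can be illustrated in a few lines. In the sketch below the shape is the piecewise-linear pointed hat mentioned earlier and the constraint is the balance between net buoyancy work and total turbulent dissipation; which profile carries the hat and which integral sets which scale in the actual model is partly obscured by the lost equations, so this pairing is illustrative rather than a transcription of model 4.

```python
import numpy as np

# Assumed convective-zone geometry; the profiles and numbers below are placeholders.
r_low, r_peak, r_high = 1.0e7, 1.3e7, 1.8e7
r = np.linspace(r_low, r_high, 500)

def pointed_hat(r, r_lo, r_pk, r_hi):
    """Piecewise-linear, unit-amplitude 'pointed hat' profile."""
    up = (r - r_lo) / (r_pk - r_lo)
    down = (r_hi - r) / (r_hi - r_pk)
    return np.clip(np.minimum(up, down), 0.0, None)

shape = pointed_hat(r, r_low, r_peak, r_high)       # self-similar shape; amplitude still free

# Global constraint: net buoyancy work over the convective region balances total dissipation.
rho = 1.0e9 * (r / r_low) ** -3.0                                # g cm^-3, placeholder
W_b = 1.0e27 * np.sin(np.pi * (r - r_low) / (r_high - r_low))    # erg cm^-3 s^-1, placeholder
dV = 4.0 * np.pi * r ** 2
net_buoyancy_work = np.trapz(W_b * dV, r)

# Scale eps(r) = A * shape(r) so that the mass-weighted dissipation balances the buoyancy work.
A = net_buoyancy_work / np.trapz(rho * shape * dV, r)
eps = A * shape                                     # erg g^-1 s^-1, now fully determined
print(f"dissipation amplitude A = {A:.3e} erg g^-1 s^-1")
```

The same two-step pattern, a fixed shape plus one integral to set its scale, applies to the entropy-flux profile, whose amplitude the text fixes from the algebraic model at the position of its peak.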
fornow , we identify the apparent self - similar profiles as the general characteristic .in fact , these self - similar profiles are motivated by the generic properties of plume dominated flows and the results of 2d ccsn and 3d stellar convection simulations . in the future , we hope to identify a more fundamental characteristic and physical assumption that leads to these profiles . but until then , our global model is the only model that consistently gives the correct scale , profile , and temporal evolution for the convective kinetic energy flux , , and entropy flux , .the strongest validation of this model is fig .[ mdotds_plot ] , in which we reproduce the entire entropy profile of 2d simulations . in preparation to deriving the reduced critical luminosity , we identify the turbulent terms that most influence the conditions for explosion .three explosion conditions have been explored in the literature . proposed a critical neutrino luminosity for successful explosions and used 1d and 2d simulations to show that this condition indeed separates steady state accretion from dynamic explosions .alternatively , it has been suggested that explosions occur when the advection timescale through the gain region exceeds the heating timescale .more recently , suggest an ante - sonic condition in which explosions occur once exceeds 0.2 . in fig .[ ratiospeeds3times ] , we show that in 2d simulations , indeed reaches 0.2 at explosion .moreover , using our reynolds - averaged equations , we show that the timescale and the ante - sonic conditions are intimately related , and in both conditions , convection aides explosion because turbulence raises the entropy by a term proportional to . in summary , our global turbulence model contains no free parameters , is globally self - consistent , accurately reproduces the mean - field properties of 2d ccsn turbulence , and promises to explain the reduction in the critical luminosity . despite these successes ,closure approximations generically depend upon the properties of the macroscopic flow , making them case - dependent .hence , it is unclear to what extent this turbulence model can accurately describe 3d turbulence , especially in the presence of rapid rotation and/or magnetic fields .preliminary work hints that large rotation rates and magnetic field strengths could aid explosion , but it is uncertain how modest values would alter turbulence and its effects on explosion . under the most extreme rotation rates and/or magnetic fields ,the flow can be severely distorted from spherical symmetry ( e.g. jets , * ? ? ?* ) . in these conditions ,it is best to study the role of rotation and magnetic fields on turbulence using multi - dimensional simulations . on the other hand , for mild rotation and magnetic fields, the reynolds decomposition framework employed in this paper can be applied straightforwardly : mild rotational effects can be included by retaining off - diagonal reynolds stress terms ( e.g. , * ? ? ? * ) and applying reynolds decomposition to ideal mhd introduces terms associated with the fluctuations of magnetic fields such as maxwell stresses and ohmic heating ( e.g. , * ? ? ?* ; * ? ? ?* ) . 
however , these analyses are beyond the scope of this paper , so for now , we comment on the reliability of our global turbulence model for 3d ccsn turbulence .even though 2d and 3d turbulence are known to behave differently , the global turbulence model reproduces the turbulent characteristics of 2d ccsn _ and _ 3d stellar evolution simulations .we suspect , but have not proven , that this is a testament to the global nature of the turbulence model .though encouraging , there is no guarantee that the model will work so well for 3d ccsn simulations .therefore , a reliable turbulence closure model will require comparison with 3d simulations . in 2d simulations ,steady - state is a valid assumption .however , differences in the plume structure of 3d turbulence could lead to more efficient heating , which in turn could necessitate including time - dependent terms in the turbulence equations .the global nature of turbulence and the similarity of driving and dissipation should lead to self - similar profiles in both 2d and 3d turbulence .however , the exact profiles may differ .whether any of these differences will affect closure approximations is uncertain .only comparison with 3d simulations can clear up this matter .we thank jason nordhaus and ondrej pejcha for their comments on this manuscript .j.w.m . is supported by an nsf astronomy and astrophysics postdoctoral fellowship under award ast-0802315 .the work by meakin was carried out in part under the auspices of the national nuclear security administration of the u.s .department of energy at los alamos national laboratory and supported by contract no .de - ac52 - 06na25396 ., d. r. , faulkner , a. j. , lyne , a. g. , manchester , r. n. , kramer , m. , mclaughlin , m. a. , hobbs , g. , possenti , a. , stairs , i. h. , camilo , f. , burgay , m. , damico , n. , corongiu , a. , & crawford , f. 2006 , , 372 , 777
simulations of core - collapse supernovae ( ccsne ) result in successful explosions once the neutrino luminosity exceeds a critical curve , and recent simulations indicate that turbulence further enables explosion by reducing this critical neutrino luminosity . we propose a theoretical framework to derive this result and take the first steps by deriving the governing mean - field equations . using reynolds decomposition , we decompose flow variables into background and turbulent flows and derive self - consistent averaged equations for their evolution . as basic requirements for the ccsn problem , these equations naturally incorporate steady - state accretion , neutrino heating and cooling , non - zero entropy gradients , and turbulence terms associated with buoyant driving , redistribution , and dissipation . furthermore , analysis of two - dimensional ( 2d ) ccsn simulations validate these reynolds - averaged equations , and we show that the physics of turbulence entirely accounts for the differences between 1d and 2d ccsn simulations . as a prelude to deriving the reduction in the critical luminosity , we identify the turbulent terms that most influence the conditions for explosion . generically , turbulence equations require closure models , but these closure models depend upon the macroscopic properties of the flow . to derive a closure model that is appropriate for ccsne , we cull the literature for relevant closure models and compare each with 2d simulations . these models employ local closure approximations and fail to reproduce the global properties of neutrino - driven turbulence . motivated by the generic failure of these local models , we propose an original model for turbulence which incorporates global properties of the flow . this global model accurately reproduces the turbulence profiles and evolution of 2d ccsn simulations .
the statistical mechanics of polymers has become a very prolific research area . during the last decades, many models have been proposed to describe the configurational properties of linear and branched polymers and a variety of methods have been employed to study these systems .the classic model for a linear polymer in dilute solution ( or at high temperature ) is the self - avoiding walk ( saw ) .on the other hand , connected clusters ( lattice animals ) provided a good model for branched polymers in the dilute limit .the kinetic growth walk ( kgw ) was proposed as an alternative model to describe the irreversible growth of linear polymers [ 4 - 6 ] . like the saw, every kgw chain does not intercept itself .but whereas in a saw the next step is randomly chosen from among _ all _ nearest - neighbor sites ( excluding the previous one ) , in a kgw the choice is among the _ unvisited _ sites . therefore the kgw is less sensitive to attrition . besides , although both models span the same set of configurations , the n - step saw chains are _ all _ equally weighted while the statistical weights of the kgw chains can be differentnevertheless , it was shown that both models present the same critical exponents .later , lucena _ et al ._ generalized the kgw in order to allow for branching as well as for impurities . this generalized model ( which became known as the branched polymer growth model - bpgm )was found to exhibit an interesting phase transition ( due to competition between hindrances and branching ) separating infinite from finite growth regimes . in the following years, several authors have studied the bpgm [ 8 - 15 ] .the topological and dynamical aspects of the model were investigated . besides , the system was shown to achieve the criticality through a self - organization growth mechanism and to exhibit a transition from rough to faceted boundaries for large values of the branching probability .the model was also studied through an exact enumeration of bond trees and ergodicity violation was discussed .the question of the universality class of the bpgm was also investigated .in contrast to the common belief that branched polymers belong to the universality class of lattice animals , the study of the growth process in chemical with estimates of structural exponents led to the proposal that the bpgm belongs to the universality class of percolation . on the bethe lattice , this proposal is based on analytical results .a further analysis using finite - size scaling techniques led to the conclusion that the bpgm is _ not _ in the same universality class of percolation in two dimensions . in the present paper ,we revisit the bpgm on the square lattice .we point out that , although the input parameter is named branching probability , it does not have effective control of the relative incidence of bifurcations ( branches ) in the simulated polymers .this seems to be an undesirable aspect of the model since the degree of branching is an important quantity that is usually measured in real ramified polymers .besides , according to the growth rules of the bpgm , a monomer which is chosen to bifurcate ( with probability ) but has only one empty nearest neighbor site is compelled to grow linearly thus changing its functionality from to . in real experiments this change is not allowed since the concentrations of polymers with different functionalities are fixed . 
in order to preserve the functionalities of the monomers as well as to recover the meaning of the branching probability , we propose a subtle modification in the dynamics of the bpgm .so this new version of the model aims to _ adapt _ it to a more realistic scenario and is called the _ adapted branched polymer growth model _ ( abpgm ) . in the following sections , we define the abpgm andpresent the phase diagram in the space .this diagram is found to exhibit a peculiar reentrance .we introduce the concept of _ frustration _ and compare the _ effective branching rate _ with the input parameter for both the original and adapted models .our results concerning the abpgm are shown to be much closer to the ideal behavior .finally , we present a discussion based on the clusters topologies of both models .first of all , let us review the bpgm . consider a square lattice with a certain concentration of sites randomly filled by impurities .at the initial time , the polymer starts growing from a monomer seed located at the center of the lattice towards a random empty nearest - neighbor site . at time , this chosen site is occupied by a monomer ( growing tip ) which now _ may _ bifurcate or follow in one direction ( linear growth ) .the growth directions are randomly chosen among the available ones ( those which lead to empty neighbors ) in a clockwise way . at every time ( ), the process is repeated for all actual monomeric ends following the sequence of their appearances . if a polymer end has _ no _ empty nearest - neighbor site then it is trapped in a cul de sac and stops growing ( it is then called a _dead end _ ) .if only one adjacent site is available then the linear growth is obligatory. if at least two adjacent sites are empty then the growing end bifurcates with probability or follows linearly with probability . in a particular experiment of the bpgm, the polymer growth is simulated according to the above rules . depending on the values of parameters and , the polymer can either grow indefinitely or stop growing ( _ die _ ) at a finite time if all its current tips are dead ends .the experiment is finished either when the polymer touches the frontier of the lattice ( _ infinite _ polymer ) or when it dies ( _ finite _ polymer ) .the ensemble over which averages are performed is constituted by a great number of experiments .each polymer configuration of the bpgm can be identified with a self - avoiding loopless graph ( _ bond tree _ ) although the reciprocal is not always true .thus the bpgm can be mapped into a particular _ subset _ of the ensemble of all possible bond trees .these tree graphs have been applied in models of branched polymers and can be classified by the number of bonds and the number of vertices with bonds . in a typical finite polymer configuration of the bpgm , one may find twofold and threefold sites representing monomers of functionalities and , respectively .the monomer seed and dead ends represent monofunctional monomers .tetrafunctional units are impossible in this model once trifurcations are not allowed . 
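The growth rules just described translate almost line for line into code. The sketch below simulates the BPGM on an L x L square lattice; the parameter names (b for the branching probability, q for the impurity concentration) are ours, the seed is treated like any other tip, and the clockwise bookkeeping of growth directions is replaced by a uniform random choice among the empty neighbours, which leaves the branching rule itself untouched.

```python
import random

def bpgm(L=101, b=0.4, q=0.0, seed=None):
    """Grow one BPGM cluster; returns the lattice and whether growth reached the frontier."""
    rng = random.Random(seed)
    EMPTY, IMPURITY, MONOMER = 0, 1, 2
    lat = [[IMPURITY if rng.random() < q else EMPTY for _ in range(L)] for _ in range(L)]
    c = L // 2
    lat[c][c] = MONOMER
    tips = [(c, c)]                       # growing ends, kept in order of appearance
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    while tips:
        new_tips = []
        for (x, y) in tips:               # sequential update over the growth front
            free = [(x + dx, y + dy) for dx, dy in nbrs
                    if 0 <= x + dx < L and 0 <= y + dy < L and lat[x + dx][y + dy] == EMPTY]
            if not free:
                continue                  # trapped dead end
            # bifurcate with probability b only if at least two empty neighbours are available
            n_grow = 2 if (len(free) >= 2 and rng.random() < b) else 1
            for (nx, ny) in rng.sample(free, n_grow):
                lat[nx][ny] = MONOMER
                if nx in (0, L - 1) or ny in (0, L - 1):
                    return lat, True      # polymer touched the frontier: "infinite" growth
                new_tips.append((nx, ny))
        tips = new_tips
    return lat, False                     # all tips died: finite polymer

lattice, grew = bpgm(b=0.5, q=0.05, seed=1)
print("infinite growth" if grew else "finite polymer")
```

With b = 0 and q = 0 the routine reduces to a kinetic growth walk, and sweeping b at fixed q reproduces the competition between branching and hindrances behind the finite-infinite transition.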
for a tree graph generated by the bpgm one has the following constraints : and the occurrence of bifurcations in the modeldetermines the relative incidence of trifunctional monomers ( branches ) .this is basically the _ degree of branching _ of the polymer , an important quantity that is usually measured in real ramified polymers .however , although the parameter is named _ branching probability _ , we shall see that it does _ not _ have effective control of the relative amount of branches ( bifurcations ) in the bpgm .indeed , according to the rules of the bpgm , the parameter is not the probability that any tip bifurcates but instead it is the conditional probability that a tip _ with two or more empty nearest neighbors _ bifurcates .every tip with just one vacant nearest - neighbor site grows linearly with probability equal to one . from another viewpoint, it would also be desirable to consider that any monomer that is linking to the polymer has an effective probability of being a trifunctional unit .even if this were considered in the bpgm , a problem would become evident .any growing end chosen ( with probability ) as a trifunctional monomer would _ not _ be able to bifurcate if only one vacant site were available . in this case , the functionality of the monomer would not be respected since ( according to the bpgm ) the growing end would forcibly follow a linear growth like a bifunctional monomer . in order to preserve the functionalities of the monomerswe propose a modification in the dynamics of the bpgm .besides to make the model a little more realistic , our proposal turns the branching probability into an effective control parameter of the relative frequency of bifurcations .so it _ adapts _ the model to a scenario closer to reality and experimentation .we denominate it the _ adapted branched polymer growth model _ ( abpgm ) .the abpgm is defined just like the bpgm except for the following differences : 1 . in the process of polymerization , let a _( a growing tip with at least one empty nearest - neighbor site ) be a trifunctional monomer with probability or a bifunctional monomer with probability .if the free end is a trifunctional monomer but there is only one empty nearest neighbor site then it stops growing and becomes a dead end .2 . 
at every time unit, all current sites on the front of growth of the polymer are visited in a _random _ sequence .we remember that in the bpgm , a free end with _ just one _ empty nearest neighbor grows linear with probability one while in the abpgm , since bifurcation is impossible , the free end stops and becomes a _ frustrated _ dead end .this frustration seems to be preferable and more realistic than forcing a trifunctional monomer to turn into bifunctional monomer as it occurs in the bpgm .so , in our proposal , the relative incidence of bifunctional and trifunctional monomers does not depend on the topology of the cluster anymore and is only controlled by the branching probability parameter .the second difference is also important : in the bpgm all tips are _ sequentially _ visited in a clockwise manner following the sequence of births whereas in the abpgm they are visited in a _ random _ way .the sequential update of the growth front of the polymer ( in the bpgm ) simulates the formation of parallel chains in the infinite phase ( for high and low like in a crystallization process .this mechanism leads to a faceted - to - rough transition that has been already studied .figure 1a shows a typical bpgm configuration with faceted boundary generated with parameters and ( for ) ; the parallel ordering of chains is caused by the clockwise update of the growing tips . if we simulate the bpgm using the same set of parameters but with a random update instead of the clockwise update we get figure 1b . in this case , two subsequent growing tips are probably located far apart so that they can not produce parallel chains . the boundary is less faceted than before .recently it has been shown that any deterministic growth order is _ non - ergodic _ in the sense that it spans only a subset of the space of all possible configurations . moreover , for the present purposes , this clockwise update mechanism is undesirable since the development of parallel chains corresponds to an increasing incidence of bifunctional monomers due mainly to geometrical effects rather than to the probability itself .the relative incidence of bifurcations ( which will be defined as _ the effective branching rate _ in section 4 ) is much smaller than input parameter .indeed , we anticipate that for the cluster of figure 1a whereas for the one of figure 1b ; so the last rate ( corresponding to a bpgm cluster generated with random update ) is closer to the input value .the effective branching rate gets still closer to the input value if we simulate the abpgm .figure 1c is a typical abpgm configuration ( generated with those same parameters and ) and presents ; the faceted front of growth seems to disappear and vacancies can be found inside the cluster . with parameters and according to the bpgm rules with clockwise update of growing tips * ( a ) * , bpgm with random update * ( b ) * and abpgm * ( c)*. 
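Expressed as code, the modification is small. The sketch below rewrites one time step of the growth, reusing the conventions of the bpgm() sketch above: the functionality of a free end is drawn first and then respected, so a would-be trifunctional end with a single empty neighbour becomes a frustrated dead end, and the growth front is swept in random order.

```python
import random

def abpgm_step(lat, tips, b, L, rng):
    """One ABPGM time step; returns the new growth front and the number of frustrated ends."""
    EMPTY, MONOMER = 0, 2                             # same site codes as in bpgm()
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    new_tips, frustrated = [], 0
    for (x, y) in rng.sample(tips, len(tips)):        # random update of the growth front
        free = [(x + dx, y + dy) for dx, dy in nbrs
                if 0 <= x + dx < L and 0 <= y + dy < L and lat[x + dx][y + dy] == EMPTY]
        if not free:
            continue                                  # trapped dead end (cul de sac)
        trifunctional = rng.random() < b              # functionality is chosen first ...
        if trifunctional and len(free) < 2:
            frustrated += 1                           # ... and respected: frustrated dead end
            continue
        for (nx, ny) in rng.sample(free, 2 if trifunctional else 1):
            lat[nx][ny] = MONOMER
            new_tips.append((nx, ny))
    return new_tips, frustrated
```

The outer time loop and the frontier check are unchanged from the earlier sketch and are omitted here.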
the corresponding effective branching rates are , and , respectively.,width=188 ] the main feature of the bpgm is the phase transition separating finite from infinite growth regime ._ have defined a critical branching probability where the mean size of finite polymers diverges as .the critical value depends on the impurity concentration and separates the finite phase from the infinite one .the probability that a polymer grows indefinitely is null for small values of and increases abruptly in the region of .this probability was estimated through the _ fraction _ _ _ of infinite polymers _ _ in the ensemble of configurations of the bpgm . in figure 2a, we reproduce typical plots of versus at different values of ( with experiments and ) for the bpgm . in figure 2b, we present plots of versus at some values of corresponding to simulations of the abpgm with experiments and . for ,the behavior of is analogous to that of the bpgm but now the threshold is little bit higher ( for the abpgm while for the bpgm ) .this difference increases with .the most interesting characteristic of the present model may be observed when . for this value, the curve raises at , then presents a plateau ( where ) and finally falls ! by a finite - size scaling analysiswe have verified that the height of the plateau does not change significatively in the limit for , the behavior of is gaussian shaped .this means that , for certain impurity concentrations , as increases the system goes from a finite to an infinite phase and then becomes finite again ! indeed ,this reentrance is confirmed in the next section when we determine the phase diagram of the abpgm through an analysis of the correlation length . versus for the bpgm * ( a ) * and abpgm * ( b)*.,title="fig:",width=245 ] versus for the bpgm * ( a ) * and abpgm * ( b)*.,title="fig:",width=245 ]the mean size of finite polymers is a measure of the _ correlation length _ of the system .if a finite polymer is generated during a simulation , the sizes and of the smallest rectangle containing the cluster can be determined .the correlation length can then be calculated as where the average is performed over all experiments with finite polymers .we show typical plots of versus corresponding to simulations of the bpgm ( in figure 3a ) and abpgm ( in figure 3b ) for different values of and size ( with and experiments respectively ) .each plot of versus exhibits only one maximum except for plots of the abpgm with where two peaks are detected !it can be verified that all peaks of do diverge when . of finite polymers versus the branching probability for several impurity concentrations in the cases : * ( a ) * bpgm and * ( b ) * abpgm.,title="fig:",width=226 ] of finite polymers versus the branching probability for several impurity concentrations in the cases : * ( a ) * bpgm and * ( b ) * abpgm.,title="fig:",width=226 ] let us first explain the behavior of for the bpgm .for a fixed value of and as increases , the polymer is more likely to escape from steric hindrances and impurities so that the mean size grows to its highest value at some . above this point ,the system is defined to be in the infinite growth regime ( of course , the true critical point is obtained in the thermodynamic limit ) . as continues to increase , the fraction of infinite polymers grows and the finite polymers get smaller so that decreases . 
as higher impurity concentrations hindrancethe growth , increases with .the critical line of the bpgm is the locus on which diverges and is shown in figure 4 ( dashed line ) . regarding the abpgm, the same reasoning can explain the maximum of ( or the first maximum when there are two peaks ) .but now this peak is located on a higher branching probability that compensates the occurrence of frustrated dead ends . just above system enters the infinite growth regime where it remains unless a second peak appears ( at for ) . in the latter case the system returns to the finite phase ! this reentrance from infinite to finite growth regime is a peculiar feature of the abpgm . for this modified model, there is a certain range of values of where it is very probable that all free ends become frustrated ( and stop growing ) for a sufficiently large so that the polymer growth is finite again .the abpgm phase diagram is also shown in the figure 4 . the reentrant phase only exists for in the small interval $ ] . ) from the finite one ( at higher values of ).,width=226 ] of finite polymers with bonds for simulations of the abpgm ( with and ) at some points on the critical line.,width=226 ] we have also measured the fraction of finite polymers with bonds . in figure 5, we have a log - log plot of the polydispersion distribution of the abpgm on the critical line ( running experiments in a lattice ) .the three sets of data correspond to the critical points : and ( black dots ) ; and ( open circles ) ; and ( squares ) .the data are fitted by a straight line with slope .so , on the critical line , decays with as a power law .we have verified that outside the critical line , decays exponentially with ._ have found a similar behavior in the bpgm .the _ frustration _ is a desirable event which distinguishes the abpgm from the original model .it is defined as the _ interruption _ of the growth of any free end which was chosen ( with probability ) as a trifunctional monomer but is unable to bifurcate since only one nearest neighbor site is available .it prevents such a free end to continue its growth linearly like a bifunctional monomer ( as occurs in the bpgm ) and consequently controls the relative incidence of branches . versus for increasing values of in the abpgm ( and ).,width=226 ]clearly , each polymer configuration of the abpgm is also a bond tree which can be classified by the numbers ( of bonds ) and ( of units ) . besides , every _is a site which ceased to grow and so represents a monofunctional vertex ; it can be subclassified as either a _ trapped site _ ( if it is in a `` cul de sac '' ) or a _ frustrated dead end _( otherwise ) .we denote by the number of frustrated dead ends . for the bpgm , since any free end never stops . in order to measure the relative incidence of frustrated ends among those free sites _ chosen _ as trifunctional monomers , we define the _ frustration rate _ as where the average is performed over _ all _ experiments . for infinite polymers , the current sites on the front of growthshould not be considered in the computation since their functionalities are undetermined . according to the prior definition , for any values of and in the bpgm .we remark that the null frustration of the bpgm does _ not _ mean that all free sites _ chosen _ to bifurcate succeed but only that when they fail they are transformed in monomers with functionality two . 
for the abpgm , the frustration rate increases with and as it is shown in figure 6 .indeed , both the excluded volume due to self - avoidance ( which increases with ) and the impurities diminish the chance of success of any free end , so is a monotonically increasing function of and . versus the input parameter for simulations of the bpgm * ( a ) * and abpgm * ( b ) * on a large square lattice ( with and experiments respectively ) .the dashed lines represent the ideal behavior .,title="fig:",width=245 ] versus the input parameter for simulations of the bpgm * ( a ) * and abpgm * ( b ) * on a large square lattice ( with and experiments respectively ) .the dashed lines represent the ideal behavior .,title="fig:",width=245 ] the effectiveness of the input parameter is evaluated by comparing it with the relative frequency of branches in a polymer configuration . for this purpose, we define the _ effective branching rate _ as the ratio between and averaged over all experiments : the plots of versus at different values of for simulations of the bpgm and abpgm ( on a large square lattice ) are shown in figures 7a and 7b , respectively .the ideal behavior is indicated as the dashed lines . regarding the bpgm, there is evidently a large discrepancy between the input parameter and the output rate . for , increases with up to a maximum ( at ) and then decreases to zero as .this behavior is explained as follows : for small , self - avoidance is reduced so that most sites trying to bifurcate succeed ; but as increases , parallel linear chains ( like those of figure 1a ) are forcibly generated due to both the increasing excluded volume and the clockwise update . for higher values of ,the presence of impurities hampers the formation of linear chains and thus decreases slower ( as a _ concave _ function ) . anyway , for the bpgm , the discrepancy between and gets more pronounced as . on the contrary , for the abpgm , always increases with as a _ convex _function ( for any value of ) and the difference is very small .in fact , for any , the ratio is approximately equal to for small , decreases until about for intermediate and then returns to as .such results corroborate the assertion that the input parameter controls the relative incidence of branches in the adapted model . and fixed .,width=188 ] and fixed .,width=188 ] and fixed .,width=188 ]the _ branched polymer growth model _ ( bpgm ) was originally proposed as a generalization of the _ kinetic growth walk _ in order to include the possibility of ramification of the polymer as well as the presence of impurities in the medium .the model was found to exhibit a finite - infinite transition due to competition between branching and hindrances . in this paper , we have proposed an alteration in the dynamics of the bpgm so as to _ adapt _ the model to an experimental realism .we have called it the _ adapted branched polymer growth model _( abpgm ) . the main difference between our proposal and the original model regards to the growth mechanism of a monomer which is chosen to bifurcate ( with probability ) but has just one empty nearest neighbor site . in the bpgm, such a monomer is transformed into a bifunctional unit so that it grows linearly ( with probability one ) . in our adapted model ,that monomer stops and becomes a frustrated dead end .this frustration reveals as preferable and more realistic than changing the monomer functionality from to ( as it occurs in the bpgm ) . 
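The two rates compared in the figures reduce to simple counts over the grown clusters. The helpers below assume that a full implementation records the list of bonds of the final tree and accumulates, during growth, how many ends were chosen as trifunctional and how many of those were frustrated; that bookkeeping is omitted from the sketches above. We read the effective branching rate as the number of trifunctional vertices over the total number of monomers, which is our interpretation of the partly garbled definition.

```python
from collections import Counter

def effective_branching_rate(bonds):
    """bonds: iterable of (site_a, site_b) pairs making up the polymer tree."""
    deg = Counter()
    for a, b in bonds:
        deg[a] += 1
        deg[b] += 1
    n_sites = len(deg)
    n_tri = sum(1 for d in deg.values() if d == 3)   # trifunctional units (branch points)
    return n_tri / n_sites if n_sites else 0.0

def frustration_rate(n_frustrated, n_chosen_trifunctional):
    """Fraction of free ends chosen as trifunctional that were unable to bifurcate."""
    return n_frustrated / n_chosen_trifunctional if n_chosen_trifunctional else 0.0
```

Averaging these quantities over many independent experiments gives the curves shown in the figures.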
this subtle change in the algorithm together with a _random _ update of the growing ends lead to the formation of polymers with new topological patterns and adjusted degrees of branching . indeed , we have shown that the effective branching rate is very much closer from the input parameter in the abpgm than in the original model .we have found that the abpgm presents a finite - infinite transition in the space with a peculiar reentrant phase in the small interval . at this instance , we compare some typical graphs of both the bpgm and abpgm at the _ fixed _ impurity concentration for three increasing values of : , and ( figures 8 , 9 and 10 , respectively ) . for the bpgm , all the three corresponding configurations are _ infinite _ clusters whose boundaries change from rough to faceted as increases . on the other hand ,the typical abpgm graph for is a _ finite_ cluster ( due to the occurrence of _ both _ trapped sites and frustrated dead ends ) ; if the cluster is _ infinite _ ( since here higher branching overcomes dead ends ) and if the cluster is _ finite _ again ( due to a high frustration rate ) .
the branched polymer growth model ( bpgm ) has been employed to study the kinetic growth of ramified polymers in the presence of impurities . in this article , the bpgm is revisited on the square lattice and a subtle modification in its dynamics is proposed in order to _ adapt _ it to a scenario closer to reality and experimentation . this new version of the model is denominated the _ adapted branched polymer growth model _ ( abpgm ) . it is shown that the abpgm preserves the functionalities of the monomers and so recovers the branching probability as an input parameter which effectively controls the relative incidence of bifurcations . the critical locus separating infinite from finite growth regimes of the abpgm is obtained in the space ( where is the impurity concentration ) . unlike the original model , the phase diagram of the abpgm exhibits a peculiar reentrance . + + keywords : branched polymer , critical transition , reentrant phase
drug - induced liver injury ( dili ) is a major public health and industrial issue that has concerned clinicians for the past 50 years. reports that many drugs for a diverse range of diseases were either removed from the market or rejected at the pre - marketing stage because of severe dili ( e.g. , iproniazid , ticrynafen , benoxaprofen , bromfenac , troglitazone , nefazodone , etc.).therefore , signals of a drug s potential for dili and early detection can help to improve the evaluation of drugs and aid pharmaceutical companies in their decision making .however , in most clinical trials of hepatotoxic drugs , evidence of hepatotoxicity is very rare and although the pattern of injury can vary , there are no pathognomonic findings that make diagnosis of dili certain , even upon liver biopsy.indeed , most of the drugs withdrawn from the market for hepatotoxicity , fall mainly in the post - marketing category , and have caused death or transplantation at frequencies of less than 1 per 10000 people that have been administered the drug . although the mechanism that causes dili is not fully understood yet , the procedure under which its clinical assessment is performed stems from zimmerman s observation that hepatocellular injury sufficient to impair bilirubin excretion is a revealing indicator of dili ( zimmerman 1978 , 1999 ) , also informally known as hy s law . in other words , a finding of _ alanine aminotransferase _( alt ) elevation , usually substantial and greater than thrice the upper limit of the normal of alt ( uln ) , seen concurrently with _ bilirubin _ ( tbl )greater than twice the upper limit of the normal of tbl ( uln ) , identifies a drug likely to cause severe dili ( fatal or requiring transplant ) .moreover , these elevations should not be attributed to any other cause of injury , such as other drugs , and _ alkaline phosphatase _ ( alp ) should not be greatly elevated so as to explain tbl s elevation . identified the assessment of dili as a multivariate extreme value problem and using the modelling approach , analysed liver - related laboratory data .the use of the model in this context is supported by the flexibility of the model to allow a broad class of dependence structures and the possibility to describe the probabilistic behaviour of a random vector which is extreme in at least one margin . despite its strong modelling potential , complications in terms of parameter identifiability problems and invalid inferencesare experienced with the original modelling procedure of . 
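Before turning to the data, the clinical rule just quoted can be written as a simple numerical screen. In the sketch below the ULN values for ALT and TBL are the ones used later in the analysis (36 units/litre and 21 per litre in the study's units), whereas the ALP upper limit and its two-fold cut-off are assumptions; the requirement that the elevations are not explained by other causes is beyond any purely numerical check.

```python
import pandas as pd

ULN_ALT, ULN_TBL, ULN_ALP = 36.0, 21.0, 115.0   # the ALP value is an assumed placeholder

def flag_potential_hys_law(df, alt="alt", tbl="tbl", alp="alp"):
    """Boolean mask: ALT > 3 ULN and TBL > 2 ULN without a strongly elevated ALP."""
    return (df[alt] > 3 * ULN_ALT) & (df[tbl] > 2 * ULN_TBL) & (df[alp] < 2 * ULN_ALP)

# Toy example with made-up laboratory values (not trial data).
lab = pd.DataFrame({"alt": [40.0, 150.0, 130.0],
                    "tbl": [10.0, 50.0, 18.0],
                    "alp": [90.0, 100.0, 300.0]})
print(lab[flag_potential_hys_law(lab)])
```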
provided missing constraints for the parameter space of the model that are aimed to overcome these complications .the data we consider in this study relates to observed liver - related variables from a sample of 606 patients who were issued a drug that has been linked to liver injury in a phase 3 clinical trial and can be found in ; see also .the patients were categorised into 4 different dose levels in a randomised , parallel group , double blind phase 3 clinical study .our main question in this paper about the data is whether they support evidence of toxicity with increasing dose .this signal would be justified by a significant positive probability of post - baseline alt and tbl being greater than and , respectively .however , insufficient trial duration and the small sample sizes encountered in most such applications may lead to estimated zero probabilities of dili for all doses .this would stem from the non - occurrence of joint alt and tbl elevations or from inaccurate extrapolation due to the limited source of information .therefore , other patterns that could indicate or be triggered by dili would be helpful and here we consider an alternative approach for assessing evidence of altered liver behaviour .the current understanding of the biology that underpins hy s law , is that liver cells leak alt into the blood as they are damaged . as the amount of damage increases , the amount of alt increases , and so the liver begins to lose its capacity to clear tbl .subsequently , tbl is also expected to start to increase . at levels of damage that do not affect livers ability to clear tbl , dependence is not expected .hence , given that the drug has increasing toxicity with dose , we expect a natural ordering in the joint tail area of alt and tbl .this pattern of tail ordering is the main focus of this paper and is used to aid inference as well as to improve estimation efficiency in the modelling procedure of dili ..conditional spearman s correlation estimates between alt and tbl for four different dose levels and two conditioning levels , i.e. 100% and 20% .the letters , , and represent , in increasing order , the amount of the dose . 
[ cols= " > , > , > , > , > , > , > " , ]table [ tab : ratio_rmse ] shows the ratio of the monte carlo root mean square error , of the conditional quantile estimates obtained from the three copula models .an increase in efficiency under the imposition of the constrained models ad and so is observed for nearly all conditional quantile estimates in the asymptotically independent models .the highest reduction in rmse is achieved by the so model in the inverted logistic copula , a feature which is also consistent with the higher percentage of change in estimates as shown in table [ tab : perc ] .the conclusion for the asymptotically independent models is that the efficiency of the conditional quantile estimates is , in decreasing order , so , ad and ht .regarding the asymptotically dependent logistic copula , constrained models appear to be less efficient than the ht model and the efficiency of the conditional quantile estimates is , in decreasing order , ht , ad and so .the data that we consider in this study relates to a sample of 606 patients that were issued a drug linked to liver injury in a randomised , parallel group , double blind phase 3 clinical study .alt and tbl measurements were collected from all patients at baseline ( prior to treatment ) and post - baseline ( after 6 weeks of treatment ) periods .let and be the -th baseline and post - baseline laboratory variable respectively , measured at dose and .we use to denote the alt and tbl , respectively . instead of working with the raw data ,the transformation is applied initially to stabilise the heterogeneity observed in the samples . for this datasetwe apply the log - transformation and we denote the transformed data by and .consequently , we use a robust linear regression model of the log - post - baseline on the log - baseline variable to adjust for the baseline effect , i.e. in its simplest form , the robust linear regression of on is where and is a zero mean error random variable . herewe use median quantile regression which is equivalent to assuming that the error random variable follows the laplace distribution with zero location constant scale parameters .the parameter estimates , , and , , were found to be all significantly different from 0 and equal to , and , all indicating positive association of post - baseline with baseline .our approach is based on the basic model structure of , i.e. the extremal dependence of is estimated from the conditional dependence model whereas the log - baseline variables and are modelled independently for each dose . under the assumption of independence between and simulated samples of the post - baseline variables can be generated . in this example , the maximum spearman s correlation observed was 0.10 and corresponds to the pair and , whereas all other combinations gave values lower than 0.07 .the exact procedure of the simulation is straightforward , i.e. 
residual and baseline samples are generated from their models and are combined in equation ( [ eq : qreg ] ) , with and replaced by their corresponding maximum likelihood estimates , to produce simulated samples for the log - post - baseline variable .the simulated sample is then back - transformed to its original scale using the inverse box - cox transformation .the key differences between our modelling procedure and are related to the modelling of the baseline and the estimation of the conditional dependence model parameters.firstly , for each baseline variable we implement the univariate semi - parametric model of as described in section [ sec : laptrans ] by equation ( [ eq : coletawn ] ) whereas use the empirical distribution function .our motivation for modelling the tail of the baseline variable stems from the fact that it is likely to observe higher baseline alt and tbl in the population ( post - marketing period ) than in the clinical trial ( pre - marketing period ) . therefore , tail modelling of the baseline is key to the simulation process as it incorporates a natural source of extremity through model - based extrapolation .results from the univariate analysis are not presented in this paper but similar analyses can be found in and . in section [ sec : appht ] we test and subsequently select the stochastic ordering model developed in section [ sec : stochorder ] .the effect of the ordering constraints is illustrated via estimates of conditional quantiles for all doses and results are compared with the unconstrained estimates obtained from the ht model .we proceed to the prediction of the probability of extreme joint elevations by simulating post - baseline laboratory data of hypothetical populations of size using the fitted marginal and conditional dependence models .the assessment of the uncertainty of the estimates of extreme quantities of interest is performed via the bootstrap procedure .let and be the transformed , with respect to equation ( [ eq : laplace ] ) , residuals and for each dose .figure [ fig : data ] shows the bivariate scatterplots of against for all dose levels .the tail dependence between the residual alt and tbl variables appears to be very weak for all dose levels and a direct conclusion regarding the stochastic ordering effect can not be made on the basis of figure [ fig : data ] .this is also justified by the estimated and measures of tail dependence which are 0 for all doses . to assess the ordering assumption, we use the likelihood ratio criterion described in section [ sec : hypothesis ] , and test at the significance level of 5% , the hypotheses of ordered dose dependence in the conditional distributions of alt and tbl given that tbl and alt exceed a large threshold , respectively .for the so model we selected a range of values above 5 , the quantile of the laplace distribution .similar results where obtained from all thresholds and here we report the output for . figure [ fig : lrt ] shows the simulated distribution of the likelihood ratio test statistic under the null hypothesis of ordered dependence . 
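The simulation step described at the start of this passage, draw baselines, draw residuals from the fitted dependence model, combine them through the median regression and back-transform, is short enough to sketch, and the joint exceedance probability of interest then follows as an empirical frequency. The two samplers below are crude placeholders for the fitted semi-parametric baseline model and the constrained residual model, and the regression coefficients and all numerical values are invented; only the wiring follows the description in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
ULN_ALT, ULN_TBL = 36.0, 21.0
alpha = np.array([0.9, 0.4])        # assumed intercepts of the median regressions (log scale)
beta = np.array([0.75, 0.80])       # assumed slopes, one per variable (ALT, TBL)

def sample_baseline(n):             # placeholder for the semi-parametric marginal model
    return np.column_stack([rng.lognormal(np.log(25.0), 0.35, n),
                            rng.lognormal(np.log(8.0), 0.40, n)])

def sample_residuals(n):            # placeholder for the fitted residual dependence model
    return rng.multivariate_normal([0.0, 0.0], [[0.20, 0.03], [0.03, 0.15]], n)

def simulate_post_baseline(n):
    base = sample_baseline(n)
    log_post = alpha + beta * np.log(base) + sample_residuals(n)
    return np.exp(log_post)         # back-transform to the original scale

post = simulate_post_baseline(10_000)
p_joint = np.mean((post[:, 0] > 3 * ULN_ALT) & (post[:, 1] > 2 * ULN_TBL))
print(f"estimated P(ALT > 3 ULN, TBL > 2 ULN) = {p_joint:.4f}")
```

Repeating the whole pipeline over bootstrap refits of the models gives the equal-tail confidence intervals reported for the survival probabilities.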
both histograms imply that we can not reject the null hypothesis at 5% with stronger evidence for the distribution of tbl given large alt .the p - values are approximately 0.43 and 0.15 , respectively .the effect of constraining the parameter space to impose the stochastic ordering assumption between all dose levels is shown in figure [ fig : condquantiles ] via the conditional quantile estimates obtained from the so model .a weak lack of ordering appears from the estimated conditional quantiles of tbl given alt from the ht model as shown in figure [ fig : condquantiles ] in the standard laplace scale .the estimates of the median conditional quantiles from the ht model are ordered above approximately the conditioning level whereas the minimum and maximum conditional quantile estimates exhibit a lack of ordering for the majority of the conditioning levels .the imposition of the ordering constraints induces changes in all conditional quantile estimates which satisfy the ordering assumption above the conditioning level .the most important change in the quantile estimates is observed for dose which are considerably smaller than the ht estimates , when .the focus is placed now on the prediction of joint elevations of alt and tbl .as stated by and mentioned earlier in section [ sec : intro ] , dili is associated with alt and tbl exceeding the 3 and 2 respectively . for alt ,the uln is taken to be 36 units / litre and for tbl is 21 / litre .let be the joint survival probability of at dose level or , i.e. to estimate the survival probability ( [ eq : survprobs ] ) we follow the approach of , also mentioned earlier in section [ sec : introapp ] and simulate post - baseline samples .for each dose level , baseline samples are generated from the semi - parametric model ( [ eq : coletawn ] ) and are subsequently combined with generated residual samples from the so constrained model in equation ( [ eq : qreg ] ) , with and replaced by their corresponding maximum likelihood estimates , to produce simulated samples for the log - post - baseline variable .the simulated sample is then back - transformed to its original scale and the survival probability ( [ eq : survprobs ] ) is estimated empirically . to assess the uncertainty of the estimates , this procedure is repeated times and equal - tail confidence intervalsare obtained from the bootstrap distribution of each estimate .figure [ fig : predictions ] shows the estimated survival probabilities for and variable . for comparisons ,estimates are reported from the so and ht models .the imposition of the constraints induces changes in all estimates .in particular , the survival probability estimates from the so model are lower than the ht model for all doses , especially in the region .this behaviour also implies changes in the upper tail and in the joint region of dili , i.e. when and .as identified by , liver toxicity can be assessed by the joint extremes of alt and tbl . however , due to the limited sample size and the insufficient duration of the clinical trial ( 6 weeks only ) , extrapolation to the tail area that identifies dili is not feasible for the laboratory data that have been analysed in this paper . found some dose response relationship for the probability of joint extreme elevations but attributed this pattern to the large number of cases with in the higher dose groups rather than an effect on tbl or stronger extremal dependence . 
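the prediction and uncertainty steps described above can be summarised in a few lines of python ; the sketch below is only indicative , with the sampling functions and the coefficients `alpha` and `beta` standing in for the fitted semi - parametric baseline model , the so residual model and the median - regression fit , while the uln values of 36 and 21 and the multiples 3 and 2 are those quoted in the text .

```python
import numpy as np

ULN_ALT, ULN_TBL = 36.0, 21.0          # upper limits of normal quoted in the text

def joint_survival_prob(sample_log_baseline, sample_residual, alpha, beta, n=10_000):
    """empirical estimate of p( alt > 3*uln , tbl > 2*uln ) for one dose level.

    sample_log_baseline(n) -> (n, 2) array of simulated log-baseline (alt, tbl) values
    sample_residual(n)     -> (n, 2) array of residuals from the constrained dependence model
    alpha, beta            -> fitted median-regression intercepts and slopes (length-2 arrays)
    """
    x = sample_log_baseline(n)
    z = sample_residual(n)
    y = alpha + beta * x + z                     # log post-baseline via the regression structure
    alt, tbl = np.exp(y[:, 0]), np.exp(y[:, 1])  # back-transform (log case of the box-cox family)
    return float(np.mean((alt > 3.0 * ULN_ALT) & (tbl > 2.0 * ULN_TBL)))

def equal_tail_interval(bootstrap_estimates, level=0.95):
    """equal-tail confidence interval from the bootstrap distribution of the estimate."""
    a = (1.0 - level) / 2.0
    return tuple(np.quantile(bootstrap_estimates, [a, 1.0 - a]))
```

repeating the whole fit - and - simulate cycle on resampled data and passing the resulting estimates to `equal_tail_interval` gives the bootstrap intervals reported in figure [ fig : predictions ] .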
here , we have developed methodology for ordered tail dependence across doses , a pattern that is potentially triggered by toxicity but not formally assessed by . based on current biological understanding , we view this pattern as an alternative measure of altered liver behaviour and our aim in this analysis is to formally test ordered dependence in the joint tail area of baseline - adjusted alt and tbl . our model formulation builds on model and extends the conditional approach , to account for stochastic ordering in the tails for assessing dili in multiple dose trials . our approach consists of bounding conditional distribution functions through additional constraints on the parameter space of model . these constraints are used to construct likelihood ratio tests which allow model selection and potential efficiency gains in estimation as shown mainly by our simulations for asymptotically independent models . our main finding that complements analysis is statistical evidence of ordered tail dependence across doses which we view as a signal of altered liver behaviour . our results and conclusions predict slightly higher probabilities of extreme elevations than those predicted originally by but of the same order of magnitude . this is possibly a consequence of the modelling of baseline variables which allows extrapolation in the marginal tails but could also be attributed to the different robust regression approach used here to adjust the baseline effect . also , the predicted survival curves indicate ordering from both unconstrained and constrained modelling approaches . this feature stems primarily from the conditional dependence model estimates of baseline - adjusted alt and tbl which show ordering for a range of quantiles . last , there are some caveats with the proposed ordering effect used as a measure of altered liver behaviour , especially when considering highly toxic drugs for prolonged periods . if much damage has been done so that there is no alt left to leak into the blood , we would expect alt to come back down but tbl to remain high . the proposed methodology though could still be used to monitor such patterns in longitudinal trials via tests of dose ordering at consecutive time points . ioannis papastathopoulos acknowledges funding from astrazeneca and the sustain program - epsrc grant ep / d063485/1 - at the school of mathematics , university of bristol . we would particularly like to thank harry southworth of astrazeneca , two referees and the associate editor for helpful discussions and constructive comments on the analysis of the pharmaceutical data . keef , c. , papastathopoulos , i. and tawn , j. a. 2012 , ` estimation of the conditional distribution of a multivariate variable given that one of its components is large : additional constraints for the heffernan and tawn model ' , _ j. mult . anal . _ * 115 * , 396 - 404 .
|
drug - induced liver injury ( dili ) is a major public health issue and of serious concern for the pharmaceutical industry . early detection of signs of a drug s potential for dili is vital for pharmaceutical companies evaluation of new drugs . a combination of extreme values of liver specific variables indicate potential dili ( hy s law ) . we estimate the probability of joint extreme elevations of laboratory variables using the conditional approach to multivariate extremes which concerns the distribution of a random vector given an extreme component . we extend the current model to include the assumption of stochastically ordered survival curves and construct a hypothesis test for ordered tail dependence between doses , a pattern that is potentially triggered by dili . the model proposed is applied to safety data from a phase 3 clinical trial of a drug that has been linked with liver toxicity . * keywords : * conditional dependence ; drug toxicity ; liver injury ; multivariate extremes ; safety data ; stochastic ordering ;
|
the specific problem motivating this research was to measure the relative length of optical paths within an astrometric stellar interferometer to a high degree of precision .precision in the order of several nanometers allows stellar interferometers of about 100 m baseline to achieve angular precision of several microarcseconds .one potential science goal that can be pursued with such angular precision is the detection of exoplanets through narrow - angle astrometry .the basic principle of searching for exoplanets through narrow angle astrometry is to measure the position of a target star with respect to the position of a reference star on the celestial sphere to an angular precision of tens of microarcseconds . throughan optical long baseline stellar interferometer , light from a resolved pair of stars forms interference fringes at different optical delays and the difference in optical delay is proportional to the projected separation of the two stars on the celestial sphere . for these reasons , the relative position of one star with respect to the othercan be determined by measuring the difference in optical path where the stellar fringes are found .high precision measurement of an optical path length can be conducted by analyzing the interference fringes formed by light ( e.g. from a laser ) traversing the optical path being measured . the maximum optical length measurable by interferometry is limited to the coherence length of the light source because interference fringes are not visible beyond this length .therefore , lasers , which typically have a long coherence length , are usually the preferred light source .however due to this same reason and the periodicity of the interference fringes , measurement of an optical path length can be ambiguous .the fringe patterns formed by two optical path lengths that differ by exactly one laser wavelength are exactly the same .the measurement is unambiguous only if the optical path length to be measured is within one laser wavelength .this limited range of distance where the metrology can measure accurately is also known as the non - ambiguity range ( nar ) . a straightforward solution to measure distances larger than the nar of a single wavelength metrology is to have the optical path length be incremented from zero to the desired length in steps no larger than the nar and incremental measurementis carried out at each step . but due to practical requirements this solution is not always desirable .conventionally , instead of one wavelength , two laser wavelengths are used to obtain a long synthetic optical wavelength by means of heterodyne interferometry .however , instead of using the heterodyne detection technique which requires specialized optical elements and hence has higher cost , the dual - wavelength metrology described here and implemented at the sydney university stellar interferometer ( susi ) employs a simple homodyne fringe counting detection scheme together with a ( relatively ) less precise stepper motor open - loop position control system to extend the range of distance the metrology can accurately measure .the implementation and performance of this metrology , which was found to be easily suitable for our demanding narrow - angle astrometric application with an optical long baseline stellar interferometer , are described hereafter .the diagram in fig .[ fig : optics ] shows a simplified version of the narrow - angle astrometric beam combiner ( musca ) in susi . 
instead of depicting the entire susi facility for which schematic diagrams can be obtained from , the diagram shows only the optical path relevant to the metrology and the beam combiner which is a pupil - plane michelson interferometer . the light sources for the metrology are two he - ne lasers ; one emits at peak wavelength of m and the other at peak wavelength of m ( converted from wavelengths in standard ( 760 torr , 15 ) dry air ) . the quoted values are wavelengths in vacuum but the two lasers are operated in air . both laser beams are first spatially filtered by pinholes , then collimated and finally refocused into the interferometer . each refocused beam forms an image at a field lens in front of an avalanche photodiode ( apd ) . the optical path along the left arm of the interferometer ( as seen in fig . [ fig : optics ] ) is periodically modulated by a piezo - electrically actuated mirror ( scanning mirror ) to produce temporal fringes , which are then recorded by the pair of apds as a time series of photon counts . the scanning mirror modulates the optical path in 256 discrete steps in about 70 ms per scan period per scan direction . on the right arm of the interferometer , the length of the optical path can be changed by a movable delay line . it is made up of mirrors sitting on a linear translation stage ( zaber t - ls28 m ) which is stepper motor driven and has an open loop position control system . the built - in stepper motor converts rotary motion to linear motion via a leadscrew . the leadscrew based open loop position control system has a nominal accuracy of 15 m . it is important to note that , apart from the lasers and their injection optics , all components in fig . [ fig : optics ] were pre - existing and required for the science goal of the beam combiner . during astronomical observations , the same pair of apds are used to record both the stellar and the metrology fringes . the optical path of the metrology lasers is designed to trace the optical path of the starlight beams in the beam combiner , which propagates into the instrument from the top as seen in fig . [ fig : optics ] through a pair of dichroic filters . in this way , the optical path probed by the metrology is nearly identical to the optical path of the starlight and the small difference in optical delay ( due to the difference in wavelengths ) is invariant under the controlled atmospheric condition in the laboratory in which the optics are housed . our dual - wavelength metrology is designed to measure the change in optical path length of air brought about by a displacement of the delay line when it is moved from one position to another . the underlying principle of the metrology is to first measure phases of interference fringes of two lasers , operating at wavelengths whose ratio is theoretically not a rational number ( but practically a ratio of two large integers due to finite accuracy of the wavelengths ) , at two different delay line positions and then determine the number of fringe cycles that have evolved as a result of the displacement . the phase measurement is key in this classical two - wavelength approach to displacement metrology . the novel aspect of the metrology described here is the use of optical path modulation to measure the phases of the fringes of the two lasers simultaneously .
in an idealized case where the laser wavelengths are perfectly stable and the measurements of the phases are noiseless, one measurement at each delay line position would be enough to uniquely resolve the length of the optical path between the two positions .however , in the real world , due to uncertainties in the phase measurements and laser wavelengths , there are a series of plausible solutions for the optical path length .the span between these plausible solutions is the non - ambiguity range ( nar ) of the metrology and is elaborated in section [ sec : nar ] . in order to extend the range of distancethe metrology can measure an open loop stepper motor position control system is exploited to narrow down the plausible solutions to a single best fit , thereby yielding the displaced optical path length measurement at interferometric precision .the basic requirement for this two - prong approach is that the nar arising from fringe phase measurement must be larger than the uncertainty of the stepper motor positioning system .since the stepper motor positioning system can determine the position of the delay line unambiguously over a large distance range ( in the case of t - ls28 m , 28 mm ) , the delay line can be moved quickly ( / s ) from one position to another and fringe phase measurement does not have to be done on the fly but before and after a move . in order to explain the method in more detail , first , let the distance between a position of the delay line and an arbitrary reference position be expressed in terms of two laser wavenumbers ( and ) as follows , and are the refractive indices of air at the respective wavenumbers while and or and are the number of full ( integer ) and fractional wavelengths that fit within this distance .the subscript represents one position of the delay line and if two different positions are considered , then , from eq ., where represents the difference in optical path length of air between the two delay line positions while , , and .the phases of the laser fringes , and , and their difference , , can be obtained from the photon counts recorded by the apds . and , on the other hand ,can not be directly determined but can be inferred from the following equality , where and .therefore the main observables for the metrology at each delay line position are , , and .the values of and are determined through a model - fitting method based on eq . .first , a range of guess values are generated based on the optical path length estimated from the stepper motor positioning system , , and the nar to evaluate the lhs of eq . .next the result is compared with the rhs of eq . which is obtained from the phase measurement .theoretically , there is a unique set of and values that satisfy the equation because is an irrational number .however , due to uncertainty in the phase measurement this is not the case in practice . instead the set of and that minimizes the error between the rhs and lhs of the equation is the set of values to be used for distance determination in eq . .as previously described , the metrology measures the length of an optical path by calculating the number of laser wavelengths that can be fitted into it .however , since the ratio of the laser wavenumbers , , can be approximated by a ratio of two integers , e.g. 
or , the phases of the laser fringes will appear ( depending on the uncertainty of the phase measurement ) to realign after several wavelengths as suggested by the numerator and denominator of the fraction . this means that the phase differences between the laser fringes will repeat and become indistinguishable from the previous phase realignment if the optical path length is larger than the distance suggested by the wavelength range . therefore the metrology can only determine the accurate length of the optical path if it is within this range , which is the non - ambiguity range ( nar ) of the metrology . the parameters , and , can take any integer value but in order to determine the value of nar of the metrology , suppose and . then the nar is defined as , provided that the following inequalities are satisfied for all values of and , which is derived from the lhs of eq . when the phases of the lasers fringes are aligned , hence the rhs is zero . the notation [ ] denotes the nearest integer of the real number within the brackets and is the standard error of mean of . the photon counts recorded from the setup in fig . [ fig : optics ] are reduced with a program written in matlab / octave to determine optical path differences based on the model presented in the previous sections . for each set of laser fringes recorded at one position of the delay line , the phases of the fringes ( relative to the middle of the scan length ) , namely , , and , are extracted using a fast fourier transform ( fft ) routine . fig . [ fig : phi ] shows the laser fringes and the phases extracted from them . because the implementation is numerical and the fft routine expresses phases in the range of to , a minor tweak to the value of in eq . may be required to obtain an accurate value of . as a result , the term should be replaced with , where , the adjustment , , whose value is obtained from computer simulation , consists of two parts and is summarized below as , where and . the value of is given in table [ tab : met_deln ] if all the expressions in the first five columns in the table are satisfied , otherwise is zero . for example , according to the first row of table [ tab : met_deln ] , if , , , and , then . the differences of phases in eq . and table [ tab : met_deln ] are computed by first expressing the phases in the range of 0 to . the measurement of the phases of the lasers fringes is carried out before and after the delay line is moved for astronomical observation . the displacement of the delay line brings one of the two stellar fringe packets into the scan range of the scanning mirror . by measuring the displacement of the delay line and the position of the fringe packets , the optical delay between them , which is the main science observable of musca , can be measured . musca spends about 15 - 30 minutes , depending on seeing condition of the night sky , integrating on each fringe packet while the metrology takes about 2 - 3 minutes in total to measure phases of the lasers fringes . the time spent by the metrology includes moving the delay line from one position to another . this sequence of astronomical and metrology measurements is repeated at least 3 times for each science target . the precision of the measurement by the dual - wavelength metrology depends on several factors which will be elaborated individually in this section . the uncertainty of the phase measurement is the main source of error affecting the precision of the metrology . errors in measuring and determine the uncertainty in choosing the right value for and and the uncertainty of respectively .
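to make the reduction and ambiguity - resolution steps above concrete , a simplified python sketch is given below ; it is only a stand - in for the matlab / octave program mentioned earlier ( refractive - index factors and the adjustment of table [ tab : met_deln ] are omitted , a single displaced path replaces the difference of two positions , and all names are hypothetical ) . it reads a fringe phase from one scan and then searches , within the stepper - motor window , for the pair of integer fringe counts that best reconciles the two wavelengths .

```python
import numpy as np

def fringe_phase(counts, cycles_per_scan):
    """phase (rad) of the temporal fringe in one scan of photon counts,
    read from the fft bin corresponding to the known modulation frequency."""
    spectrum = np.fft.rfft(counts - np.mean(counts))
    return np.angle(spectrum[cycles_per_scan])

def resolve_opd(phi_r, phi_g, lam_r, lam_g, d_guess, nar):
    """choose the integer fringe counts that best reconcile both wavelengths,
    searching only within +/- nar of the stepper-motor estimate d_guess (lengths in metres)."""
    eps_r = (phi_r / (2 * np.pi)) % 1.0          # fractional fringe counts
    eps_g = (phi_g / (2 * np.pi)) % 1.0
    n_centre = int(round(d_guess / lam_r - eps_r))
    span = int(np.ceil(nar / lam_r)) + 1
    best_d, best_err = None, np.inf
    for n_r in range(n_centre - span, n_centre + span + 1):
        d_r = (n_r + eps_r) * lam_r              # candidate length seen by the red laser
        n_g = round(d_r / lam_g - eps_g)         # nearest integer count for the green laser
        err = abs(d_r - (n_g + eps_g) * lam_g)   # disagreement between the two representations
        if err < best_err:
            best_d, best_err = d_r, err
    return best_d
```

how reliably the correct integers are selected is governed by the uncertainty of the measured phases , whose error sources are discussed next .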
the physical processes contributing to this error are photon noise and internal laboratory seeing . at high photon count rates ( about 10 counts per second in the susi setup ) ,the uncertainty of the phase information obtained from a fft ( or more generally a discrete fourier transform ) routine is negligible ( i.e. in the order of radians ) .therefore , internal laboratory seeing is the dominant factor .[ fig : delphi ] shows the standard error of the mean of typical measurements of and .the errors decrease with increasing number of scans .if the uncertainty of is less than 0.002 radian ( with scans ) , then the nar of this metrology is estimated to be m ( ) .given an nar of m the difference between an initial guess optical path length , , and its true value must be less than the nar value. the initial guess value is obtained from the stepper motor positioning system .the characterization of the precision of the system is shown in fig .[ fig : char1 ] .the plot in the figure shows the difference between the position of the delay line indicated by the stepper motor positioning system and the position measured by the dual - laser metrology . the cyclical error as seen from the plot is typical for a leadscrew based linear translation stage .being able to reproduce such a cyclical pattern verifies the accuracy of the dual - laser metrology especially the accuracy of . instead of the specified 15 m accuracy fig .[ fig : char1 ] shows that the leadscrew has a precision of m which is still well within the nar requirement .this requirement is satisfied even though the optical path length change induced by the delay line is twice its actual physical change in position ( refer to fig .[ fig : optics ] ) . based on the longitudinal mode spacing specification of the laser ( 438mhz for the red and for the green laser ) and the theoretical full width half maximum ( fwhm ) of the gain profile at the laser wavelengths ( 1.8ghz for the red and 1.5ghz for the green laser ) , the relative uncertainty of the wavelength , and , of individual laser is better than . herethe notation means one standard deviation of the wavenumber variation .if , then , because the refractive indices are similar ( ) and approximately constant between the time when the laser fringes are recorded at the two delay line positions . in the case of susi , this condition is true because the fluctuations of ambient temperature in the laboratory are designed to be small within a typical duration of an astronomical observation .laser wavelength error of this magnitude is not significant when measuring short optical path length but can lead to substantial error in optical path length measurement if the optical path is long .this and the effect of using frequency - stabilized lasers will be discussed in section [ sec : uncertainty ] .other than being used as light sources for the metrology , the lasers are also used for optical alignment for musca and the rest of the optical setup at susi . in the case of muscathe alignment between the lasers and starlight beams is critical in order to minimize the non - common - path between the metrology and the science channel .several other optical elements in the full optical setup at susi which also play a role in assisting the alignment process ( e.g.retro-reflecting mirrors , lenses , a camera in susi s main beam combiner , etc . 
)are not included in the simplified version the setup in fig .[ fig : optics ] but can be referred from .the optics put the pupil and the image of the pinhole and a star on the same respective planes through the aperture of the mask . the lasers and starlight beams should ideally be coaxially aligned in order to minimize the non - common - path between them . however , in the actual optical setup there can be a maximum misalignment of 0.5 mm between the pinhole and the image of the star over a distance of about 2 m .this translates to a maximum of 0.3 milliradians of misalignment or of relative metrology error .in absolute terms , this error is negligible ( nm ) for short ( mm ) optical path length measurement .however , a more precise alignment is necessary for measurement of longer optical path .the uncertainty of the optical path length measurement , , can be derived from eq . .the precision of the stepper motor positioning system ( well within the nar ) ensured that the uncertainty of is always zero .the characterization of the delay line in fig .[ fig : char1 ] verified this in practice .therefore the uncertainty of the optical path length measurement , given below , depends only on the uncertainty of the phase measurements and the laser wavenumbers . in order to simplify the equation ,let . if the phase error , , is more than 1 milliradian , which is typical for this metrology setup then it can be shown that the contribution of the wavenumber error is negligible at short optical path ( mm ) .this value is similar to the separation of two fringe packets of two stars with a projected separation of about 0 in the sky observed with a 160 m baseline interferometer .the plot in fig .[ fig : delphi ] shows that a phase error , , of milliradians can be achieved with just 100 scans or more than 500 scans in poor internal ( laboratory ) seeing conditions . with such magnitude of phase error and according to eq . , the uncertainty of an optical path length measurement is in the order of 5 nm or less. the range of optical path where the contribution of towards the overall error is negligible can be extended if frequency - stabilized lasers are used .such lasers usually have wavelengths accurate up to m or or smaller in relative error and cost about 34 times the price of a regular he - ne laser . at that precision , is not dependent on the optical path length until about 1 m which is well beyond the required optical delay for narrow - angle astrometry .however , the extension of the optical path range may also incur other technical cost , which involves improving the precision of optical alignment , increasing the range and speed of the delay line .therefore , an upgrade to the metrology system described here should take all these factors into consideration .a novel , inexpensive dual - wavelength laser metrology system has been presented and demonstrated to deliver nanometer precision in an experimental implementation . the scheme also boasts the significant advantage of propagating the metrology lasers along an optical path which is identical to the science beam and recording both signals with the same detectors , thereby eliminating non - common - path errors . due to much pre - existing common hardware ,this scheme was particularly straightforward to implement within the context of our specific application ( i.e. an optical long baseline astrometric stellar interferometer ) .however , because it does not require additional specialized optics ( e.g. 
an acoustic - optoelectronic modulator ( aom ) ) or electronics ( e.g. a digital phasemeter ) and furthermore has relaxed requirements on the accuracy and stability of the laser wavelengths , it may appeal to similar application within the optics community , especially for stellar interferometry . this research was supported under the australian research council s discovery project funding scheme . y.k . was supported by the university of sydney international scholarship ( usydis ) .

table [ tab : met_deln ] ( column headings not recovered ; the first five columns are the conditions referred to in the text and the last column is the value of the adjustment ) :

f f f f t  -1
f f t f f  -1
f t f t t  -1
f t t t f  -1
t f f t f   1
t f t t t   1
t t f f f   1
t t t f t   1
|
a novel method capable of delivering relative optical path length metrology with nanometer precision is demonstrated . unlike conventional dual - wavelength metrology which employs heterodyne detection , the method developed in this work utilizes direct detection of interference fringes of two he - ne lasers as well as a less precise stepper motor open - loop position control system to perform its measurement . although the method may be applicable to a variety of circumstances , the specific application where this metrology is essential is in an astrometric optical long baseline stellar interferometer dedicated to precise measurement of stellar positions . in our example application of this metrology to a narrow - angle astrometric interferometer , measurement of nanometer precision could be achieved without frequency - stabilized lasers although the use of such lasers would extend the range of optical path length the metrology can accurately measure . implementation of the method requires very little additional optics or electronics , thus minimizing cost and effort of implementation . furthermore , the optical path traversed by the metrology lasers is identical with that of the starlight or science beams , even down to using the same photodetectors , thereby minimizing the non - common - path between metrology and science channels .
|
quantum entanglement is well known to be an essential resource for performing certain quantum information processing tasks such as quantum teleportation .it has also been shown to be essential for achieving an exponential speed - up over classical computation in the case of pure - state based quantum computation . however , in the case of mixed - state quantum computation , such as the model of knill and laflamme , such speed - up can be achieved without a substantial presence of entanglement .this fact has turned the attention to other types and measures of quantum correlations , like the quantum discord ( qd ) , which , while reducing to the entanglement entropy in bipartite pure states , can be non - zero in certain separable mixed states involving mixtures of non - commuting product states .it was in fact shown in that the circuit of does exhibit a non - negligible value of the qd between the control qubit and the remaining qubits . as a result ,interest on the qd and other alternative measures of quantum correlations for mixed states has grown considerably .the aim of this work is to embed measures of quantum correlations within a general formulation based on majorization concepts and the generalized information loss induced by a measurement with unknown result .this framework is able to provide general entropic measures of quantum correlations for mixed quantum states with properties similar to those of the qd , like vanishing just for states diagonal in a standard or conditional product basis ( i.e. , classical or partially classical states ) and reducing to the corresponding generalized entanglement entropy in the case of pure states . butas opposed to the qd and other related measures , which are based essentially on the von neumann entropy and rely on specific associated properties , the present measures are applicable with general entropic forms satisfying minimum requirements .for instance , they can be directly applied with the linear entropy which corresponds to the linear approximation in ( [ s ] ) and is directly related to the purity and the pure state concurrence , and whose evaluation in a general situation is easier than ( [ s ] ) as it does not require explicit knowledge of the eigenvalues of .we will show , however , that the same qualitative information can nonetheless be obtained .the positivity of the qd relies on the special concavity property of the conditional von neumann entropy , which prevents its direct extension to general entropic forms .the concepts of generalized entropies , generalized information loss by measurement and the ensuing entropic measures of quantum correlations based on minimum information loss due to local or joint local measurements are defined and discussed in [ ii ] .their explicit evaluation in three specific examples is provided in [ iii ] , where comparison with the corresponding entanglement monotones is also discussed .conclusions are finally drawn in [ iiii ] .given a density operator describing the state of a quantum system ( , ) , we define the generalized entropies where is a smooth strictly concave real function defined for ] and strictly decreasing in , such that and ) .we will further assume here , which ensures strict concavity .as in ( [ s])([sl ] ) , we will normalize entropies such that for a maximally mixed single qubit state ( ) . 
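since the explicit forms of the entropic functions are not reproduced above , the short python sketch below is only indicative : it evaluates a generalized entropy from the eigenvalues of a density matrix , and the three functions shown ( von neumann with base - 2 logarithm , linear , and tsallis ) are standard choices assumed here to be compatible with the single - qubit normalization just stated .

```python
import numpy as np

def entropy_f(rho, f):
    """generalized entropy s_f(rho): apply the concave function f to the eigenvalues and sum."""
    p = np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0)   # guard against tiny negative round-off
    return float(np.sum(f(p)))

# assumed forms, normalized so that a maximally mixed qubit gives 1
f_von_neumann = lambda p: -p * np.log2(np.where(p > 0.0, p, 1.0))
f_linear      = lambda p: 2.0 * p * (1.0 - p)

def f_tsallis(q):
    return lambda p: (p - p**q) / (1.0 - 2.0**(1.0 - q))

rho_max_mixed = np.eye(2) / 2.0
print(entropy_f(rho_max_mixed, f_von_neumann),
      entropy_f(rho_max_mixed, f_linear),
      entropy_f(rho_max_mixed, f_tsallis(2.0)))     # each prints 1.0
```

for a pure state each of these returns 0 , while the maximally mixed qubit gives 1 , as required by the normalization above .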
while our whole discussion can be directly extended to more general concave or schur - concave functions , we will concentrate here on the simple forms ( [ sf ] ) which already include many well known instances : the von neumann entropy ( [ s ] ) corresponds to , the linear entropy ( [ sl ] ) to , and the tsallis entropy to for the present normalization , which is concave for .it reduces to the linear entropy ( [ sl ] ) for and to the von neumann entropy ( [ s ] ) for .the rnyi entropy is just an increasing function of .the tsallis entropy has been recently employed to derive generalized monogamy inequalities .entropies of the general form ( [ sf ] ) were used to formulate a generalized entropic criterion for separability , on the basis of the majorization based disorder criterion , extending the standard entropic criterion . while additivity amongst the forms ( [ sf ] ) holds only in the von neumann case ( ) , strict concavity and the condition ensure that all entropies ( [ sf ] ) satisfy : i ) , with if and only if ( iff ) is a pure state ( ) , ii ) they are concave functions of ( if , ) and iii ) _ they increase with increasing mixedness _ : where indicates that is _ majorized _ by : here , denote the eigenvalues of and sorted in _ decreasing _ order ( , ) and the dimension of and ( if different , the smaller set of eigenvalues is to be completed with zeros ) .essentially indicates that the probabilities are more spread out than .the maximally mixed state satisfies of dimension , implying that all entropies attain their maximum at such state : of rank .( [ mf ] ) follows from concavity ( and the condition ) as for , iff is a mixture unitaries of ( , , ) , and .moreover , if at least one of the inequalities in ( [ m2 ] ) is strict ( ) , then , as is a strictly decreasing function of the partial sums ( if , ) . while the converse of eq .( [ mf ] ) does not hold in general ( ) , it does hold if valid for _ all _ of the present form ( an example of a smooth sufficient set was provided in ) : hence , although the rigorous concept of disorder implied by majorization ( ) can not be captured by any single choice of entropy , consideration of the general forms ( [ sf ] ) warrants complete correspondence through eq .( [ sfp ] ) .let us now consider a general projective measurement on the system , described by a set of orthogonal projectors ( , ) .the state of the system after this measurement , if the result is unknown , is given by which is just the `` diagonal '' of in a particular basis ( , with the eigenvectors of the blocks ) .it is well known that such diagonals are always more mixed than the original , i.e. , , and hence , for any of the present form , moreover , iff , i.e. , if is unchanged by such measurement ( if , strict concavity implies ) . a measurement with unknown result entails then no gain and most probably a loss of information according to any . the difference quantifies , according to the measure , this loss of information , i.e. , the information contained in the off - diagonal elements of in the basis .it then satisfies , with iff . in the case of the von neumann entropy ( [ s ] ) , eq .( [ dfm ] ) reduces to the _ relative _ entropy between and , since their diagonal elements in the basis coincide : [ is ] the relative entropy is well known to be non - negative , vanishing just if . 
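the information loss just defined is straightforward to evaluate numerically ; the sketch below ( with hypothetical names , the projectors supplied as a list of numpy arrays , and the `entropy_f` helper of the previous sketch repeated for completeness ) builds the post - measurement state , returns the corresponding loss , and includes a simple check of the majorization relation .

```python
import numpy as np

def entropy_f(rho, f):                                # as in the previous sketch
    p = np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0)
    return float(np.sum(f(p)))

def post_measurement_state(rho, projectors):
    """state after a projective measurement with unknown result: sum_k p_k rho p_k."""
    return sum(proj @ rho @ proj for proj in projectors)

def information_loss(rho, projectors, f):
    """i_f^m(rho) = s_f(rho') - s_f(rho); non-negative for any strictly concave f."""
    return entropy_f(post_measurement_state(rho, projectors), f) - entropy_f(rho, f)

def majorizes(q, p):
    """true if the probability vector q majorizes p, i.e. q is less mixed than p."""
    qs, ps = np.sort(q)[::-1], np.sort(p)[::-1]
    n = max(len(qs), len(ps))
    qs, ps = np.pad(qs, (0, n - len(qs))), np.pad(ps, (0, n - len(ps)))
    return bool(np.all(np.cumsum(qs) >= np.cumsum(ps) - 1e-12))
```

as stated above , the eigenvalues of the post - measurement state are always majorized by those of the original state , so the returned loss is non - negative for any of the concave functions of the previous sketch .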
in the case of the linear entropy ( [ sl ] ) , eq .( [ dfm ] ) becomes instead [ i2 ] where is the hilbert - schmidt or frobenius norm .hence , is just the square of the norm of the off - diagonal elements in the measured basis , being again verified that only if .let us remark , however , that the general positivity of ( [ dfm ] ) arises just from the majorization and the strict concavity of , the specific properties of the measures ( [ dsb])([d2b ] ) being not invoked .in fact , if the off - diagonal elements of in the measured basis are sufficiently small , a standard perturbative expansion of ( [ dfm ] ) shows that .the fraction in ( [ quad ] ) is positive due to the concavity of ( if , it should be replaced by ) .( [ quad ] ) is just the square of a _ weighted _ quadratic norm of the off - diagonal elements . in the case ( [ sl ] ) , eq .( [ quad ] ) reduces of course to eq .( [ d2b ] ) . for generalized measurements leading to eq .( [ sm ] ) and the positivity of ( [ dfm ] ) remain valid if _ both _ conditions i ) and ii ) are fulfilled : if and denote the eigenvectors of and , we then have and hence , i.e. , . while i ) ensures trace conservation , ii ) warrants that the eigenvalues of are convex combinations of those of .if not valid , eq . ( [ sm ] ) no longer holds in general , as already seen in trivial single qubit examples ( , will change any state into the pure state , yet fulfilling i ) ) . for projective measurements , .let us now consider a bipartite system whose state is specified by a density matrix .suppose that a complete _ local _ measurement in system is performed , defined by one dimensional local projectors .the state after this measurement ( eq.([rhop ] ) with ) becomes where ] the reduced state of after such outcome .the quantity will quantify the ensuing loss of information .we can now define the minimum of eq .( [ dfm ] ) amongst all such measurements , which will depend just on : eq .( [ sm ] ) implies , with iff there is a complete local measurement in which leaves unchanged , i.e. , if is already of the form ( [ rhopab ] ) .these states are in general diagonal in a _conditional _ product basis , where is the set of eigenvectors of , and can be considered as _ partially _ classical , as there is a local measurement in ( but not necessarily in ) which leaves them unchanged .they are the same states for which the qd vanishes .( [ dff ] ) can then be considered a measure of the deviation of from such states , i.e. , of quantum correlations .one may similarly define as the minimum information loss due to a local measurement in system , which may differ from . the states ( [ rhopab ] ) are _ separable _ , i.e. , convex superpositions of product states ( , ) . nonetheless , for a general the different terms may not commute , in contrast with ( [ rhopab ] ). ( [ dff ] ) will be positive not only in entangled ( i.e. , unseparable ) states , but also in all separable states not of the form ( [ rhopab ] ) , detecting those quantum correlations emerging from the mixture of non - commuting product states .( [ rhopab ] ) and concavity imply the basic bound .in addition , we also have the less trivial lower bounds [ ineq1] where are the local reduced states .the r.h.s . 
in ( [ ineq1 ] ) is negative or zero in any separable state , but can be positive in an entangled state .+ proof : _ any _ separable state is more disordered globally than locally , as in a classical system : , , or equivalently , , ) .for the state ( [ rhopab ] ) this implies [ ineq ] since , while is just the diagonal of the actual in the basis determined by the local projectors and hence .( [ ineq ] ) lead then to eqs.([ineq1 ] ) . the same inequalities ( [ ineq1 ] )hold of course for .one may be tempted to choose as the optimal local measurement which minimizes eq .( [ dfm ] ) that based on the eigenvectors of the reduced state , in which case it will remain unchanged after measurement ( ) .although this choice is optimal in the case of pure states ( see [ iib ] ) and other relevant situations ( see [ iii ] ) , it may not be so for a general .for instance , even if local states are maximally mixed , the optimal local measurement may not be arbitrary ( see example 3 in [ iii ] ) .in such a case a minor perturbation can orientate the local eigenstates along any preferred direction , different from that where the lost information is minimum .if is pure ( ) , then i.e. , eq .( [ dff ] ) reduces to the generalized entropy of the subsystem ( _ generalized entanglement entropy _ ) , quantifying the entanglement between and according to the measure . in the von neumann case ( [ s ] ) , eq .( [ xx ] ) becomes the standard entanglement entropy , whereas in the case of the linear entropy ( [ sl ] ) , eq .( [ xx ] ) becomes the square of the _ pure state concurrence _( i.e. , the tangle ) , .+ proof : for a pure state , and both , have the same non - zero eigenvalues .( [ ineq1 ] ) then imply .there is also a local measurement which saturates eqs .( [ ineq1 ] ) : it is that determined by the schmidt decomposition where is the schmidt number and the non - zero eigenvalues of or .choosing the local projectors in ( [ rhopab ] ) as , we then obtain which leads to local states , and hence to implying eq .( [ xx ] ) .for pure states , entanglement can then be considered as _ the minimum information loss due to a local measurement , _ according to _ any _ . just to verify eq .( [ xx ] ) , we note that for an arbitrary local measurement defined by projectors , we may rewrite eq.([sd ] ) as where and , such that in ( [ rhopab ] ) . hence , by concavity , i.e. , . thus , for pure states , a local measurement in the basis where is diagonal ( local schmidt basis ) provides the minimum of eq.([dfm ] ) . for a maximallyentangled state leading to a maximally mixed ( ) eq .( [ dfm ] ) becomes obviously independent of the choice of local basis ( any choice in leads to a corresponding basis in , leaving ( [ sd ] ) unchanged ) .a pure state can be said to be _ absolutely _ more entangled than another pure state if , i.e. , if ( ) .this concept has a clear deep implication : according to the theorem of nielsen , a pure state can be obtained from by local operations and classical communication ( locc ) only if , i.e. 
, iff is absolutely more entangled than . this condition can not be ensured by a single choice of entropy , requiring the present general measures for an entropic formulation ( the exception being two - qubit or systems , where any is a decreasing function of the largest eigenvalue of and hence iff ) . the _ convex roof extension _ of the generalized entanglement entropy ( [ xx ] ) of pure states will lead to an entanglement measure for mixed states , where , are pure states and is the generalized entanglement entropy of . minimization is over all representations of as convex combinations of pure states . ( [ efx ] ) is a non - negative quantity which clearly vanishes iff is separable . it is also an _ entanglement monotone _ ( i.e. , it can not increase by locc ) since is a concave function of invariant under local unitaries , satisfying then the conditions of ref . . in the case of the von neumann entropy , eq . ( [ efx ] ) becomes the entanglement of formation ( eof ) , while in the case of the linear entropy , it leads to the mixed state . the general mixed state concurrence ( denoted there as -concurrence ) is recovered for ( in two qubit systems , but not necessarily in general ) . while implies ( as ( [ rhopab ] ) is separable ) the converse is not true since can be non - zero in separable states . nonetheless , and despite coinciding for pure states , there is no general order relation between these two quantities for a general . we now consider the information loss due to a measurement based on _ products _ of one dimensional local projectors , such that is the diagonal of in a _ standard _ product basis : where . such measurement can be considered as a subsequent local measurement in after a measurement in ( if the results are of course unknown ) , implying , where is the measurement in . the ensuing minimum will then satisfy in general with if and only if is of the form ( [ rc ] ) . the state ( [ rc ] ) represents a _ classically correlated state _ . for such states there is a local measurement in as well as in which leaves the state unchanged , being equivalent in this product basis to a classical system described by a joint probability distribution . ( [ rc ] ) is then a measure of all quantum - like correlations . the states ( [ rc ] ) are of course a particular case of ( [ rhopab ] ) , i.e. , that where all are mutually commuting . product states are in turn a particular case of ( [ rc ] ) ( ) and correspond to independent of in ( [ rhopab ] ) . in the case of pure states we obtain , however , since the state ( [ rap ] ) is already of the form ( [ rc ] ) , being left unchanged by a measurement based on the schmidt basis projectors . pure state entanglement can then be also seen as _ the minimum information loss due to a joint local measurement _ . for an arbitrary product measurement on a pure state , the expansion with , leads to in ( [ rc ] ) . eqs . ( [ rc ] ) and ( [ xxx ] ) then imply .
since , eqs . ( [ psiab1 ] ) , ( [ xxx ] ) and ( [ psiab2 ] ) lead to . the first relation is apparent as is just the marginal of the joint distribution . the state ( [ rap ] ) can then be rigorously regarded as _ the closest classical state _ to the pure state , since it provides the lowest information loss among _ all _ local or joint local measurements for _ any _ . pure states have therefore an associated _ least mixed classical state _ , such that the state obtained after any local measurement is always majorized by it . let us finally mention that it is also feasible to consider more general product measurements based on conditional product projectors , leading to a diagonal in a conditional product basis , where . the ensuing information loss will satisfy again , as ( [ cond ] ) can still be considered as the diagonal of ( [ rhopab ] ) in a conditional product basis , where the are not necessarily the eigenvectors of . however , if chosen as the latter , we have and hence , as ( [ rhopab ] ) remains unchanged under a measurement in the optimum conditional product basis formed by the eigenvectors of the times the states . if is chosen as the von neumann entropy ( [ s ] ) , eq . ( [ dfm ] ) becomes ( see eq . ( [ dsb ] ) ) the ensuing minimum is also the _ minimum _ relative entropy between and any state diagonal in a standard or conditional product basis : where denotes a state of the general form ( [ rhopab ] ) with _ both _ the local projectors as well as the probabilities and states being arbitrary . proof : for a given choice of conditional product basis , the minimum relative entropy is obtained when has the same diagonal elements as in that basis ( as is minimized for ) . hence , , where denotes here the post - measurement state ( [ cond ] ) in that basis . the same property holds for if is restricted to states diagonal in a standard product basis : where is here of the form ( [ rc ] ) with arbitrary . ( [ iabr ] ) is precisely the bipartite version of the quantity introduced in as a measure of quantum correlations for composite systems . the quantity ( [ ib ] ) is also closely related to the _ quantum discord _ , which can be written in the present notation as , with the quantity ( [ dmb ] ) equal to $ i^{m_b}(\rho_{ab } ) - i^{m_b}(\rho_b ) $ , where is the measured state ( [ rhopab ] ) and , the reduced states after and before the measurement . thus , . they will coincide when the optimal local measurement is the same for both ( [ dfm ] ) and ( [ dmb ] ) and corresponds to the basis where is diagonal , such that ( ) . this coincidence takes place , for instance , whenever is maximally mixed ( as in this case for any choice of local basis ) . both and also vanish for the same type of states ( i.e. , those of the form ( [ rhopab ] ) ) and both reduce to the standard entanglement entropy for pure states ( although eq . ( [ dff ] ) requires a measurement in the local schmidt basis whereas ( [ dmb ] ) becomes independent of the choice of local basis , as is pure and hence for any local measurement ) . a direct generalization of ( [ dmb ] ) to a general entropy is no longer positive for a general concave , since the positivity of ( [ dmb ] ) relies on the concavity of the _ conditional _ von neumann entropy , which does not hold for a general . minimum distances between and classical states of the form ( [ rc ] ) were also considered in , where the attention was focused on the decrease of the mutual information after a measurement in the product basis formed by the eigenstates of and .
such quantity coincides with present for this choice of basis as and remain unchanged .nonetheless , for a general the minimum ( [ iabr ] ) may be attained at a different basis . if is chosen as the linear entropy ( [ sl ] ) , eq .( [ dfm ] ) becomes ( see eq . ( [ d2b ] ) ) where is just the squared norm of the off - diagonal elements lost after the local measurement .it therefore provides the simplest measure of the information loss .its minimum is the _ minimum _ squared hilbert - schmidt distance between and _ any _ state diagonal in a general product basis : where the last minimization is again over all states of the form ( [ rhopab ] ) , with , and arbitrary .+ proof : for a general product basis , latexmath:[ ] and , the minimum ] .proof : after a local measurement in the basis , the joint state becomes which is diagonal in the schmidt basis . for any other complete local measurement , will be diagonal in a basis , where we set ( eq .( [ psiab1 ] ) ) , with diagonal elements .the latter are always majorized by ( ) since ( eq .( [ may ] ) ) and .hence , is minimum for a measurement in the basis , which leads to eq.([dfx ] ) .moreover , since ( [ rhopx ] ) is diagonal in a standard product basis .( [ rhopx ] ) is again the _closest _ classical state to ( [ rx ] ) , majorizing _ any _ other state obtained after a local or product measurement . to verify the monotonicity , we note that \nonumber\\ & \geq&({\textstyle\frac{n'_s-1}{n}}+\!\!\sum_{p_k<1/n}p_k ) [ f'(\lambda_2^x)-f'(\lambda_1^x)]\geq 0\ , , \end{aligned}\ ] ] since and hence , where and is the number of schmidt probabilities not less than .( [ dfx ] ) is then strictly increasing if is strictly concave and , implying only if or .a series expansion of ( [ dfx ] ) around shows that in agreement with eq .( [ quad ] ) , indicating _ a universal quadratic increase _ of for small ( ) . for the quadratic measure ( [ dlm ] )we obtain in fact a simple quadratic dependence ] , iff is _ absolutely _ more entangled than ( ) .this is apparent as whereas iff ( eq . ( [ rhopx ] ) ) , in which case .let us now explicitly consider the mixture ( [ rx ] ) in the two - qubit case , where can be always written as with and ] at fixed . in particular , for , eq . ( [ dx ] ) becomes we may compare ( [ d2x ] ) with the corresponding entanglement monotone ( [ efx ] ) ( the tangle ) , which coincides here with the squared concurrence of . for a general two - qubit mixed statethe concurrence can be calculated as ] ) , i.e. , , in agreement with the numerical results of . denoting the ensuing quantity as ,we then obtain , for the present normalization , as for any single qubit state , . the inequality ] .in contrast , the von neumann measure is _ smaller _ than the eof ( ) ( fig .[ f2 ] ) . 
for small have in particular . again , coincides here with the qd as is maximally mixed . let us finally remark that eqs . ( [ e23 ] ) and ( [ ifz ] ) also imply . it can then be seen that for , ( although the difference is small ) whereas for or ( within the limits allowed by the validity of ( [ eab ] ) ) . these intervals can be corroborated from the expansions for and , the latter being $ \frac{1}{4}[-f''(\frac{1}{2})-f'(0)+f'(1)](1-z)+o((1-z)^2 ) $ , which imply in these limits iff , leading to in the tsallis case . we have constructed a general entropic measure of quantum correlations , which represents the minimum loss of information , according to the entropy , due to a local projective measurement . its basic properties are similar to those of the quantum discord , vanishing for the same partially classical states ( [ rhopab ] ) and coinciding with the corresponding generalized entanglement entropy in the case of pure states . its positivity relies , however , entirely on the majorization relations fulfilled by the post - measurement state , being hence applicable with general entropic forms based on arbitrary concave functions . in particular , for the linear entropy it leads to a quadratic measure which is particularly simple to evaluate and can be directly interpreted as a minimum squared distance , yet providing the same qualitative information as other measures . the minimum loss of information due to a joint local measurement , has also been discussed , and shown to coincide with in some important situations , vanishing just for the classically correlated states ( [ rc ] ) . while there is no general order relation between these quantities and the associated entanglement monotones ( [ efx ] ) , the use of generalized entropies allows at least to find such a relation in some particular cases : the quadratic measure provides for instance an upper bound to the squared concurrence of the two - qubit states ( [ rx])([psab ] ) ( unlike the von neumann based measures ) and coincides with it in the mixture ( [ st2 ] ) . moreover , generalized entropies such as allow to find in these previous cases an interval of values where an order relationship holds , which requires a delicate balance between the derivatives of at different points . let us finally mention that some general concepts emerge naturally from the present formalism , like that of absolutely more entangled and in particular that of the _ least mixed _ classically correlated state that can be associated with certain states , such as pure states or the mixtures ( [ rx ] ) or ( [ st2 ] ) . this state majorizes any other state obtained after a local measurement , thus minimizing the entropy increase ( [ dfm ] ) or ( [ dfab ] ) for _ any _ choice of entropy . it allows for an unambiguous identification of the least perturbing local measurement . c.h . bennett et al . , phys . lett . * 70 * , 1895 ( 1993 ) ; phys . lett . * 76 * , 722 ( 1996 ) . nielsen and i. chuang , _ quantum computation and quantum information _ , cambridge univ . press ( 2000 ) . r. josza and n. linden , proc . r. soc . * a 459 * , 2011 ( 2003 ) ; g. vidal , phys . lett . * 91 * , 147902 ( 2003 ) . e. knill , r. laflamme , phys . lett . * 81 * , 5672 ( 1998 ) . a. datta , s.t . flammia and c.m . caves , phys . a * 72 * , 042316 ( 2005 ) . h. ollivier and w.h . zurek , phys . * 88 * , 017901 ( 2001 ) . l. henderson and v. vedral , j. phys . * a 34 * , 6899 ( 2001 ) ; v. vedral , phys . lett . * 90 * , 050401 ( 2003 ) . a. datta , a.
shaji , and c.m .caves , phys .* 100 * , 050502 ( 2008 ) .lanyon , m. barbieri , m.p . almeida and a.g .white , phys .* 101 * , 200501 ( 2008 ) .s. luo , phys .a * 77 * , 042303 ( 2008 ) .sarandy , phys .a * 80 * , 022108 ( 2009 ) .a. shabani , d.a .lidar , phys .lett . * 102 * , 100402 ( 2009 ) .a. ferraro et al , phys .a * 81 * , 052318 ( 2010 ) .s. luo , phys .a * 77 * , 022301 ( 2008 ) .a. datta , s. gharibian , phys .a * 79 * , 042325 ( 2009 ) .s. wu , u.v .poulsen and k. mlmer , phys .a * 80 * 032319 ( 2009 ) .k. modi , t. paterek , w. son , v. vedral , m. williamson , phys .lett . * 104 * , 080501 ( 2010 ) .h. wehrl , rev .phys . * 50 * , 221 ( 1978 ) .r. bhatia , _ matrix analysis _ , springer ( ny ) ( 1997 ) ; + a. marshall and i. olkin , _ inequalities : theory of majorization and its applications _ , academic press ( 1979 ). n. canosa , r. rossignoli , phys .lett . * 88 * , 170401 ( 2002 ) .wootters , phys .lett . * 80 * , 2245 ( 1998 ) . c. rungta and c. caves , phys .a * 67 * , 012307 ( 2003 ) ; c. rungta et al , phys . rev .a * 64 * , 042315 ( 2001 ) . c. tsallis , j. stat52 * , 479 ( 1988 ) ; _ introduction to non - extensive statistical mechanics , _springer ( 2009 ) .kim , phys .a * 81 * , 062328 ( 2010 ) .r.rossignoli , n.canosa , phys .a * 66 * , 042306 ( 2002 ) .r.rossignoli , n.canosa , phys .a * 67 * , 042302 ( 2003 ) .nielsen and j. kempe , phys .* 86 * , 05184 ( 2001 ) .r. horodecki , m. horodecki , phys .a * 54 * , 1838 ( 1996 ) .v. vedral , rev .phys . * 74 * , 197 ( 2002 ) .werner , phys .a * 40 * , 4277 ( 1989 ) .bennett , h.j .bernstein , s. popescu , and b. schumacher , phys .a * 53 * , 2046 ( 1996 ) .nielsen , phys .lett . * 83 * , 436 ( 1999 ) .g. vidal , j. mod .opt . * 47 * , 355 ( 2000 ) .bennett , d.p .divincenzo , j.a .smolin , and w.k.wootters , phys .a * 54 * , 3824 ( 1996 ) .osborne , phys .a * 72 * , 022309 ( 2005 ) .n. li , s. luo , phys .a * 78 * , 024303 ( 2008 ) .g. vidal , r.f .werner , phys .a * 65 * , 032314 ( 2002 ) .l. gurbits and h. barnum , phys .a * 66 * , 062311 ( 2002 ) ; _ ibid _ a * 68 * , 042312 ( 2003 ) .m. horodecki , p. horodecki , phys .rev . a * 59 * , 4206 ( 1999 ) .
|
We propose a general measure of non-classical correlations for bipartite systems based on generalized entropic functions and majorization properties. It is defined as the minimum information loss due to a local measurement; for pure states it reduces to the generalized entanglement entropy, i.e., the generalized entropy of the reduced state. For mixed states, however, it can be non-zero in separable states, vanishing only for states diagonal in a general product basis, like the quantum discord. Simple quadratic measures of quantum correlations arise as a particular case of the present formalism. The minimum information loss due to a joint local measurement is also discussed. The evaluation of these measures in a few simple relevant cases is provided as well, together with a comparison with the corresponding entanglement monotones.
|
in recent years , metallic nanoparticles have emerged as potent catalysts for various applications . in particular , the discovery that gold becomes a catalyst when divided to the nanophase has led to an intense research in this field . in many cases the synthesis and the catalytic applications must be handled in the liquid phase , mostly in water .secure handling of nanoparticles in a liquid phase can be achieved by polymeric carriers that have typical dimensions in the colloidal domain .examples thereof include dendrimers or spherical polyelectrolyte brushes . such systems allow one to generate nanoparticles in aqueous phase in a well - defined manner and handle them securely in catalytic reactions .+ more recently , thermosensitive colloidal microgels have been used as carriers for metallic nanoparticles in catalysis .[ f : scheme ] displays the scheme of such a carrier system that may be regarded as a nanoreactor : a thermosensitive network composed of cross - linked chains of poly(n - isopropylacrylamide ) ( pnipam ) has been attached to a solid core made of an inert material as , _e.g. _ , polystyrene or silica .metal nanoparticles are embedded in the network which is fully swollen in cold water .raising the temperature above the critical temperature ( 32 c for pnipam ) , a volume transition takes place within the network and most of the water is expelled . lu _et al . _ have been the first to show that the catalytic activity of the embedded nanoparticles is decreased when shrinking the network by raising the temperature .this effect has been explained by an increased diffusional resistance mass transport within the shrunk network .a similar model has been advanced by carregal - romero _ et al ._ when considering the catalytic activity of a single gold nanoparticle embedded concentrically in a pnipam - network .+ recently , we have shown that the mobility of reactants is not the only important factor : an even larger role is played by the change of polarity of the network when considering mass transport from bulk to the catalyst(s ) through such medium .this theory is based on the well - known seminal paper by debye and considers a single nanoparticle located in the center of a hollow thermosensitive network . here, the substrate that reacts at the surface of the nanoparticle diffuses through a free - energy landscape created by the hydrogel environment . in other words ,the reactants experience a change in the solvation free energy when entering the gels from bulk solvent , which can be equally regarded as adsorption free energy or _ transfer _free energy .for instance , the free energy of a substrate may be lowered upon entering the network . in this waythe number of substrate molecules in the network will be augmented , so that their increased concentration in the vicinity of the catalyst will lead to a higher reaction rate .the free - energy change for the substrate outside and inside the network leads to a nernst distribution for the substrate s concentration within the system .this effect offers a new way to manipulate the catalytic activity and selectivity of metallic nanoparticles .+ in this paper we formulate a more general theory , that is able to account for the geometry of core - shell nanoreactors featuring _ many _ catalysts , as shown schematically in fig .[ f : scheme ] . 
here, a given number of catalytic centers are encapsulated randomly in a network .we calculate the total rate of the catalytic reaction for a prescribed geometry of the catalysts , given values of and and specified diffusion constants for the substrate in the bulk and in the network .the rate constant computed in this way can be compared to that characterizing an equal number of particles suspended freely in solution .the present model has been designed to describe the well - studied core - shell systems , but is equally adapted to the study of systems where catalytic centers are embedded in homogeneous microgels . up to now , most of the experimental work has been done using the reduction of 4-nitrophenol by borohydride ions in aqueous solution .this reaction can be regarded as a model reaction , since it can be monitored with high precision thus leading to very accurate kinetic data .the rate - determining step proceeds at the surface of the nanoparticles and the mechanism is known .the present theory , however , comprises also the nanoreactors in which enzymes are used as catalytic centers embedded in the network shell .+ in general , diffusion - influenced reactions ( dir ) are ubiquitous in many contexts in physics , chemistry and biology .however , while the mathematical foundations for the description of dir in simple systems have been laid nearly a century ago , many important present - day problems , including the catalytic activity of composite core - shell nanoreactors , require considering complex geometries and multi - connected reactive boundary systems .the first attempts to consider dir featuring many competing sinks date back to the 1970s , while more sophisticated methods have been developed subsequently to deal with arbitrary systems comprising many partially reactive boundaries . along similar lines ,the theory developed in this paper , based on general results proved in ref . , provides a novel , accurate description of dir occurring between a small substrate molecule and the catalytic centers embedded in a large , composite nanoreactor system .our theory is fully general , in that it covers the whole spectrum of rate - limiting steps in catalysis , from reaction - limited to diffusion - limited reactions . while the theory allows one to compute the reaction rate for an arbitrary catalytic surface turnover rate , closed - form analytical expressionare derived for strongly reaction - limited and diffusion - limited reactions . in the limit of adilute random distribution of nps encapsulated in a thick hydrogel shell , we find that the overall diffusion - controlled rate constant of our core - shell composites is described by a langmuir - like isotherm of the form where is the number of nanoparticles , is the ratio of the diffusion constants in the hydrogel ( for inner ) and bulk ( for outer ) , is the ratio of np size ( radius ) to the nanoreactor size and is the smoluchowski rate constant for the nanoreactor as a whole , _i.e. _ the total flux ( in units of bulk substrate concentration ) of substrate molecules to a stationary perfectly absorbing sink of size in the bulk .the above expression is valid for small sizes of the central core .interestingly , for configurations where the core size becomes of the same order of the whole composite ( thin shell ) , our theory shows that in general the rate constant is increased , up to 40 % , depending on the transfer free energy jump and on the reactant mobility in the shell . 
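To make the dilute-limit behaviour concrete, the short sketch below evaluates a Langmuir-type saturation of the diffusion-controlled rate as a function of the nanocatalyst load N. Only the Smoluchowski prefactor k_S = 4*pi*D_out*R0 is standard; the specific saturation variable N*x*z*exp(-beta_dG) is our own reconstruction from the quantities named in the text (N, z = D_in/D_out, x = a/R0 and the transfer free-energy jump), so the function should be read as an illustrative sketch rather than as the article's exact isotherm.

import numpy as np

def smoluchowski_rate(D_out, R0):
    # Smoluchowski rate constant of a perfectly absorbing sink of radius R0 in the bulk
    return 4.0 * np.pi * D_out * R0

def langmuir_like_rate(N, a, R0, D_in, D_out, beta_dG):
    # Hypothetical Langmuir-type form for the diffusion-controlled rate constant.
    # N: number of embedded nanocatalysts; a, R0: nanocatalyst and nanoreactor radii;
    # D_in, D_out: substrate diffusivity in the shell and in the bulk;
    # beta_dG: transfer free-energy jump in units of kT (negative = attractive shell).
    # The saturation variable below is an assumed reconstruction, not the article's equation.
    x = a / R0
    z = D_in / D_out
    load = N * x * z * np.exp(-beta_dG)
    return smoluchowski_rate(D_out, R0) * load / (1.0 + load)

# the rate saturates towards the Smoluchowski limit of the whole nanoreactor as N grows
k_S = smoluchowski_rate(D_out=1e-9, R0=100e-9)
for N in (1, 10, 100, 1000):
    k = langmuir_like_rate(N, a=2e-9, R0=100e-9, D_in=2e-10, D_out=1e-9, beta_dG=-1.0)
    print(N, k / k_S)

With these (assumed) illustrative numbers, roughly half of the Smoluchowski limit is already reached at a load of the order of a hundred nanoparticles.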
+ in the limit of slow surface substrate - product conversion rate , i.e. for reaction - limited kinetics , we find that \ ] ] where is the intrinsic turnover rate constant that describes the transformation of substrate to product molecules at the nanocatalyst surface ( units of inverse concentration times inverse time ) .this means that when the surface substrate conversion rate constant is weak , the geometrical features of the overall assembly and the mobility of substrate molecules within the hydrogel shell become immaterial . in this case , the crucial control parameter is the transfer free energy jump .the paper is organized as follows . in section [ sec : model ]we describe our mathematical model and pose the associated boundary - value problem . in section [ sec : sol ] , we describe concisely the procedure that leads us to the exact solution of the posed problem ( the mathematical details can be found in the appendix ) .section [ sec : analytic ] illustrates an analytic approximation that provides an extremely good description of the exact solution for small core sizes in the physically relevant range of parameters .in particular , we discuss how this formula can be used to derive practical criteria to design nanoreactors with optimized performances .finally , we wrap up our main results in section [ sec : summary ] .containing spheres : the solid polystyrene ( ps ) core ( radius ) is shown at the center , along with catalytic nanoparticles of radius at positions ( ) .the internal ( microgel ) domain ( with reactant diffusion coefficients ) and the external ( bulk solution ) domain ( with reactant diffusion coefficients ) are indicated explicitly , together with a schematic free - energy radial profile showing the transfer free - energy jump . in our treatmentthe latter can be both repulsive , , or attractive , .[ f : scheme ] ] we model a core - shell nanoreactor consisting of a polystyrene ( ps ) core surrounded by a microgel layer as two concentric spheres centered at the origin of a cartesian 3 frame , as depicted in fig . [ f : scheme ] .we denote with and the core and shell radius , respectively .the shell is assumed to be a homogeneous continuum , carrying small nano - catalysts ( metal nanoparticles or enzymes ) that we model as spheres of radius .for the sake of simplicity , and in accordance to our general multi - sink theory , we label the ps core as the inner sphere with and position vector and denote the position of the nanocatalysts with the vectors , . we want to compute the total reaction rate constant for reactions where a substrate ( or ligand ) molecule is converted to some product species at the surface of the catalyst spheres .these are endowed with a surface rate constant , which is in general a function of temperature due to underlying thermally activated surfaces processes .let us denote with the reference frame with the origin at the nanoreactor center and with the reference frames with the origins at the nanospheres centers and the axes parallel to ( of course ) .this formally defines the following domains , \varphi_{0 } \in(0,2\pi]\ } \setminus \cup_{\a}\overline{\omega}_{\a}\\ & \omega^{-}=\{r_{0}\in ( r_{0},\infty),\theta_{0 } \in [ 0,\pi ] , \varphi_{0 } \in(0,2\pi]\ } \end{aligned}\ ] ] where denotes the interior of the ps core and , , denote the interior of the -th nanosphere .the reactant diffuses with diffusion coefficients and inside the microgel shell and in the bulk , respectively . 
in generalone can assume due to obstructed or hindered diffusion in the microgel .+ let denote the bulk density of reactants and let us introduce the time - dependent normalized density .we assume that the system relaxation time for the diffusive flux of particles ( the reactants ) , , is small enough to neglect time - dependent effects .hence , in the absence of external forces , the diffusion of reactants with normalized number density is described by the steady - state diffusion equation = 0 \qquad \mbox{in}\text { } \omega = \omega ^{+}\cup \omega ^{- } \label{sp1}\ ] ] with and which should be solved with the customary bulk boundary condition it is well known from the general theory of partial differential equations that the classical solution ( twice continuously differentiable in and continuous on ) of the stationary diffusion equation does not exist in the whole domain .therefore one should consider the function accordingly , we should impose a condition for the substrate concentration field at the bulk / microgel interface , .it has been demonstrated recently that a key factor controlling the overall reaction rate is the transfer free - energy jump , a quantity that describes the partitioning of the reactant in the microgel versus bulk . for a single nanocatalyst at the nanoreactor center , a free - energy jump at the solvent - microgel interfacecan be accounted for through a modified reactant density in the microgel , namely when crossing the bulk / microgel interface .this is also the case for many catalysts in the infinite dilution limit .here we assume that such description is a valid approximation for realistic nanoreactors , where the nanocatalyst packing fraction is indeed very small , as discussed in depth later .accordingly , we require where , being the inverse temperature . furthermore , the following continuity condition for the local diffusion fluxes should also hold at the bulk / microgel interface where we have introduced the diffusion anisotropy parameter finally , reflecting boundary conditions should hold at the surface of the inert ps core , _ we are interested in the pseudo - first - order irreversible diffusion - influenced reaction between the nano - catalysts encapsulated in the microgel and reactants freely diffusing in the bulk and in the microgel {k_{d } } c\cdot b \xrightarrow[]{k^\ast } c+p\ ] ] where denotes the so - called _ encounter complex _ , and are the association and dissociation diffusive rate constants , respectively , and is the surface rate constant of the chemical reaction occurring at the reactive catalysts boundaries .reactions of the kind are customary dealt with by enforcing radiation boundary conditions ( also known as robin boundary conditions ) at the reaction surfaces , , _ i.e. _ {\partial \omega_\a}=0 \qquad \a=2,3,\dots , n+1 \label{e : contrad}\ ] ] thus , we can consider that the nanoreactors effectively act as sinks of infinite capacity according to the pseudo - first - order reaction scheme {k } c+p\ ] ] where the forward diffusion - influenced rate constant ( _ i.e. _ the equivalent of the measured rate constant ) is defined by the formula using this rate constant one can approximately describe the kinetics of the effective reaction as where is the volume concentration of nanocatalysts within the microgel and is the time - dependent effective bulk concentration of ligands .we stress that our schematization of the problem holds under the _ excess reactant _condition .our goal is to compute the rate constant defined in eq . 
.+ equation with the boundary conditions , , , and completely specify our mathematical problem .it is expedient in the following to use the dimensionless spatial variables , and for .hence , our problem can be cast in the following form [ e : bvp ] the parameter gauges the _ character _ of the reaction .here we have introduced the smoluchowski rate constant for a nanocatalyst embedded in the microgel , . the limit corresponds to considering the boundaries as perfectly absorbing sinks . in this casethe reaction becomes _ diffusion - limited _ , as the chemical conversion from the encounter complex to the product becomes infinitely fast with respect to the diffusive step leading to the formation of .otherwise , for , the chemical conversion step is slow enough compared to diffusion , which makes the reaction overall reaction - limited .we look for solutions for the stationary density of reactants in the bulk and in the microgel as linear combinations of regular and irregular harmonics . given the multi - connected structure of the boundary manifold , we must consider as many _cartesian reference frames as there are non - concentric boundaries .thus , we can look for solutions in the form [ e : upm ] where are spherical harmonics , , , for and and are infinite - dimensional sets of unknown coefficients that can be determined by imposing the boundary conditions and and the pseudo - continuity conditions at the microgel - solvent interface , eqs and .this can be done straightforwardly using known addition theorems for spherical harmonics , which results in an infinite - dimensional linear system of equations for the unknown coefficients ( see appendix [ app : a ] for the details ) .furthermore , making use of known properties of solid spherical harmonics , it is easy to see that the rate constant defined by eq. is simply given by as shown in the appendix [ app : a ] , the exact solution to the steady - state problem can be worked out in principle to any desired precision by keeping an appropriate number of multipoles .remarkably , a simple yet accurate _analytical _ expression can be easily obtained in the monopole approximation ( moa ) , which corresponds to keeping only the term in the multipole expansions . in particular , it is interesting to compute the rate normalized to the smoluchowski rate of an isolated sink of the same size as the whole nanoreactor in the bulk , _i.e. _ .we obtain ( see appendix [ app : b ] for the details ) }\ ] ] where we recall that and .this is the key analytical result derived in this work , that can be readily employed to predict and optimize the geometry and activity of typical core - shell nanoreactors .the quantity stands for the average inverse inter - catalyst separation .this can be computed analytically under the reasonable assumption that spatial correlations in the catalysts configurations are negligible ( see appendix [ app : b ] ) , where denotes the fraction of the nanoreactor size occupied by the ps core and is the non - dimensional size of each catalyst .we see that , since , one has , _i.e. _ , is of the order of unity , ( taking from experiments ) .+ in the limit of vanishing surface reactivity of the embedded nano - catalysts it is immediate to show from eq .that \ ] ] we see that , if the surface substrate conversion rate constant is weak , this becomes the rate - limiting step for the overall rate of the nanoreactor , irrespective of the geometrical features of the assembly and of the mobility properties of the hydrogel shell . 
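As a rough guide to this regime, the expected dependence can be written out explicitly; the expression below is our own reconstruction on partitioning grounds (each of the N catalysts turning over independently, weighted by the Boltzmann factor of the transfer free-energy jump) and is stated as an assumption, not as the article's exact equation.

% assumed reconstruction of the reaction-limited (small k*) rate constant
\[
  k \;\simeq\; N\,k^{\ast}\,e^{-\beta\,\Delta G_{t}}, \qquad \beta = 1/k_{B}T ,
\]
so that in this limit the rate is set only by the nanocatalyst load, the intrinsic turnover rate constant and the partitioning of the substrate into the shell.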
in this case , it becomes crucial to control the transfer free energy jump to tune the rate of the composite nanoreactor .conversely , if the catalytic action exerted by the metal nanoparticles encapsulated in the microgel is fast with respect to diffusion , _i.e. _ , expression can be simplified by taking the limit .this yields the expression for the fully diffusion - controlled rate which we will discuss in depth in the following section .note that for eq . coincides with the solution of the debye - smoluchowski problem for a single perfectly absorbing sink located at the center of the shell , with .+ formulas and have been derived in the monopole approximation , which means that any reflecting boundaries in the problem are not taken into account .therefore , these should be used to approximate the rate constant of a composite nanoreactor for small to moderate sizes of the central ps core . in the following sections , we provide a thorough characterization of the rate constant of a composite core - shell nanoreactor , computed exactly by solving eqs . , and we compare it to the approximate moa analytical expression in the physically relevant diffusion - limited regime ( ) .we now discuss in more detail the essential features of the diffusion - controlled rate in eq . . in the monopole approximation ,valid for small to intermediate sizes of the central reflecting ( inert ) core , the role of the latter only enters indirectly through the spatial average , with . in the swollen configuration ,the central core does not occupy a large fraction of the overall nanoreactor volume , with ( as taken from the experiments reported in ref .hence , in this regime we expect that the exact size of the core should not play a significant role for the diffusion - controlled rate for relevant values of the physical parameters , _i.e. _ , ( weak ) attraction to the hydrogel and decreased internal diffusion . + [ cols="^,^ " , ] in certain configurations , such as in the shrunk phase of thermosensitive core - shell nanoreactors past the lower critical solution temperature ( lcst ) , the core size can become comparable to the overall size of the nanoreactor . in these circumstances ,the moa breaks down and the full solution should be used instead .[ f : versusrs ] reports an analysis of the rate constant as the core sized is varied for different values of the geometrical and physico - chemical parameters . 
as a first observation ,the plots confirm and substantiate the discussion laid out in the previous section , as it can be appreciated that the core size does not influence the overall rate until .more generally , one can recognize that the rate constant tends to increase as the shell shrinks ( increasing values of ) .the only exception is for low and attractive transfer free energy , where a non - monotonic trend is observed ( top left panel ) .this is a typical screening effect , which originates from the subtle interplay between diffusive interactions among the nanocatalysts and individual screening due the reflecting ps core .it turns out that the transfer free energy is the prime parameter that controls the increase in the rate as the ps core size increases .the more attractive the transfer free energy , the less marked the increase .interestingly , at fixed values of , the less mobile the substrate in the shell , the more marked the rate boosting effect of the shell shrinking .importantly , it is apparent from the plots reported in fig .[ f : versusrs ] that the role of the core size is reduced for loading number of the order of a few tens and small size of the nanocatalysts .all in all , these results confirm the complex intertwining of the structural , geometrical and physico - chemical features underlying the overall catalytic activity of core - shell nanoreactors .in this paper we have developed a detailed theory to compute the total reaction rate of core - shell nanoreactors with multiple catalysts embedded in the shell .the theory is utterly general and allows one to compute the overall reaction rate to any desired accuracy for ( ) given configuration , dimension and surface reactivity of the encapsulated nanocatalysts , ( ) size of the core and the shell , ( ) substrate mobility in the bulk and in the shell and ( ) transfer free - energy jump for substrate molecules .furthermore , we computed analytical expressions in the monopole approximation that provide an excellent interpolation of the exact solution for small to intermediate sizes of the central core in the physically relevant range of parameters , _i.e. _ small size and high dilution of the nanocatalysts .our formulas supply ready - to - use simple tools that can be employed to interpret and optimize the activity of experimentally realizable nanoreactor systems .this shall be particularly useful to estimate the optimal number of embedded nps , that should reflect a compromise between a resource - friendly design and the highest possible catalytic output .our analytical treatment predicts an optimal number of nps given by the following expression where $ ] is the desired efficiency , and are the np and overall nanoreactor sizes , respectively , and , are the substrate mobility in the shell and in the bulk , respectively . for realistic values of these parameters ,one gets of the order of hundreds , a value for which the monopole approximation is still in excellent agreement with the exact solution for core sizes such that . 
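A small helper in the same spirit as the earlier sketch: if one accepts the Langmuir-type form assumed above, inverting it for a target efficiency eta = k/k_S gives a rule of thumb for the nanoparticle load. Again, this is our reconstruction from the variables named in the text (eta, the radii a and R0, the substrate mobilities in the shell and in the bulk, and the transfer free-energy jump), not the article's exact expression.

import math

def optimal_nanoparticle_number(eta, a, R0, D_in, D_out, beta_dG):
    # Nanocatalyst load needed to reach a fraction eta of the Smoluchowski limit,
    # obtained by inverting the assumed Langmuir-type form
    # k/k_S = L/(1+L) with L = N*(a/R0)*(D_in/D_out)*exp(-beta_dG).
    # Treat the result as an order-of-magnitude estimate only.
    if not 0.0 < eta < 1.0:
        raise ValueError("eta must lie strictly between 0 and 1")
    x = a / R0
    z = D_in / D_out
    return (eta / (1.0 - eta)) * math.exp(beta_dG) / (x * z)

# example: 100 nm nanoreactor, 2 nm nanoparticles, five-fold slower internal diffusion,
# weakly attractive shell; the estimate indeed comes out in the hundreds
print(round(optimal_nanoparticle_number(0.9, a=2e-9, R0=100e-9,
                                        D_in=2e-10, D_out=1e-9, beta_dG=-1.0)))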
as discussed already previously , eq .makes it clear that a decisive factor in the design of optimized hydrogel - based nanoreactors must be the tuning of the reactant - hydrogel interaction towards attraction ( ) for a specific reaction ( or mix of reactions ) .furthermore , as hydrogel that cause strongly reduced substrate mobility also demand more nps to achieve high efficiency ( ) , the choice of the shell hydrogel should be made so as to privilege smooth longer - ranged interactions ( like electrostatic , hydrophobic , or dispersion ) with respect to short - ranged ones ( like h - bonds ) , in order to avoid too sticky interactions that would slow down the reactant mobility substantially due to activated hopping . +our analytical treatment breaks down if one wishes to push the nanoreactor performances towards full efficiency ( ) , where the loading number of np increases rapidly , or in the case of larger core sizes ( of the order of the whole spherical assembly ) .in such cases , the diffusive interaction between nps can no longer be neglected , as well as the effect of the inert ps core , as diffusive and screening interactions among the different boundaries become important . as a consequence, the full exact solution should be employed to investigate the behavior of the rate and elaborate an optimal design of the composite nanoreactor .interestingly , we have shown that , as a general situation , increasing both the core and the nanocatalyst sizes either has a rather mild effect on the overall performances , or , more generally , causes a rate - boosting effect , with an increase of the overall rate constant of up to 40 % for values of the core size .s. a - u acknowledges financial support from the beijing municipal government innovation center for soft matter science and engineering .j. d. acknowledges funding by the erc ( european research council ) consolidator grant with project number 646659nanoreactor .m. g. and f. p. would like to thank s. d. traytak for insightful discussions .f. p. and d. f. acknowledge funding by the cnrs ( centre national de la recherche scientifique ) under the pics scheme .in order to determine the unknown coefficients in the expansions , we have to express the solution in the local coordinates on every boundary ( the spherical surfaces ) and at the microgel - bulk interface , where we impose the pseudo - continuity conditions for the reactant density field .this can be accomplished by using known addition theorems for spherical harmonics . after some lengthy algebra , we obtain the following linear equations - e_{gq}=0\label{sistemanano3}\end{aligned}\ ] ] where , for and we have introduced characteristic functions . eqs . , , hold with and .the matrices read where ( according to this notation ) and here , for the sake of coherence , we pose ( radius of the ps core ) and , for ( radius of the nanocatalysts ) .the system , , can be expressed more conveniently by subtracting eq . from eq . , which leads to [ sistemananoridotto ] if the multipole expansions are truncated at multipoles , the system ) comprises equations , which can be easily solved numerically .once the the coefficients have been determined , the rate constant can be obtained from eq . 
.recalling the definitions and making use of known properties of solid spherical harmonics , it is easy to see that the system to be solved has the following structure {c.cccc}{c.cccc } \bigg(\frac{1}{\lambda}+\frac{\zeta q}{q+1}\bigg ) { \mathbb{i}\hspace*{-0.4ex } } & \bigg(\frac{1}{\lambda}-\zeta\bigg)v^1 & \bigg(\frac{1}{\lambda}-\zeta\bigg)v^2 & \dots & \bigg(\frac{1}{\lambda}-\zeta\bigg)v^{n+1}\\ h^1 & - { \mathbb{i}\hspace*{-0.4ex } } & w^{1,2}&\dots&w^{1,n+1}\\ h^2 & w^{2,1}&-{\mathbb{i}\hspace*{-0.4ex}}&\dots & w^{2,n+1}\\ \vdots & \vdots&\vdots&\ddots&\vdots\\ h^{n+1 } & w^{n+1,1 } & w^{n+1,2 } & \dots & -{\mathbb{i}\hspace*{-0.4ex}}\end{bmat } \right ] \times \left [ \begin{bmat}(r)[1pt]{c}{ccc.ccc.c.ccc } a_{00}\\ \vdots\\ a_{n_m n_m}\\ b^{1}_{00}\\ \vdots\\ b^{1}_{n_m n_m}\\ \vdots\\ b^{n+1}_{00}\\ \vdots\\ b^{n+1}_{n_m n_m}\\ \end{bmat } \right ] = \left [ \begin{bmat}(r)[1pt]{c}{ccc.ccc.c.ccc } 1\\ \vdots\\ 0\\ 0\\ \vdots\\ 0\\ \vdots\\ 0\\ \vdots\\ 0 \end{bmat } \right]\ ] ] to solve this system of equation numerically we employ standard linear algebra packages ( lapack ) .the number of multipoles considered to truncate the system was chosen so that the relative accuracy on the rate was less than or equal to , namely .the monopole approximation of the system for a given configuration of the nanocatalysts can be obtained by truncating the expansion to .the ensuing equations read with . recalling the definitions , and , we have , , , and , so that eqs .take the following form since , the overall rate constant of the nanoreactor can be computed simply as ( note that is identically zero as the ps core is modeled as a reflecting sphere ) .moreover , we can average the system over the catalyst configurations , in the reasonable hypothesis that spatial correlations between the positions of the catalysts are negligible .this reduces the many - body average to a two - body problem , namely ^ 2 } \int_{r_s+a}^{r_0-a } r^2 \ , dr \int_{r_s+a}^{r_0-a } \rho^2 \ , d\rho \int_0^\pi \frac{\sin \theta } { \sqrt{r^2 + \rho^2 - 2r\rho\cos\theta}}\,d\theta \nonumber\\ & = & \frac{2(1-\ve)^5 - 5 ( 1-\ve)^2(\gamma+\ve)^3 + 3(\gamma+\ve)^5 } { ( 1-\ve)^6 - 2 ( 1-\ve)^3(\gamma+\ve)^3 + ( \gamma+\ve)^6 } \left ( \frac{3a}{5r_0 } \right ) : = \ve \ , c ( \ve,\gamma ) \end{aligned}\ ] ] where .we therefore get from eqs . = 0 \end{aligned } \right.\ ] ] where we have taken as the catalysts are identical . by eliminating the solutionis easily recovered as the diffusion - limited solution follows straightforwardly in the limit .v. v. pushkarev , z. zhu , k. an , a. hervier and g. a. somorjai , _ topics in catalysis _ , 2012 , * 55 * , 12571275 y. zhang , x. cui , f. shi and y. deng , _ chemical reviews _ , 2012 , * 112 * , 24672505 m. haruta , _ chemical record _ , 2003 , * 3 * , 7587 g. j. hutchings and m. haruta , _ applied catalysis a : general _ , 2005 , * 291 * , 25 r. m. crooks , m. zhao , l. sun , v. chechik and l. k. yeung , _ accounts of chemical research _ , 2001 , * 34 * , 181190 j .- h .noh and r. meijboom , _ applied catalysis a : general _ , 2015 , * 497 * , 107120 m. ballauff , _ progress in polymer science _ , 2007 , * 32 * , 11351151 y. lu and m. ballauff , _ progress in polymer science ( oxford ) _ , 2011 , * 36 * , 767792 y. lu , y. mei , m. drechsler and m. ballauff , _ angewandte chemie - international edition _ , 2006 , * 45 * , 813816 s. carregal - romero , n. j. buurma , j. perez - juste , l. m. liz - marzan and p. herv es , _ chem ._ , 2010 , * 22 * , 30513059 s. wu , j. 
dzubiella , j. kaiser , m. drechsler , x. guo , m. ballauff and y. lu , _ angewandte chemie - international edition _ , 2012 , * 51 * , 22292233 s. angioletti - uberti , y. lu , m. ballauff and j. dzubiella , _ the journal of physical chemistry c _ , 2015 , * 119 * , 1572315730 p. debye , _ trans .soc . _ , 1942 , * 92 * , 265272 s. shi , q. wang , t. wang , s. ren , y. gao and n. wang , _ the journal of physical chemistry b _ , 2014 , * 118 * , 717786 t. aditya , a. pal and t. pal , _ chemical communications ( cambridge , england ) _ , 2015 , * 51 * , 941031 p. zhao , x. feng , d. huang , g. yang and d. astruc , _ coordination chemistry reviews _ , 2015 , * 287 * , 114136 p. herves , m. prez - lorenzo , l. m. liz - marzn , j. dzubiella , y. lu , m. ballauff , p. hervs , m. prez - lorenzo , l. m. liz - marzn , j. dzubiella , y. lu and m. ballauff , _ chemical society reviews _ , 2012 ,* 41 * , 5577 s. gu , s. wunder , y. lu , m. ballauff , r. fenger , k. rademann , b. jaquet and a. zaccone , _ the journal of physical chemistry c _ , 2014 , * 118 * , 1861818625 n. welsch , a. l. becker , j. dzubiella and m. ballauff , _ soft matter _ , 2012 , * 8 * , 1428 d. f. calef and j. m. deutch , _ annual review of physical chemistry _ , 1983 ,* 34 * , 493524 _ diffusion - limited reactions _ , ed .s. a. rice , elsevier , amsterdam , 1985 , vol .25 a. szabo , _ the journal of physical chemistry _ , 1989 , * 93 * , 69296939 h. x. zhou , g. rivas and a. p. minton , _ annual review of biophysics _ , 2008 , * 37 * , 375397 m. von smoluchowski , _ physik z _ , 1916 , * 17 * , 557571 f. c. collins and g. e. kimball , _ journal of colloid science _ , 1949 , * 4 * , 425437 j. m. deutch , b. u. felderhof and m. j. saxton , _ the journal of chemical physics _ , 1976 ,* 64 * , 4559 b. u. felderhof and j. m. deutch , _ the journal of chemical physics _ , 1976 ,* 64 * , 4551 s. d. traytak , _ chemical physics letters _ , 1992 , * 197 * , 247 254 s. d. traytak , _ the journal of composite mechanics and design _ , 2003 , * 9 * , 495521 e. gordeliy , s. l. crouch and s. g. mogilevskaya , _ international journal for numerical methods in engineering _ , 2009 , * 77 * , 751775 m. galanti , d. fanelli , s. d. traytak and f. piazza , _ phys . chem . chem ._ , 2016 , * 18 * , 1595015954 r. cukier , _ macromolecules _ , 1984 , * 17 * , 252255 o. a. ladyzhenskaya and n. n. uraltseva , _ linear and quasilinear elliptic equations _ , academic press , new york and london , 1968 , vol. 46 y. mei , y. lu , f. polzer , m. ballauff and m. drechsler , _ chemistry of materials _ , 2007 , * 19 * , 10621069 c. yigit , n. welsch , m. ballauff and j. dzubiella , _ langmuir _ , 2012 , * 28 * , 1437314385 m. palasis and s. h. gehrke , _ journal of controlled release _ , 1992 , * 18 * , 111 m. galanti , d. fanelli and f. piazza , 2015 , 5 g. arfken , h. j. weber and f. e. harris , _ mathematical methods for physicists , sixth edition : a comprehensive guide _ , elsevier academic press , 2005 m. j. caola , _ journal of physics a : mathematical and general _ , 2001 , * 11 * , l23l25
|
We present a detailed theory for the total reaction rate constant of a composite core-shell nanoreactor consisting of a central solid core surrounded by a hydrogel layer of variable thickness, in which a given number of small catalytic nanoparticles are embedded at prescribed positions and endowed with a prescribed surface reaction rate constant. Besides the precise geometry of the assembly, our theory accounts explicitly for the diffusion coefficients of the reactants in the hydrogel and in the bulk, as well as for their transfer free-energy jump upon entering the hydrogel shell. Moreover, we work out an approximate analytical formula for the overall rate constant, valid in the physically relevant range of geometrical and chemical parameters. We discuss in depth how the diffusion-controlled part of the rate depends on the essential variables, including the size of the central core. In particular, we derive simple rules for estimating the number of nanocatalysts per nanoreactor required for efficient catalytic performance in the case of small to intermediate core sizes. Our theoretical treatment promises to provide a useful and flexible tool for the design of superior-performing nanoreactor geometries with optimized nanoparticle load.
|
ever since the publication of s _ theory of games and economic behavior _ in 1944 , coalitions have played a central role within game theory .the crucial questions in coalitional game theory are which coalitions can be expected to form and how the members of coalitions should divide the proceeds of their cooperation . traditionally the focus has been on the latter issue , which led to the formulation and analysis of concepts such as the core , the shapley value , or the bargaining set . which coalitions are likely to form is commonly assumed to be settled exogenously , either by explicitly specifying the coalition structure , a partition of the players in disjoint coalitions , or , implicitly , by assuming that larger coalitions can invariably guarantee better outcomes to its members than smaller ones and that , as a consequence , the grand coalition of all players will eventually form .the two questions , however , are clearly interdependent : the individual players payoffs depend on the coalitions that form just as much as the formation of coalitions depends on how the payoffs are distributed ._ coalition formation games _ , as introduced by , provide a simple but versatile formal model that allows one to focus on coalition formation .in many situations it is natural to assume that a player s appreciation of a coalition structure only depends on the coalition he is a member of and not on how the remaining players are grouped .initiated by and , much of the work on coalition formation now concentrates on these so - called _hedonic games_. hedonic games are relevant in modeling many settings such as formation of groups , clubs and societies and also online social networking .the main focus in hedonic games has been on notions of stability for coalition structures such as _ nash stability _ , _ individual stability _ , _ contractual individual stability _ , or _core stability _ and characterizing conditions under which the set of stable partitions is guaranteed to be non - empty ( see , * ? ? ?* ; * ? ? ? presented a taxonomy of stability concepts which includes the _ contractual strict core _, the most general stability concept that is guaranteed to exist . a well - studied special case of hedonic games are two - sided matching games in which only coalitions of size two are admissible .we refer to for a critical overview of hedonic games .hedonic games have recently been examined from an algorithmic perspective ( see , * ? ? ?* ; * ? ? ? surveyed the algorithmic problems related to stable partitions in hedonic games in various representations . showed that for hedonic games represented by _ individually rational list of coalitions _ , the complexity of checking whether core stable , nash stable , or individual stable partitions exist is np - complete .he also proved that every hedonic game admits a contractually individually stable partition .coalition formation games have also received attention in the artificial intelligence community where the focus has generally been on computing optimal partitions for general coalition formation games without any combinatorial structure . proposed a fully - expressive model to represent hedonic games which encapsulates well - known representations such as _ individually rational list of coalitions _ and _ additive separability_. _ additively separable hedonic games ( ashgs ) _ constitute a particularly natural and succinctly representable class of hedonic games . 
each player in an ashg has a value for any other player and the value of a coalition to a particular player is simply the sum of the values he assigns to the members of his coalition .additive separability satisfies a number of desirable axiomatic properties and ashgsare the non - transferable utility generalization of _ graph games _ studied by . showed that checking whether a nontrivial nash stable partition exists in an ashgis np - complete if preferences are nonnegative and symmetric .this result was improved by who showed that checking whether a core stable , strict core stable , nash stable , or individually stable partition exists in a general ashgis np - hard . positive algorithmic results for subclasses of ashgsin which each player merely divides other players into friends and enemies . examined the tradeoff between stability and social welfare in ashgs .recently , showed that computing partitions that satisfy some variants of individual - based stability is pls - complete , even for very restricted preferences .in another paper , studied the complexity of computing and verifying optimal partitions in ashgs . in this paper, we settle the complexity of key problems regarding stable partitions of ashgs .we present a polynomial - time algorithm to compute a contractually individually stable partition .this is the first positive algorithmic result ( with respect to one of the standard stability concepts put forward by ) for general ashgswith no restrictions on the preferences .we strengthen recent results of and prove that checking whether the core or the strict core exists is np - hard , even if the preferences of the players are symmetric .finally , it is shown that verifying whether a partition is in the contractually strict core ( csc ) is conp - complete , even if the partition under question consists of the grand coalition .this is the first computational hardness result concerning csc stability in hedonic games of any representation .the proof can be used to show that verifying whether the partition consisting of the grand coalition is pareto optimal is conp - complete , thereby answering a question mentioned by .our computational hardness results imply computational hardness of the equivalent questions for _ hedonic coalition nets _ .in this section , we provide the terminology and notation required for our results .a _ hedonic coalition formation game _ is a pair where is a set of players and is a _ preference profile _ which specifies for each player the preference relation , a reflexive , complete , and transitive binary relation on the set . the statement denotes that strictly prefers over whereas means that is indifferent between coalitions and .a _ partition _ is a partition of players into disjoint coalitions .by , we denote the coalition of that includes player .we consider utility - based models rather than purely ordinal models . in_ additively separable preferences _ , a player gets value for player being in the same coalition as and if is in coalition , then gets utility .a game is _ additively separable _ if for each player , there is a utility function such that and for coalitions , if and only if .we will denote the utility of player in partition by .a preference profile is _ symmetric _ if for any two players and is _ strict _ if for all . for any player ,let be the set of friends of player within .we now define important stability concepts used in the context of coalition formation games . 
*a partition is _nash stable ( ns ) _ if no player can benefit by moving from his coalition to another ( possibly empty ) coalition . *a partition is _ individually stable ( is ) _ if no player can benefit by moving from his coalition to another existing ( possibly empty ) coalition while not making the members of worse off . *a partition is _ contractually individually stable ( cis ) _ if no player can benefit by moving from his coalition to another existing ( possibly empty ) coalition while making neither the members of nor the members of worse off .* we say that a coalition _ strongly blocks _ a partition , if each player strictly prefers to his current coalition in the partition .a partition which admits no blocking coalition is said to be in the _ core ( c)_. * we say that a coalition _ weakly blocks _ a partition , if each player weakly prefers to and there exists at least one player who strictly prefers to his current coalition .a partition which admits no weakly blocking coalition is in the _ strict core ( sc)_. * a partition is in the _ contractual strict core ( csc ) _ if any weakly blocking coalition makes at least one player worse off when breaking off .the inclusion relationships between stability concepts depicted in figure [ fig : relations ] follow from the definitions of the concepts. we will also consider _pareto optimality_. a partition of is _ pareto optimal _ if there exists no partition of such that for all , and there exists at least one player such that .we say that a partition satisfies _ individual rationality _ if each player does as well as by being alone , i.e. , for all , . throughout the paper , we assume familiarity with basic concepts of computational complexity ( see , * ? ? ?it is known that computing or even checking the existence of nash stable or individually stable partitions in an ashgis np - hard . on the other hand ,a potential function argument can be used to show that at least one cis partition exists for every hedonic game .the potential function argument does not imply that a cis partition can be computed in polynomial time .there are many cases in hedonic games , where a solution is guaranteed to exist but _ computing _ it is not feasible . for example , presented a potential function argument for the existence of a nash stable partition for ashgswith symmetric preferences . however there are no known polynomial - time algorithms to _ compute _ such partitions and there is evidence that there may not be any polynomial - time algorithm . in this section ,we show that a cis partition can be computed in polynomial time for ashgs .the algorithm is formally described as algorithm [ alg - cis - general ] .algorithm [ alg - cis - general ] may also prove useful as a preprocessing or intermediate routine in other algorithms to compute different types of stable partitions of hedonic games .* input : * additively separable hedonic game . + * output : * cis partition . [ while - step ] take any player a cis partition can be computed in polynomial time .[ prop : cis - easy ] our algorithm to compute a cis partition can be viewed as successively giving a priority token to players to form the best possible coalition among the remaining players or join the best possible coalition which tolerates the player .the basic idea of the algorithm is described informally as follows .set variable to and consider an arbitrary player .call the _ leader _ of the first coalition with .move any player such that from to .such players are called the _ leader shelpers_. 
then keep moving any player from to which is tolerated by all players in and strictly liked by at least one player in . call such players _ needed players_. now increment and take another player from among the remaining players and check the maximum utility he can get from among .if this utility is less than the utility which can be obtained by joining a previously formed coalition in , then send the player to such a coalition where he can get the maximum utility ( as long all players in the coalition tolerate the incoming player ) .such players are called _latecomers_. otherwise , form a new coalition around which is the best possible coalition for player taking only players from the remaining players . repeat the process until all players have been dealt with and .we prove by induction on the number of coalitions formed that no cis deviation can occur in the resulting partition .the hypothesis is the following : [ [ base - case ] ] base case + + + + + + + + + consider the coalition .then the leader of has no incentive to leave .the leader s helpers are not allowed to leave by the leader . if they did , the leader s utility would decrease . for each of the needed players ,there exists one player in who does not allow the needed player to leave .now let us assume a latecomer arrives in .this is only possible if the maximum utility that the latecomer can derive from a coalition is less than .therefore once joins , he will only become less happy by leaving .any player can not have a cis deviation to .either is disliked by at least one player in or is disliked by no player in . in the first case , can not deviate to even he has an incentive to . in the second case ,player has no incentive to move to because if he had an incentive , he would already have moved to as a latecomer .[ [ induction - step ] ] induction step + + + + + + + + + + + + + + assume that the hypothesis is true .then we prove that the same holds for the formed coalitions . by the hypothesis , we know that players can not leave coalitions .now consider .the leader of is either not allowed to join one of the coalitions in or if he is , he has no incentive to join it .player would already have been member of for some if one of the following was true : * there is some such that the leader of likes .* there is some such that for all , and there exists such that .* there is some , such that for all , and and for all .therefore has no incentive or is not allowed to move to another for .also will have no incentive to move to any coalition formed after because he can do strictly better in .similarly , s helpers are not allowed to leave even if they have an incentive to .their movement out of will cause to become less happy .also each needed player in is not allowed to leave because at least one player in likes him .now consider a latecomer in .latecomer gets strictly less utility in any coalition . therefore has no incentive to leave .finally , we prove that there exists no player such that has an incentive to and is allowed to join for . by the hypothesis , we already know that does not have an incentive or is allowed to a join a coalition for . 
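To keep the procedure itself in view while the argument proceeds, here is a rough code rendering of the coalition-building loop described above (leader, leader's helpers, needed players, latecomers). The sketch is our own rendering of the informal description, not the pseudocode of algorithm [alg-cis-general]; it assumes utilities are supplied as a nested dictionary u[i][j] with u[i][i] = 0, and ties are broken arbitrarily.

def cis_partition(players, u):
    # Greedy coalition formation in the spirit of the procedure described above.
    # players: iterable of hashable player ids
    # u: nested dict, u[i][j] = value player i assigns to player j (u[i][i] = 0)
    # Returns a list of sets; illustrative sketch only.
    remaining = set(players)
    partition = []
    while remaining:
        i = remaining.pop()  # arbitrary candidate leader
        # best utility i could get by forming a coalition from the remaining players
        best_new = sum(u[i][j] for j in remaining if u[i][j] > 0)
        # best existing coalition that tolerates i (no member strictly dislikes i)
        best_old, target = None, None
        for coal in partition:
            if all(u[m][i] >= 0 for m in coal):
                val = sum(u[i][j] for j in coal)
                if best_old is None or val > best_old:
                    best_old, target = val, coal
        if target is not None and best_old > best_new:
            target.add(i)  # i enters an existing coalition as a "latecomer"
            continue
        # i becomes a leader and takes everyone he strictly likes ("leader's helpers")
        C = {i} | {j for j in remaining if u[i][j] > 0}
        remaining -= C
        # absorb "needed players": tolerated by all members, strictly liked by at least one
        changed = True
        while changed:
            changed = False
            for j in list(remaining):
                if all(u[m][j] >= 0 for m in C) and any(u[m][j] > 0 for m in C):
                    C.add(j)
                    remaining.remove(j)
                    changed = True
        partition.append(C)
    return partition

# toy example; the exact grouping depends on the (arbitrary) processing order
u = {1: {1: 0, 2: 5, 3: 1},
     2: {1: 5, 2: 0, 3: -2},
     3: {1: 1, 2: 0, 3: 0}}
print(cis_partition([1, 2, 3], u))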
since is not a latecomer for , either does not have an incentive to join or is disliked by at least one player in .for ashgs , the problem of testing the core membership of a partition is conp - complete .this fact does not imply that checking the existence of a core stable partition is np - hard .recently , showed that for ashgschecking whether a core stable or strict core stable partition exists is np - hard in the strong sense .their reduction relied on the asymmetry of the players preferences .we prove that even with symmetric preferences , checking whether a core stable or a strict core stable partition exists is np - hard in the strong sense .symmetry is a natural , but rather strong condition , that can often be exploited algorithmically .we first present an example of a six - player ashgwith symmetric preferences for which the core ( and thereby the strict core ) is empty . [ example : symm - core - empty ] consider a six player symmetric ashgadapted from an example by where * ; * ; * ; * ; and * as depicted in figure [ fig : example ] .it can be checked that no partition is core stable for the game .note that if , then and can not be in the same coalition of a core stable partition .also , players can do better than in a partition of singleton players . let coalitions which satisfy individual rationality be called feasible coalitions .we note that the following are the feasible coalitions : , , , , , , , , , , , and .consider partition then , * ; * ; * ; * ; * ; and * . out of the feasible coalitions listed above , the only weakly ( and also strongly ) blocking coalition is in which player 1 gets utility 9 , player 5 gets utility 10 , and player 6 gets utility 11 .we note that the coalition is not a weakly or strongly blocking coalition because player 3 gets utility 9 in it .similarly is not a weakly or strongly blocking coalition because both player 3 and player 5 are worse off .one way to prevent the deviation is to provide some incentive for player not to deviate with and .this idea will be used in the proof of theorem [ prop : corehard ] .we now define a problem that is np - complete is the strong sense : + * name * : exactcoverby3sets ( e3c ) : + * instance * : a pair , where is a set and is a collection of subsets of such that for some positive integer and for each .+ * question * : is there a sub - collection which is a partition of ? + it is known that e3c remains np - complete even if each occurs in at most three members of .we will use this assumption in the proof of theorem [ prop : corehard ] , which will be shown by a reduction from e3c .[ prop : corehard ] checking whether a core stable or a strict core stable partition exists is np - hard in the strong sense , even when preferences are symmetric .let be an instance of e3c where occurs in at most three members of .we reduce to an ashgswith symmetric preferences in which there is a player corresponding to each and there are six players corresponding to each .these players have preferences over each other in exactly the way players have preference over each other as in example [ example : symm - core - empty ] .so , . 
we assume that all preferences are symmetric .the player preferences are as follows : * for , + ; + ; and + ; * for any , + ; and + ; + * for any for valuations not defined above .we prove that has a non - empty strict core ( and thereby core ) if and only if there exists an such that is a partition of .assume that there exists an such that is a partition of .then we prove that there exists a strict core stable ( and thereby core stable ) partition where is defined as follows : for all , * ; * ; * ; * ; * ; and * also for all and for all .we see that for each player , his utility is non - negative . therefore there is no incentive for any player to deviate and form a singleton coalition . from example[ example : symm - core - empty ] we also know that the only possible strongly blocking ( and weakly blocking ) coalition is for any .however , has no incentive to be part because and .also and have no incentive to join because their new utility will become negative because of the presence of the player .assume for the sake of contradiction that is not core stable and can deviate with a lot of .but , can only deviate with a maximum of six other players of type because is present in a maximum of three elements in . in this case gets a maximum utility of only .therefore is in the strict core ( and thereby the core ) .we now assume that there exists a partition which is core stable .then we prove that there exists an such that is a partition of . for any , the new utilities created due to the reduction gadget are only beneficial to , , , and .we already know that the only way the partition is core stable is if can be provided disincentive to deviate with and .the claim is that each needs to be in a coalition with exactly one such that and exactly two other players and such that .we first show that needs to be with exactly one such that .player needs to be with at least one such . if is only with other , then we know that gets a maximum utility of only . also , player can not be in a coalition with and such that and because both and then get negative utility .each also needs to be with at least 2 other players and where and are also members of .if is with at least three players , and , then there is one element among such that .therefore and hate each other and the coalition is not even individually rational .therefore for the partition to be core stable each has to be with exactly one such that and and least 2 other players and where and are also members of .this implies that there exists an such that is a partition of .in this section , we prove that verifying whether a partition is csc stable is conp - complete . interestingly , conp - completeness holds even if the partition in question consists of the grand coalition .the proof of theorem [ th : csc - hard ] is by a reduction from the following weakly np - complete problem .+ * name * : partition + * instance * : a set of positive integer weights such that .+ * question * : is it possible to partition , into two subsets , so that and and ? 
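For orientation, the standard statement of partition reads as follows; this is our restatement of the textbook definition, with W denoting half of the total weight.

% standard statement of PARTITION (our restatement)
\textsc{Partition}: given positive integer weights $w_1,\dots,w_n$ with $\sum_{i=1}^{n} w_i = 2W$,
decide whether there exists a subset $S \subseteq \{1,\dots,n\}$ such that
\[
  \sum_{i\in S} w_i \;=\; \sum_{i\notin S} w_i \;=\; W .
\]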
+ [ th : csc - hard ] verifying whether the partition consisting of the grand coalition is csc stable is weakly conp - complete .the problem is clearly in conp because a partition resulting by a csc deviation from is a succinct certificate that is not csc stable .we prove np - hardness of deciding whether the grand coalition is _ not _ csc stable by a reduction from partition .we can reduce an instance of of partition to an instance where is an ashgdefined in the following way : * , * , * , for all * , * , * for any for which is not already defined , and * .we see that , , for all .we show that is not csc stable if and only if is a ` yes ' instance of partition .assume is a ` yes ' instance of partition and there exists an such that .then , form the following partition then , * ; * ; and * for all . the coalition can be considered as a coalition which leaves the grand coalition so that all players in do as well as before and at least one player in , i.e. , gets strictly more utility .also , the departure of does not make any player in worse off .assume that is a ` no ' instance of partition and there exists no such that .we show that no csc deviation is possible from .we consider different possibilities for a csc blocking coalition : 1 . , 2 . and there exists such that , 3 . , 4 . and , 5 . there exists and such that , , and we show that in each of the cases , is a not a valid csc blocking coalition . 1 .if is empty , then there exists no csc blocking coalition .if is not empty , then and gets strictly less utility when a subset of deviates .2 . in this case, both and gets strictly less utility when leaves .if , then there is no deviation as . if there exists a such that , then and get strictly less utility than in .4 . if , then the utility of no player increases . if , then the utility of and increases but the utility of and decreases . 5 . consider where . without loss of generality, we can assume that and .we know that and gets strictly more utility because they are now in different coalitions . since is a ` no ' instance of partition , we know that there exists no such that . if , then . if , then . thus , if is a ` no ' instance of partition , then there exists no csc deviation . from the proof of theorem [ th : csc - hard ], it can be seen that is not pareto optimal if and only if is a ` yes ' instance of partition .[ th : gc - po ] verifying whether the partition consisting of the grand coalition is pareto optimal is conp - complete .we presented a number of new computational results concerning stable partitions of ashgs .first , we proposed a polynomial - time algorithm for computing a contractually individually stable ( cis ) partition . secondly , we showed that checking whether the core or strict core exists is np - hard in the strong sense , even if the preferences of the players are symmetric .finally , we presented the first complexity result concerning the contractual strict core ( csc ) , namely that verifying whether a partition is in the csc is conp - complete .we saw that considering csc deviations helps reason about the more complex pareto optimal improvements . 
as a result , we established that checking whether the partition consisting of the grand coalition is pareto optimal is also conp - complete . we note that algorithm [ alg - cis - general ] may very well return a partition that fails to satisfy individual rationality , i.e. , players may get negative utility . it is an open question how to efficiently compute a cis partition that is guaranteed to satisfy individual rationality . we also note that theorem [ th : csc - hard ] may not imply anything about the complexity of _ computing _ a csc partition . studying the complexity of computing a csc stable partition is left as future work . h. aziz , f. brandt , and h. g. seedig . optimal partitions in additively separable hedonic games . in _ proceedings of the third international workshop on computational social choice ( comsoc ) _ , pages 271–282 , 2010 . s. barberà , w. bossert , and p. k. pattanaik . ranking sets of objects . in s. barberà , p. j. hammond , and c. seidl , editors , _ handbook of utility theory _ , volume ii , chapter 17 , pages 893–977 . kluwer academic publishers , 2004 .
|
an important aspect in systems of multiple autonomous agents is the exploitation of synergies via coalition formation . in this paper , we solve various open problems concerning the computational complexity of stable partitions in additively separable hedonic games . first , we propose a polynomial - time algorithm to compute a contractually individually stable partition . this contrasts with previous results such as the np - hardness of computing individually stable or nash stable partitions . secondly , we prove that checking whether the core or the strict core exists is np - hard in the strong sense even if the preferences of the players are symmetric . finally , it is shown that verifying whether a partition consisting of the grand coalition is contractually strict core stable or pareto optimal is conp - complete .
|
a common and substantial problem in hydrology is that of estimating the return period of extreme floods .an accurate estimate of extreme floods is of interest in various circumstances , particularly with respect to important civil infrastructure .the design and construction of bridges and roads is often dependent on accurate understanding of river behavior during extreme events .changes in land use , especially in the urban environment , create increasingly more impervious surfaces .this leads to larger and more frequent floods , putting more stresses on flood control structures , such as levees and dams .climate change alters local precipitation patterns and magnitudes .this influences water resource management of reservoirs and rivers , affecting operation of hydroelectric power plants and river transport .the management , operation , and maintenance of this critical infrastructure relies on accurate flood predictions , including predictions for ungauged catchments based on data from gauged river catchments .one of the first approaches to regional flood estimation was the _ index flood method _ , first proposed by .it was designed to deal with cases where little or no at - site data is available for flood assessment by borrowing strength from similar ( e.g. neighboring ) gauged catchments .the method consists of two main steps , namely , regionalization , which includes the identification of geographically and climatologically homogeneous regions , and the specification of a regional standardized flood frequency curve for a -year return period . in section [ sec : model ]a mathematical formalization of the index flood method is used to motivate some of the elements of our proposed model .the index flood method is still widely used today , and further developments of the method were presented in and . starting with the work of , various bayesian extensions have been proposed .although these papers show the usefulness of bayesian methods , they all derive rather directly from the classical index flood method , their main goal is usually to improve the estimation of the index flood coefficient , and they all rely solely on annual maxima .this work improves on the above studies in many important ways : the power relationship used to estimate the index flood coefficients is instead employed in the priors for the parameters of the gumbel distribution , which we have chosen as the distribution for the observations .we use carefully chosen meteorological and topographical covariates , including catchment areas and covariates based on precipitation and temperature measurements , motivated by the work of . in summary, we believe that our work provides a coherent and comprehensive bayesian model , making better use of the available data and prior knowledge .we propose a bayesian hierarchical model for monthly instantaneous extreme flow data from several river catchments .the topographical and climatic covariates facilitate the process of extrapolating the model to ungauged river catchments .several novelties in statistical modeling and inference for flood data are presented here : we use monthly rather than yearly maxima , making better use of the available data .we use a latent gaussian model ( lgm , see e.g. 
) incorporating seasonal dependence , borrowing strength across months . the lgm allows the use of the computationally efficient mcmc split sampling algorithm , while still being sufficiently general to allow for realistic modeling . we use penalised - complexity priors for the hyperparameters of the model , which avoids overfitting , letting the prior knowledge together with the data decide the appropriate level of model complexity . we do a thorough prior elicitation for the regression coefficients of our model , making good use of available prior knowledge . to demonstrate that the proposed model predicts well for ungauged catchments , we perform a cross - validation study , where we leave each river out in turn and predict for it based on the model estimated from the other rivers , for each of the eight rivers . we proceed as follows : section [ sec : data ] presents the data and the hydrological aspects of the problem . section [ sec : model ] introduces the full hierarchical model , provides explanations of the modelling assumptions , and gives a description of the posterior inference . section [ sec : results ] summarizes the results obtained from applying the model to the data . finally , section [ sec : conclusion ] contains the conclusions drawn from the study and some ideas for future research . the streamflow data consist of monthly maximum instantaneous discharges from eight river catchments in iceland . table [ stationtable ] lists the identification number , name and the size of each catchment . even though stations vhm45 and vhm204 have the same name ( vatnsdalsa ) , they correspond to different catchments . the time series were between 20 and 80 years long ( in most cases between 40 and 60 years ) . figure [ fig : iceland ] shows the locations of the eight catchments . [ table [ stationtable ] : characteristics of the catchments used in the study . the station identifications , river names and catchment areas were provided by the icelandic meteorological office . ] figure [ fig : meanplot ] shows the sample mean of the maximum monthly instantaneous flow for each river . the catchments have a seasonal behavior characterised by lower discharge during winter and higher discharge during spring / summer . the high discharge during spring / summer is mainly due to rising temperatures and snow melt , but the specific timing of the snow melt period varies somewhat for these catchments . for each catchment , the following topographic and climatic covariates were considered for extrapolating to ungauged catchments : * catchment area : * the area of the river catchment . * average precipitation : * the averaged monthly precipitation over the entire catchment . to construct this covariate , the precipitation on a 1 km by 1 km grid over the whole of iceland was obtained , which was then integrated over the catchment area . finally , the average over all years was found within each month . * maximum daily precipitation : * daily precipitation over the catchment area within each month was acquired using the same method as for the average precipitation . the value corresponding to the day with the highest precipitation , cumulated over the catchment , was chosen , then the average over all years was found within each month .
*accumulated precipitation : * the accumulated precipitation over the catchment since the start of the hydrological year ( september ) .this covariate was potentially useful for explaining high discharge attributed to snow melt .* average positive temperature : * temperature is available on the same grid as precipitation .these values were obtained in the same manner as the average precipitation within each month , with negative values truncated to zero .* maximum positive temperature : * these values were calculated in a similar way to the maximum precipitation values , with the difference being that negative temperature values were truncated to zero .the gumbel distribution is a common choice for extreme value data , due to its theoretical foundations .we performed an anderson darling goodness - of - fit test for the gumbel distribution , for each river and month .the resulting -values are shown in figure [ fig : pvalues ] .the empirical distribution of the -values is close to standard uniform , which suggests that the gumbel distribution fits the observed data reasonably well .we performed a preliminary analysis of the statistical relationship between maximum instantaneous flow and the topographical and meteorological factors described in section [ sec : data ] .the preliminary analysis was carried out as follows .first , maximum likelihood ( ml ) estimates for both the location and scale parameters of the gumbel distribution were obtained at all rivers and every month .we then fitted log - linear models where the ml estimates of the location and scale parameters , respectively , acted as the response , and all combinations of the aforementioned covariates are assessed .this preliminary analysis revealed a strongly significant log - linear relationship between the ml estimates of the location parameter and catchment area , average precipitation , maximum precipitation and accumulated precipitation .the analysis further showed a strong multicollinearity between average precipitation , maximum precipitation and accumulated precipitation .however , non - significant log - linear relationships were observed between the ml estimates and both average and maximum positive temperature . based on these results and by using a step - wise log - linear model selection algorithm based on aic score , it was decided to include both catchment area ( ) and maximum daily precipitation ( ) as predictive covariates for location parameters .analogous results also hold for the scale parameter .-values from a anderson - darling goodness of fit test for the gumbel distribution . ] in this section , we present the proposed three - level bayesian hierarchical model . at the data level , the observed maxima of instantaneous flow for river , month , and year is assumed to follow a gumbel distribution : where and are the location and scale parameters , respectively . as seen from equation ,these parameters are allowed to differ between both months and rivers . at the latent level ,the logarithm of the parameters and are modeled with a linear regression model within each month , incorporating meteorological and topographical covariates .this approach is inspired by the index flood method , where a linear model is specified for the logarithm of the mean yearly flow maxima , and is similar to the model for yearly maxima of .we build seasonal dependence into the model , letting latent parameters in neighboring months be _ a priori _ positively correlated .full details of the model are given below .let and . 
the linear model for given by where the s are centered log covariates ( except for all and ) and the random effect terms are given a prior enforcing seasonal behavior , described below .this model can be written in matrix form , as follows .collect the covariates in the matrix , such that the first rows of contain the covariates for river over each of the months , the next rows contain the covariates for river , and so on .let denote the columns of , and let and further , , and let , and contain the , and , ordered such that they line up with and .then we may write the model for is similar , with the same covariates , but different coefficients and and error term , and can be written in matrix form as to obtain a latent gaussian model we must specify multivariate normal priors for the coefficients , , and . for and fix , , and and set where the choices of , , and are explained in section [ sec : elic - inform - priors ] .let , be the random intercepts ( ) or slopes ( ) of covariate over the months , and define similarly .we assume the following priors for and , encoding seasonal dependence : where and are unknown variance parameters and is an circular precision matrix that has the vector = s \cdot [ 1 \quad -2(\kappa^2 + 2 ) \quad \kappa^4 + 4\kappa^2 + 6 \quad-2(\kappa^2 + 2 ) \quad 1 ] \ ] ] on its diagonal band , where is a constant ensuring that the inverse of the precision matrix is a correlation matrix .we have fixed to the value , giving the prior correlation of 0.67 between neighboring months , which seems reasonable based on our prior knowledge .note that is a function of , e.g. for , .we here present the priors for the regression coefficients and .for each , and will be given equal priors , since they enter the model in a similar way .the priors specified below are written in terms of .as explained in section [ sec : full - hier - model ] , should be given a multivariate normal prior .we will assume that the elements are _ a priori _ independent , so we need to set independent normal priors for the individual coefficients .we start by considering the coefficient corresponding to the logarithm of the size of the catchment area .first , note that negative values of make little sense , as this corresponds to a larger area giving lower maximum flows than a smaller area , other things being equal . to interpret the effects of varying positive values for , consider precipitation events ( rainy clouds ) moving over the area .each event will have a smaller spatial extent than the catchment area itself , when the catchment area is large ; and a hypothetical increase of the catchment area corresponding to given precipitation event will lead to a smaller fraction of the area being covered by precipitation .this gives a `` clustering effect '' : smaller catchment areas will have a larger proportion covered by precipitation events than larger catchment areas .since the value corresponds to a completely uniform distribution of precipitation ( which is physically implausible ) , this means than is highly likely to be less than one . 
in other words, values correspond to an effect of area which increases larger than linearly , which is unrealistic for the abovementioned reasons .based on the above , we believe that the most sensible values for are in the interval .we propose that the normal prior density for is such that the probability of negative values is and the probability of values greater than one is .these values result in a prior mean of and a prior standard deviation of .considering the effect of precipitation given a fixed area , a similar line of argument can be given for the parameter corresponding to maximum daily precipitation : higher maximum daily precipitation should result in higher flows , so the parameter should be positive . also , is unrealistic for similar reasons as explained above for : natural clustering effects make super - linear effects of precipitation unlikely .accordingly , is given the same prior as .since the data should provide good information for the intercept parameter , there is less of a need to specify an informative prior here .we have therefore chosen a normal density with mean zero and variance as an uninformative prior for the intercept . in this section ,we describe the selection of priors for the hyperparameters and .we start by considering priors for .note first that can be regarded as a flexibility parameter : corresponds to a restricted model where we set , i.e. the _ base model _ without correlated random effects . provide a useful framework for selecting prior densities for flexibility parameters such as : penalised complexity ( pc ) priors .the ideas behind pc priors are thoroughly described in , but we give a short review here .pc priors are constructed based on four underlying principles .the first principle is occam s razor : we should prefer the simpler base model unless a more complex model is really needed .the second principle is using the kullback - leibler divergence ( kld ) as a measure of complexity , where is used to measure the distance between the base model ( ) and the more complex model corresponding to ( the factor is introduced for convenience , giving simpler mathematical derivations ) .the third principle is that of constant - rate penalisation , which is natural if there is no additional knowledge suggesting otherwise .this corresponds to an exponential prior on the distance scale .note that defining the prior on the distance scale implies that pc priors are invariant to reparameterization .the fourth and final principle is _ user - defined scaling _ ,i.e. that the user should use ( weak ) prior knowledge about the size of the parameter to select the parameter of the exponential distribution . provide both theoretical results and simulation studies showing the pc priors good robustness properties and strong frequentist performance .we shall specify independent priors for each component of and each component of .note that this entails specifying separate base models for each component . while the ideal approach would be to specify an overall multivariate pc prior corresponding to the base model, we view this as beyond the scope of this article .it is easy to derive that the pc prior approach results in exponential priors for both the and in this case , see for details , so it only remains to specify the scaling , i.e. the choices of parameters of the respective exponential distributions .the parameter is the standard deviation of the mean zero monthly intercepts , representing the monthly deviations from the overall intercept . 
since our model is on a logarithmic scale , the values and correspond to factors and , respectively , for .accordingly , should be considered to be a wide 95% probability interval .the value of giving this interval is .we take as the 0.95 quantile of the prior for , giving a mean of and a rate of for the exponential prior for .a similar argument can be given for , and we give it the same prior as .since , , and have similar roles in the model , they will given identical , independent , priors .we write in terms of below , with the understanding that the three other priors are identical .it is convenient to use a tail - area argument to specify the scaling .first , consider the sum of the `` fixed effect '' parameter and the `` random effect '' parameter for some month .for the reasons described in section [ sec : elic - inform - priors ] , most of the prior mass of this sum should be between zero and one , but the addition of the random effects term will of course increase the variance , so the masses allowed below zero and above one should be larger than the 5% used in section [ sec : elic - inform - priors ] .we consider 10% prior mass below zero ( and 10% above one ) for to give a relatively large mass outside the interval .this corresponds to a prior standard deviation of approximate for each .since this is a high value , it should be in the upper tail of the prior for : we thus specify that 99% of the mass of should be below the value , giving a rate of approximately ( and a mean of approximately ) for the exponential prior for . in lack of prior knowledge suggesting otherwise , we give equal priors to and .the prior for can be specified in a more straightforward manner using a direct tail - area argument : considering the scale of the problem , it seems highly likely that should be less than ten , so we put the 0.99-quantile of the exponential prior at the value ten .the result is a rate of ( and a mean of ) .as latent models were imposed on both the location and scale parameters of the data density , approximation methods such as the integrated nested laplace approximation were inapplicable in our setting .therefore , mcmc methods were necessary to make posterior inference .however , standard mcmc methods such as single site updating converged slowly and mixed poorly since many model parameters were heavily correlated in the posterior . for these reasons ,all posterior inference was carried out by using the more efficient mcmc split sampler .the mcmc split sampler is a two - block gibbs sampling scheme designed for lgms , where tailored metropolis hastings strategies are implemented within in both blocks .the sampling scheme is well suited to infer lgms with non - gaussian data density where latent models are imposed on both the location and scale parameters .the main idea of the mcmc split sampler is to split the latent gaussian parameters into two vectors , called the `` data - rich '' block and the `` data - poor '' block .the data - rich block consists of the parameters that enter directly into the likelihood function , in our case the location parameters and the scale parameters , for and .the data - poor block consists of the remaining parameters ( in our case , including the regression parameters and hyperparameters ) .an efficient block gibbs sampling scheme can then be implemented by sampling from the full conditional distributions of each block . 
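as a rough illustration of this two - block structure ( only of the blocking idea , not of the actual split sampler , which uses tailored metropolis - hastings updates within each block ) , a generic two - block gibbs loop can be sketched as follows ; the toy bivariate - normal conditionals and all names are hypothetical .

```python
import numpy as np

def two_block_gibbs(sample_block1, sample_block2, init1, init2, n_iter=10_000):
    """Generic two-block Gibbs skeleton: alternately draw each block from its
    full conditional given the other block (and, implicitly, the data)."""
    b1, b2 = init1, init2
    chain = []
    for _ in range(n_iter):
        b1 = sample_block1(b2)   # e.g. the "data-poor" block
        b2 = sample_block2(b1)   # e.g. the "data-rich" block
        chain.append((b1, b2))
    return np.array(chain)

# Toy example: (x, y) standard bivariate normal with correlation rho, so that
# x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
rho = 0.8
rng = np.random.default_rng(0)
cond = lambda other: rng.normal(rho * other, np.sqrt(1.0 - rho**2))

chain = two_block_gibbs(cond, cond, init1=0.0, init2=0.0)
xs, ys = chain[:, 0], chain[:, 1]
print("empirical corr(x, y):", np.corrcoef(xs, ys)[0, 1])   # close to 0.8
```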
for the data - poor block, it turns out that the full conditional is multivariate gaussian , so sampling can be done quickly using a version of the one - block sampler of .the data - rich block can also be sampled efficiently , for details see .the model described in section [ sec : model ] was fitted using the mcmc split sampler , with 30000 iterations , discarding a burn - in of 10000 .runtime on a modern desktop ( ivy bridge intel core i7 - 3770k , 16 gb ram and solid state hard drive ) , was approximately one hour .all the calculations were done using ` r ` .figure [ fig : priorvspost_regression ] shows prior densities ( in orange ) together with posterior densities ( light blue ) for the regression coefficients and .the posteriors look close to being normally distributed .we see that the intercepts ( figures [ fig : beta0 ] and [ fig : alpha0 ] ) are well identified , with modes close to and , respectively , even though they have a vague prior .this is as expected , since the intercepts correspond to an overall , `` average '' level which should be relatively easy to infer .the posteriors for the regression coefficients and , corresponding to log catchment area , ( figures [ fig : beta1 ] and [ fig : alpha1 ] ) , look similar , though the posterior for ( in the model for log scale ) is slightly wider .both have a mode of around 0.75 , and most of the posterior mass in the region between 0.5 and 1 .posteriors for and , corresponding to maximum daily precipitation ( figures [ fig : beta2 ] and [ fig : alpha2 ] ) are wider than those for and , with most of the mass in the region between 0.4 and 1.5 .the posterior mode of is around 0.9 , while the posterior mode of is close to 1.0 .figure [ fig : priorvspost_hyper ] shows prior and posterior densities for all eight hyperparameters of the model .we see that the hyperparameters for the random effects standard deviations and ( ) are all shrunk somewhat towards zero .however , the posterior mode is larger than zero for all hyperparameters , particularly for , where there is very little mass close to zero . for the standard deviations and most of the posterior mass is between 0 and 0.1 , while and ( corresponding to the random intercepts ) have most of their posterior mass between 0 and 0.5 .posteriors for and ( the two residual noise standard deviations of the model ) are well identified , even though they were given an very weakly informative prior .the posterior modes of and are close to 0.5 .figure [ fig : seasonal ] shows the seasonal effects , together with 80% pointwise credible intervals .it seems like there is some evidence for a seasonal effect for ( the intercept of the location model ) , and and ( corresponding to catchment area ) , while this is not so clear for the other parameters .this is consistent with what was seen in figure [ fig : priorvspost_hyper ] , particularly when comparing the posterior for with the corresponding seasonal effect for .[ fig : sigmatau ] the left panels of figure [ fig : cdfqq ] show empirical cumulative distribution functions ( cdfs ) together with cdfs predicted from the model , for three randomly chosen river / month - combinations .the right panels show corresponding pp plots , i.e. 
the empirical cdf is plotted against the cdf predicted from the model for each river and each month . uncertainty bands correspond to pointwise 95% credible intervals . the model seems to fit the data reasonably well . finally , we performed a cross - validation study , by leaving each river out in turn , estimating the full model based on the remaining seven rivers , and predicting for the left - out river . figures [ fig : leaveout1 ] and [ fig : leaveout2 ] show the results for all eight rivers . since the aim is to predict extremes , we do not consider prediction of the lower quantiles , but focus on the median and the 90th percentile . the limited number of data points ( around 50 ) for each river - month combination would make estimation of higher sample quantiles such as 0.95 or 0.99 too noisy . the model seems to predict reasonably well overall , particularly when taking into account that the model was fitted based on only seven river catchments , and that these are purely out - of - sample predictions based on sparse data . the worst prediction is for river vhm19 , which is the smallest river catchment in our data set , and is also somewhat untypical , with the smallest discharge levels overall . it is therefore perhaps not surprising that prediction fails somewhat here . for all the other rivers , however , the predictive accuracy is in our view about as good as can be expected . we have proposed a bayesian hierarchical model for monthly maxima of instantaneous flow . since the number of sites is often small ( as in the data used here ) , the ability to borrow strength between months is very important . rather than performing twelve ( one for each month ) independent linear regressions at the latent level , we fitted a linear mixed model using information jointly from all months and all sites . the use of penalised complexity priors was helpful , giving a good balance between prior information and sparse data . a thorough account of the prior elicitation for both regression coefficients and hyperparameters was given . we argue that the use of pc priors makes hyperprior elicitation easier : the principle of user - defined scaling gives a useful framework for thinking about priors for hyperparameters in complex models . based on a preliminary analysis , it was shown that the gumbel distribution fits the data well in most cases . however , the generalised extreme value distribution is often selected as a model for block extrema , due to its theoretical basis and its containing the gumbel distribution as a special case .
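as a small worked illustration of why the gumbel fit is convenient for the return - period questions that motivate this work , its quantile ( return - level ) function has a simple closed form . the sketch below uses only the standard gumbel formulas ; the location and scale values are hypothetical and are not fitted to the data of this study .

```python
import math

def gumbel_cdf(x, mu, sigma):
    # Gumbel CDF: F(x) = exp(-exp(-(x - mu) / sigma))
    return math.exp(-math.exp(-(x - mu) / sigma))

def gumbel_return_level(T, mu, sigma):
    """Level exceeded on average once every T blocks of maxima, i.e. the
    quantile at non-exceedance probability 1 - 1/T."""
    p = 1.0 - 1.0 / T
    return mu - sigma * math.log(-math.log(p))

mu, sigma = 100.0, 25.0   # hypothetical location and scale
for T in (10, 50, 100):
    x_T = gumbel_return_level(T, mu, sigma)
    print(f"T = {T:3d}:  return level = {x_T:6.1f},  "
          f"CDF check = {gumbel_cdf(x_T, mu, sigma):.4f}")
```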
future research on models for monthly maxima of instantaneous flow should involve assuming the generalised extreme value distribution at the data level . assuming the same shape parameter across months would be a sensible starting point . if that is not sufficient , then assuming that each month has its own shape parameter would be a sensible extension . a crucial aspect of the proposed model is its capacity to predict monthly maxima of instantaneous flow at ungauged sites , provided that catchment covariates are available . the model could also be used to predict annual maxima of instantaneous flow at ungauged sites . the bayesian approach allows for taking parameter uncertainty into account , while also helping to reduce uncertainty by using the regularising priors that are selected here . the result is reasonably good predictions compared to observed data . we thank håvard rue , andrea riebler , daniel simpson and philippe crochet for many helpful comments and suggestions . the data was provided by the icelandic meteorological office . the study was partly funded by the university of iceland research fund . philippe crochet . estimating the flood frequency distribution for ungauged catchments using an index flood procedure . application to ten catchments in northern iceland . technical report , icelandic meteorological office , 2012 . philippe crochet , tómas jóhannesson , trausti jónsson , oddur sigurðsson , helgi björnsson , finnur pálsson , and idar barstad . estimating the spatial distribution of precipitation in iceland using a linear model of orographic precipitation . _ journal of hydrometeorology _ , 8 ( 6 ) : 1285–1306 , 2007 . finn lindgren , håvard rue , and johan lindström . an explicit link between gaussian fields and gaussian markov random fields : the stochastic partial differential equation approach . _ journal of the royal statistical society : series b ( statistical methodology ) _ , 73 ( 4 ) : 423–498 , 2011 . håvard rue , sara martino , and nicolas chopin . approximate bayesian inference for latent gaussian models by using integrated nested laplace approximations . _ journal of the royal statistical society : series b ( statistical methodology ) _ , 71 ( 2 ) : 319–392 , 2009 . daniel p. simpson , thiago g. martins , andrea riebler , geir - arne fuglstad , håvard rue , and sigrunn h. sørbye . penalising model component complexity : a principled , practical approach to constructing priors . _ arxiv preprint arxiv:1403.4630 _ , 2014 .
|
we propose a comprehensive bayesian hierarchical model for monthly maxima of instantaneous flow in river catchments . the gumbel distribution is used as the probabilistic model for the observations , which are assumed to come from several catchments . our suggested latent model is gaussian and designed for monthly maxima , making better use of the data than the standard approach using annual maxima . at the latent level , linear mixed models are used for both the location and scale parameters of the gumbel distribution , accounting for seasonal dependence and covariates from the catchments . the specification of prior distributions makes use of penalised complexity ( pc ) priors , to ensure robust inference for the latent parameters . the main idea behind the pc priors is to shrink toward a base model , thus avoiding overfitting . pc priors also provide a convenient framework for prior elicitation based on simple notions of scale . prior distributions for regression coefficients are also elicited based on hydrological and meteorological knowledge . posterior inference was done using the mcmc split sampler , an efficient gibbs blocking scheme tailored to latent gaussian models . the proposed model was applied to observed data from eight river catchments in iceland . a cross - validation study demonstrates good predictive performance . # 1 0 0 1 0 * a bayesian hierarchical model for monthly maxima of instantaneous flow * _ keywords : _ latent gaussian models , extreme values , hydrology , penalised complexity priors
|
the origin of the moon is one of the most important problems in planetary science . the giant impact ( gi ) hypothesis is currently the most popular , since it can solve difficulties that other models face , such as the current angular momentum of the earth - moon system and the moon s small core fraction compared to the other rocky planets . according to the gi hypothesis , at the late stage of terrestrial planet formation , a mars - sized protoplanet collided with the proto - earth and produced a circumplanetary debris disk , from which the earth s moon was formed . to examine whether this scenario really works or not , a number of numerical simulations of collisions between two planetary embryos have been carried out ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . most of them were done by using the smoothed particle hydrodynamics ( sph ) method , which is a widely used particle - based fluid simulation method developed by and . recently , however , it has been pointed out that the results of numerical simulations of gi by the sph method should be re - examined from the geochemical point of view . recent high - precision measurements of isotope ratios revealed that it is not easy for the gi hypothesis to reproduce the observed properties of the moon . the moon and the earth have almost identical isotopic compositions for oxygen , and for the isotopic ratios of chromium , titanium , tungsten and silicon . this means that the bulk of the moon should come from the proto - earth , unless very efficient mixing occurred for all the isotopic elements . on the other hand , in previous numerical simulations of gi , the disk material comes primarily from the impactor , which is likely to have had a different isotopic composition from that of the earth . to solve this problem , several models have been proposed and studied numerically . these models have a total angular momentum significantly larger than that of the present earth - moon system . models with a fast rotating proto - earth , a hit - and - run collision and a massive impactor have been proposed . although the excess angular momentum is assumed to be removed by the evection resonance with the sun ( e.g. , * ? ? ?* ) , it may work only in a narrow range of tidal parameters . this means that the moon would have been formed by a fortuitous event . recently , however , it has been pointed out that the results of numerical simulations with the standard formulation of sph ( ssph ) are problematic . it turned out that ssph has problems in dealing with a contact discontinuity and a free surface . it is pointed out that these difficulties result in serious problems , such as the treatment of hydrodynamical instabilities ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) . this problem arises from the assumption in ssph that the local density distribution is differentiable , though in a real fluid , the density is not differentiable around the contact discontinuity . as a result , around the contact discontinuity , the density of the low - density side is overestimated and that of the high - density side is underestimated . thus , pressure is also misestimated around the contact discontinuity and an `` unphysical '' repulsive force appears . this unphysical repulsive force causes a strong surface tension which suppresses the growth of hydrodynamical instabilities .
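the over- and underestimate described above is easy to reproduce in one dimension : for equal - mass particles , the standard summation rho_i = sum_j m_j w( |x_i - x_j| ; h_i ) smears the density jump over a few smoothing lengths . the sketch below is a schematic 1d illustration with a cubic spline kernel and is not the 3d setup used in gi simulations .

```python
import numpy as np

def w_cubic_1d(r, h):
    # Cubic spline (M4) kernel in 1D, normalisation 2/3 (Monaghan & Lattanzio 1985).
    q = np.abs(r) / h
    return (2.0 / (3.0 * h)) * np.where(
        q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

# Equal-mass particles with a density jump at x = 0: spacing 0.25 on the left
# (true density 4.0) and 1.0 on the right (true density 1.0).
m, dx_hi, dx_lo = 1.0, 0.25, 1.0
x = np.concatenate([np.arange(-20.0, 0.0, dx_hi), np.arange(0.0, 20.0, dx_lo)])
h = np.where(x < 0.0, 2.0 * dx_hi, 2.0 * dx_lo)   # smoothing length ~ 2 x spacing

# Standard (SSPH) density summation: rho_i = sum_j m_j W(x_i - x_j; h_i).
rho = np.array([np.sum(m * w_cubic_1d(xi - x, hi)) for xi, hi in zip(x, h)])

for xi, ri in zip(x, rho):
    if -1.0 <= xi <= 2.0:
        true = m / dx_hi if xi < 0.0 else m / dx_lo
        print(f"x = {xi:5.2f}   smoothed rho = {ri:5.2f}   true rho = {true:4.2f}")
# The dense side is pulled below 4.0 and the dilute side pushed above 1.0 near
# the jump, which is the misestimate (and spurious pressure) discussed above.
```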
in the gi simulation , since the core - mantle boundary is a contact discontinuity and the planetary surface also has a density jump , the accurate treatment of the contact discontinuities is very important .we have developed a novel sph formulation , density independent sph ( disph ) , to solve the problem for the contact discontinuity .instead of the differentiability of the density , disph requires the differentiability of the pressure . as a result ,disph significantly improves the treatment of the contact discontinuity .thereby , gi must also be re - investigated by disph . in this paper , we present results of gi simulations performed with disph and compare them with those obtained with ssph by focusing on the treatment of contact discontinuity and its impacts on the results .we found that disph produced significantly different debris disk , which should lead to different moon forming process .we concluded that the results of gi is sensitive to the numerical scheme and previous numerical simulations of gi should be re - considered .it is worth noting that reported the results of gi by a three - dimensional grid - base method .they found that the post impact evolution of the disk is different from that of ssph .they pointed out that it is due to the poor description of debris disk .they suggested that the difference may be due to low - resolution for the debris disk in sph calculations .however , it could be rather due to their oversimplified polytropic - like eos .thus , it is not straightforward to compare their results with ssph . reported the comparison of the results of gi between adaptive mesh refinement ( amr ) and sph and concluded that the predicted moon mass of two methods are quantitatively quite similar .although we notice qualitative differences in disk spatial structures in some of these results ( for example , the different clump structure between amr and sph in fig . 4 of ) , our disph also predicts similar moon masses for the collision parameters that they tested , as we will mention in section 5 .comprehensive code - code comparison is needed with grid codes , as well as between disph and ssph . in this paper, we focus on the latter comparison . herewe do not insist that the results of gi simulations by disph are much closer to realistic phenomenon than by ssph .while disph has been improved for treatment of a contact boundary , both disph and ssph have a problem to treat free surface , i.e. , planetary surface .we here stress that only the improvement for treatment of a contact boundary significantly affects properties of circumplanetary disks generated by gi .therefore , we need to be very careful when some definitive conclusions are drawn from the current numerical simulations for gi . to clarify details of moon formation, it is necessary to develop the numerical hydrodynamical scheme for gi that properly treats the planetary surface as well as the core - mantle boundary .this paper is organized as follows . in section 2 ,we briefly describe the numerical technique .we focus on the implementation of disph for non - ideal eos . in section 3 , we describe models of the gi simulations . 
in section 4 , we show the results and comparisons of the gi simulations with the two methods and clarify the reason for the difference in the properties of the generated disks .we also show the results of single component objects , in addition to those of differentiated objects with core - mantle structure .the former and latter simulations discriminate the differences due to a free surface from a core - mantle boundary between the two methods . in section 5 ,we summarize this paper .in the sph method , the evolution of fluid is expressed by the motions of fluid elements that are called sph particles .the governing equations are written in the lagrangian form of hydrodynamic conservation laws .the equations of motion and energy of the -th particles are written as follows : where , , and are the position vector , the acceleration vector , the specific internal energy and the time , respectively .the subscript denotes the value of -th particle .the superscripts , and mean the contributions of the hydrodynamical force evaluated by sph , viscosity and self - gravity , respectively .the formal difference between ssph and disph is in the form of and .the other terms in the right hand side of eqs .( [ eq : motion ] ) and ( [ eq : energy ] ) have the same forms for both methods . since the equations of ssph can be found in the previous literature ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) , here we will only describe those of disph .disph is originally developed by for the ideal gas eos and then extended to an arbitrary eos by .the main advantage of disph is the elimination of unphysical surface tension which rises at the contact discontinuity .the unphysical surface tension in ssph comes from the requirement of the differentiability of the density . developed a new sph formulation which does not require the differentiability of the density , but requires that of ( a function of ) pressure .as a result , their new sph can correctly handle hydrodynamical instability .note that around the shock , neither the pressure nor the density is continuous .thus the assumption of the differentiability of pressure and density is broken across the shock . used the quantity where is as pressure and is a constant exponent for their formulation to improve the treatment of the shock instead of the symbol . ] . applied and show that the treatment the strong shock is improved .however , they considered only ideal gas eos . here , we apply this formulation to fluids with non - ideal eos .the choice of is related with how to treat a free surface such as a planetary surface .disph overestimates the pressure gradient around the free surface , while ssph underestimates it . in appendix[ sec : app1 ] , we show the results of the 3d shock problem simulated with ssph and disph . in the ideal gaseos case ( fig .[ fig : st3d ] ) , the numerical pressure blip at the contact boundary is the smallest for , while the numerical density blip is relatively large for the same . determined as a best choice for ideal gas through more detailed discussions . on the other hand , in the tillotson eos case, it is not clear that which is the best choice . 
in the case of 3d shock problem ( fig .[ fig : st3d ] ) , is the best choice .however , in the case of strong shock test , may be the best choice .tillotson eos ( fig .[ fig : st3dt ] ) shows the different dependence of the numerical pressure blip at the contact boundary on from that with ideal eos .this indicates that the smaller works better to treat large pressure jump , namely , the strong shock and the free surface .we adopt also in this paper , although more comprehensive tests on the choice of are needed in future study .we will show the effects of improved treatment of the free surface and the core - mantle boundary in section 4.1 and 4.2 , respectively . in appendix[ sec : app2 ] , we also show the results of the keplerien disk test and gresho vortex problem .results of the keplerien disk tests with ssph and disph tell us that both schemes can maintain the disk structure during the first several orbital period , whereas catastrophic breakup takes place before orbital period .previous studies also reported the same results . in our simulations of gi , we only follow about times of the rotation period ( hrs at , where is the current earth s radius ) . we thus consider that the effect of the numerical am transfer is not crucial for our simulation results . from the gresho vortex test, we can see that there is no critical difference between results with two schemes .overall , both ssph and disph are capable of dealing with rotation disks with similar degree , as far as the simulation time is less than orbital periods . the essential difference between disph and ssph is in the way to estimate the volume element of a particle , . as the starting point , following and , we introduce the physical quantity : where bracketsmean `` smoothed '' values .hereafter , we denotes as .the value of is given as follows : where and are the kernel function ( see below ) and the so - called smoothing length , respectively . by using and , we derived the equations of motion and energy for our scheme as follows : , \label{eq :disph_motion}\\ \left ( \frac{d u_i}{d t } \right)^{\rm hydro } & = & \sum_j \frac{y_i y_j}{m_i } \frac{\langle y_i \rangle^{1/\alpha - 2}}{\omega_i } ( { \mbox{\boldmath }}_{i } - { \mbox{\boldmath }}_{j } ) \cdot { \mbox{\boldmath } } w({\mbox{\boldmath }}_i - { \mbox{\boldmath }}_j ; h_i ) , \label{eq : disph_energy}\end{aligned}\ ] ] where and are the mass and the velocity vector of particle , respectively . here is the so - called `` grad- '' term ( e.g. , * ? ? ?* ; * ? ? ?* ) ; here , is the value to determine the smoothing length ; note that the choice of and is arbitrary , as far as has the dimension of volume . in this paper ,following , we chose and , where is the density . note that since the interactions between two particles are antisymmetric , our sph conserves the total momentum and energy .the grad- term improves the treatment of the strong shock ( e.g. , * ? ? ?* ; * ? ? ?* ) . in appendix[ sec : app4 ] , we show the results of strong shock with disph both with and without the grad- term . we show that our disph with the grad- term works well for the strong shock .our disph with the grad- term has enough capability for the problems which include strong shock .note that in order to actually perform numerical integration , we need to determine new values of , by solving a set of implicit equations , eq .( [ eq : smooth_p ] ) combined with the equation of state .thus , as in , we iteratively solve eq .( [ eq : smooth_p ] ) . 
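the implicit character of this step can be illustrated with a schematic fixed - point loop : a disph - like quantity depends on the pressure , the pressure ( through the eos ) on the density , and the density on the smoothed sum of that same quantity . the python sketch below is only such a generic loop with hypothetical names and a placeholder kernel ; it is not the exact update of eq . ( [ eq : smooth_p ] ) , for which the reader should consult the original papers .

```python
import numpy as np

def iterate_smoothed_quantity(pos, m, u, h, eos, alpha=0.1, n_iter=3):
    """Schematic fixed-point iteration for a DISPH-like implicit relation.
    eos(rho, u) must return the pressure; all concrete choices (the kernel, the
    definition Y_i ~ p_i^alpha * V_i, number of iterations) are placeholders."""
    # Placeholder Gaussian kernel with a rough 3D normalisation.
    kernel = lambda r, hh: np.exp(-(r / hh) ** 2) / (np.pi ** 1.5 * hh ** 3)
    r_ij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    w = kernel(r_ij, h[:, None])                 # W(r_ij; h_i)

    # Initial guess from the standard density summation and the EOS.
    rho = (m[None, :] * w).sum(axis=1)
    p = eos(rho, u)
    y = p ** alpha * (m / rho)                   # Y_i ~ p_i^alpha * V_i
    for _ in range(n_iter):                      # fixed number of iterations
        y_smoothed = (y[None, :] * w).sum(axis=1)   # <Y_i> = sum_j Y_j W_ij
        vol = y / y_smoothed                     # volume element V_i
        rho = m / vol
        p = eos(rho, u)
        y = p ** alpha * vol
    return p, rho, y

# Usage with a simple ideal-gas EOS, p = (gamma - 1) * rho * u, gamma = 1.4:
# p, rho, y = iterate_smoothed_quantity(pos, m, u, h, lambda rho, u: 0.4 * rho * u)
```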
the number of iterations is set to , following the previous works ( e.g. , sec. 5.6 in * ? ? ?the iteration procedure is the same as that described in , except for the initial guess of .the initial guess of is obtained by the numerical integration of using its time derivative : note that these equations reduce to those of in the case of . for the kernel function , we employ the cubic spline function .note that the use of the cubic spline kernel for the derivative sometimes causes the paring instability . in order to avoid this instability ,we adopt a gradient of the kernel which has a triangular shape . for the artificial viscosity , with both methods , we use the artificial viscosity described in . note that for both methods we use the smoothed density for the evaluation of artificial viscosity .the parameter for the strength of the artificial viscosity is set to be . in order to suppress the shear viscosity ,we apply the balsara switch to the evaluation of the artificial viscosity .the self - gravity is calculated using the standard bh - tree method .the multipole expansion is calculated up to the quadrupole order and the multipole acceptance criterion is the same as .the opening angle is set to be 0.75 . in the sph method ,the timestep is usually determined by the courant condition as follows : where and is the sound speed of particle .here is a cfl coefficient which is set to 0.3 in this paper . in this paper , we consider three additional criteria : fractional changes in the specific internal energy , ( disph only ) , and the accelerations .these are where and are dimensionless timestep multipliers . throughout this paper , we set .note that eqs .( [ eq : dt_u ] ) and ( [ eq : dt_y ] ) are applied when . in order to actually evaluate eqs .( [ eq : disph_motion ] ) , ( [ eq : disph_energy ] ) and ( [ eq : disph_y ] ) , we need the expression of . throughout this paper, we use the tillotson eos , which is widely used in the gi simulations .the tillotson eos contains 10 parameters , which we should choose to describe the given material .the material parameters of the tillotson eos for each material are listed in , page 234 , table aii.3 .note that in the very low density regime , the tillotson eos gives negative pressure which is unphysical on the scale of gi . to avoid numerical instabilities due to negative pressure, we introduce a minimum pressure for the tillotson eos . in the scale of gi ,the typical value of pressure is order of gpa . throughout this paper , thus , we set gpa . also , we impose the minimum timestep to prevent the timestep from becoming too small due to unphysical values of partial derivatives of eos by density .we carefully determine the minimum timestep as one second not to cause poor description of the physical evolution of a system in this paper .in addition , in this case , we do not evaluate hydrodynamical terms in eqs .( [ eq : motion ] ) and ( [ eq : energy ] ) , since this small timestep is actually applied for particles with very low density .we performed numerical simulations of gi from eight initial models . in this section ,we briefly describe how we set up the initial conditions .we first constructed two initial objects , the proto - earth ( target ) and the impactor , which satisfy the given impactor - to - target mass ratio and total mass .we use sph particles in total .following , both objects consist of pure iron cores and granite mantles .first , we place equal - mass sph particles in a cartesian 3d - lattice . 
then , the inner part of the object is set up as iron and the remaining outer part is set up as granite . the initial specific internal energy of particles is set to j / kg and the initial velocity of particles is set to zero . here , and are the gravitational constant and the current earth s mass . we let the sph particles relax to the hydrostatic equilibrium by introducing a damping term to the equation of motion . the end time of this relaxation process is set to seconds , which is about ten times the dynamical time for the target . after this relaxation process , the particle velocities for each particle are of the typical impact velocity ( an order of km / sec in the case of the moon forming impact ) . we constructed eight models . one of them , model 1.10 , corresponds to run # 14 of . they concluded that the moon would form in this run . in this model , the impactor approaches the proto - earth in a parabolic orbit . another model , model 1.17 , was close to run # 7 of . in this model , the initial relative orbit is hyperbolic . the remaining models have the same parameters as those of model 1.10 except for the initial angular momentum . high and low angular momentum models correspond to high - oblique and low - oblique collisions . in all models , the initial objects are non - rotating . we integrated the evolution of these models for about 1 day . this duration of the simulation is smaller than the time scale of numerical angular momentum transfer due to the artificial viscosity ( for detail , see * ? ? ?* ) . when we present the results , we set the time of the first contact of the two objects as time zero . tables [ tb:1 ] and [ tb:2 ] show the summary of the initial conditions . the rows indicate the initial separation between the two objects ( ) , the initial angular momentum of the impactor ( ) , the velocity at infinity ( ) , the total number of particles ( ) , the number of particles of the target ( ) , the number of particles of the impactor ( ) , the mass of the target ( ) , the mass of the impactor ( ) and the mass of the core ( ) ; each column corresponds to one model . here , is the angular momentum of the current earth - moon system . we set km , kg and kg m/sec .

table [ tb:1 ] : summary of the initial conditions ( one column per model ) .
  initial separation                     5.0        5.0        5.0        5.0
  initial angular momentum               0.88       0.99       1.05       1.10
  velocity at infinity ( km / sec )      0          0          0          0
  total number of particles              302,364    302,364    302,364    302,364
  number of particles of target          271,388    271,388    271,388    271,388
  number of particles of impactor        30,976     30,976     30,976     30,976
  mass of target                         1.0        1.0        1.0        1.0
  mass of impactor                       0.109      0.109      0.109      0.109
  mass of core                           0.3        0.3        0.3        0.3

table [ tb:2 ] : summary of the initial conditions ( one column per model ) .
  initial separation                     5.0        5.0        5.0        5.0
  initial angular momentum               1.15       1.17       1.21       1.32
  velocity at infinity ( km / sec )      0          10.0       0          0
  total number of particles              302,364    305,389    302,364    302,364
  number of particles of target          271,388    279,206    271,388    271,388
  number of particles of impactor        30,976     26,183     30,976     30,976
  mass of target                         1.0        1.0        1.0        1.0
  mass of impactor                       0.109      0.1        0.109      0.109
  mass of core                           0.3        0.3        0.3        0.3
in section [ sec : surfacetension ] , we investigate the cause of this difference .we consider collisions between single - component planets consisting of only granite mantle . herewe performed two types of impacts ; one is the collision between equal mass objects , the other is the same target - to - impactor mass ratio as described in section [ initial condition ] , but with single - component objects . since the objects have no core - mantle boundary , the difference between the results of two methods should come from the treatment of the free surface .the initial objects are constructed in a similar way to the prescription in section [ initial condition ] , although there is no iron core in this case . in the run with equal - mass objects ,both objects have mass of and radius of , and the initial specific internal energy is set to be .the initial angular momentum is the same as the current angular momentum of the earth - moon system .the velocity of the impactor at infinity is zero . in this simulationwe employ 300,754 particles in total .figure [ fig : single_comp ] shows the radial profiles of mass and density of the final outcome of the collision of two equal mass objects for both methods . with ssph , a gap in the particle distribution is formed around , while with disph the radial distribution is continuous .this means that ssph produces gap structure between the body and disk and more spreading disk than disph .the gap at is also found in the snapshot on the - plane with ssph ( fig .[ fig : single_comp_snapshots ] ) .there is , however , no physical reason for the formation of this gap . it seems to be natural that the angular momentum distribution is continuous . why the gap is formed in the ssph simulationis most likely the same as the gap formation at the contact discontinuity ( for detail , see section [ sec : surfacetension ] ) .since there is a density jump around the free surface , the free surface is a kind of contact discontinuity .though there is no discontinuity in the density distribution , the slope is steep at 2 - 3 earth radius and the density itself is low .thus , the density difference between two particles radially separated can be very large , resulting in the problem similar to that in the contact discontinuity .disph does not suffer from such a problem .the result in appendix [ sec : app3 ] also suggests that disph is better than ssph for the treatment of the free surface .however , since around the free surface the pressure is not continuous as well as density , it can not be readily concluded that disph is sufficiently improved from ssph for treatment of the free surface .figure [ fig : single_comp_snapshots2 ] shows the snapshots for a set of runs of single - component objects with the mass ratio of 10:1 given by table 1 .they show a similar trend on difference between the two methods to that found in fig .[ fig : single_comp ] .disph generally tends to produce more compact disks than ssph does .note that the sizes of the planet after an impact are roughly the same between ssph and disph . because disph produces more compact disks , the results by disph look as if the planet itself is inflated . in order to compare the results between two methods quantitively, we employ the so - called `` predicted moon mass '' as just a reference value .first , we extract `` disk particles '' from the simulation results . 
following , we classified sph particles to three categories , namely , escaping particles , disk particles and planet particles .particles whose total ( potential + kinetic ) energies are positive are regarded as escaping .if the total energy of a particle is negative and its angular momentum is greater than that of the circular orbit at the surface of the planet , it is categorized as a disk particle .then , other particles are categorized as planet particles , since these particles should fall back to the target .after the particles are classified , we predict the moon mass using information of disk particles . according to the -body simulations by and , the predicted moon mass , , is given by : where , and are the angular momentum of the disk , the roche radius of the planet , the mass of the disk , respectively .note that is the total mass of disk particles that escape from the disk through scattering by accreting bodies .following previous works ( e.g. , * ? ? ?* ; * ? ? ?* ) , we set to .assuming that materials from the proto - earth and the impactor are well mixed in the disk particles , we estimate the fraction of the moon materials originating from the proto - earth .it has been known that the moon and the earth have identical isotope ratios for several elements .this means that the moon should contain large fraction ( ) of materials from proto - earth mantle ( e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?. note that since eq .( [ eq : predmoonmass ] ) is an empirical equation , this equation sometimes yields an unphysical moon mass , such as a negative mass or a greater mass than the disk , in particular for high - oblique impacts .however , in eq . ([ eq : predmoonmass ] ) is a good indicator for quantitative comparison between disph and ssph . in this paperwe use in eq .( [ eq : predmoonmass ] ) as a reference value for the comparison .figure [ fig : single_comp_amvsmm ] shows the predicted moon mass ( ) as a function of initial impact angular momentum ( ) , obtained by the collisions of two single - component objects .both methods have a qualitatively similar -dependence ; increases as increases. however , disph produces more compact disks and accordingly smaller than ssph does .since the two objects have no core - mantle boundary , this difference should come from the treatment of the free surface .figure [ fig : temp ] shows the distributions of specific internal energy for model 1.15 with both methods .the difference between two methods are clear .the first two snapshots for each method look fairly similar ; shock heating and the arm - like structure can be clearly shown . in the panels hrs , however , clear difference between two methods can be seen . with disph ,the arm re - collides to the proto - earth and undergoes shock heating again , which results in hot and compact debris disk ( panel hrs ) . on the other hand , with ssph , cold particles are ejected around the arm - like structure ( panels hrs ) .these ejected particles finally become the cold and expanded disk ( panels hrs ) .note that similar cold and expanded disk can be seen in previous studies with ssph .this difference might come from the treatment of free surface or shock . since in this paper we focused on the treatment of core - mantle boundary , further investigation of the origin of this differenceis left for future works .we will show a similar plot for collisions between differentiated objects . 
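before turning to the differentiated - object runs , the classification just described can be sketched in code . the snippet below is an illustrative python version of the energy and angular - momentum criteria in the text ; the empirical coefficients of the predicted - moon - mass formula are left as inputs because their values come from the n - body studies cited above , and the z - component of the angular momentum is used for simplicity .

```python
import numpy as np

G = 6.674e-11  # gravitational constant [SI units]

def classify_particles(pos, vel, pot, m_planet, r_planet):
    """Classify particles as escaping / disk / planet following the criteria in
    the text: escaping if the specific total energy is positive; disk if bound
    and the specific angular momentum exceeds that of a circular orbit at the
    planet's surface; planet otherwise.  'pot' is the specific potential energy."""
    e_tot = 0.5 * np.sum(vel ** 2, axis=1) + pot
    l_z = np.abs(pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0])
    l_circ = np.sqrt(G * m_planet * r_planet)   # circular orbit at the surface
    escaping = e_tot > 0.0
    disk = (~escaping) & (l_z > l_circ)
    planet = ~(escaping | disk)
    return escaping, disk, planet

def predicted_moon_mass(m_disk, l_disk, m_esc, m_planet, a_roche, c1, c2, c3):
    """Empirical estimate of the form
       M_moon ~ c1 * L_disk / sqrt(G * M_planet * a_Roche) - c2 * M_disk - c3 * M_esc,
    with the fitted coefficients c1, c2, c3 taken from the N-body studies cited
    in the text (deliberately not hard-coded here)."""
    return c1 * l_disk / np.sqrt(G * m_planet * a_roche) - c2 * m_disk - c3 * m_esc
```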
as we will show in fig . [ fig : amvsmm ] , the dependence of on is different from that in fig . [ fig : single_comp_amvsmm ] . the difference is due to the contribution from the core - mantle structure and its boundary . the core - mantle boundary has to be treated properly as well as the free surface in gi simulations . next we discuss details of collisions between differentiated objects with core - mantle structure . we will show that the contact density discontinuity at the core - mantle boundary may be a problem in calculations with ssph and that it is improved with disph . figure [ fig : run1_x ] shows the time evolution of model 1.10 obtained with the two methods . what is shown here is the face - on view ( in the - plane ) of particles with negative values of the coordinate , seen from . the two runs in the first frame ( hr ) look fairly similar . in both runs , the core of the impactor ( black ) is significantly deformed . it pushes up the mantle of the proto - earth ( orange ) , and the mantle of the impactor ( red ) is left behind . in the first four frames , the results of the two methods are qualitatively similar . in both runs , the core of the impactor forms an arc - like structure at hr , which becomes more extended at hr . however , this arc is more extended for the run with ssph . in the frames of hr through hr , most of the mass of the impactor , which has not escaped from the planet's gravitational potential , falls back to the proto - earth in the run with disph , while some of the mass forms an extended envelope and disk in the run with ssph . as we will see in the next section , this difference in the structure causes a large difference in the predicted moon mass . figure [ fig : run1_oe ] shows the edge - on views of the two runs of model 1.10 . the vertical distribution of ejected mantle material is also quite different . in the frames of hr and hr , the results are qualitatively similar . in the frames of hr through , however , ssph produces a vertically stretched disk . on the other hand , with disph , the number of disk particles is much smaller than with ssph . disph produces a thinner disk than ssph . figure [ fig : run1_ang ] shows the distribution of the angular momentum . for hr , the ejecta with high angular momentum ( ) is much more abundant in the ssph result ( lower panels ) than in the disph result . figures [ fig : run4_x ] and [ fig : run3_x ] show the same plots as fig . [ fig : run1_x ] for the models with a low - oblique collision and a high - oblique collision , respectively . from these figures , it is obvious that much more extended debris disks are formed in the runs with ssph than in those with disph . figure [ fig : disksnapshots ] shows the snapshots at the end time of the simulations for all runs . the azimuthal distributions of the disks are different between ssph and disph . with ssph , particles are widely distributed . in particular , in model 1.17 , the distribution is nearly azimuthally symmetric . on the other hand , with disph , the distribution of particles is clearly asymmetric and no large disk is formed . the results of our ssph runs are similar to those in the previous works . in both cases , disks extending to outside the roche radius are formed .
on the other hand , the disk is very thin in our disph runs for these models . we quantify this difference more clearly in section [ sec : surfacetension ] in order to understand the cause of the difference . in this section , we summarize the dependence of the disk properties on the initial impact angular momentum . the conditions for a successful moon - forming impact are defined by 1 ) the predicted moon mass is comparable to or larger than the present moon mass , and 2 ) most of the material of the formed moon comes from the target planet ( the proto - earth ) . figure [ fig : moon_mass]a shows the time evolution of the predicted moon mass for models and with both methods . figures [ fig : moon_mass]b and c show the disk specific angular momentum and mass used to evaluate the moon mass ( eq . [ eq : predmoonmass ] ) , which are calculated by setting the coordinate origin at the barycenter of the target's core particles . figure [ fig : moon_mass]d shows the time evolution of ( the escaping mass due to the impact ) . the oscillation in these quantities is due to the post - impact oscillational deformation of the merged planet . for , obtained with disph is significantly smaller than that with ssph , because is smaller . for model 1.21 and model 1.32 , while with disph is similar to or rather smaller than that with ssph , is higher for disph , resulting in a larger with disph than with ssph . in addition , ssph produces larger amounts of escaping mass than disph does . these result in the larger with disph than with ssph . figure [ fig : amvsmm]a shows the dependence of on for runs with ssph and disph . generally , increases with for both methods as in fig . [ fig : single_comp_amvsmm ] ( runs with single - component objects ) . notice that the dependence is much more sensitive in the differentiated - object impacts ( fig . [ fig : amvsmm]a ) than in the single - component object impacts ( fig . [ fig : single_comp_amvsmm ] ) . for a high - oblique collision , the impact momentum is transferred to ejecta from the outer parts of the impactor and the target . the volume of ejecta may be primarily regulated by a geometrical effect if the collision velocity is fixed . if the volume of ejecta and the momentum transferred to the ejecta are the same for a fixed between an impact of differentiated objects and that of single - component objects , the total ejecta mass from the differentiated objects is smaller and its post - impact velocity is higher than that from the single - component objects . this results in the formation of a more spread - out disk or a hit - and - run collision for the differentiated - object case at high - oblique impacts . with disph , is an order of magnitude smaller than those with ssph for , a trend which is also found in fig . [ fig : single_comp_amvsmm ] . on the other hand , in the case of , disph produces a larger than ssph . this is because in the runs with ssph , more material is ejected during the first contact event than in the disph runs ( fig . [ fig : moon_mass]d ) , probably due to the artificial tension at the core - mantle boundaries of the impactor and the target , as we discuss in detail in section [ sec : surfacetension ] . the panels of relatively high ( models 1.17 , 1.21 and 1.32 ) in fig . [ fig : disksnapshots ] clearly show that much more material is scattered away in the runs with ssph , while more compact clumps remain in the runs with disph .
as a result , disph produces an abrupt transition of the predicted moon mass around . since is sensitive to the distribution of the disk particles , the difference in between ssph and disph is more pronounced than that in the distributions of the formed disks . in model 1.21 and model 1.32 , the predicted moon masses with disph are greater than the disk masses ( see fig . [ fig : moon_mass]a and c ) . since eq . ( [ eq : predmoonmass ] ) is an empirical equation , it may not be appropriate for nearly grazing impacts . the values of should be treated as reference values . figure [ fig : amvsmm]b shows the material fraction from the target in the disk for each model . these results show that the material fraction from the target is significantly smaller than in all models with both ssph and disph . however , in the disph runs , is twice as much as at for . an impact with smaller that produces should have a smaller , while the disk material fraction from the target would be significantly increased . because in that case may be smaller than , we can adopt a higher impact velocity , which may further increase the disk fraction from the target . we will explore the parameters of the impact with and that produces a disk mostly from the target ( the proto - earth ) in a separate paper . in this section we discuss the origin of the difference between the two methods by focusing on the treatment of the core - mantle boundary . figure [ fig : close_up ] shows a close - up view of sph particles in model 1.17 at min . clear gaps in the particle distributions are found near the core - mantle boundaries of both the target and the impactor in the ssph results . in the case of the disph run , such a gap does not exist , and the layers of particles are less clear . the gap visible in the ssph run is due to the unphysical surface tension . in fig . [ fig : a_mean ] , we show the acceleration per particle along the -direction and the -direction , and the torque around the -axis , of the impactor's core and mantle particles . the hydrodynamical forces of the ssph and disph runs during the impact phase are different . in the first 10 minutes , the accelerations in both directions with ssph are larger than those with disph . this difference results in the gap shown in fig . [ fig : close_up ] . from - minutes , the ssph result shows a larger torque than that of disph . figure [ fig : accel ] illustrates the effect of this surface tension .
in the lower left panel ( ssph , min ) , none of the impactor's core particles suffers a negative -directional force , while in the corresponding snapshot with disph at min ( the upper left panel ) , some particles suffer a negative -directional force . the amplitude of the acceleration of the impactor's core particles is much larger for the ssph run . thus , the impactor particles gain upward velocity ( in the direction of the -axis ) , while losing forward velocity ( the negative direction of the -axis ) , compared to the disph run . it is most likely that this difference is due to the numerical error of ssph at the contact density discontinuity ( the core - mantle boundary ) and that it results in the difference in the formed disks between the disph and ssph runs . the giant impact ( gi ) is the most widely accepted model for the origin of the moon . however , it is now being challenged . the identical isotope ratios between the earth and the moon found by recent measurements require a survey of new ranges of impact parameters , because the impact previously referred to as a `` successful moon - forming impact '' produces a moon mostly consisting of materials from the impactor rather than those from the proto - earth . we have re - investigated gi with the newly developed `` density independent sph '' ( disph ) scheme with the tillotson eos . it has recently been recognized that the standard sph scheme ( ssph ) has a serious problem in the treatment of contact discontinuities because ssph assumes differentiability of density . the core - mantle boundary is a contact discontinuity , and the planetary surface ( free surface ) also has a density jump . the errors result in an unphysical surface tension around the contact discontinuity and the free surface . since disph assumes differentiability of pressure instead of density , it can properly treat the core - mantle boundary , although the treatment of the free surface is not significantly improved from ssph , compared with the contact discontinuity . several tests of disph in appendices a , b , c and d show its advantages over , and compatibility with , ssph . recent studies have pointed out that the standard sph scheme ( ssph ) has a serious problem in the treatment of contact discontinuities . the errors result in an unphysical surface tension around the contact discontinuity and the free surface . we extended the density independent sph ( disph ) scheme of to non - ideal eos such as the tillotson eos , as summarized in section 2.1 . we have compared the results between disph and ssph , focusing on the properties of circumplanetary disks generated by gi . to distinguish between the effects of the core - mantle boundary and the free surface , we performed simulations of collisions between two single - component objects and collisions between differentiated objects with core - mantle structure .
in the case of collisions between single - component objects , compared with ssph , disph always produces more compact disks , for which smaller moon masses are predicted . this is because a numerical repulsive force appears around the free surface in ssph runs ( section [ sec : single comp ] ) . note that since the predicted moon mass is sensitively dependent on the distribution of the disk particles , a slight difference in the distribution between ssph and disph can result in a significant difference in the predicted moon mass . on the other hand , in the case of collisions between differentiated objects with a core - mantle boundary , disph predicts more massive moons than ssph does for high - oblique impacts , while it still predicts lower - mass moons for low - oblique impacts ( section [ sec : time evolve ] ) . the dependence on the initial impact angular momentum , which differs from that of the single - component objects , would come from the transfer of impact momentum to the mantle layer with low density and the numerical repulsive force at the core - mantle boundary ( section [ sec : time evolve ] and [ sec : predicted moon mass ] ) . the overall trend that the predicted moon mass increases with the initial impact angular momentum is common between ssph and disph . note that our result is consistent with the conclusion by : ssph and a grid code , amr , produce disks that predict similar moon masses . they did a comparison for impacts with parameters similar to model 1.21 . as we showed in fig . [ fig : amvsmm ] , for model 1.21 , ssph and disph show similar results within 50% of the predicted moon mass . thus , disph produces results consistent with amr for this parameter set . the comparison with grid codes is necessary for other initial impact angular momenta for which disph and ssph significantly differ from each other . however , the clump structures look different among amr , ssph and disph , suggesting that the angular momentum distributions are different among them . what we want to stress in this paper is that the properties of circumplanetary disks generated by gi are sensitive to the choice of the numerical scheme . the difference in the treatment of a contact discontinuity ( the core - mantle boundary ) between disph and ssph alone significantly affects the results . other effects such as the treatment of the free surface , shock propagation , heating and so on are also likely to change the results . the results of gi also depend on the eos , the initial thermal structure , density profiles , material strength , numerical resolution and so on . we need to be very careful when conclusions are drawn from numerical simulations of gi , because planets consist of solid layers with different compositions , not uniform gas , and current numerical schemes have not been developed enough to treat planets . thus , we need to develop numerical codes suitable for gi between planets , step by step .
the next step for disph would be the handling of free surfaces and shock propagation , which currently involves a free parameter . for gi , code - to - code comparisons are now needed . comparison with experiments or other numerical schemes for simple impact problems is also needed to calibrate the code . these are left for future work . the authors thank matthieu laneuville , prof . melosh and the anonymous referees for giving us helpful comments on the manuscript . this work is supported by a grant for the global coe program `` from the earth to earths '' , mext , japan . part of the research covered in this paper was funded by the mext program for the development and improvement for the next generation ultra high - speed computer system , under its subsidies for operating the specific advanced large research facilities . it was also supported in part by a grant - in - aid for scientific research ( 21244020 ) , the grant - in - aid for young scientists a ( 26707007 ) and strategic programs for innovative research of the ministry of education , culture , sports , science and technology ( spire ) . to study the effect of the choice of a parameter in the disph scheme , we performed two 3d shock tube calculations with ssph and disph , varying the parameter . one calculation is with the ideal gas eos and the other is with the tillotson eos . the parameter is taken to be , and . the initial condition of the 3d shock tube problem for the ideal gas eos is set as follows : the velocity of each side is set to be . we employ a 3d computational domain , , and , and periodic boundary conditions are imposed in all directions . we employ the ideal gas eos with specific heat ratio . the particle separation of the high density side is set to ; thus , the total number of particles is . the initial condition of the 3d shock tube problem for the tillotson eos is set as follows : the parameters of granite are adopted for the tillotson eos . the density and specific internal energy are normalized by the reference density and reference energy . the computational domain and number of particles are the same as those in the calculation with the ideal gas eos . figure [ fig : st3d ] shows snapshots at of the 3d shock tube problem for the ideal gas eos with both ssph and disph . both methods show similar results , except for the contact discontinuity ( at ) . as expected , disph shows noticeably smaller pressure blips than ssph at the contact discontinuity for all values of , while larger jumps are found in density with disph . these results are consistent with those shown in and . figure [ fig : st3dt ] shows snapshots at of the 3d shock tube problem for the tillotson eos with both ssph and disph . also in this case , disph shows smaller pressure blips around the contact discontinuity . for all values of , disph shows better treatment of the contact discontinuity . the dependence of the magnitude of the pressure blip on is different between the ideal gas eos and the tillotson eos . with the tillotson eos , the pressure blip is higher for smaller values of , while the blip is still smaller than that with ssph . however , as shown below , the treatment of a free surface is better for smaller even with disph . note that this pressure blip can be a serious problem when we treat the contact discontinuity . and showed the consequence of this pressure blip ( see fig . 4 in * ? ? ?* ) .
in their hydrostatic equilibrium tests , they put a high - density square in a low - density ambient medium in pressure equilibrium . with ssph , the high - density square , which should retain its initial shape , quickly deforms into a circular shape . with disph , on the other hand , the high - density square retains its initial shape . this means that with ssph , simulations suffer from unphysical momentum transfer . and also showed the results of a kelvin - helmholtz instability test , in which the contact discontinuity plays a very important role ( see fig . 5 in * ? ? ?* ) . as expected , ssph shows an unphysical surface tension effect , while disph clearly eliminates it . previous simulations should have suffered from this unphysical effect . the treatment of a region with an abrupt change in the pressure , which corresponds to a free surface , is improved by disph , in particular with a small value of . to demonstrate this , we show the pressure field around the strong shock region with the tillotson eos . the initial pressure distribution is set as follows : the density is uniformly set to be . figure [ fig : strong_shock ] shows the pressure field and the error at the very first step , where the error is defined as follows : this figure clearly shows that taking the parameter to be small improves the treatment of a large pressure jump , such as a free surface . in fig . [ fig : amvsmm_w1 ] , we show the results of three runs of gi with . the results are somewhat different from those with . however , just like the case with , disph produces a smaller moon mass in model 1.10 and a larger moon mass in model 1.32 . our modification of disph has good capability for both the shock and the contact discontinuity , although disph includes the free parameter . the tests of the 3d shock tube and a free surface with the ideal gas eos show that may be the best choice , similar to . however , with the tillotson eos , a different dependence on can be seen . the results for gi with the tillotson eos show a slightly different dependence on , though the number of runs is small . thus , a more careful calibration of the dependence on should be done , which is left for future work . unless otherwise specified , in the following , we adopt . for the calculation of gi problems , it is important to treat the angular momentum transfer correctly . in both ssph and disph , since the interaction between two particles is pairwise , the global angular momentum is conserved . however , it is often said that ssph does not treat the local angular momentum transfer correctly ; unphysical angular momentum transfer due to the so - called zeroth order error and spurious viscosity appears . in this appendix , to test whether ssph and disph can treat the angular momentum transfer correctly or not , we performed two well - posed tests ; one is the keplerian disk test ( e.g. , * ? ? ?* ; * ? ? ?* ) and the other is the gresho vortex test ( e.g. , * ? ? ?* ) . for the keplerian disk test , we initialize a two - dimensional disk whose surface density is set to . the inner and outer edges of the disk are set to and . the initial pressure of the disk is set to and the heat capacity ratio of the ideal gas is set to . the self - gravity between particles is ignored , while the gravity from the central star acts on each particle . in this test we employ particles in total . for the gresho vortex test , we employ a periodic - boundary computational domain with a uniform density of unity . the initial pressure and azimuthal velocity distributions are as follows : where .
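the explicit pressure and velocity profiles are not reproduced above ; for reference , the standard gresho & chan ( 1990 ) vortex profile that is commonly used in such sph tests is sketched below . this is given only as an illustration and is an assumption , since the paper's own expressions are elided here .

```python
import numpy as np

def gresho_vortex(r):
    """Standard Gresho-Chan vortex: azimuthal velocity and pressure as a
    function of radius (illustrative textbook values, not taken from the paper)."""
    r = np.asarray(r, dtype=float)
    v_phi = np.zeros_like(r)
    p = np.full_like(r, 3.0 + 4.0 * np.log(2.0))      # outer region, r >= 0.4
    inner = r < 0.2
    mid = (r >= 0.2) & (r < 0.4)
    v_phi[inner] = 5.0 * r[inner]
    v_phi[mid] = 2.0 - 5.0 * r[mid]
    p[inner] = 5.0 + 12.5 * r[inner] ** 2
    p[mid] = 9.0 + 12.5 * r[mid] ** 2 - 20.0 * r[mid] + 4.0 * np.log(r[mid] / 0.2)
    return v_phi, p
```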
in this test we employ particles in total . figure [ fig : disk ] shows the snapshots of the keplerian disk test with both methods . neither is accurate enough ; the disk breaks up in less than orbits . at the time of orbits , both methods show quite similar results , except at the inner edge . then , ssph shows a catastrophic break - up of the rotating disk and makes a large filament - like structure , similar to the previous studies ( e.g. , * ? ? ?* ) . on the other hand , disph also shows a break - up of the rotating disk . however , disph does not produce a filament - like structure , though the break - up of the disk can be seen . with ssph , the outer regions of the disk still remain , while with disph virtually all regions are distorted . we also note that the radial distributions of the mass and angular momentum are similar between the two methods at the time of orbits ( see fig . [ fig : bin ] ) . however , as expected , the radial distributions of the mass and angular momentum are quite different at the time of orbits . figure [ fig : gresho ] shows the results of the gresho vortex test with both methods . both methods show similar results ; substantial velocity noise appears , similar to the previous studies ( e.g. , * ? ? ?* ; * ? ? ?* ) . from these tests , we conclude that both methods can handle the local angular momentum transfer to the same degree . note that in gi simulations , we set the end time at hrs , which corresponds to about orbital times at the roche limit . figure [ fig : disk ] shows that both methods can treat the local angular momentum transfer until orbital times . overall , both ssph and disph are capable of dealing with rotating disks to a similar degree , as long as the simulation time is less than orbital periods . here we show the results of the sedov - taylor blast wave test , which tests the capability of handling a strong shock . the initial condition of this test is the same as that used in . we employ a 3d computational domain . we first place equal - mass particles in a glass distribution with a uniform density of unity . then , the explosion energy is added to the central particles . in this test we use the ideal gas eos with . figure [ fig : sedov ] shows the results of this test with disph with and without the grad- term . without the grad- term , the shock clearly propagates more slowly than the semi - analytic solution . disph with the grad- term , on the other hand , shows better results than disph without the grad- term . our disph with the grad- term can treat the strong shock . note that these results are consistent with and . to check the capability for the free surface , we performed three simple 2d tests which include a free surface with both disph and ssph . the first test is a hydrostatic equilibrium test , which was carried out by . the second test is a vertical aluminium - to - aluminium impact test and the third test is a glass - on - water test . the latter two tests were performed by . for the first test , we employ a 2d computational domain and particles in total . the periodic boundary condition is imposed in the -direction . we set up a fluid which is initially in pressure equilibrium under a constant gravitational acceleration along the -direction . we fix the positions and internal energies of all particles with . around , there is a free surface . in this test we use the following eos for a linear elastic material : the material parameters and are set to the granite values in the tillotson eos . since this system is in hydrostatic equilibrium , particles should maintain their initial positions .
for the second test , we first placed the target particles in . then , we placed projectile particles with an impact velocity of km / sec . we employ particles in total . we set the impact angle to ( vertical impact ) . the radius of the projectile is set to m. in this test we use the tillotson eos and the material parameters are set to the aluminium values , similar to . note that in this test , we omit the material strength . in the third test , we followed the early time evolution of the glass - on - water test , following . we first place target particles in . then , we placed projectile particles with an impact velocity of km / sec . we employed particles in total . we set the impact angle to ( vertical impact ) . the radius of the projectile is set to mm . we used the tillotson eos and the material parameters are set to those of water for the target and wet tuff for the projectile . figure [ fig : fs ] shows the results of the first ( hydrostatic equilibrium ) test . with ssph , at sound crossing times , particles around the free surface clearly move to positions different from their initial positions , and at sound crossing times , particles move downward . with disph , similar to ssph , the outermost particle layer moves downward . however , the other particles virtually keep their initial positions until and sound crossing times , while the second outermost particle layer moves slightly upward . unlike ssph , disph produces virtually no -directional motions . it is clear that disph can treat the free surface better than ssph . figures [ fig : al_to_al ] and [ fig : raddep ] show the snapshots of the aluminium - to - aluminium test with both methods . both methods produce roughly similar results ; jetting and excavation of the target are produced around the impact site . the crater size and depth are almost indistinguishable between ssph and disph . disph has similar accuracy / errors for the free surface as ssph does . note that there are several differences between the two results , e.g. , the height and expansion of the impact jetting . figure [ fig : glass_on_water ] shows the results of the glass - on - water test with both methods . unlike the aluminium - to - aluminium test , this test contains a contact discontinuity between water and wet tuff . similar to the aluminium - to - aluminium test , the height and expansion of the ejecta curtain are different between the two methods , which could be due to the unphysical surface tension between two different materials arising in ssph calculations . the target particles are pushed up by the projectile particles in the early stage of the impact ( - ) . this results in a higher crater rim with ssph than with disph . at , ssph produces an oblate projectile , while with disph , the projectile and target are mixed . this is clearly due to the unphysical surface tension term , which results in an underestimate of material mixing . this difference may be related to the difference in the impact - generated disks between disph and ssph in the gi simulations . figure [ fig : eject_mass ] shows the cumulative mass of ejecta with a vertical velocity greater than a given velocity . according to the previous works ( e.g. , * ? ? ?* ; * ? ? ?* ) , the results should have a power - law form with a slope of : where and are the velocity of the impactor , the density of the impactor and the density of the target .
here , and are material parameters which are set to be and . both methods reproduced roughly similar results to the experiments shown in . the power - law regime with a slope of is well reproduced with both methods . however , ssph produces a high - speed jetting component ( ) , which can hardly be seen in the experimental results . this difference should come from the fact that the target particles feel an unphysical surface tension from the penetrating projectile , as stated in the previous paragraph . the target particles are pushed up by the projectile particles and acquire high vertical velocities . this could result in the difference in the gi results between the two methods . it is not clear which sph scheme is more correct , especially for the free surface . the tests carried out in this appendix are performed using a 2d cartesian geometry . the results may differ in 2d cylindrical or 3d geometries . thus , it is not straightforward to compare these results with and the experiments . to carry out an appropriate comparison , we need to perform 3d impact tests or use a 2d axisymmetric domain . however , note that disph does not show unphysical behavior compared to the results with a grid code . we need further investigation to find an appropriate treatment of the free surface , which is left for future work . agertz , o. , and 18 colleagues 2007 . fundamental differences between sph and grid methods . monthly notices of the royal astronomical society 380 , 963 - 978 . balsara , d. s. 1995 . von neumann stability analysis of smooth particle hydrodynamics suggestions for optimal algorithms . journal of computational physics 121 , 357 - 372 . barnes , j. , hut , p. 1986 . a hierarchical o(n log n ) force - calculation algorithm . nature 324 , 446 - 449 . benz , w. , slattery , w. l. , cameron , a. g. w. 1986 . the origin of the moon and the single - impact hypothesis . i. icarus 66 , 515 - 535 . benz , w. , slattery , w. l. , cameron , a. g. w. 1987 . the origin of the moon and the single - impact hypothesis . ii . icarus 71 , 30 - 45 . benz , w. , cameron , a. g. w. , melosh , h. j. 1989 . the origin of the moon and the single impact hypothesis . icarus 81 , 113 - 131 . cameron , a. g. w. 1997 . the origin of the moon and the single impact hypothesis v. icarus 126 , 126 - 137 . cameron , a. g. w. , benz , w. 1991 . the origin of the moon and the single impact hypothesis . icarus 92 , 204 - 216 . cameron , a. g. w. , ward , w. r. 1976 . the origin of the moon . lunar and planetary science conference 7 . canup , r. m. , asphaug , e. 2001 . origin of the moon in a giant impact near the end of the earth's formation . nature 412 , 708 - 712 . canup , r. m. 2004 . simulations of a late lunar - forming impact . icarus 168 , 433 - 456 . canup , r. m. 2012 . forming a moon with an earth - like composition via a giant impact . science 338 , 1052 . canup , r. m. , barr , a. c. , crawford , d. a. 2013 . lunar - forming impacts : high - resolution sph and amr - cth simulations . icarus 222 , 200 - 219 . ćuk , m. , stewart , s. t. 2012 . making the moon from a fast - spinning earth : a giant impact followed by resonant despinning . science 338 , 1047 . cullen , l. , dehnen , w. 2010 . inviscid smoothed particle hydrodynamics . monthly notices of the royal astronomical society 408 , 669 - 683 . dehnen , w. , aly , h. 2012 . improving convergence in smoothed particle hydrodynamics simulations without pairing instability . monthly notices of the royal astronomical society 425 , 1068 - 1082 . georg , r. b. , halliday , a. n. , schauble , e. a. , reynolds , b. c.
2007 .silicon in the earth s core .nature 447 , 1102 - 1106 .gingold , r. a. , monaghan , j. j. 1977 . smoothed particle hydrodynamics - theory and application to non - spherical stars .monthly notices of the royal astronomical society 181 , 375 - 389 .gresho , p. m. , chan , s. t. 1990 . on the theory of semi - implicit projection methods for viscous incompressible flow and its implementation via a finite element method that also introduces a nearly consistent mass matrix .ii - implementation .international journal for numerical methods in fluids 11 , 621 - 659 .hartmann , w. k. , davis , d. r. 1975 .satellite - sized planetesimals and lunar origin .icarus 24 , 504 - 514 .hernquist , l. , katz , n. 1989 .treesph - a unification of sph with the hierarchical tree method .the astrophysical journal supplement series 70 , 419 - 446 .holsapple , k. a. 1993 .the scaling of impact processes in planetary sciences . annual review of earth and planetary sciences 21 , 333 - 373 .hopkins , p. f. 2013 .a general class of lagrangian smoothed particle hydrodynamics methods and implications for fluid mixing problems .monthly notices of the royal astronomical society 428 , 2840 - 2856 .hopkins , p. f. 2015 . a new class of accurate , mesh - free hydrodynamic simulation methods . monthly notices of the royal astronomical society 450 , 53 - 110 .hosono , n. , saitoh , t. r. , makino , j. 2013 .density - independent smoothed particle hydrodynamics for a non - ideal equation of state .publications of the astronomical society of japan 65 , .housen , k. r. , holsapple , k. a. 2011 .ejecta from impact craters .icarus 211 , 856 - 875 .ida , s. , canup , r. m. , stewart , g. r. 1997 .lunar accretion from an impact - generated disk .nature 389 , 353 - 357 .kokubo , e. , ida , s. , makino , j. 2000 .evolution of a circumterrestrial disk and formation of a single moon .icarus 148 , 419 - 436 .lucy , l. b. 1977 . a numerical approach to the testing of the fission hypothesis . the astronomical journal 82 , 1013 - 1024 .lugmair , g. w. , shukolyukov , a. 1998 .early solar system timescales according to - systematics .geochimica et cosmochimica acta 62 , 2863 - 2886 .mcnally , c. p. , lyra , w. , passy , j .- c . 2012 . a well - posed kelvin - helmholtz instability test and comparison . the astrophysical journal supplement series 201 , 18 .melosh , h. j. 1989 . book - review - impact cratering - a geologic process .sky and telescope 78 , 382 .monaghan , j. j. , lattanzio , j. c. 1985 . a refined particle method for astrophysical problems .astronomy and astrophysics 149 , 135 - 143 .monaghan , j. j. 1992 . smoothed particle hydrodynamics .annual review of astronomy and astrophysics 30 , 543 - 574 .monaghan , j. j. 1994 .simulating free surface flows with sph .journal of computational physics 110 , 399 - 406 .monaghan , j. j. 1997 .sph and riemann solvers . journal of computational physics 136 , 298 - 307 .nakajima , m. , stevenson , d. j. 2014 .investigation of the initial state of the moon - forming disk : bridging sph simulations and hydrostatic models .icarus 233 , 259 - 267 .okamoto , t. , jenkins , a. ,eke , v. r. , quilis , v. , frenk , c. s. 2003 . momentum transfer across shear flows in smoothed particle hydrodynamic simulations of galaxy formation .monthly notices of the royal astronomical society 345 , 429 - 446 .pahlevan , k. , stevenson , d. j. 2007 .equilibration in the aftermath of the lunar - forming giant impact . earth and planetary science letters 262 , 438 - 449 .pierazzo , e. 
, and 13 colleagues 2008 .validation of numerical codes for impact and explosion cratering : impacts on strengthless and metal targets .meteoritics and planetary science 43 , 1917 - 1938 .price , d. j. 2012 . smoothed particle hydrodynamics and magnetohydrodynamics .journal of computational physics 231 , 759 - 794 .reufer , a. , meier , m. m. m. , benz , w. , wieler , r. 2012 . a hit - and - run giant impact scenario .icarus 221 , 296 - 299 .saitoh , t. r. , makino , j. 2013 . a density - independent formulation of smoothed particle hydrodynamics . the astrophysical journal 768 , 44 .springel , v. 2010 . smoothed particle hydrodynamics in astrophysics .annual review of astronomy and astrophysics 48 , 391 - 430 .springel , v. , hernquist , l. 2002 .cosmological smoothed particle hydrodynamics simulations : the entropy equation .monthly notices of the royal astronomical society 333 , 649 - 664 .thomas , p. a. , couchman , h. m. p. 1992 . simulating the formation of a cluster of galaxies .monthly notices of the royal astronomical society 257 , 11 - 31 .tillotson , j. h. 1962 .metallic equations of state for hypervelocity impact .ga-3216 ( general atomic report : san diego , calfornia ) , 1 - 142touboul , m. , kleine , t. , bourdon , b. , palme , h. , wieler , r. 2007 . late formation and prolonged differentiation of the moon inferred from w isotopes in lunar metals .nature 450 , 1206 - 1209 .valcke , s. , de rijcke , s. , rdiger , e. , dejonghe , h. 2010 .kelvin - helmholtz instabilities in smoothed particle hydrodynamics .monthly notices of the royal astronomical society 408 , 71 - 86 .wada , k. , kokubo , e. , makino , j. 2006 .high - resolution simulations of a moon - forming impact and postimpact evolution .the astrophysical journal 638 , 1180 - 1186 .wiechert , u. , halliday , a. n. , lee , d .-snyder , g. a. , taylor , l. a. , rumble , d. 2001 .oxygen isotopes and the moon - forming giant impact .science 294 , 345 - 348 .wisdom , j. , tian , z. 2015 .early evolution of the earth - moon system with a fast - spinning earth .icarus 256 , 138 - 146 .zhang , j. , dauphas , n. , davis , a. m. , leya , i. , fedkin , a. 2012 .the proto - earth as a significant source of lunar material .nature geoscience 5 , 251 - 255 .
|
at present , the giant impact ( gi ) is the most widely accepted model for the origin of the moon . most of the numerical simulations of gi have been carried out with the smoothed particle hydrodynamics ( sph ) method . recently , however , it has been pointed out that the standard formulation of sph ( ssph ) has difficulties in the treatment of a contact discontinuity such as a core - mantle boundary and a free surface such as a planetary surface . this difficulty comes from the assumption of differentiability of density in ssph . we have developed an alternative formulation of sph , density independent sph ( disph ) , which is based on differentiability of pressure instead of density , to solve the problem of a contact discontinuity . in this paper , we report the results of gi simulations with disph and compare them with those obtained with ssph . we found that the disk properties , such as the mass and angular momentum , produced by disph are different from those of ssph . in general , the disks formed by disph are more compact : while the formation of a smaller - mass moon is expected with disph for low - oblique impacts , the inhibition of ejection would promote the formation of a larger - mass moon for high - oblique impacts . since the improvement of the core - mantle boundary treatment alone significantly affects the properties of circumplanetary disks generated by gi , and disph has not been significantly improved from ssph for a free surface , we should be very careful when conclusions are drawn from numerical simulations of gi , and it is necessary to develop a numerical hydrodynamical scheme for gi that can properly treat the free surface as well as the contact discontinuity .
|
in this paper we address lossy compression of a binary symmetric source . given any realization of a ber( ) source , the goal is to compress by mapping it to a shorter binary vector such that an approximate reconstruction of is possible within a given fidelity criterion . more precisely , suppose is mapped to the binary vector with and is the reconstructed source sequence . the quantity is called the compression rate . the fidelity or distortion is measured by the hamming distance . the goal is to minimize the average hamming distortion . the asymptotic limit is given by the rate - distortion function , where is the binary entropy function . our approach in this paper is based on low - density parity - check ( ldpc ) codes . let be a ldpc code with generator matrix and parity check matrix . encoding in lossy compression can be implemented like decoding in error correction . given a source sequence , we look for a codeword such that is minimized . the compressed sequence is obtained as the information bits that satisfy . even though ldpc codes have been successfully used for various types of lossless data compression schemes , and also the existence of asymptotically capacity - achieving ensembles for binary symmetric sources has been proved , they have not been fully explored for lossy data compression . this is partially due to the long - standing problem of finding a practical source - coding algorithm for ldpc codes , and partially because low - density generator matrix ( ldgm ) codes , as duals of ldpc codes , seemed to be more adapted for source coding and received more attention in the past few years . in , martinian and yedidia show that quantizing a ternary memoryless source with erasures is the dual of the transmission problem over a binary erasure channel . they also prove that ldgm codes , as duals of ldpc codes , combined with a modified belief propagation ( bp ) algorithm can saturate the corresponding rate - distortion bound . following their pioneering work , ldgm codes have been extensively studied for lossy compression by several researchers , . in a series of parallel works , several researchers have used techniques from statistical physics to provide non - rigorous analyses of ldgm codes , and . in terms of practical algorithms , lossy compression is still an active research topic . in particular , an asymptotically optimal low complexity compressor with near - optimal empirical performance has not been found yet . almost all suggested algorithms have been based on some kind of decimation of bp or sp , which suffers from a computational complexity of , and . one exception is the algorithm proposed by murayama . when the generator matrix is ultra sparse , the algorithm was empirically shown to perform very near to the associated capacity , needing computations . a generalized form of this algorithm , called reinforced belief propagation ( rbp ) , was used in a dual setting , for ultra sparse ldpc codes over gf( ) for lossy compression . the main drawback in both cases is the non - optimality of ultra sparse structures over gf( ) , , . as we will see , this problem can be overcome by increasing the size of the finite field . our simulations show that _ -reduced _ ultra sparse ldpc codes over gf( ) achieve near capacity performance for . moreover , we propose an efficient encoding / decoding scheme based on the rbp algorithm .
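to make the benchmark explicit , the sketch below numerically inverts the binary entropy function to obtain the shannon distortion - rate bound for a symmetric binary source under hamming distortion . this is standard material and is given only as a convenience , not as part of the paper's scheme .

```python
import numpy as np

def h2(p):
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def distortion_rate_bound(rate, tol=1e-12):
    """Shannon bound D(R) = h2^{-1}(1 - R) for a symmetric binary source,
    found by bisection on [0, 1/2] (h2 is increasing there)."""
    lo, hi = 0.0, 0.5
    target = 1.0 - rate
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h2(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# for example, a rate of 0.33 gives a distortion bound of about 0.174
print(distortion_rate_bound(0.33))
```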
the rest of this paper is organized as follows . section [ sec : ldpcgf(q ) ] reviews the code ensemble which we use for lossy compression . section [ sec : rbpgf( ) ] describes the rbp algorithm over gf( ) . we also briefly discuss the complexity and implementation of the rbp algorithm . in section [ sec : ilc ] we describe iterative encoding and decoding for our ensemble and then present the corresponding simulation results in section [ sec : result ] . a brief discussion on further research is given in section [ sec : fr ] . in this section we introduce the ultra sparse ldpc codes over gf( ) . as we will see later , near capacity lossy compression is possible using these codes and bp - like iterative algorithms . we follow the methods and notations in to construct irregular bipartite factor graphs . what distinguishes gf( ) ldpc codes from their binary counterparts is that each edge ( ) of the factor graph has a label gf( ) . in other words , the non - zero elements of the parity - check matrix of a gf( ) ldpc code are chosen from the non - zero elements of the field gf( ) . denoting the set of variable nodes adjacent to a check node by , a word with components in gf( ) is a codeword if at each check node the equation holds . a ( ) gf( ) ldpc code can be constructed from a ( ) ldpc code by random independent and identically distributed selection of the labels with uniform probability from gf( ) ( for more details see ) . it is well known that the parity check matrix of a gf( ) ldpc code , optimized for binary input channels , is much sparser than that of a binary ldpc code with the same parameters . in particular , when , the best error rate results on binary input channels are obtained with the lowest possible variable node degrees , i.e. , when almost all variable nodes have degree two . such codes have been called _ ultra sparse _ or _ cyclic _ ldpc codes in the literature . in the rest of this paper we call a ldpc code ultra sparse ( us ) if all variable nodes have degree two and the parity check degree distribution is concentrated for any given rate . it is straightforward to show that for a us - ldpc code defined as above , the check node degree distribution has at most two non - zero values and the maximum check node degree of the code is minimized . given a linear code and an integer , a -reduction of is the code obtained by randomly eliminating parity - check nodes of . for reasons to be clarified in section [ sec : ilc ] , we are mainly interested in -reductions of gf( ) us - ldpc codes for small values of ( ) . note that by cutting out a parity check node from a code , the number of codewords is doubled . this increase in the number of codewords has an asymptotically negligible effect on the compression rate since it only increases by , while the robustness may increase . gf( ) us - ldpc codes have been extensively studied for transmission over noisy channels , , . the advantage of using such codes is twofold . on the one hand , by moving to sufficiently large fields , it is possible to improve the code . on the other hand , the extreme sparseness of the factor graph is well - suited for iterative message - passing decoding algorithms . despite the state - of - the - art performance of moderate length gf( ) us - ldpc channel codes , they have been less studied for lossy compression . the main reason has been the lack of fast suboptimal algorithms .
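as an illustration of this ensemble , the sketch below builds a random parity - check matrix in which every variable node has degree two , check degrees are kept roughly concentrated , edge labels are drawn uniformly from the non - zero field elements ( represented here simply as the integers 1 to q-1 ) , and rows are deleted for the -reduction . this is only a toy configuration - model construction , not the peg construction used later in the paper , and all names are hypothetical .

```python
import numpy as np

def us_ldpc_parity_check(n, m, q, k_reduce=0, seed=0):
    """Toy k-reduced ultra-sparse GF(q) LDPC parity-check matrix.

    n        : number of variable nodes (columns)
    m        : number of check nodes before reduction (rows)
    q        : field order; non-zero labels are stored as integers 1..q-1
    k_reduce : number of randomly removed parity checks
    """
    rng = np.random.default_rng(seed)
    # spread the 2n variable-side edge sockets over the m checks (roughly
    # evenly), then shuffle; multi-edges are not removed in this toy version
    sockets = np.repeat(np.arange(m), -(-2 * n // m))[: 2 * n]
    rng.shuffle(sockets)
    H = np.zeros((m, n), dtype=int)
    for i in range(n):
        for a in sockets[2 * i: 2 * i + 2]:
            H[a, i] = rng.integers(1, q)          # uniform non-zero GF(q) label
    if k_reduce > 0:
        keep = np.sort(rng.choice(m, m - k_reduce, replace=False))
        H = H[keep]                               # k-reduction: drop k checks
    return H
```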
in the next section we present the rbp algorithm over gf( ) and then show that practical encoding for lossy compression is possible by using rbp as the encoding algorithm for the ensemble of -reduced us - ldpc codes . in this section we first briefly review the rbp equations over gf( ) and then discuss in some detail the complexity of the algorithm , following declercq and fossorier . the gf( ) belief propagation ( bp ) algorithm is a straightforward generalization of the binary case , where the messages are q - dimensional vectors . let denote the message vector from variable node to check node at iteration . for each symbol ( ) , the component of is the probability that variable takes the value and is denoted by . similarly , denotes the message vector from check node to variable node at iteration and is its component . also let ( ) denote the set of check ( variable ) nodes adjacent to ( ) in a given factor graph . the constants are initialized according to the prior information . the bp updating rules can be expressed as follows : _ local function to variable : _ _ variable to local function : _ where is the set of all configurations of variables in which satisfy the check node when the value of variable is fixed to . we define the marginal function of variable at iteration as . the algorithm converges after iterations if and only if for all variables and all function nodes , up to some precision . a predefined maximum number of iterations and the precision parameter are the inputs to the algorithm . rbp is a generalization of bp in which the messages from variable nodes to check nodes are modified as follows : where is the marginal function of variable at iteration and is a function into [ 0,1 ] . note that when , rbp is the same as the algorithm presented in for lossy data compression . in this case it is easy to show that the only fixed points of rbp are configurations that satisfy all the constraints . ignoring the normalization factor in ( [ spvar2func ] ) , to compute all variable - to - check - node messages at a variable node of degree we need computations . a naive implementation of gf( ) bp has a computational complexity of operations at each check node of degree . this high complexity is mainly due to the sum in ( [ spfunc2var ] ) , which can be interpreted as a discrete convolution of probability density functions . efficient implementations of function - to - variable node messages based on the discrete fourier transform have been proposed by several authors , see for example , and the references therein . the procedure consists in using the identity , where the symbol denotes convolution . assuming , the fourier transform of each message needs computations and hence the total computational complexity at a check node can be reduced to . this number can be further reduced to by using the fact that , or alternatively by using the summation strategy described in , which has the same complexity but is numerically more stable . therefore , the total number of computations per iteration is , where is the average degree .
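the reinforcement step can be made concrete with the small sketch below : the outgoing variable - to - check message is the ordinary bp message multiplied by the previous - iteration marginal raised to the power , then renormalized . this follows the generic rbp recipe and may differ in minor details from the paper's own ( elided ) equations ; function and variable names are hypothetical .

```python
import numpy as np

def rbp_variable_update(prior_i, incoming, marginal_prev, gamma=1.0):
    """One reinforced-BP update at a variable node over GF(q) (sketch).

    prior_i       : (q,) prior of variable i over the field symbols
    incoming      : dict {check a: (q,) message nu_{a->i}}
    marginal_prev : (q,) marginal of variable i from the previous iteration
    gamma         : reinforcement exponent in [0, 1]
    """
    base = prior_i.copy()
    for nu in incoming.values():
        base = base * nu
    marginal = base / base.sum()                     # current BP marginal

    reinforcement = np.power(marginal_prev, gamma)   # the RBP modification
    outgoing = {}
    for a, nu_a in incoming.items():
        # exclude the message coming from check a itself, then renormalize
        msg = reinforcement * base / np.maximum(nu_a, 1e-300)
        outgoing[a] = msg / msg.sum()
    return outgoing, marginal
```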
in the following three subsections we first describe a simple method for identifying the information bits of a -reduced us - ldpc code and then present a near - capacity scheme for iterative compression ( encoding ) and linear decompression ( decoding ) . for -reduced us - ldpc codes , one can use the _ leaf removal _ ( lr ) algorithm to find the information bits in linear time . in the rest of this section we briefly review the lr algorithm and show that a 1-reduction ( removal of a single check node ) of a us - ldpc code significantly changes the intrinsic structure of the factor graph of the original code . the main idea behind the lr algorithm is that a variable on a leaf of a factor graph can be fixed in such a way that the check node to which it is connected is satisfied . given a factor graph , lr starts from a leaf and removes it as well as the check node it is connected to . lr continues this process until no leaf remains . the residual sub - graph is called the _ core _ . note that the core is independent of the order in which leaves ( and hence the corresponding check nodes ) are removed from the factor graph . this implies that the number of steps needed to find the core also does not depend on the order in which leaves are chosen . while us - ldpc codes have a complete core , i.e. there is no leaf in their factor graph , the -reduction of these codes has an empty core . our simulations also indicate that even a 1-reduction of a code largely improves the encoding under the rbp algorithm ( see section [ sec : result ] ) . how rbp exploits this property is the subject of ongoing research . it is straightforward to show that a code has an empty core if and only if there exists a permutation of the columns of the corresponding parity - check matrix such that for and for all . as we have mentioned , the lr algorithm can also be used to find a set of information bits of a given us - ldpc code . at any step of the lr algorithm , if the chosen leaf is the only leaf of the check node to which it is connected , then its value is determined uniquely as a function of the non - leaf variables of the check node . if the number of leaves is greater than 1 , there are configurations which satisfy the check node after fixing the values of the non - leaf variables . at each step of lr we choose a subset of leaves . this set is denoted by and we call it the free subset at step . note that there are free subsets , among which we choose only one at each step . it is straightforward to show that the union of all free subsets is a set of information bits for a given us - ldpc code . suppose a code of rate and a source sequence is given . in order to find the codeword that minimizes , we employ the rbp algorithm with a strong prior centered around . the sequence of information bits of is the compressed sequence and is denoted by . in order to process the encoding in gf( ) , we first need to map into a sequence in gf( ) . this can simply be done by grouping bits together and using the binary representation of the symbols in gf( ) . given the sequence of information bits , the goal of the decoder is to find the corresponding codeword . this can be done by calculating the , which in general needs computations . one of the advantages of our scheme is that it allows for low - complexity iterative decoding . the decoding can be performed by iteratively fixing variables following the inverse steps of the lr algorithm ; at each step only one non - information bit is unknown and its value can be determined from the parity check . for a sparse parity - check matrix , the number of needed operations is . given an initial vector and a probability distribution over all configurations , the -average distance from can be computed by , where is the set of marginals of .
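before turning to the weight enumerator , the peeling procedure described earlier in this section can be summarized in code . the sketch below peels leaves , records the free ( information ) variables , and keeps the peeling order so that decoding can later back - substitute one unknown per check in reverse order ; the data structures and names are hypothetical and only the logic described above is implemented .

```python
from collections import deque

def leaf_removal(var_to_checks, check_to_vars):
    """Leaf-removal peeling on a bipartite factor graph (sketch).

    Returns the set of free (information) variables, the peeling order as
    (check, solved_variable) pairs (reversing it gives the decoding order in
    which exactly one non-information variable is unknown per check), and the
    residual core of un-peeled check nodes."""
    v_adj = {v: set(cs) for v, cs in var_to_checks.items()}
    c_adj = {c: set(vs) for c, vs in check_to_vars.items()}
    free, order = set(), []
    leaves = deque(v for v, cs in v_adj.items() if len(cs) == 1)
    while leaves:
        v = leaves.popleft()
        if len(v_adj[v]) != 1:
            continue                       # stale entry: already peeled or freed
        c = next(iter(v_adj[v]))
        current_leaves = [u for u in c_adj[c] if len(v_adj[u]) == 1]
        order.append((c, v))               # v is solved by check c at decode time
        free.update(u for u in current_leaves if u != v)
        for u in c_adj[c]:                 # remove check c together with its edges
            v_adj[u].discard(c)
            if len(v_adj[u]) == 1:
                leaves.append(u)
        c_adj[c] = set()
    core = [c for c, vs in c_adj.items() if vs]
    return free, order, core
```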
on the other hand , the entropy of the distribution is defined by . even though it is a hard problem to calculate analytically both the marginals and of a given code , one may approximate them using the messages of the bp algorithm at a fixed point . assuming the normalized distance is asymptotically a self - averaging quantity for our ensemble , represents the logarithm of the number of codewords at distance from . by applying a prior distribution on codewords given by , one is able to sample the sub - space of codewords at different distances from . fig . [ fig : wef ] demonstrates the wef of random gf(q ) us - ldpc codes for rates 0.3 , 0.5 , and 0.7 and field orders 2 , 4 , 16 , 64 and 256 . the blocklength is normalized so that it corresponds to binary digits . ( caption of fig . [ fig : wef ] : the approximate wef of gf( ) us - ldpc codes as a function of for the same blocklength in binary digits . ) though bp is not exact over loopy graphs , we conjecture that the wef calculated for us - ldpc codes is asymptotically exact . this hypothesis can be corroborated by comparing the plot in fig . [ fig : wef ] with the simulation results we obtained by using the rbp algorithm ( fig . [ fig : qperformace ] ) . in all our simulations the parameter of the rbp algorithm is fixed to one and therefore the function is constant and does not depend on the iterations . we also fix the maximum number of iterations to . if rbp does not converge after 300 iterations , we simply restart rbp with a new random scheduling . the maximum number of trials allowed in our simulations is . the encoding performance depends on several parameters such as , , the field order , and the blocklength . in the following we first fix , and , in order to see how the performance changes as a function of . our main goal is to show that there is a trade - off , controlled by , between three main aspects of the performance , namely : the average distortion , the average number of iterations and the average number of trials . the simulations in this subsection are done for a 5-reduced gf(64 ) us - ldpc code with length and rate . the factor graph is made by the _ progressive - edge - growth _ ( peg ) construction . the rate is chosen purposefully from a region where our scheme has the weakest performance . the distortion capacity for this rate is approximately . in fig . [ fig : gamma0 ] we plot the performance as a function of . for we achieve a distortion of , needing only 83 iterations on average and without any need to restart rbp for 50 samples . by increasing to 0.96 , one can achieve an average distortion of , which is only 0.15 db away from the capacity , needing 270 iterations on average . ( caption of fig . [ fig : gamma0 ] : performance as a function of for a peg graph with n=1600 and r=0.33 . the averages are taken over 50 samples . ( a ) average distortion as a function of ; for the rbp does not converge within 300 iterations . ( b ) the average number of iterations . ( c ) the average number of trials . ( d ) the average number of iterations needed for each trial . note that even though the average number of iterations shows a steep increase as a function of , the average number of iterations needed per trial increases only linearly . ) fig . [ fig : qperformace ] shows the distortion obtained by randomly generated 5-reduced gf(q ) us - ldpc codes for , and . the blocklength is fixed to binary digits . for each given code , we choose and so that the average number of trials does not exceed 2 and the average number of iterations remains less than 300 . such values of and are found by simulations .
under these two conditions , we report the distortion corresponding to the best values of the two parameters , averaged over 50 samples . ( caption of fig . [ fig : qperformace ] : the rate - distortion performance of gf( ) ldpc codes encoded with the rbp algorithm for and . the blocklength is 12000 binary digits and each point is the average distortion over 50 samples . ) our results indicate that the scheme proposed in this paper outperforms the existing methods for lossy compression by low - density structures in both performance and complexity . the main open problem is to understand and analyze the behavior of rbp over -reduced us - ldpc codes . as we have mentioned , the -reduction of a us - ldpc code not only provides us with simple practical algorithms for finding the information bits and for decoding , but also largely improves the convergence of rbp . it is interesting to study ultra sparse ensembles where a certain fraction of variable nodes of degree one is allowed . f.k . wishes to thank sergio benedetto , guido montorsi and toshiyuki tanaka for valuable suggestions and useful discussions . g. caire , s. shamai and s. verdu , `` noiseless data compression with low - density parity - check codes , '' _ dimacs series in discrete mathematics and theoretical computer science _ , p. gupta and g. kramer , eds . , american mathematical society , 2004 .
|
in this paper we consider the lossy compression of a binary symmetric source . we present a scheme that provides a low complexity lossy compressor with near optimal empirical performance . the proposed scheme is based on -reduced ultra - sparse ldpc codes over gf( ) . encoding is performed by the _ reinforced belief propagation _ algorithm , a variant of belief propagation . the computational complexity at the encoder is , where is the average degree of the check nodes . for our code ensemble , decoding can be performed iteratively following the inverse steps of the leaf removal algorithm . for a sparse parity - check matrix the number of needed operations is . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ]
|
detection of p - wave arrivals accurately is an important step to make earthquake forecast , to analyze the interior structure of the earth and to study earthquake sources . picking seismic phase arrivalscorrectly is also helpful to discriminate between natural earthquakes and man - made explosions .several p - wave detection techniques utilize only one component of the usually recorded three - component seismogram .these techniques work very well for the case of teleseismic events ( with epicentral distance 3300 km ) since seismic p - waves approach the seismic sensors in a nearly vertical incidence ( where is the epicentral distance between a seismic station and the seismic epicenter ) ( figure 2 ) .magotra attempted to use horizontal component seismograms in order to determine the direction of arrival of seismic waves towards a seismic station . used kurtosis and skewness of the vertical component seismogram to estimate the p - onset time assuming that the maximum of the derivatives of the kurtosis and skewness curve , just before the curve attains its maximum , is considered to be the time of arrival for the p - wave . unlike in teleseismic events , the p - wave signal originating from regional ( 100 km 1400 km ) and local distances 200 km ) arrive at the seismic station with significant strengths in both horizontal and vertical directions - .since the energy is distributed between the horizontal and the vertical recording channels , both horizontal and vertical component seismograms need to be taken into account for improved detection . in thisstudy the seismic vector magnitude of three component seismograms from a single station is used to determine the p - wave arrival without employing the derivatives of the kurtosis and skewness .when a p - wave arrives , we can observe a maximum asymmetry of distribution of the vector magnitude of the three - component seismograms . as a result of this asymmetry ,maximum values of kurtosis and skewness are expected to occur when a p - wave arrives and this information can be extracted and used for the p - arrival detector .thus , one essential hypothesis of this study is that the normalized vector magnitude will show very high kurtosis and skewness magnitude when a p - wave arrives .first we perform window - by - window normalization of the vector magnitude to reduce the huge variations in magnitude resulting from the ground motion and obtain a zero - mean normalized vector magnitude for each window .the normalized vector magnitude is then used as input to the detection and picking system . the automatic detection and picking system makes use of the skewness and kurtosis of the input normalized vector magnitude in order to detect the p - arrival . for a sliding time window of the seismic vector magnitude the values of kurtosis and skewnessare calculated and the time at which these values become maximum / minimum is used to estimate the time of arrival of the p - wave .another important contribution of this study is that unlike some previous studies which are applied only on the vertical component seismogram and used the derivatives within the kurtosis and skewness values for their correction , our technique uses the ratios within the kurtosis and skewness values rather than the derivative to make a correction in the p - onset time .this new technique supposes that the time for p - onset occurs when the hos ( skewness and kurtosis ) values of the vector magnitude begins to change drastically during the course of gaining their maximum values . 
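as a minimal illustration of the two pre - processing steps just described , the following sketch forms the vector magnitude of the three component traces and rescales a window of it to zero mean and unit standard deviation . the original analysis of this paper was carried out in matlab ; this python transcription is ours and is only meant to fix the notation .

import numpy as np

def vector_magnitude(x, y, z):
    # sample-by-sample magnitude of the three-component ground motion,
    # v(n) = sqrt(x(n)^2 + y(n)^2 + z(n)^2)
    x, y, z = (np.asarray(a, dtype=float) for a in (x, y, z))
    return np.sqrt(x * x + y * y + z * z)

def normalize_window(segment):
    # rescale one sliding window of v(n) to zero mean and unit standard deviation
    segment = np.asarray(segment, dtype=float)
    sd = segment.std()
    return (segment - segment.mean()) / sd if sd > 0 else segment - segment.mean()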
in the next subsections we discuss the mathematical background of the implemented technique , our proposed methods , results , discussion , and some concluding remarks .broadband seismograms represent either the ground velocity or displacement .if the three component seismograms representing the motion of a seismic sensor in the three perpendicular directions have magnitudes x , y and z , where x and y are usually the east - west and north - south components , and z is the vertical component , the magnitude of the resultant velocity / displacement vector v ( figure 1 ) can be given by for a sequence of digital seismograms , v can be calculated sample - by - sample as where n is the nth sample .the angle of incidence for the seismic ray in figure 2 can be given in terms of x , y , and z component values as where represents the total horizontal component of the seismic p - wave velocity .for teleseismic earthquakes ( events ) , the p - wave arrives the seismic sensor nearly vertically and thus this angle of incidence is very small . for local and regional earthquakesthis angle of incidence becomes significant .the method developed here is an improved version of the p arrival identification - skewness / kurtosis technique .that method is based on the observation that the noise in the vertical component seismogram shows zero - mean gaussian behavior until the p - wave arrives . on the other hand , unlike the single components , the seismic vector magnitude , v(n ) is always greater than or equal to zero , and it may not show zero mean gaussian behavior .but we can normalize this vector magnitude on a window - by - window basis not only to have a zero mean gaussian behavior , but also to better look at the differences in the computed skewness and kurtosis .the normalized vector magnitude is obtained through the following transformation : each normalized variable is a rescaled ( transformed ) variable of sample for an ith window of mean value and standard deviation .this transformation is linear .figure 3 displays the gaussian behavior of the normalized vector magnitude for a 3-component seismogram .it also shows how the distribution and skewness and kurtosis values for the normalized vector magnitude change drastically when p arrives ( figure 3(c ) ) . as figure 3 clearlydisplays , skewness and kurtosis receive much higher values when a p - wave arrival is included in their calculation ( figure 3(c ) ) as compared to their near - zero values when p - arrival is not included in the sliding window ( figure 3(a),(b ) , and ( d ) ) .figure 4 depicts clearly that the kurtosis and skewness values take their peaks just after p - arrives .this normalization process seems to lead to enhancement of differences and sets the limit in the range of variations .figure 5 shows the enhancement in the maximum values of the quantities under investigation when we apply normalization as compared to the values without normalization .skewness(sk ) and kurtosis ( ku ) for a finite length sequence v(n ) are estimated by : and here and are the mean and standard deviation estimates of and is the length of the finite sequence .generally , hos values for a gaussian distribution are zero . 
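a direct implementation of these estimators on a one - sample - sliding , window - normalized vector magnitude might look as follows . because the text states that gaussian noise gives hos values near zero , kurtosis is implemented here as excess kurtosis ( the fourth standardized moment minus 3 ) , and the window length of 200 samples is an arbitrary choice within the range reported later .

import numpy as np

def skewness(w):
    m, s = w.mean(), w.std()
    return float(np.mean((w - m) ** 3) / s ** 3) if s > 0 else 0.0

def excess_kurtosis(w):
    m, s = w.mean(), w.std()
    return float(np.mean((w - m) ** 4) / s ** 4 - 3.0) if s > 0 else 0.0

def sliding_hos(v, win=200):
    # skewness and (excess) kurtosis of each window of the vector magnitude,
    # after the window has been normalized to zero mean and unit deviation
    v = np.asarray(v, dtype=float)
    sk = np.full(v.shape, np.nan)
    ku = np.full(v.shape, np.nan)
    for i in range(win, len(v) + 1):
        seg = v[i - win:i]
        s = seg.std()
        w = (seg - seg.mean()) / s if s > 0 else seg - seg.mean()
        sk[i - 1] = skewness(w)
        ku[i - 1] = excess_kurtosis(w)
    return sk, ku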
in this case, it is demonstrated that the normalized magnitude yields higher hos values for p - arrival than the values calculated for the background noise and p - wave coda ( figure 3(c ) ) .a generalized hypothesis applicable to local and regional events is proposed .this hypothesis is based on the combination of all three - component seismograms in a single station , and it is supposed to outperform those techniques that are based on just a single component seismogram . following the mathematical equations in section 2 , the vector magnitude of three component seismograms for a single station was calculated .this is followed by normalization of n - sample window of the vector magnitude ( eq .while this window slides to the right by one - sample at a time the window next to it is automatically normalized .this normalization continues until the last window is normalized .the skewness and kurtosis of each normalized window were computed .pure background noise and the seismic signal away from p - phase arrival follow a gaussian distribution with nearly zero hos values . because of high asymmetry and non - gaussian distribution introduced by the p - arrival on the gaussian background noise , the window with p - arrival follows a highly skewed distribution .maximum ( minimum ) hos values are attained for the window that includes the arrival of the p - onset and just few additional samples of the p - wave coda .thus , to determine the p - onset time more accurately , a correction scheme is introduced for the additional samples included after the actual p - onset .the pai - s / k method of , using the vertical component alone , proposes to use the location of the maximum slope as the p - onset time . in this study , however , the p - onset time is taken as the time when the hos values of the normalized vector magnitude start to increase sharply .the hos values change drastically when p breaks out from the background noise level during the p - onset .thus , the magnitude of the ratio of the skewness value on the right - hand - side at the p - onset to the skewness value on the left - hand - side ( background noise ) should be the maximum of all the ratios within the window .the same is true with the ratio of kurtosis values ( figures 6 to 9 ) .for this particular study , we used seismic data sets obtained from integrated research institutions for seismology ( iris ) data management center ( dmc ) depository website . events from several geological terrains were selected .the seismic networks contributing to the iris / dmc which are utilized in this study include the iris - passcal tanzania broadband experiment ( 1994/1995 ) , the global seismic network ( gsn ) , iris / usgs network , pfo of the iris / ida , southern california seismic network ( scsn ) , and pacific northwest seismograph network ( pnsn ) .events within regional and local distance range of 50 km to 1400 km were selected . 
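one possible reading of this ratio - based correction is sketched below : starting from the hos series of the previous sketch , the pick is placed at the sample , within a user - chosen search range before the hos maximum , where the magnitude of the ratio of adjacent hos values is largest . the search range of 50 samples is an assumption of ours , not a value taken from the paper .

import numpy as np

def pick_onset(hos, search=50):
    # hos: sliding-window skewness or kurtosis series (nan before the first
    # complete window).  the pick is the sample, within `search` samples
    # before the hos maximum, at which |hos(n) / hos(n-1)| is largest,
    # i.e. where the hos curve starts to rise drastically.
    hos = np.asarray(hos, dtype=float)
    peak = int(np.nanargmax(hos))
    best_n, best_ratio = peak, 0.0
    for n in range(max(peak - search, 1), peak + 1):
        a, b = hos[n - 1], hos[n]
        if np.isnan(a) or np.isnan(b) or a == 0.0:
            continue
        if abs(b / a) > best_ratio:
            best_ratio, best_n = abs(b / a), n
    return best_n

# usage, with sk and ku from the sliding_hos sketch above:
#   onset_sk, onset_ku = pick_onset(sk), pick_onset(ku)
# as suggested in the text, a detection is more reliable when the
# skewness- and kurtosis-based picks are consistent with each other.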
only broadbandseismic data are used in this study .seismic records with different levels of noise were included .the signal analysis was performed using matlab 7.6 , r2008a ( 3 ghz dual - core intel mac pro ) .the performance of the method using the seismogram vector magnitude has been compared to the pai - s / k ( figures 6 to 9 ) .the improvement of the arrival detection when applying the ratios of hos values as compared to using just the hos values directly as in is clearly shown for both skewness and kurtosis .further discussion is given in the next subsection .sizes of sliding window ranging from 30 to 500 samples are found to exhibit best results .comparison with pai - s / k and other schemes has been made .there are three important differences between the new technique presented in this paper and pai - s / k method .the pai - s / k method is among the many vertical component utilizing methods while our method is a three - component based technique .though the new technique is applicable to all distance ranges , it is more advantageous for regional and local distance range studies .another important difference between the pai - s / k and the new technique is the application of the normalization scheme that helps to reduce the variation in the computed values of skewness and kurtosis from window - to - window . the third difference , which is a striking one , between the pai - s / k and the technique in this paper is a correction procedure , procedure for making correction to the p - wave arrival time estimate . to the best of our knowledge , this new procedure is introduced in this study for the first time to solve such problems .the absolute value of the ratio of right - hand - side ( rhs ) to left - hand - side ( lhs ) hos values gives a very good estimate of the accurate arrival correction .saragiotis et al . pai - s / k method uses a maximum slope correction procedure .figures 6 to 9 show a comparison between these two approaches and our results indicate clear improvements when using the absolute values of the ratios of the adjacent values of hos ( abs(rhs / lhs ) of hos values ) as compared to using the slope ( derivative ) of the hos values .on figures 6 and 7 , the new ratio method for skewness implemented here suggests 11 sample correction while the maximum slope in pai - s / k method suggests 5 sample correction . on figures 8 and 9 ,the ratio approach for kurtosis suggests 11 sample correction while the slope approach of pai - s / k technique indicate 1 sample correction . the actual correction requiredcan be seen by closely looking at the p - arrival very closely ( figure 10 ) .figure 10 indicates that the required correction is 13 samples .our approach gives not only a much better correction in both kurtosis and skewness cases , but also both skewness and kurtosis give consistent correction values .like pai - s / k we suggest to apply both skewness and kurtosis together to detect a p phase arrival .the use of both these statistical quantities instead of just one for detection will constrain and give a better result than using just one of these quantities .the number of false alarms also decreases with the use of the two quantities simultaneously than using just one of them .this article has attempted to make use of vector magnitude of three component seismograms in order to improve p - wave arrival detection system .many single component ( usually the vertical component ) , single - station based methods have been developed for a p - wave arrival detection and picking . 
since seismic waves from regional events approach a seismic station ( sensor ) in a more horizontal incidence than seismic waves from teleseismic events , the energy is generally distributed among the horizontal and vertical recording channels .thus , it is not only advantageous to develop a method that makes use of all the three components recorded by the single station in a combined form but it also gives the technique more generality . though the method is applicable to all distance ranges , it is more advantageous for regional and local distance range studies .we investigated the application of the normalization to vector magnitude .in contrast , the hos values for a p - phase arrival do not improve as compared to other possible seismic phase arrivals which may suggest that normalization may play an important role in the identification of the - more - difficult - to - identify smaller seismic phases . we have proposed to apply kurtosis and skewness on the combination of three component seismograms for picking p wave arrivals automatically and determined that the method introduced by saragiotis et al . can be improved if we use the vector magnitude of seismograms for regional events .this technique makes use of the information from all the three component data as compared to the application of the technique on the single component , vertical , seismogram .the three component seismograms help us determine the magnitude of the total ground motion through their vector magnitude .a new approach for making correction of p - wave arrival time is also proposed and implemented .the absolute value of the ratio of right - hand - side ( rhs ) to left - hand - side ( lhs ) hos values gives a very good estimate of the accurate arrival correction .this has been compared to the derivative ( slope ) values correction used in the saragiotis et al . pai - s / k method and our results indicate improvements when using abs(rhs / lhs ) of hos values as compared to using the slope of the hos values . we also propose to use both skewness and kurtosis together to detect a seismic phase arrival .the use of both these statistical quantities instead of just one for detection will constrain and give a better result than using just one of these quantities .the number of false alarms also decreases with the use of thresholds on both quantities simultaneously than using just one of these two quantities .this study has indicated that the rhos technique can be applied on vector magnitude of three component seismograms for improved detection of p - wave arrivals .it is also shown that the new ratio of hos values technique can be applied just on the vertical component seismograms in lieu of the vector magnitude of three component seismograms and it still gives an improved detection of p - wave arrivals as compared to that of pai - s / k .this new procedure also provides better detection and picking of p - wave arrivals which is essential for locating earthquake sources more accurately .the source of the earthquake is the point where slippage between the fault surfaces or faulting starts inside the earth .thus , rhos would enable us to make more accurate earthquake location using seismic signals .we strongly believe that this new technique has the potential to be applied in a similar fashion for accurate detection and location of fractures in machines or mechanical systems using acoustic signals .christos d. saragiotis , leontios j. hadjileontiadis , and stavros m. 
panas , ( 2002 ) .`` pai - s / k : a robust automatic seismic p phase arrival identification scheme . '' ieee transactions on geoscience and remote sensing , vol .40 , no . 6 , 1395 - 1404 , june 2002
|
in this paper we present two new procedures for automatic detection and picking of p - wave arrivals . the first involves the application of kurtosis and skewness on the vector magnitude of three component seismograms . customarily , p - wave arrival detection techniques use vertical component seismogram which is appropriate only for teleseismic events . the inherent weakness of those methods stems from the fact that the energy from p - wave is distributed among horizontal and vertical recording channels . our procedure , however , uses the vector magnitude which accommodates all components . the results show that this procedure would be useful for detecting / picking of p - arrivals from local and regional earthquakes and man - made explosions . the second procedure introduces a new method called `` ratios in higher order statistics ( rhos ) . '' unlike commonly used techniques that involve derivatives , this technique employs ratios of adjacent kurtosis and skewness values to improve the accuracy of the detection of the p onset . rhos can be applied independently on vertical component seismogram as well as the vector magnitude for improved detection of p - wave arrivals .
|
mass fluctuations along the line of sight to a distant galaxy distort its apparent shape via weak gravitational lensing .if we can measure the `` shear '' field from the observed shapes of galaxies , we can map out the intervening mass distribution . but how should the galaxies shapes be measured ? a monochromatic image of the sky is simply a two - dimensional function of surface brightness , in which the galaxies are isolated peaks .we would like to form local shear estimators from some combination of the pixel values around each peak .the estimators are merely required to trace the true shear signal when averaged over a galaxy population : .individual estimators will inevitably be noisy , because of galaxies wide range of intrinsic ellipticities and morphologies .furthermore , we are primarily interested in distant ( and therefore faint ) galaxies. additional biases from observational noise can therefore be limited by forcing to be a linear ( or only mildly non - linear ) combination of the pixel values .the standard shear measurement method applied to most current weak lensing data was invented by ( * ? ? ?* kaiser , squires & broadhurst ( 1995 ; ksb ) ) .ksb provides a formalism to correct for smearing by a point - spread function ( psf ) , and to form a shear estimator .it uses a galaxy s gaussian - weighted quadrupole ellipticity , because the unweighted ellipticity does not converge in the presence of observational noise .unfortunately , the weight function complicates psf correction , and there is no obvious choice for its scale size .it is important to note that such an ellipticity by itself would _ not _ be a valid shear estimator. it does not respond linearly with shear ; nor is it expected to , and this is a separate issue from the 0.85 calibration factor of .the necessary `` shear susceptibility '' factor , , is calculated from the object s higher - order moments .heymans ( 2004 ; poster at this conference ) finds that most practical problems with the ksb method arise during the measurement of . it can be noisy ( the distribution of then obtains large wings that need to be artificially truncated for to converge ) ; it is a tensor ( for which division is mathematically ill - defined , or inversion numerically unstable ) ; it assumes the object is intrinsically circular ( to eliminate the off - diagonal terms in the tensor ) ; and it needs to be measured from an image _ before _ the shear is applied .the last two problems can never be solved for an individual galaxy because it is impossible to observe the pre - shear sky .they are circumvented by fitting from many galaxies , as a function of their size , magnitude ( _ and ellipticity ! _ ) in a sufficiently wide area to contain no coherent shear signal .however , these steps restrict ksb to a non - local combination of galaxy shapes in a large population ensemble , introduce the problem of `` kaiser flow '' ( kaiser 2000 ) , and also tend to introduce biases of around ten percent . the potential of modern , high resolution imaging surveys to accurately measure shear and reconstruct the mass distribution of the universe is now limited by the precision of ksb .several efforts are under way to invent new shear estimators and shear measurement methods to take advantage of such data ( ( * ? ? ? * bridle et al .2004 ) , ( * ? ? ?* bernstein & jarvis 2002 ) , ( * ? ? ?* goldberg & bacon 2004 ) , ( * ? ? ?* refregier & bacon 2003 ) ) .among the most promising candidates to supercede ksb are shapelets - based analysis methods ( ( * ? ? 
?* refregier 2003 ) , ( * ? ? ?* massey & refregier 2004 ) ) . indeed , the shapelets formalism is a logical extension of ksb , introducing higher order terms that can be used to not only increase the accuracy of the older method , but also to remove its various biases . shapelets has already proved useful for image compression and simulation ( massey et al . 2004 ) and the quantitative parameterisation of galaxy morphologies ( kelly & mckay 2004 ) . it seems reasonable that if it can parameterise the unlensed shapes of galaxies , it should also be able to measure small perturbations in these shapes . figure [ fig : basis ] : _ left panel : _ the polar shapelet basis functions , with the real parts shown in the top half of the plot and the imaginary parts in the bottom half of the plot . the basis functions with are wholly real . in a shapelet decomposition , all of the basis functions are weighted by a complex number , whose magnitude determines the strength of a component and whose phase sets its orientation . arrows indicate the `` bleeding '' of power into four adjacent shapelet coefficients when a small shear is applied . _ right panel : _ reconstruction of irregular hdf galaxies . accurate models can be produced for even these peculiar shapes , using between 12 and 15 , to leave image residuals entirely consistent with noise . the shapelets technique is based around the decomposition of a galaxy image into a weighted sum of ( complete ) orthogonal basis functions where are the `` shapelet coefficients '' . the polar shapelet basis functions are shown in figure [ fig : basis ] . these are successive perturbations around a gaussian of width ( equivalent to in ksb ) , parameterised by indices and . the mathematics of shapelets is somewhat analogous to fourier synthesis , but with a compact support well - suited to the modelling of localised galaxies . for example , a shapelet decomposition can similarly be truncated to eliminate the highly oscillatory basis functions that correspond to noise in the original image . note that figure [ fig : basis ] takes a convenient shorthand form . the basis functions are only defined if and are both even or both odd but , for clarity , their images have been enlarged into the spare adjacent space . the polar shapelet basis functions and coefficients are also complex numbers . however , the constraint that a combined image should be a wholly real function introduces degeneracies : and the coefficients with are wholly real . the top half of figure [ fig : basis ] with shows the real parts of the basis functions ; the bottom half shows the complex parts .
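for readers who wish to experiment , a compact way to obtain shapelet coefficients from a pixellated image is to sample the basis functions on the pixel grid and solve a linear least - squares problem . the sketch below uses the gauss - laguerre form of the polar basis ; it is not the idl package described later , each basis function is used only up to an overall normalization ( which the fit absorbs ) , and the centre and scale are assumed to be already chosen .

import numpy as np
from scipy.special import genlaguerre

def polar_shapelet(n, m, beta, x, y):
    # gauss-laguerre (polar shapelet) basis function with indices (n, m) and
    # scale beta, sampled on pixel offsets x, y; normalization constant omitted
    r2 = (x ** 2 + y ** 2) / beta ** 2
    theta = np.arctan2(y, x)
    k = (n - abs(m)) // 2
    radial = r2 ** (abs(m) / 2.0) * genlaguerre(k, abs(m))(r2) * np.exp(-r2 / 2.0)
    return radial * np.exp(-1j * m * theta)

def decompose(img, xc, yc, beta, nmax):
    # least-squares fit of the shapelet coefficients up to order nmax
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx - xc, yy - yc
    states, cols = [], []
    for n in range(nmax + 1):
        for m in range(-n, n + 1, 2):        # n and m share the same parity
            states.append((n, m))
            cols.append(polar_shapelet(n, m, beta, x, y).ravel())
    a = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(a, img.ravel().astype(complex), rcond=None)
    return dict(zip(states, coeffs))

for a real input image , the fitted coefficients approximately satisfy the degeneracy mentioned above , with the ( n , -m ) coefficient close to the complex conjugate of the ( n , m ) one .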
before finding the shapelet coefficients for an image ,it is necessary to specify the centre of the shapelet basis functions , and their scale size .since shapelets form a complete basis , decomposition is _ possible _ at any value of .however , there is definitely a preferred scale for most galaxies , with which a faithful model can be produced using only a small number of shapelet coefficients . written an algorithm to automatically decompose an arbitrary image into shapelets by exploring values of .it seeks a model of the image that leaves a residual consistent with noise , and chooses the scale size that achieves that goal using the fewest coefficients .the optimal centre of the basis functions can be found simultaneously , by shifting the basis functions so that the model s unweighted centroid is zero .the procedure can also deal analytically with the pixellisation of observational data , and perform deconvolution from a point - spread function .its success at faithfully modelling of even irregular hdf galaxies is demonstrated in the right - hand panel of figure [ fig : basis ] . a complete idl software package to implement the shapelets decomposition of arbitrary images , and to perform analysis and manipulation in shapelet space ,can be downloaded from www.astro.caltech.edu/ / shapelets/.it would be , of course , possible to analyse images using a more physically motivated basis set , or traditional sersic and moffat radial profiles .however , the shapelet basis functions are specifically chosen to simplify image analysis and manipulation as encountered in weak gravitational lensing . as shown by , shears ,magnifications and convolutions are elegantly represented in shapelet space as the mixture of power between an ( almost ) minimal number of adjacent basis states .shapelets are not motivated by their _ physics _but rather their _mathematics_. the burden of proof for shapelets therefore shifts to the question of whether the central cusps and extended wings of real galaxies can be faithfully modelled by a set of functions based around a gaussian .in fact , the recovery of galaxies extended wings is surprisingly complete with this algorithm .the process is helped by the fact that the smooth shapelets basis functions can find faint but coherent signal spread over many pixels , even though it may be beneath the noise level in any given pixel ( and therefore not detected by sextractor ) .a polar shapelet decomposition conveniently separates components of an image that are intuitively different .the index describes the total number of oscillations ( spatial frequency ) and also the size ( radius ) of the basis function .the index describes the degree of rotational symmetry of the basis functions .basis functions with are rotationally invariant .a circularly symmetric object contains power only in these states ; its flux and radial profile are defined by the realtive values of its coefficients .an object containing only states will be a useful place to start for lensing analysis because , if galaxies intrinsic ellipticities are uncorrelated , the ensemble average of an unlensed population will indeed be circularly symmetric . 
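returning to the automated choice of scale described at the start of this section , a crude stand - in for that algorithm is to scan a range of beta values and keep the one whose truncated model leaves the smallest residual . the real package also optimizes the centre and the truncation order and treats pixellisation and the psf analytically , so the sketch below ( which reuses polar_shapelet and decompose from the previous sketch ) is only a toy version .

import numpy as np

def reconstruct(coeffs, shape, xc, yc, beta):
    # rebuild the model image from a dictionary of shapelet coefficients
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx - xc, yy - yc
    model = np.zeros(shape, dtype=complex)
    for (n, m), c in coeffs.items():
        model += c * polar_shapelet(n, m, beta, x, y)
    return model.real

def best_beta(img, xc, yc, betas, nmax=8):
    # toy scale optimization: keep the beta whose nmax-truncated model
    # leaves the smallest sum of squared residuals
    best, best_res = None, np.inf
    for beta in betas:
        coeffs = decompose(img, xc, yc, beta, nmax)
        res = np.sum((img - reconstruct(coeffs, img.shape, xc, yc, beta)) ** 2)
        if res < best_res:
            best, best_res = beta, res
    return best, best_res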
even in a typical galaxy ,most of the power compactly occupies shapelet coefficients with low , and particularly those with or 2 .basis functions with are invariant only under rotations of .these coefficients encode an object s centroid : their real and imaginary parts correspond to displacements in the and directions .alternatively , their moduli correspond to an absolute distance , and their phases indicate a direction .basis functions with are invariant under rotations of , and become negative versions of themselves under rotations of .these are precisely the properties of an ellipse .indeed , an object s gaussian - weighted ellipticity is simply given by .its unweighted ellipticity is a combination of all of the shapelet coefficients .ellipticity estimators can also incorporate coefficients with , because their basis functions also contain at least the necessary symmetries .all linear transformations can be described in shapelet space by the mixing of power between a few adjacent shapelet coefficients .for example , let us begin with an object containing power in just its coefficient .this is indicated by the arrows overlaid on figure [ fig : basis ] . as the object is sheared by , this power `` bleeds '' into four nearby coefficients by an amount proportional to .thus ( recall that is complex ) . to first order , is unchanged .the diagonal pattern of the arrows is identical across the _ _ v__s plane , although the constant varies as a function of and . for more details , see .an initially circular object may contain power in all of its coefficients .after a small shear , it also contains power in its coefficients : the combination of circularly - symmetric plus quadrupole states produces an ellipse .weak shear estimators primarily involve combinations of the shapelet coefficients .for example , the coefficient is the ksb ellipticity estimator . for a circularly symmetric object, this will have been affected under the shear by the initial values of and .a weighted combination of these two ( real ) coefficients gives the trace of the ksb shear susceptibility tensor ( ignoring terms involving correction for psf anisotropy ) .a non - circularly symmetric object can also contain nonzero coefficients . under a shear ,the coefficients affect the coefficient ( plus some coefficients ) to order .indeed , are the off - diagonal components of the ksb shear susceptibility tensor .unfortunately , the complex conjugation of mixes the and signals between the real and imaginary parts of the .it becomes impossible to disentangle the two components of shear ; and ksb can only work by averaging the shapes of many galaxies , to ensure that the population s initial coefficients are precisely zero .using shapelets , _ every _ shapelet coefficient with can provide a statistically independent ellipticity estimator .each of these has an effective involving its adjacent and coefficients .multiple shear estimators are very useful .firstly , they can act as a consistency check to examine measurement errors within each object. 
they can also be combined to increase s / n : either by a simple average , or in more sophisticated ways that remove some of the biases of ksb ( while staying stay linear in flux ) .for example , it is possible to take a linear combination of coefficents that is independent of the choice of .however , the most successful estimator involves a `` multiple multipole '' combination of shapelet coefficients that has .this is an exciting result for weak lensing , solving all of the problems with ksb s listed in [ intro ] .an object s flux is its zeroth - order moment , which can be measured with less noise ; it is a single , real number ; and this shear susceptibility is unchanged by a ( pure ) shear .we can therefore form shear estimators using individual galaxies rather than having to average over a population ensemble .the method works stably for any galaxy morphology , because it does not rely on objects initially having zero coefficients .this is particularly important when the calibration is perfomed on simple image simulations using elliptical galaxies with concentric isophotes . as a final note of caution , gravitational lensingdoes not apply a pure shear : it also applies a magnification , of the same order as .the enlargement caused by a lensing magnification mixes power between a small number of nearby shapelet coefficients .however , an enlargement is also equivalent to a increase of : this effect is therefore eliminated from lensing measurements using a decomposition method with an adaptative choice of .ksb and shapelet shear measurements are also insensitive to the flux amplification , because they are all formed from one linear sum of coefficients divided by another .to test ( and calibrate ) various shear measurement methods , we have created simulated images containing a known shear signal , .we can then compare measured values to the true value .our simulated images mimic the depth , pixellisation and psf of the hdf , but the galaxies are simply parameterised by concentric ellipses with an exponential radial profile .such objects are chosen to make the test especially challenging for shapelets - based methods : their central cusps and extended wings of such objects will be hard to match , while the concentric isphotes improve the prospects for ksb , that effictively measures shear at only one fixed radius .we have created many 7.5 square degrees simulated images , each containing a constant input signal in one component of shear , and zero in the other . every shear measurement ( and each point in figure [ fig : results ] )can therefore be performed as an average over a realistic population of galaxy sizes , magnitudes and intrinsic ellipticities .the shear measured by a real ksb pipeline , , is shown in the left - hand panel of figure [ fig : results ] .almost identical results can be reproduced using the shapelets software to imitate ksb .the statistical errors are quite large and there is calibration bias , as already noticed by . the value of the calibration factor can vary as a function of exposure time and galaxy morphology , and therefore needs to be calibrated using realistic simulated images .the precision of the ksb method is therefore dependent upon the fidelity of the simulated images used to test it . 
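the mixing of power into the m = 2 states under a small shear , described earlier , can be checked numerically with the decomposition sketch given above : decompose a circularly symmetric test profile and a slightly sheared copy of it , then compare the coefficients . the gaussian test profile , the 5% shear and the stamp size below are arbitrary choices made only for this check .

import numpy as np

def gaussian_blob(shape, xc, yc, r0, g1=0.0, g2=0.0):
    # circular gaussian profile evaluated on coordinates sheared by (g1, g2),
    # using the first-order inverse shear mapping
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x, y = xx - xc, yy - yc
    xs = (1.0 - g1) * x - g2 * y
    ys = -g2 * x + (1.0 + g1) * y
    return np.exp(-(xs ** 2 + ys ** 2) / (2.0 * r0 ** 2))

shape, xc, yc, beta = (64, 64), 31.5, 31.5, 6.0
round_img = gaussian_blob(shape, xc, yc, r0=6.0)
sheared = gaussian_blob(shape, xc, yc, r0=6.0, g1=0.05)

c0 = decompose(round_img, xc, yc, beta, nmax=8)   # from the earlier sketch
c1 = decompose(sheared, xc, yc, beta, nmax=8)
for state in [(2, 2), (4, 2), (0, 0), (2, 0)]:
    print(state, abs(c0[state]), abs(c1[state]))
# the m = 2 coefficients, essentially zero for the round object,
# become of order the applied shear after the distortion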
using the shapelets formalism , we can derive many statistically independent shear estimators for each object . the middle panel of figure [ fig : results ] shows their ( noise - weighted ) average . this does indeed have higher s / n than the ksb measurement ; however , it still exhibits the familiar calibration bias . the right - hand panel of figure [ fig : results ] shows results for the multiple multipole shear estimator . this is very sensitive to weak shears : indeed , a measurement of the components of shear that are not shown in figure [ fig : results ] ( which are all zero ) gives 0.06%.10% . for large input shear values ,% , the precision of this shapelets - based shear estimator is also sufficient to detect deviations from the weak shear approximation . a shapelet decomposition parameterises _ all _ of an object s shape information , in a convenient and intuitive form . several shear estimators can be formed from combinations of shapelet coefficients . these are not only more accurate than ksb , but also more stable . in particular , the use of higher order moments to analytically remove any calibration factor reduces the reliance of older methods upon simulated images to faithfully model all aspects of observational data . figure [ fig : results ] : _ left panel : _ shear recovery using the ksb pipeline ; the calibration is biased at the % level , and the calibration bias has to be measured using simulated images , then corrected for in real data . _ middle panel : _ shear recovery by combining shapelet coefficients to create multiple , ksb - like estimators . the s / n improves , but the calibration bias remains . _ right panel : _ shear recovery using a more sophisticated , `` multiple multipole '' shapelets - based shear estimator . this is precise enough to detect deviations from the weak shear approximation at high values of .
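finally , a toy version of why averaging over a galaxy population recovers the shear at all : give every galaxy a random intrinsic complex ellipticity , add a constant shear at first order , and take the mean . this is a statistical illustration only , not a substitute for the image - level simulations described above ; the dispersion and sample size are arbitrary .

import numpy as np

rng = np.random.default_rng(1)

def toy_shear_recovery(gamma=0.05 + 0.0j, n_gal=100000, sigma_e=0.3):
    # intrinsic ellipticities: complex, isotropic, total dispersion sigma_e
    e_int = rng.standard_normal(n_gal) + 1j * rng.standard_normal(n_gal)
    e_int *= sigma_e / np.sqrt(2.0)
    e_obs = e_int + gamma             # weak-shear, first-order approximation
    gamma_hat = e_obs.mean()
    err_per_component = sigma_e / np.sqrt(2.0 * n_gal)
    return gamma_hat, err_per_component

print(toy_shear_recovery())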
|
the measurement of weak gravitational lensing is currently limited to a precision of % by instabilities in galaxy shape measurement techniques and uncertainties in their calibration . the potential of large , on - going and future cosmic shear surveys will only be realised with the development of more accurate image analysis methods . we present a description of several possible shear measurement methods using the linear `` shapelets '' decomposition . shapelets provides a complete reconstruction of any galaxy image , including higher - order shape moments that can be used to generalise the ksb method to arbitrary order . many independent shear estimators can then be formed for each object , using linear combinations of shapelet coefficients . these estimators can be treated separately , to improve their overall calibration ; or combined in more sophisticated ways , to eliminate various instabilities and a calibration bias . we apply several methods to simulated astronomical images containing a known input shear , and demonstrate the dramatic improvement in shear recovery using shapelets . a complete idl software package to perform image analysis and manipulation in shapelet space can be downloaded from www.astro.caltech.edu/ / shapelets/.
|
although the basic concept of heat assisted recording ( hamr ) reaches back almost 60 years only very recently the first fully functional drive was realized with more than 1000 write power on hours and a 1.4tb / in device was demonstrated . in order to keep up the continuous increase of areal storage density ( ad ) the following factors are essential ( i ) provide small magnetic grains and ( ii ) provide a recording scheme with a high effective write field and temperature gradient in order to allow for small bit transitions . to realize small magnetic grains high magnetic anisotropy has to be used to ensure that the stored binary information is thermally stable .the limited maximum magnetic field of write heads results in the so called magnetic recording trilemma .hamr can help to overcome this trilemma .one uses a laser spot to locally heat the selected recording bit near or above the curie temperature ( ) .hence , even the magnetization of very hard magnetic materials can be reversed with the available write fields .nevertheless , thermally written - in errors are a serious problem of hamr . in this paperwe will show under which circumstances a bit patterned recording medium , consisting of hard magnetic single phase grains , can have an areal storage density of 10tb / in and more , despite thermal fluctuations at high temperatures during writing , which significantly deteriorates the bit transition , as well as the distribution of the curie temperature of the recording grain are considered .the accurate calculation of the magnetic behavior of a recording grain is a challenging task . during hamr temperatures near and above of the involved materials can occur , and thus a meaningful physical model should be able to reproduce the phase transition from a ferromagnetic to a paramagnetic state at .we use the landau - lifshitz - bloch ( llb ) equation for this purpose , which has already been validated in different publications . in this workwe use the formulation proposed by evans et al . . to accelerate the simulations in order to provide an insight into the detailed switching behavior of realistic recording grains we use a coarse grained approach . for detailed information about the used coarse grained llb modelplease refer to . in summary , in the coarse grained llb model each materialis described with just one magnetization vector . with this approach recording grains with realistic lateral dimensions of several nanometerscan be efficiently simulated with low computational effort . nevertheless , the resulting dynamic trajectories reproduce the according computationally very expensive atomistic simulations ..magnetic properties of the hm recording grain . [cols="^,^,^,^",options="header " , ] table [ tab : clsr_ber ] shows the final maximum ad with the corresponding parameters . shingled recording reaches the remarkable density of 14.34tb / in , which is more than twice the ad than for conventional recording .additionally we simulated the same footprint as in fig .[ fig : clsr_phase ] for a head velocity of 10 m/s fixing to 10 nm .the optimized ad for these calculations can also be found in tab .[ tab : clsr_ber ] for comparison .so far no interactions between the recording grains were considered , as the switching probabilities of fig .[ fig : clsr_phase ] are based on trajectories of just one bit . 
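as an aside on units before turning to the grain interactions : the areal densities quoted in this section follow from the geometry of the bit cell alone . the helper below converts a rectangular cell , given in nanometres , into tb / in ; the 7 nm spacing used in the example is a hypothetical value , not one of the optimized spacings of tab . [ tab : clsr_ber ] .

NM_PER_INCH = 2.54e7

def areal_density_tb_per_in2(dx_nm, dy_nm):
    # bits per square inch for a rectangular bit cell of dx x dy nanometres,
    # expressed in terabits per square inch
    return NM_PER_INCH ** 2 / (dx_nm * dy_nm) / 1e12

print(areal_density_tb_per_in2(7.0, 7.0))   # about 13.2 tb/in^2 for a 7 nm pitch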
due to the small distances between the bits , the demagnetization field of neighboring grainscan influence the effective write field of the recording head .our simulations show that a change in the applied magnetic field shifts the phase transition in fig .[ fig : clsr_phase ] along the peak temperature axis .additionally a slight broadening of the transition area for larger fields can be mentioned .the same effect is caused by a change of the curie temperature of the material .hence , we are able to include the interaction of the recording bits due to their demagnetization field in an additional distribution of . to quantify this effectwe inspect a complete finite element model of bit b and its 24 nearest neighbors with nm and nm . the total field at b is calculated for 50000 configurations consisting of randomly chosen magnetization down and up directions of the neighbors .a histogram of the resulting field values allows to extract the underlying distribution and its standard deviation . with the computed shifts of the transitions in the switching probability diagrams for different applied fieldsthe field distribution can be transferred to an additional distribution of the curie temperature . in totalan increase of 5% in is obtained . as a consequence the ad of the bit patterned medium slightly decreases to 13.23tb / in for shingled and 6.62tb / in for conventional recording as shown in tab .[ tab : clsr_ber ] . even in the more conservative scenario of a 10% increase of the distribution to % the final ad does not significantly change .in summary , we used an efficient coarse grained llb model to calculate switching probabilities and bit error rates of hard magnetic ( fept like ) recording bits with a diameter of 5 nm and a height of 10 nm during hamr . 
in our calculations we considered a distribution of the curie temperature of the recording grains , as well as a distribution of the grain size , its position and the position of the heat spot on the medium . we also included interactions between the bits in our calculations as additional contribution to the intrinsic distribution . hence , we obtained a realistic model of hamr , where we optimized the areal density in terms of the spacings between the bits and the write temperature of the heat spot . based on this model we presented a bit patterned recording medium , which reaches an areal storage density of 13.23tb / in for shingled and 6.62tb / in for conventional recording , if a heat spot with a full width at half maximum of 20 nm and a velocity of 7.5 m/s and an external magnetic field of 0.8 t is assumed . the authors would like to thank the vienna science and technology fund ( wwtf ) under grant ma14 - 044 , the austrian science fund ( fwf ) : f4102 sfb vicom and the advanced storage technology consortium ( astc ) for financial support . the support from the cd - laboratory amsen ( financed by the austrian federal ministry of economy , family and youth , the national foundation for research , technology and development ) was acknowledged . the computational results presented have been achieved using the vienna scientific cluster ( vsc ) . references ( by doi ) : 10.1063/1.1723319 ; 10.1109/tmag.2012.2218228 ; 10.1109/tmag.2014.2355173 ; 10.1109/tmag.2015.2439690 ; 10.1109/tmag.1967.1066003 ; http://www.google.com/patents/us3626114 ; http://www.google.com/patents/us4466004 ; 10.1109/tmag.2006.879572 ; 10.1063/1.3681297 ; 10.1103/physrevb.70.212409 ; 10.1103/physrevb.74.094436 ; 10.1063/1.2822807 ; 10.1103/physrevb.77.184428 ; 10.1103/physrevb.80.214403 ; 10.1103/physrevb.81.174428 ; 10.1103/physrevb.85.014433 ; 10.1063/1.4733311 ; 10.1109/tmag.2012.2187776 ; 10.1038/srep03980 ; 10.1103/physrevb.90.214431 .
|
the limits of the areal storage density as can be achieved with heat assisted magnetic recording ( hamr ) are still an open issue . we want to address this central question and present the design of a possible bit patterned medium with an areal storage density above 10tb / in . the model uses hard magnetic recording grains with 5 nm diameter and 10 nm height . it assumes a realistic distribution of the curie temperature of the underlying material as well as a realistic distribution of the grain size and the grain position . in order to compute the areal density we analyze the detailed switching behavior of a recording bit under different external conditions , which allows to compute the bit error rate of a recording process ( shingled and conventional ) for different grain spacings and write head positions . hence , we are able to optimize the areal density of the presented medium .
|
physicists and mathematicians have different language and different logics .namely , a mathematician first formulates the conditions , then the theorem ( the formula ) , and only after this proves it .a physicist , conversely , first derives a formula , which he makes rather convincingly , but he often conceals the conditions assuming them to be obvious . as a rule , mathematicians prove theorems , which have already been `` known '' to physicists in the sense that the physicists have been using them explicitly or implicitly for a long time .there is a series of rather well - known theorems bearing the names of their authors , which were formulated in a different language and , in fact , had been already proved by the physicists .the results of mathematicians who are translators to the mathematical language were especially easily and rapidly understood by the mathematical community . sometimes , a mathematician , who also obtained new results in physics and , in addition , tried to explain them in the physical language , was not taken as a mathematician by the mathematical community , and he had to prove this separately . the author also happened to obtain several results concerning the area of interests of those who doubted , see , , and the work continued in ( how levsha had to `` show a flea '' ) , in order to make the mathematical community to acknowledge him .similarly , it is difficult to make the physicists to believe that the mathematicians , who follow rigorous logics , can obtain results that are paradoxical from the point of view of customary understanding .physicists are not convinced even by the fact that the mathematical results correspond to well - known experiments and predict new experiments , which are in a rather good agreement with the theoretical calculations .one of my friends , a famous physicist , used to tell me : `` you do not convince me by your lemmas '' . in the second part of the famous book of feynman and hibbs about continual integrals , the unrigorous logics of feynman continuous integrals ( which was useful for physicists ) led to the rigorous results obtained by norman wiener twenty years ago ( no reference was made ) .the author had to `` prove '' the semiclassical asymptotics of the feynman continuous integrals ( he proved it indirectly somewhat earlier ) according to the feynman logics ( v. guillemin and s. sternberg repeated the proof in , but fairly quoted the word `` proof '' ) .the `` rules of the game '' developed by physicists are very useful , because they allow them to `` jump over '' the mathematical difficulties and hence to proceed significantly faster .their physical intuition developed in experiments in different areas of physics helps them to feel the result empirically , especially if the numerical data are known .the latter allows them to avoid complicated asymptotics in several small parameters and a nonstandard analysis . in the first aspect ( the intuition ) , i was always surprised by b. l. ginzburg ; in the second aspect ( the knowledge of numerical data ) , by ya .b. 
zeldovich , who , even speaking over phone , could mentally calculate whether or not a given theorem corresponds to physics .if a mathematician does not have any physical intuition ( for example , just as the author ) , then he can be severely disappointed considering some well - known physical law as an axiom and obtaining an answer contradicting the experimental data .therefore , prior to presenting mathematically rigorous results , which correspond to the unsolved problems of chemistry ( and even of alchemy , because the term `` fluids '' was introduced by alchemists on the critical isotherm behaves as the 100% methanol in several extraction processes . in technology ,the supercritical water vapor is used in turbines in all thermal and nuclear power stations . ] ) , we show that some well - known , even `` eternal '' physical and mathematical `` verities '' turn out to be significantly refined in the case of rigorous mathematical verification . to distinguish the classical theory in its modern understanding from the quantum theory , it is necessary somewhat to change the physicist s ideology that the classical theory is the theory that existed in the 19th century before the appearance of the quantum theory .but , in fact , the classical theory is the theory obtained from the quantum theory in the limit as .so feynman was right stating that spin is a phenomenon of classical mechanics .indeed , this is the case in rigorous passing to the limit from quantum to classical mechanics .similarly , the ray polarization does not disappear as the frequency increases and hence is a property of geometric rather than wave optics , as was used to think because the light polarization was discovered as a result of discovering the wave optics .we consider the `` lifshits well '' , i.e. , the one - dimensional schrdinger equation with potential symmetric with respect to the origin and having two wells .its eigenfunctions are symmetric or antisymmetric with respect to the origin . as ,this symmetry remains unchanged , and since the squared modulus of the eigenfunction corresponds to the probability that a particle stays in the wells , it follows that , in the limit as , i.e. , `` in the classics '' , for energies less than the barrier height , the particle occurs between the wells , in the two wells at once , although the classical particle can not penetrate through the barrier .nevertheless , this simple example shows the variation in the ideology of the `` classical theory '' . to understand this paradox , it is necessary to take into account that the symmetry must be very precise and the state stationarity means that this state appears in the limit of `` infinitely large '' time .as we show below , the same concerns the bose - distribution in the classical theory of gases .now we show how the existence of an additional parameter changes the representation in the classical limit .we dwell upon the notion , which is called `` collective oscillations '' in classical physics and quaseparticles in quantum physics . 
in classical physics , this is the vlasov equation of self - consistent ( or mean ) field , and in quantum physics , this is the hartree ( or hartree fock ) equation .so we want to note the following point that seems to be paradoxical .the solutions of the equation in variations for the vlasov equation _ do not coincide _ with the classical limit for the equations in variations for the mean - field equation in the quantum theory .this is because of the fact that the variations are related to another small parameter , namely , the variation parameter . andthis is already the field of nonstandard analysis .n. n. bogolyubov , for example , in , studied the problem without an external field , and the asymptotics thus obtained coincides with the semiclassical asymptotics in an external field , . moreover ,this is in fact the classical limit , because the parameter in this bogolyubov s paper can be compensated by a large parameter , namely , the wave number .this becomes especially obvious if the interaction is assumed to be zero .then an ideal gas is obtained , which in this case can be considered as a classical gas .just therefore , the author proved that the classical fluids in nanotubes have superfluidity ( see , ) , which was confirmed by a series of experiments ( see the references in ) .confusion is due to the fact that the constant has dimension of action , and , rather often , in the classical limit , in order to keep the dimensions , it participates even in the maxwell distribution .for example , the thomas fermi equations ( including the temperature equations ) are classical equations from the above point of view , but the fact that they contain the constant leads to confusion .therefore , the van der waals law of corresponding states is very important , it allows us to consider the reduced temperatures and pressures as dimensionless quantities .now we consider the -particle gibbs distribution where is the hamiltonian .it is assumed that the level surfaces are simply connected .the theorem about this distribution was in fact proved in .it must be considered in the sense of kolmogorov complexity .this distribution is the distribution over the number of different experiments whose number is independent of , over systems of particles at the same temperature ( the average distribution over the number of experiments ) , and it is the distribution over the energy surfaces , , : where and are less than some average energy , , and the phase volume is , .we divide the phase space into finitely many domains where , , , , , and , correspondingly , the phase space has coordinates .we perform ordered sampling with return from the partition of domains in the space into the `` box '' , under the condition that from the physical viewpoint , the ordered sampling means that distinguishable -dimensional particles are considered .let be the number of -dimensional `` particles '' in the energy interval divided by .assume that the above conditions on the function are satisfied .we determined from the condition is considered as average over the the number of particles , then . here is the average energy over the number of experiments . ] where is the boltzmann constant .we determine in as .then we have the following theorem . 
[ theor2 ]the following relation holds : where is arbitrarily integer .here the probability is the lebesgue measure of the phase volume in parentheses in with respect to the entire phase volume bounded by .thus , the gibbs distribution is not a distribution over momenta and coordinates , but is a distribution over the energy levels .if the domain is multiply connected , then the problem can be solved only by semiclassical transitions .if , then we formally obtain the maxwell boltzmann distribution .but the latter is considered , as a rule , as the average distribution for particles .we discuss this treatment in remark 5 .boltzmann obtained his distribution integrating this distribution over . butthis can be done under the following conditions only .for a great many particles , their dependence on a common potential field must be assumed to be slowly varying as a rule . indeed ,if the particles are in a volume ( or in an area ) , then the thermodynamical asymptotics requires that the volume ( the area ) tend to infinity . but this means that the potential field varies very slowly , and it must be assumed that the function of the coordinates is of the form {v}}).\ ] ] otherwise , we have no rights to pass from the gibbs distribution from the maxwell boltzmann distribution ( integrating it over the momenta ) to the boltzmann distribution of the form since the maxwell boltzmann distribution is not a distribution of the density of the number of particles with respect to the momenta and coordinates. this distribution can give only the number of particles between energy levels of the form but if is of the form {v}}),\ ] ] then this integration can be done in the thermodynamical asymptotics as .one should not forget here that the boltzmann distribution is also far from being a distribution with respect to the coordinates , and that it is a distribution with respect to level surfaces of the function only .there is a mathematical error in the very definition of thermodynamical limit as the limit as and such that , where ( the density ) is finite .let the pressure be .this pressure does not lead to a too rarified gas ( and is admitted by the knudsen criterion ) . obviously , the above definition is incorrect because the number of particles is bounded by avogadro s number . as , we have as well . but and is not large under the above pressure .it would be correct to pose the problem , as was said above , neglecting the inverse quantity . in this case, a mathematically rigorous asymptotics in statistical physics would be obtained . the transfer of the bose distribution for photons to gas in the form of the bose einstein distribution is incorrect . since , where stands for the number of particles at the energy level , can not vary up to infinity ( ), it follows that the chemical potential can take positive values , which must not be neglected .this restriction means that parastatistics must be considered .the most essential is the note that the distribution given in remark 4 with the property that and are finite permits preserving the parastatistics of the bose einstein - type distribution as , and just this property will be used in what follows . the fact that it holds in the classical limit is especially unusual for the physicists ( cf .remark 1 . ) .therefore , in addition to references to exact theorems , i also present several other arguments .for example , the physicists completely accepted the financial considerations presented in . 
`` a general important property of money bills is that the change of one money bill by another of the same denomination does not play any role .this property , which can be called `` money do not smell '' , permits uniquely determining the formula for nonlinear addition .we assume that we want to deposit two copecks into two equivalent banks .how many possibilities do we have ?we can deposit both copecks into the first bank , or deposit both copecks into the second bank , or deposit per one copeck in each of the banks .thus , we have three possibilities .but if we want to deposit two diamonds , then we have four possibilities , because we can interchange the diamonds . at the same time, it does not make any sense to interchange the copecks .the identity property of money bills of the same denomination permits changing the number of versions .this statistics , just as the statistics of identical bose - particles is called the quantum statistics ( bose - statistics ) , and the statistics of diamonds ( provided that they are not absolutely identical ) is called the classical statistics .but as we see , the bose - statistics can be applied to money , more precisely , to money bills.'' , p. 5 . andthis is despite the fact that the copecks may have different years of their issue and the money bills have individual numbers .objectively , they are different , but for our problems , this difference is not essential , because only the quantity of money bills is important for us .further , the physicists did not protest , when schoenberg used the creation and annihilation operators in classical mechanics . andthis is possible only when considering the fock space of identical particles .we consider one more aspect of this problem in more detail , because the molecular dynamics of classical particles is nowadays commonly used .in the well - known textbook by landau and lifshits , the authors explain the identity principle for particles as follows : `` in classical mechanics , identical particles ( such as electrons ) do not lose their ` identity ' despite the identity of their physical properties .namely , we can visualize the particles constituting a given physical system at some instant of time as ` numbered ' and then observe the motion of each of them along its trajectory ; hence , at any instant of time , the particles can be identified . ... in quantum mechanics , it is not possible , in principle , to observe each of the identical particles and thus distinguish them .we can say that , in quantum mechanics , identical particles completely lose their ` identity ' '' ( p. 252 ) .there are similar explanations of the identity principle for particles in other textbooks as well .but , as a matter of fact , if the initial data for the cauchy problem does not possess a symmetry property , then the situation in quantum mechanics does not differ from that in classical mechanics .indeed , suppose that the hamiltonian function is symmetric with respect to permutation of and , .let denote the self - adjoint operator in the space corresponding to the hamiltonian ( weyl of jordan quantized ) . 
consider the corresponding schrdinger equation satisfying the initial conditions where , and we can assume that , where , and is a bell - shaped function vanishing outside a neighborhood of the point , and the neighborhoods are small so that these functions do not intersect .returning in time to the initial point , we can number all the bell - shaped functions .but in the projection on the real space containing all particles , the experimenter can not distinguish them without taking into account the full deterministic process with respect to time from zero to the given .similarly , in classical mechanics , of two point particles have intersected and the experimenter has no knowledge of their velocities ( instant photo ) , then he also can not distinguish them .he must know their original velocities , i.e. , use slow - motion filming .this means that he has to look into their `` past '' .but if the experimenter must determine which of the original particles with a prescribed velocity arrived at the given point , then he must observe the whole process , up to the point .finally , let us consider the particle identity philosophically . in statistical calculations of the number of inhabitants in a town , the permutation between a child and an old man does not change the total number of inhabitants .hence , from the point of view of the statistics of the given calculation , they are indistinguishable . from the point of view of the experimenters who observes the molecules of a homogeneous gas using an atomic microscope , they are indistinguishable .he counts the number of molecules ( monomers ) and , for example , of dimers in a given volume .dimers constitute 7% in the total volume of gas ( according to m. h. kalos ) .this means that the experimenter does not distinguish individual monomers , as well as dimers , from one another and counts their separate numbers .his answer does not depend on the method of numbering the molecules .these obvious considerations are given for the benefit of those physicists who relate the fact that quantum particles are indistinguishable with the impossibility of knowing the world .i do not intend to argue with this philosophical fact , but wish to dwell only on mathematics and statistics and distributions related to the number of objects .let us turn to the boltzmann statistics , as is described in `` mathematical encyclopedia '' . in his article on the boltzmann statistics , d. zubarev , the well - known specialist in mathematical physics and the closest disciple of n. n. bogolyubov , writes that , in boltzmann statistics , `` particles ... are distinguishable '' .however , a few lines below , zubarev states that `` in the calculation of statistical weight , one takes into account the fact that the permutation of identical particles does not change the state , and hence the phase volume must be decreases by times '' . of course , it is impossible to simultaneously take into account both remarks .but they are both needed to solve the gibbs paradox . 
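as a small numerical illustration of the counting arguments above ( the copecks versus diamonds example and the division of the statistical weight by the number of permutations of identical particles ) , the following python sketch compares the number of ways to place n indistinguishable objects into k cells with the number of arrangements of n distinguishable objects ; it is an illustrative aid only and is not part of the original argument .

    from math import comb, factorial

    def bose_count(n, k):
        # number of ways to distribute n indistinguishable objects over k cells
        return comb(n + k - 1, k - 1)

    def classical_count(n, k):
        # number of ordered arrangements of n distinguishable objects over k cells
        return k ** n

    # two copecks into two banks: 3 possibilities; two diamonds: 4 possibilities
    print(bose_count(2, 2), classical_count(2, 2))

    # the "corrected boltzmann" counting divides the classical count by n!,
    # which for k much larger than n approaches the bose count (dilute limit)
    n, k = 3, 1000
    print(classical_count(n, k) / factorial(n), bose_count(n, k))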
as i pointed out on numerous occasions , from the mathematical point of view , the gibbs paradox is a counterexample to the maxwell boltzmann distribution regarded as a statistical distribution for a gas in which the molecules can not be turned back , but not a counterexample to a dynamical distribution .as for the famous discussion between boltzmann and some mathematicians , of course , if the particles of the gas are distinguishable and can be numbered , then they can also be mentally turned back and returned to a state close to their initial state , as the poincare theorem states . as an example we consider the last source of errors , because the computer has a finite memoryhow is the number of particles related to the computer error and the computer s ability to recover their initial data ? by way of example, we consider a set of unnumbered billiard balls of unit mass and the same color .first , consider one billiard ball and launch it from some ( arbitrary ) point with velocity not exceeding a certain sufficiently large value , i.e. , with energy not exceeding .however , since the computer has certain accuracy , it follows that the energy of the ball will take a finite integer number , , of values in the interval of energies ] ) .let be the value of on the barrier maximum .but the `` eigenfunctions '' corresponding to the regge poles ( the poled of the resolvent continued through the cut into the complex domain ) increase exponentially as . butthe regge `` eigenfunctions '' for which the real parts of the poles are close to the quasilevels of the well are sufficiently large in the domain of the well .we prove that the solution of the cauchy problem for the wave equation whose initial conditions are consistent with the radiation conditions ( i.e. , the derivative with respect to depends on the initial function for ) tends to zero at each point of the support of the finite potential . in the problem under study , there is addition energy equal to the average energy ( where is the boltzmann constant and is the temperature ) multiplied by the number of particles ( another large parameter ) .thus , we must compare the quantity with .in fact , there is one more dissipation in gas , namely , the viscosity , which also can not be neglected in mathematical computations and which ensures that the initial problem is not self - adjoint ( the parameter corresponding to friction ) .the problem of determining the limit situation becomes significantly simpler in the language of nonstandard analysis .the rigorous theorems even in the case of this simplification are very c cumbersome .we present the main ideas . in the semiclassical limit, one can assume that the well up to the barrier maximum and the domain from this point to the boundary condition are in no way related to each other .only at the point equal to the barrier maximum in infinite time , the classical particle can `` creep '' over the barrier ( see fig .[ fig1 ] ) .the mathematicians consider the one - dimensional wave equation in the half - space , for .the potential has a smooth barrier with maximum at the point .the stationary problem has the form the radiation conditions outside the interval ] .now we consider the contrary problem , i.e. 
, the `` irradiation '' problem .in other words , we pose conditions that are complex conjugate to the sommerfeld conditions .this problem does not always has solutions .the well is being filled until if then the solutions do not exist , because the part \ ] ] reflects from the point , and the `` irradiation '' condition is not satisfied .this phenomenon is similar to the following one .assume that a negatively charged body is `` irradiated '' by positive ions . then after a certain number of ions sticks to the body ( fills the well ) and neutralizes it , the other freely flying ions reflect from the body and the `` irradiation '' process stops .the conditions dual to the sommerfeld condition are not satisfied any more , because the radiation process originates simultaneously . in our case ,the role of attracting charges is played by regge poles .the sommerfeld conditions are a specific case of the blackbody condition .this problem was posed even by frank and von mises in their survey as one of fundamental problems .the feynman integral does not contain any measure , because it is impossible to pose the boundary conditions for the feynman tube : the feynman tube boundary is a black body . by definition ,if the trajectory enters this boundary , it should not be considered any more , in the wiener integral of tunes , this is also a black body , but compared with the wave problem , the black body in the diffusion problem is understood as a situation in which the particles stick to the wall , and this means the zero boundary conditions for the heat conduction equation .after the transition to -representations , as was shown in our work with a. m. chebotarev , the zero boundary conditions give the feynman tube in the -representation , and it is possible to determine the measure for the finite potential . the temporary energy capture in the laser illumination and its subsequent transformation into a directed beam also characterize the absorption - type trap of `` irradiation '' that further turns into radiation .probably , the problem of black holes in astronomy is a problem of the same type . in the one - dimensional case , on a finite interval, the modern mathematical apparats can solve this problem completely . in this case , if , where is the number of particles , then with a large probability ) one can determine the number of particles inside the well and the number of particles outside it . by the way , this also readily follows from the concept of microcanonical distribution .indeed , the multiplicity ( or the `` cell '' as it is called in ) of the eigenvalue is different inside and outside the well : it depends on the density of eigenvalues as on a small interval . obviously , this quantity is proportional to the width of each well on the interval ] and ] , is the number of particles in the well ] , and is the length of the interval ] . the graph of the function , where , has two points on the graph in fig . [ fig2 ] for {5} ] .these points correspond to the minimum and maximum of the trap in fig .[ fig1 ] .note that the passage of a classical particle through the point , when the energy of the particle corresponds to the point ( see fig . [ fig2 ] ) , is not a passage through an ordinary barrier . in passing from quantum mechanics to classical mechanics, we see that this point is no longer the usual turning point , and the classical particle , in the limit , penetrates into the well in infinitely large time . 
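the trap of the scattering problem referred to above ( a well separated from the free region by a barrier maximum , cf . fig . [ fig1 ] ) can be visualized numerically . the following python sketch is only an illustration under assumptions of our own : a lennard - jones pair potential and a centrifugal term fixed by the energy and the impact parameter ; none of the numerical values are taken from the paper .

    import numpy as np

    def u_lj(r, eps=1.0, sigma=1.0):
        # lennard-jones pair potential
        return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

    def u_eff(r, e, b, eps=1.0, sigma=1.0):
        # effective radial potential of the classical scattering problem:
        # the pair potential plus the centrifugal term e * b**2 / r**2
        return u_lj(r, eps, sigma) + e * b ** 2 / r ** 2

    r = np.linspace(0.9, 8.0, 40000)
    e, b = 0.2, 1.8                       # illustrative energy and impact parameter
    u = u_eff(r, e, b)

    # interior extrema of the effective potential: the minimum is the well of the
    # trap and the maximum is the barrier over which a classical particle can
    # "creep" only in the limit of infinitely large time
    du = np.diff(u)
    turning = np.where(np.sign(du[1:]) != np.sign(du[:-1]))[0] + 1
    for i in turning:
        kind = "well minimum" if du[i - 1] < 0 < du[i] else "barrier maximum"
        print(kind, "at r =", round(r[i], 3), " u_eff =", round(u[i], 4))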
in , we already discussed koroviev s trick in a variety show when he scattered money bills among spectators and we showed that if the number of spectators is twice as large as some critical number , then , with a large probability , one - half of the spectators does not get a single bill . however , if they combine into pairs with a view to share the spoils , then most of them will get some bills with a large probability . from the point of view of kolmogorov complexity , both scenarios are equiprobable : none of them is preferable . as proved , given such a canonical distribution , there is also a critical number , as in the case of the bose condensate . by what has been said above , the axioms concerning the bose distribution that particles pass to the zero level ( stop ) are replaced by the following axiom : particles form dimers , trimers , and other clusters that do not affect pressure and form brownian particles in the gas . the point for which the depth of the well ( trap ) is maximum is . in our theory , this point corresponds to the critical temperature , where is the boltzmann constant . let us present the table of for different gases for which the formula for the gas is in correspondence with the depth of the well of the lennard jones potential ( not to be confused with the depth of a trap in the scattering problem ) . the deviations of the data for propane , ammonia , and carbon dioxide from the theoretical values are the largest in the table . apparently , ammonia and carbon dioxide do not reach the well bottom because of the polarity effect . it follows from abstract thermodynamics that if the dimension of dimers and monomers is the same , the total energy inside and outside the well is also the same . therefore , when we subtract the energy of the dimers in the well from the total energy of the monomers , it follows , as is seen from our general concept , that , for monomers , the energy remains equal to . the physical thermodynamic relations imply that such a levelling - off occurs in infinite time . it is especially easy to see in the case of a plasma , where it is easy to reproduce a pattern with a well and a barrier . [ figure caption : , , is the temperature in kelvin degrees , is the volume in , is the gas constant , and is the compressibility factor ; the isochores are shown by dotted lines . ] since the point corresponds to the fractal dimension , and the point to the fractal dimension , using abstract thermodynamics , we can compare the corresponding values of for with experimental curves in fig . [ fig3 ] and in fig . [ fig5 ] . as fig . [ fig3 ] shows , the experimental curves for different gases differ somewhat . in fig . [ fig6 ] , we marked the values of the averaged isotherm by black points . in fig . [ fig5 ] , we demonstrate the van der waals law of corresponding states and use the following notation : , . then the dimensionless quantity , i.e. , the compressibility factor , becomes a dimensional quantity in the units . to avoid this , we assume that the volume is also dimensionless , taking the ratio divided by a unit volume equal to in fig . this volume is shown in the figure by the dotted line marked by . then all the quantities are dimensionless , including the chemical potential . we assume that and . in what follows , the subscript is omitted .
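returning for a moment to koroviev s trick mentioned at the beginning of this passage , the statement about the critical number of spectators is easy to check by a direct monte carlo experiment . the following python sketch is an illustration only ; the choice of the number of bills ( roughly n ln 2 for n spectators ) is our own assumption and is not taken from the text .

    import random

    def koroviev_trial(n_spect, n_bills, rng):
        # scatter the bills uniformly at random among the spectators
        counts = [0] * n_spect
        for _ in range(n_bills):
            counts[rng.randrange(n_spect)] += 1
        empty = sum(1 for c in counts if c == 0) / n_spect
        # spectators combine into pairs and share the spoils
        pairs = [counts[2 * i] + counts[2 * i + 1] for i in range(n_spect // 2)]
        empty_pairs = sum(1 for c in pairs if c == 0) / len(pairs)
        return empty, empty_pairs

    rng = random.Random(1)
    n_spect = 10000
    n_bills = int(n_spect * 0.693)      # roughly n * ln 2 bills
    e, ep = koroviev_trial(n_spect, n_bills, rng)
    print("fraction of spectators with no bill :", round(e, 3))   # close to 1/2
    print("fraction of pairs with no bill      :", round(ep, 3))  # close to 1/4

with this choice about one half of the single spectators , but only about one quarter of the pairs , remain empty - handed , in agreement with the qualitative statement above .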
to find the isotherm , we apply the well - known ( in thermodynamics ) general formula for , , is the volume , is the pressure , and it the dimensionless chemical potential .as is known , in the - order parastatistics of fractal dimension , the distribution over energy has the form /\theta}-1}-\frac{k}{e^{[(p^2/2m)-\mu]k/\theta}-1}\bigg\}\ , dp } { \int_0^\infty p^{\gamma}\bigg\{\frac{1}{e^{[(p^2/2m)-\mu]/\theta}-1}-\frac{k}{e^{[(p^2/2m)-\mu]k/\theta}-1}\bigg\}\ , dp}.\ ] ] for the bose - statistics , , for the fermi - statistics , .the parastatistics is characterized by a finite number . in this case , the number of particles is assumed to be large .we associated this number with the number of particles and obtain a new distribution : for and for . in our case , , where is the number of particles .although , the chemical potential still can be positive and depend on .we consider the parastatic energy distribution for in the case where and the dimension is equal to for near the point we expand the expression in braces in a series up to .we have for , this expression tends to infinity as . at zero, the integrand exponentially tends to zero if is integrable as .this implies that as , the expression in braces tends to and the integral in tends to .this implies that and since for , up to ( i.e. , up to 4% ) , we obtain the diagonal line on the graph for the isotherm .this corresponds to the strict incompressibility of fluid as the pressure increases , which , as we see , coincides with experimental graph in fig .[ fig5 ] . for ( ) , the graph of the criticalisotherm decreases and becomes the diagonal only for .we note that the fluid critical volume at the phase transition instant , a jump in the fractal dimension , is equal to .this quantity is clearly determined .it also weakly depends on the repulsion degree of the lennard jones potential .therefore just this quantity is the main characteristics of the critical volume .as was already said , it is rather difficult to observe this quantity experimentally .it is much easier to observe the lower end of the vertical segment .if the trajectory of rotation of a pair of molecules about the central point is nearly circular , then the energy redistribution from translational to rotational is . for noble gases ,the trajectory ellipticity is rather small , and hence the energy redistribution is .but for carbon dioxide , it is more `` elliptic '' , and hence the fall may attend the value , but the original critical volume remains .the energy redistribution law results in the phase transition , see the vertical segment in fig .further , the condensate is also formed as the fluid becomes incompressible in the end , the fractal dimension is then .although distribution is similar to the bose distribution , the final condensate is an incompressible fluid consisting of clusters of dimension less than three , i.e. , it does not contain domains or micelles . notethat , for gas dimers , the fractal dimension is equal to . as is readily seen from arguments in , if for the three - dimensional case , the poisson adiabatic curve is , then , for the dimension , the adiabatic curve will be , and , for `` liquid '' fluids of dimension the adiabatic curve is of the form .the point is that the experiment even with argon is highly unstable bear the critical isotherm . roughly speaking ,it is considered as follows .a cylinder with a freely sliding piston on its upper cover is first fixed so that the volume is equal to the value of for argon . 
herepart of the argon is in liquid state and part of it in vapor state ( see ) .further , the cylinder is heated up from to until the surface tension film disappears and a fluid is formed .so we obtain the lower point of the vertical segment in fig .further , slowly releasing the piston and supplying heat so that the temperature remains equal to all the time , we must come to the point .if , as the result of the experiment , the temperature drops below , then we have jumped to dimension , as is seen from the graph in fig .[ fig6 ] . therefore , here it is hard to abide by equilibrium thermodynamics , while theoretical calculations yield the dimension at first , and then the volume is increased until the temperature drops to , then the dimension in the experiment will be preserved . until , and then becomes in the phase transition of the second kind ( see below ) . ] . the incompressible part of the critical fluid is obtained for any dimension . andthis variation in the dimension is a very important new phenomenon .the dimension reconstruction is a separate question .apparently , this phenomenon explains the well - known `` jamming '' effect for glass ( cf .thus , although the volume does not vary with pressure , but , in this case , the dimers clusters are `` chewed '' with a gradual reconstruction of clusters in the direction of a more strict ( ideal ) architecture . for the case of zero dimension and the architecture `` self - organization '' ,see .the `` chewing '' effect , i.e. , the dimension variation with varying on the isotherm , occurs according to the law {\mu = p}}{\frac{\partial\omega}{\partial\gamma } } , \qquad \omega=-\int_0^{\infty}\varepsilon^{\gamma}\ln\big(1-e^{(\mu-\varepsilon)/t}\big)\,d\varepsilon , \qquad \gamma|_{p=1.5p_{\text{cr}}}=0.2.\ ] ] the fractal dimension decreases ( is `` chewed '' ) by the law ( see fig . [ fig7 ] ) as we known , corresponds to the fractal dimension .thus , the fractal dimension has the form , . for , the parastatistical term , where , must be taken into account in .we described the critical isotherm .the process of passing to other isotherms is described in , by successive steps of the -mapping . in this paper, we do not consider this procedure , which requires a lot of computer computations .we consider the isochore for the scattering problem . for a fixed ,it means attraction , i.e. , , where and are coordinates of two particles participating in scattering and denoted by the letter .thus , this is the problem of `` irradiation '' by a flow of monomers . as was already mentioned ,if , where is the well width , then the monomers get stuck in the well , and if , then a part of monomers return ( i.e. , are reflected ) .in our case , the quantity at this `` transition point '' is equal to , and , and hence . in fig .[ fig5 ] , one can see that the isochore lying above this point begins to stretch .this means that the isochore endures a phase transition of the second kind .this occurs due to variations in the dimension and , respectively , in the entropy .this problem , just as the relation between the press and the pressure , must be studied separately , because it is related to some generalization of the basic thermodynamical notions . in the scattering problem ,the volume is assumed to be equal to infinity . according to nonstandard analysis, infinities can be graded . in our case, we can consider the given isochore ( fig . 
) on different scales , simultaneously extending or shortening both coordinate axes .it is only necessary to calculate the angle which they make with the isotherm .let us carry out these calculations .the fractal dimension does not vary along the isochore up to a point at which the trap width in the scattering problem for the interaction potential coincides with the distance from the barrier maximum point to the impact parameter ( fig . [ fig1 ] ) . in our case, this coincidence occurs at the point .we note that , for other lennard jones potentials with a different degree of repulsion , the point of phase transition of the critical volume varies only in thousandth fractions .the value at the critical point , is always equal to .thus , up to the value , we can reconstruct the family of isochores , and hence of isotherms , because , at each point of the isochore , and , and hence , are determined .since in the three - dimensional case ( the fractal dimension is ) for is attained at the point , and above the point , the dimensions begin to increase up to the three - dimensional , i.e. , if there are no dimers at the boyle and higher temperatures , then , according to the above argument , the addition to the original number of particles monomers is equal to with a certain constant , which we determine by the condition : for , the dimension must be equal to , i.e. , .hence for the lennard - jones type potential .we find from the condition now , in our case , varies from to and depends on , i.e , on , where . but where is determined by the isochore continued above the point .now we have determined the dependence for the new isochore whose dimension varies from to and the volume remains unchanged .the quantity , as a function of temperature , increases almost exponentially approaching the boyle temperature . to retain this fast increasing number of monomers in the volume ,it is necessary to increase the pressure at the same rate .therefore , the pressure increases much faster than , and this results in elongation of the isochore for .the value of increases by units and attains , while the pressure increases more than twice .the other isochores are similar to those constructed above , because their abscissas and ordinates increase and decreases by the same factors .the angle at which they intersect the critical isotherm was calculated .to each point in the family of isochores there corresponds a pair of points and .hence , for a given , the temperature is determined and the locus of points corresponding to the isotherm is constructed .in conclusion , i note that the rigorous proof of all the above statements is very cumbersome .it is based on the use of the correction to variations in the self - consistent field in the scattering problem and the complex germ method ( in topology , the term `` maslov gerbe '' is used , in mathematical physics , the term `` complex germ '' ) , as well as the ultrasecond quantization , where the operators of couple creation are used . the author wishes to express his deep gratitude to a. r. khokhlov , a. a. kulikovskii , a. i. osipov , i. v. melikhov , l. r. fokin , yu .a. ryzhov , a. e. fekhman , a. v. uvarov , p. n. nikolaev , m. v. karasev , and a. m. chebotarev for fruitful discussions and also thanks a. v. churkin , d. s. golokov , and d. s. minenkov for help in computer computations .v. p. maslov , on discreteness criterion for the spectrum of the sturm liouville equation with operator coefficient ( to the paper by b. m. levitan and g. a. 
suvorchenkova ) . // functional anal . appl . , v.2 , no.2 , 1968 , pp . 63 - 67 . v. p. maslov . on the superfluidity of classical liquid in nanotubes . // rjmp , i : v.14 , no.3 , 2007 , pp . 304 - 318 ; ii : v.14 , no.4 , 2007 , pp . 401 - 412 ; iii : v.15 , no.1 , 2008 , pp . 61 - 65 ; iv : v.15 , no.2 , 2008 , pp . 280 - 290 . v. p. maslov . on a general theorem of the set theory leading to the gibbs , bose einstein , and pareto distributions as well as to the zipf mandelbrot law for the stock market . // math . notes , v.78 , no.6 , 2005 , pp . 870 - 877 . y. kaneda , t. ishihara , m. yokokawa , and others . energy dissipation rate and energy spectrum in high resolution direct numerical simulations of turbulence in a periodic box . // physics of fluids , v.15 , no.2 , 2003 , pp .
|
to solve the ancient problem of fluids , i.e. , of states in which there is no difference between gas and liquid ( the so - called supercritical states ) , it is necessary to abandon several `` rules of the game '' that are customary for physicists and to refine them by using rigorous mathematical theorems . to the memory of v. l. ginzburg
|
transfer ribonucleic acid , or trna for short , is an important molecule which transmits genetic information from dna to protein in molecular biology .it has been known that all trnas share a common primary , secondary , tertiary structure .most trna sequences have a `` cca '' hat in terminus 5 and a polya tail in terminus 3 in its primary structure .its secondary structure is represented by a cloverleaf .they have four base - paired stems and a variable stem , defining three stem loops ( the d loop , anticodon loop , and t loop ) and the acceptor stem , to which oligonucleotides are added in the charging step .variable loop varies in length from 4 to 13 nt , some of the longer variable loops contain base - paired stems .the trnas also share a common three - dimensional shape , which resembles an inverted `` l '' . though much effort had been put on trna research in the past time , little is known about specific features of trna that are exclusive to a species , taxa or phylogenetic domain level . with the progress of genome projects ,a vast amount of nucleotide sequence data of trna is now available , which makes it possible to study the trna genes expression for a wide range of organisms .recently scientists are trying to find specific feature in genes families by a new tool complex networks . with the development of techniques on oligonucleotide or cdna arrays , using gene chips to erect a complicated network and studying its feature and evolution has become a hot subject , and has gained a success .basically , the networks can be classified into two types in terms of its degree distributions of nodes : exponential networks and scale - free networks .the former type has a prominent character that although not all nodes in that kind of network would be connected to the same degree , most would have a number of connections hovering around a small , average value , i.e. , where is the number of edges connected to a node and is called degree of the node .the distribution leads to a poisson or exponential distribution , such as random graph modelerd 1960 and small - world model , which is also called homogenous networks .the latter type network has a feature that some nodes act as `` very connected '' hubs which have very large numbers of connections , but most of the nodes have small numbers of connections .its degree distribution is a power - law distribution , .it is called inhomogeneous network , or scale - free network .the trna sequences have similarities in sequences and structure , which make it possible to construct networks and use specialized clustering techniques to make classification .the similarity of trna sequences suggests their relationships in evolutionary history .if we consider all the trna sequences at present evolve from common ancestor via mutation , the sequence similarity will reveal their evolutionary affiliation .there are lots of trna sequences .the similarities of every two of the sequences are different .lots of data will be dealt with . since complex network is a good model to describe and study complex relationships , the network model may be useful in this field . 
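the two classes of networks mentioned above can be told apart numerically by their degree distributions . the sketch below relies on the third - party networkx library and is an illustrative aid only ; it builds an erdos - renyi random graph and a barabasi - albert scale - free graph of comparable size and prints a few points of p(k) , with all sizes and parameters being arbitrary choices of ours .

    import networkx as nx
    from collections import Counter

    n = 3000
    er = nx.erdos_renyi_graph(n, 8.0 / n, seed=1)     # homogeneous (poisson-like) network
    ba = nx.barabasi_albert_graph(n, 4, seed=1)       # inhomogeneous (scale-free) network

    for name, g in (("erdos-renyi", er), ("barabasi-albert", ba)):
        degrees = [d for _, d in g.degree()]
        hist = Counter(degrees)
        print(name, "max degree =", max(degrees),
              "mean degree =", round(sum(degrees) / n, 2))
        # a few points of p(k); for the scale-free graph the tail decays slowly
        for k in (2, 4, 8, 16, 32):
            print("  p(k=%d) = %.4f" % (k, hist.get(k, 0) / n))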
in this paper, we constructed a similarity network of 3917 trna genes in order to show network model is a powerful tool to study the evolutionary relationships among the trna genes .the topology of the network is discussed , the degree distribution and clustering coefficient are considered , and the network constructed by the same number of random trna sequences is used to make comparisons .transfer rna sequence have been collected into database by sprinzl et al 1974 .all of our data , 3719 trna genes sequences , are retrieved from this database ( free available at http://www.uni-bayreuth.de/departaments/biochemie/sprinzl/trna/ ) , which including 61 anticodon subsets , 429 species , and 3 kingdoms : archaea , bacteria , and eucarya .each trna sequence has 99 bases when the variable stem is considered . for convenience of alignment ,the absent bases in some positions of the trnas are inserted with blank . firstly .we align the trna sequences with the same anticodons , and then align all 3719 trnas . since there have been too many conclusions proving that trna genes have a high similarity in sequencesstefan1999 , sergey1993,saks1998 , the results of the alignment of 3719 trna gene sequences will not be listed in detail .we only focus on some prominent characters of the statistics of the alignment .if each base including the inserted blank is considered equally , the length of a trna is . to align two trna sequences , a parameter is used to depict their similarity degree , which indicates how many bases in the same position of two trna gene sequences are identical . for example, if the first bases of two trna sequences both are a , one score is added to .obviously , .although it is the simplest kind of alignment , as we show later , it gives lots of information of the relationships among trna genes .when , it means two sequences are matched perfectly .since the perfectly matched sequences have the the same significance in biology , we take only one of them as a representative . to construct the trna similarity network , every sequence is considered as a node .if the alignment score of two trna sequences is larger than a given similarity degree , put an edge between the corresponding nodes .obviously , if is small , the nodes will connect closely , and when grows larger , the number of connections will decrease . for comparison , we make a similarity network of the same number of random trna genes . 
to generate the random trna genes , every base of the sequencesis randomly taken from the four bases ( c , g , a and t ) and the sequences must conform to the prototype of the real trna , which means the sequences we generate randomly must confirm the secondary structure of trna .pajek ( the slovene word for spider ) , a program for large - network analysis ( free available at http://vlado.fmf.uni-lj.si/pub/networks/pajek/ ) , was used to map the topology of the network .figure [ figure1 ] displays several typical topologies of the similarity network of different kinds of trna gene sequences .figure [ figure1 ] ( a ) , ( b ) and ( c ) are similarity networks constructed by trna genes with the same anticodons ( cgc , cca and tgc respectively ) and .the networks of trna genes with the same anticodon identity are highly clustered .some of them divide into two or more clusters , such as figure [ figure1](c ) .each of the clusters almost entirely connected when is small .when grows large , the connection number decreases , and the network becomes not so closely connected .figure[figure1](d ) is the similarity network of anticodon gtt when . as more nodes added in the network ,the network becomes more complex .figure [ figure1 ] ( e ) , ( f ) shows the network with a large ( the number of nodes ) .( e ) is the network containing anticodons cat and gcc with , and ( f ) is the network of all trna sequences with .small local clusters with the same anticodons get together to form a large cluster , `` very connected '' hubscan be observed in the center of the network ( figure [ figure1 ] ( f ) ) . at a large similarity degree ,the scale free property ( or power law distribution ) emerges , which means a few nodes have a large degree ( number of connections ) , but most nodes have a small degree . to make the figure figure1 ( e ) more visualized , we extracted the nodes whose connections number is bigger than to make the figure [ figure2 ] .it also has hubs in the center of the network . of course , the hubs are smaller . the scale free property is still kept . the distribution of the connected probability of the networks of the trna genes with the same anticodon is shown in table [ table1 ] .the connected probability is defined as the fraction of number of real connections to the largest number of possible connections . in the tableit can be found , when , the network is almost entirely connected and most of the connected probabilities are larger than ; when , most of the connected probabilities decrease to one tenth of the former , and some decrease to zero .consider the network of random trna sequences in the same size .when similarity degree is small , most of the nodes have the same number of connections .when increases , the number of the edges of the network decreases sharply and most of the nodes lose their links ; only few of them have two or three edges linked .table [ table2 ] shows the statistics of the connection numbers of real trna similarity network and random trna similarity network at different similarity degrees .the table shows that when , the number of the connections of the two networks are very large ; and when , both of them drop , but the random one drops more quickly than real one does .the connection number of real trna network drops from ( ) to ( ) . the connection number of random trna network drop from ( ) to when .it shows the real trna sequences have more similarity with each other than random ones do . 
in other words , the real trna sequences are not randomly taken . if we consider that the real trna genes have evolutionary relationships , the differences between the statistics of the real and random trna similarity networks shown above can be explained to a certain extent . it has already been found that networks constructed from the large - scale organization of genomic sequence segments display a transition from a gaussian distribution via a truncated power - law to a real power - law shaped connectivity distribution with increasing segment size . the similarity networks of trna sequences have similar features . the investigation begins with an important parameter , the degree distribution of the nodes , and the analysis is presented in figure [ figure3 ] . as observed in figure [ figure3 ] , with the similarity degree increasing , behaves more and more like a power - law distribution . when , the degree distribution of the nodes follows an uninterrupted fluctuating distribution . for those , fluctuates from to ; and for those , fluctuates from to , and the peak of the fluctuation is at . the mean degree , and the maximal degree . when , the peak of the fluctuation deviates to the left , at . when , the distribution of resembles a power - law distribution if the minimal value of is ignored . for , the distribution transits from an approximate power - law distribution to a real power - law . as shown in figure [ figure3](e ) , when , the distribution curve fits the power - law perfectly . the fitting result is . compared to the real trna gene sequences , the degree distribution of the network of random trna sequences , when , is a gaussian distribution ( figure [ figure3](f ) ) . most nodes have approximately the same degree , ; the maximal degree and the minimal degree . when , the distribution is almost unchanged ( figure [ figure3](g ) ) . when , the number of edges descends sharply , with maximal degree . in figure [ figure3](f ) , ( g ) , there are lower peaks besides the main peaks of the gaussian distribution . this is possibly because the random trna sequences are not generated completely arbitrarily , for they must conform to the prototype of the real trna . from the above data analysis , we can conclude that the real trna genes are more self - organized than the random trna genes . the power - law distribution means there are a few trna genes which behave as `` very connected '' hubs of the similarity network . many trna genes are similar to them in the arrangement of their sequences . if we suppose all the trna genes come from a common ancestor , it is possible that the `` very connected '' trna genes have a closer relationship with the ancestor than other trna genes do . in other words , the `` very connected '' trna genes probably diverged less from the ancestral sequences than other trna genes did in the evolutionary history . in mathematics , a way to construct a scale - free network is to follow the rule that an added node has a much higher probability to connect with a node with a large degree than with a node with a small degree . in the trna similarity network , this may mean that the trna genes which have small degrees diverged more from the ancestral sequences and are less stable than the trna genes which have large degrees . if a node connects with other nodes and there are edges connected among these nodes , the clustering coefficient of the original node is defined as where is the total number of possible connections among nodes .
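a direct computation of this per - node quantity from an adjacency list is short ; the sketch below is only an illustration of the definition just given ( the number of edges realised among the neighbours of a node divided by the number of possible such edges ) and is not the code used in the paper .

    def clustering_coefficient(adj, node):
        # adj maps each node to the set of its neighbours
        neighbours = adj[node]
        x = len(neighbours)
        if x < 2:
            return 0.0
        # y = number of edges realised among the neighbours of the node
        y = sum(1 for u in neighbours for v in neighbours
                if u < v and v in adj[u])
        return 2.0 * y / (x * (x - 1))

    # small test graph: a triangle 0-1-2 plus a pendant node 3 attached to 0
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
    print(clustering_coefficient(adj, 0))   # 1 edge among 3 neighbours -> 1/3
    print(clustering_coefficient(adj, 1))   # neighbours 0 and 2 are linked -> 1.0

    # the average clustering coefficient of the whole network
    print(sum(clustering_coefficient(adj, n) for n in adj) / len(adj))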
clustering coefficient reflects relationships of the neighbors of a node , and quantifies the inherent tendency of the network to clustering .as shown in figure [ figure6 ] , the average clustering coefficient of the real trna network is larger than the random one . as increase , decrease .when , it approaches a local minimum and experience a little increase and then decreases slowly again .comparing with the average clustering coefficient of the trna network , the average clustering coefficient of the random network decreases fast while increases , when , .the behavior of the coefficient of two networks is also illustrated in table [ table2 ] .when , , ; when , drops to zero quickly , but decrease slowly .once again , we proved the real trna genes are not randomly selected .the real trna genes have close relationships with each other . table [ table3 ] shows the distribution of the average clustering coefficient of trna groups which are classfied by the possible amino acid - accepting .some groups contain isoacceptor trna which consist of different trna species that bind to alternate codons for the same amino acid residue .the trna group who carries the amino acid residue named met is ignored for it contains only one trna sequence . comparing table [ table3 ] with table [ table2 ], we can conclude that the nodes are more likely to connect with the nodes within the same amino acid group .the trna similarity network can be classified into several large clusters with the same amino acids .it hints that in trna genes evolutionary step is much more likely to happen within the same amino acid group .the cases that a trna gene of certain amino acid evolve to trna gene of another amino acid are rare .in this paper , we want to show the network model is a powerful tool to study the relationship of trna genes .although some results are not new , such as the real trna genes are not random and the relationships among trna genes with same anticodon are closer than the relationships among trna genes with different anticodons , they are evidences that network model works well for the network model distinguishes these properties clearly . what is more , the trna similarity network behaves scale - free properties when is large . as we know the scale - free natureis rooted in two generic mechanisms .firstly scale - free networks describe open systems that grow by the continuous addition of new nodes .secondly scale - free networks exhibit preferential attachment that means the likelihood of connecting to a node depends on the node s degree . with these mechanisms ,the `` very connected '' nodes in scale - free networks usually are added in the network at early time during the growth of the network .it has been found that most recent trna genes are evolved from a few common precursors , and these oldest evolutionary sequences , comparing to the recent trna genes .therefore , in trna similarity netwok , the `` very connected '' trna genes may have diverged less from their ancestors than weakly connected ones .most recently , many research conclusions show that genes of related function could behave together as a group in the networks constructed according to their similarity features . 
in this paper , although we use the simplest alignment , this property can be found . when the similarity degree is small , nodes of the trna genes with the same anticodons are connected to form a local cluster , and within each cluster the nodes are entirely connected . when increases to a large value , a scale - free character emerges in which a few nodes compose the core of the network and most nodes have few links . these observations seem to fit the evolutionary processes of the trna genes perfectly . on the other hand , the oldest trna genes undergo disturbances such as mutation , loss , insertion , or rearrangement etc . during the evolution . some new trna genes are suited to the environment and are preserved . so , they have a high similarity to their ancestral sequences . in the network constructed by the similarity degree of these trna genes , they form local clusters . an interesting finding of the trna similarity networks is that some local clusters have high connectivity with other clusters ; that is to say , some nodes of one cluster have many connections with some nodes of another cluster . see figure [ figure5 ] . it may hint at the evolutionary relationship between trna sequences of two different anticodons . as shown in figure [ figure5](a ) , the network is of two different anticodons : acg and cca . the solid circle nodes are the trna genes of acg , and the hollow circle nodes are the trna genes of cca . in this figure , they mix into one cluster . figure [ figure5](b ) shows the network of anticodons tag and tga . the solid circle nodes are the trna genes of tag , and the hollow circle nodes are trna genes of tga . they form three clusters in the topology map , and each cluster has some nodes which are highly connected with some nodes of other clusters . it shows that although some trna genes have different anticodons , they have high similarities in their sequences . in evolutionary history , the trna genes of one anticodon identity can evolve into trna genes of another identity . the above finding may be evidence of this kind of evolutionary mode . on the other hand , from figure [ figure1](c ) , the network of the same anticodon gcc splits into two clusters . it hints that the evolutionary process of the trna genes of the same anticodon may have diverged in the history . therefore , there are different modes of evolutionary processes , i.e. evolution within the same anticodon groups and evolution among different anticodon groups . the former may be the main part of trna evolution . the latter may be the key case of the interaction among trnas of different anticodons during the evolution . since the alignment we used simply counts the number of sites that are identical , it loses much information about the evolution process . more complicated alignment models may exhibit more details of the relationships among trna genes . the content of the trna database is limited , and the numbers of trna sequences from different organisms vary largely . therefore , the biases of taxon samples may influence the topology of the network , and the results obtained from the network may not completely reflect the evolutionary relationship of trna genes . it is a limitation of the network model that will be improved when more trna genes are sequenced . although we did not get many new results beyond what we already know about the evolution of trna genes , the results serve as proof that the network model can work well in the research on the relationships of trna genes and is a useful tool . robert m.
farber ( 1996 ) _ a mutual information analysis of trna sequence and modification patterns distinctive of species and phylogenetic domain _ biocomputing : proceedings of the 1996 pacific symposium , world scientific publishing co , singapore . j. robert macey et al ( 1999 ) _ molecular phylogenetics , trna evolution , and historical biogeography in anguid lizards and related taxonomic families _ molecular phylogenetics and evolution * 12 * , no.3 , 250 . [ table1 caption ] the distribution of the connected probability of all 57 anticodon trna networks , excluding four anticodons because they have too few vertices . the statistics show that when s=50 , many networks are completely connected ; when s=90 , the connected probability decreases sharply , and some of the connected probabilities decrease to zero .
|
we showed in this paper that the similarity network can be used as a powerful tool to study the relationships of trna genes . we constructed a network of 3719 trna gene sequences using the simplest alignment and studied its topology , degree distribution and clustering coefficient . it is found that the behavior of the network shifts from a fluctuating distribution to a scale - free distribution when the similarity degree of the trna gene sequences increases . trna gene sequences with the same anticodon identity are more self - organized than trna gene sequences with different anticodon identities and form local clusters in the network . an interesting finding in our study is that some vertices of a local cluster are highly connected with other local clusters , and the probable reason is given . moreover , a network constructed from the same number of random trna sequences is used for comparison . the relationships between the properties of the trna similarity network and the characteristics of trna evolutionary history are discussed .
|
diffusion controlled bio - chemical reactions play a central role in keeping any organism alive : the transport of molecules through cell membranes , the passage of ions across the synaptic gap , or the search carried out by drugs on the way to their protein receptors are predominantly diffusive processes . further more ,essentially all of the biological functions of dna are performed by proteins that interact with specific dna sequences , and these reactions are diffusion - controlled .however , it has been realized that some proteins are able to find their specific binding sites on dna much more rapidly than is ` allowed ' by the diffusion limit .it is therefore generally accepted that some kind of facilitated diffusion must take place in these cases .several mechanisms , differing in details , have been proposed .all of them essentially involve two steps : the binding to a random non - specific dna site and the diffusion ( sliding ) along the dna chain .these two steps may be reiterated many times before proteins actually find their target , since the sliding is occasionally interrupted by dissociation .berg and zhou have provided thorough ( but somewhat sophisticated ) theories that allow estimates for the resulting reaction rates .recently , halford has presented a comprehensive review on this subject and proposed a remarkably simple and semiquantitative approach that explicitly contains the mean sliding length as a parameter of the theory .this approach has been refined and put onto a rigorous base in a recent work by the authors .although analytical models provide a good general understanding of the problem , they fail to give quantitative predictions for systems of realistic complexity .therefore , numerical simulations are required to calibrate the set of parameters that form the backbone of these models . however , a straight forward simulation of a protein searching through mega - bases of non - target dna to find its specific binding site would be prohibitive for all except for the most simple numerical models .fortunately , there are better ways .two of the authors ( kk and jl ) have recently introduced the method of excess collisions ( mec ) for an efficient simulation of intramolecular reactions in polymers . in the present work ,this method is modified to apply to second order diffusion controlled chemical reactions ( section [ sec : mec ] ) .we thereby construct a simple random walk approach to facilitated diffusion of dna - binding proteins ( section [ sec : facilitated ] ) and apply the mec and our analytical estimate for reaction times to this model ( section [ sec : appl ] and [ sec : tauf ] ) .section [ sec : chain ] provides details about the generation of dna - chains , followed by a set of simulations covering a large range of system dimensions ( section [ sec : simu ] ) to verify the performance of the mec .we consider a ( time - homogeneous ) stochastic process . the problem is to find the average time of the first arrival at a certain state a , provided that , at time , the system occupied another state b.suppose we observe the system for a long time interval and monitor the events of entering state a. these events will be referred to as collisions .each collision that occurs for the first time after visiting state b will be called prime collision .we obtain the ( asymptotically correct for ) relation where and are the average numbers of all and of prime collisions during the time interval , respectively , and and are the corresponding mean recurrence times . 
hence , the ratio defines the average number of collisions between two visits to state b and does actually not depend on , once is chosen sufficiently large .the mean recurrence time of prime collisions is simply the average time the system requires to move from state a to b and back from state b to a : where is the mean time of first arrival at state b starting from a. with eq .( [ eq:10 ] ) we then obtain this relation is useful for the numerical estimation of if . a simulation cycle then starts in state a and ends as soon as state b is reached , i.e. the reversed reaction is simulated in order to obtain the ( much lower ) reaction rate of the original reaction . in this casewe can write where is the average number of collisions in a simulation cycle and the second term accounts for the prime collision ( which is not observed in the simulations , since the cycle starts at the time instant that immediately follows the prime collision ) .as will be shown later in section [ sec : appl ] , the recurrence time can be renormalized and computed efficiently inside a small test system . note that eq .( [ eq:20 ] ) can be written as where is the mean number of excess collisions per simulation cycle , since the ratio is just the mean number of collisions that would be observed in a simulation run of length with a starting point at an arbitrary state of the system ( not necessary state a ) .we consider a spherical volume ( cell ) of radius and inside it a worm - like chain ( dna ) of length and radius .the protein is represented as a random walker moving inside the cell with a certain time step .a collision takes place once the walker enters the active binding site , a spherical volume of radius positioned in the middle of the chain that , in its turn , coincides with the center of the cell .we want to point out that the parameter does not necessary correspond to any geometrical length in the real system .it defines a probability for the reaction to take place , and may cover additional variables which are not included explicitly in the model , like protein orientation and conformation .an attractive step potential is implemented as where is the shortest distance between walker and chain .this defines a pipe with radius around the chain contour that the walker is allowed to enter freely from outside , but to exit only with the probability where is the boltzmann factor , otherwise it is reflected back inside the chain .we may therefore denote as _ exit probability_. it is important to note that defines the equilibrium constant of the two phases , the free and the non - specifically bound protein , according to where is the concentration of free proteins and is the linear density of proteins that are non - specifically bound to the dna , with being the geometric volume of the chain .the two states of interest are the protein entering the cell , b , and the same protein reaching the active site in the center of the cell , a. more specifically , we are interested in finding the time the walker requires to reach a distance when starting at distance . 
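to make the preceding recipe concrete , the sketch below implements a deliberately stripped - down version of the model : the worm - like chain is replaced by a straight segment on the z axis , the reflecting boundary is treated by simple step rejection , and all numerical values are arbitrary choices of ours rather than parameters of the paper . it simulates one reversed cycle , from the active site to the cell periphery , and counts the collisions that enter eq . ( [ eq:20 ] ) .

    import numpy as np

    rng = np.random.default_rng(0)

    R, R_a, r_ch, half_len = 20.0, 1.0, 1.5, 15.0   # cell, site, chain radius, half chain length
    dr, p_exit = 0.5, 0.1                           # step size and exit probability exp(-E0/kT)

    def dist_to_chain(x):
        # distance to a straight chain lying on the z axis between -half_len and +half_len
        z = np.clip(x[2], -half_len, half_len)
        return np.linalg.norm(x - np.array([0.0, 0.0, z]))

    def step(x):
        # one random-walk step of fixed length dr in a uniformly random direction
        d = rng.normal(size=3)
        trial = x + dr * d / np.linalg.norm(d)
        if np.linalg.norm(trial) >= R:              # reflecting cell boundary (step rejected)
            return x
        if dist_to_chain(x) <= r_ch and dist_to_chain(trial) > r_ch:
            if rng.random() > p_exit:               # leave the chain only with probability p_exit
                return x
        return trial

    def one_cycle():
        # reversed reaction: start at the active site, run until the cell periphery
        # is reached, and count re-entries into the sphere of radius R_a
        x = np.array([R_a, 0.0, 0.0])
        inside, n_coll, steps = True, 0, 0
        while np.linalg.norm(x) < R - dr:
            x = step(x)
            steps += 1
            now_inside = np.linalg.norm(x) <= R_a
            if now_inside and not inside:
                n_coll += 1
            inside = now_inside
        return n_coll, steps

    n_c, n_steps = one_cycle()
    print("collisions in one cycle:", n_c, " steps to reach the periphery:", n_steps)

averaging the collision count over many such cycles and combining it with the recurrence time , rescaled as in eq . ( [ eq:80 ] ) from a small test system , then yields the reaction time via eq . ( [ eq:20 ] ) without ever simulating the slow forward process .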
we shall first define the excluded volume of the chain as } { k_{\rm b } t}\right)\right]\ , d{\bf r } = v_c\ , \left ( 1 - \frac{1}{p } \right)\;,\ ] ] where is the energy of the walker as defined by eq .( [ eq:40 ] ) and the integration is performed over the geometric volume of the cell , .the effective volume of the cell is then next we assume that simulations were carried out within a small test system of radius and that the recurrence time of the walker was found .its recurrence time in the larger system is then found as where we have defined this ratio does not depend on system size and may therefore be called _ specific recurrence time_. it only depends on the potential - depth and the step - size chosen for the random walk .the idea is to compute ( as described in section [ sec : tautilde ] ) for a small test system with dimensions of the order of ( which is the radius of the specific binding site ) to obtain for the system of interest using eq .( [ eq:80 ] ) .once is known , is computed via random walk simulations in the large system , starting at and terminating as soon as the periphery of the cell is reached . following the trajectory of the walker , the number of collisions is monitored as well , so that eq .( [ eq:20 ] ) can be used to determine the much longer reaction time . as has been discussed in detail elsewhere , it is possible to estimate the reaction time for the protein using an analytical approach , once certain conditions are satisfied .the resulting expression is \;\ ] ] with the sliding variable and and being the diffusion coefficients in sliding - mode and free diffusion , respectively . generally , the equilibrium constant has to be determined in simulations of a ( small ) test system , containing a piece of chain without specific binding site . in the present model , known analytically via eq .( [ eq:50 ] ) .if the step - size of the random walker is equal both inside and outside the chain ( the direction of the step being arbitrary ) , we further have , and hence obtain this variable has got the dimension of length ; as we have pointed out in , it corresponds to the average sliding length of the protein along the dna contour in halford s model . in this light ,a ( non rigorous ) interpretation of eq .( [ eq:95 ] ) is as follows : the first term in the round brackets represents the time of free diffusion of the walker , whereas the second term stands for the time of one - dimensional sliding . with increasing affinity of the walker to the chain ( expressed as a reduced value for the exit probability ) ,the sliding variable increases and the contribution of free diffusion to the reaction time ( first term in [ eq:95 ] ) becomes less significant . at the same time , the second term of eq .( [ eq:95 ] ) is growing . depending on the choice of system parameters, there may be a turning point where the latter contribution over - compensates the former , so that the total reaction time increases once is growing further . for a random walk model as simple as used here ,this analytical formula describes the reaction times well within 10% tolerance , as long as the following conditions are satisfied : ( 1 ) , i.e. 
the sliding parameter should be small compared to the system size .this restriction assures the correct normalization of the protein s probability distributions and the diffusion efficiencies as discussed in .( 2 ) during the diffusion process , the system reaches its equilibrium , so that the constant represents the average times the protein spends in free and in non - specifically bound mode .this requires either a crowded environment ( the chain - density inside the cell is high enough ) or a reasonably small value for , since the initial position of the walker is always at the periphery and outside the chain , i.e. not in equilibrium .( 3 ) , where is the persistence length of the chain .this restriction accounts for the assumption that the walker moves along an approximately straight line during one sliding period .however , numerical tests have shown that deviations from a straight geometry actually have little impact to the accuracy of the model .( 4 ) the step - size of the random walk has to be small compared to the size of the binding site .it should be pointed out that an analytical approach as simple as that is by no means supposed to simulate the actual situation in a living cell .instead , it serves as a platform for a much wider class of semi - empirical models .the sliding - parameter contains the affinity of non - specific protein - dna binding and is flexible to vary with the potential chosen for the simulation .the diffusion coefficients and can be adapted to experimental measurements , and the target size contains protein - specific reaction probabilities .these parameters can be fitted to either describe system - specific experimental results or the output of more sophisticated numerical codes which would otherwise not permit any analytical treatment .in order to approximate the real biological situation , the dna was modeled by a chain of straight segments of equal length .its mechanical stiffness was defined by the bending energy associated with each chain joint : where represents the dimensionless stiffness parameter , and the bending angle .the numerical value of defines the persistence length ( ) , i.e. the `` stiffness '' of the chain .the excluded volume effect was taken into account by introducing the effective chain radius .the conformations of the chain , with distances between non - adjacent segments smaller than , were forbidden .the target of specific binding was assumed to lie exactly in the middle of the dna .the whole chain was packed in a spherical volume ( cell ) of radius in such a way that the target occupied the central position . to achieve a close packing of the chain inside the cell, we used the following algorithm .first , a relaxed conformation of the free chain was produced by the standard metropolis monte - carlo ( mc ) method .for the further compression , we defined the center - norm ( c - norm ) as the maximum distance from the target ( the middle point ) to the other parts of the chain .then , the mc procedure was continued with one modification .namely , a mc step was rejected if the c - norm was exceeding 105% of the lowest value registered so far .the procedure was stopped when the desired degree of compaction was obtained .the protein was modeled as a random walker within the cell with reflecting boundaries . 
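the chain generation and compaction just described can be sketched as follows. the bending energy is taken here as g(1 - cos θ) per joint and the tail-pivot move is our own choice, since the exact energy form and move set are not reproduced above; the excluded-volume check between non-adjacent segments is omitted, the relaxation and compression stages are merged into a single loop, and the value of g is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

def bending_energy(pts, g=5.0):
    """Sum of joint energies, taken here as g*(1 - cos(theta)) per joint
    (assumption: the exact functional form is not reproduced above)."""
    b = np.diff(pts, axis=0)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    cos_t = np.sum(b[:-1] * b[1:], axis=1)
    return g * np.sum(1.0 - cos_t)

def c_norm(pts):
    """Maximum distance from the middle point (the target) to the chain."""
    return np.max(np.linalg.norm(pts - pts[len(pts) // 2], axis=1))

def pivot_move(pts):
    """Rotate the tail of the chain about a random joint by a small angle."""
    j = rng.integers(1, len(pts) - 1)
    axis = rng.normal(size=3); axis /= np.linalg.norm(axis)
    ang = rng.normal(scale=0.3)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    Rm = np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * (K @ K)
    new = pts.copy()
    new[j + 1:] = pts[j] + (pts[j + 1:] - pts[j]) @ Rm.T
    return new

# straight initial chain of 100 unit segments, then Metropolis MC with the
# c-norm cap: a candidate is rejected outright if its c-norm exceeds 105%
# of the lowest value registered so far (the compression stage)
pts = np.column_stack([np.arange(101.0), np.zeros(101), np.zeros(101)])
E, best = bending_energy(pts), c_norm(pts)
for _ in range(20_000):
    cand = pivot_move(pts)
    if c_norm(cand) > 1.05 * best:
        continue
    E_new = bending_energy(cand)
    if E_new <= E or rng.random() < np.exp(E - E_new):
        pts, E = cand, E_new
        best = min(best, c_norm(pts))
print("final c-norm:", round(c_norm(pts), 2))
```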
during one time- step it was displaced by the distance in a random direction .once approaching the chain closer than its radius defining the non - specific binding pipe " , it was allowed to enter it freely and continue its random walk inside . upon crossing the pipe boundary from inside ,it was either allowed to pass with the exit probability or otherwise reflected back inside , as described in section [ sec : facilitated ] . below in this paper , one step was chosen as the unit of time and one persistence length nm of the dna chain as the unit of distance .the following values of parameters were used .the length of one segment was chosen as , so that one persistence length was partitioned into 5 segments .the corresponding value of the stiffness parameter was .the chain radius was , and the active site was modeled as a sphere of identical radius embedded into the chain .the step - size of the random walker both inside and outside the chain was , corresponding to a diffusion coefficient .this choice was a compromise between accuracy and simulation time .tests have confirmed that a smaller step - size could somewhat reduce the gap between theoretical ( eq .[ eq:95 ] ) and simulated reaction time at small values of .to compute the specific recurrence time of eq .( [ eq:85 ] ) , a very small test system is sufficient .moreover , the computations can be carried out for the collisions from within the specific binding site of radius .the entire system , i.e. the sphere and a short piece of chain , was embedded into a cube of side - length with reflective walls . in principle, the size of the cube should be of no relevance , but it was found that , if chosen too small , effects of the finite step - size were emerging .the walker started inside the sphere .each time upon leaving the spherical volume a collision was noted .if the walker was about to exit the cylindrical volume of the chain , it was reflected back inside with the probability .the clock was halted as long as the walker moved outside the sphere and only counted time - steps inside the sphere .since the binding site was embedded into the chain , its effective volume ( eq . [ eq:65 ] ) was simply , with being the volume of the specific binding site ..recurrence time ( 3rd column ) inside the spherical binding site ( ) , specific recurrence time eq .( [ eq:85 ] ) ( 4th column ) , and simulation results for the large system ( , column 5 - 9 ) .the first column is the exponent of the exit probability , the second column the corresponding sliding parameter eq .( [ eq:105 ] ) .the last column defines the speed - up achieved with the mec approach .[ tab : ncoll ] [ cols=">,^,^,^,^,^,^,^,^ " , ] figure [ fig : tauf ] displays the first reaction times as a function of the sliding parameter . both methods ( explicit simulation and mec approach ) deliver identical results within the statistical errors .the solid curve is a plot of the analytical estimate eq .( [ eq:95 ] ) , which consistently under - estimates the first reaction time by 5 - 10% but otherwise describes the trends accurately , including the location of the minimum .the results prove that facilitated diffusion is able to accelerate the reaction considerably .it is also obvious that a very high affinity of the protein to the chain becomes counter - productive : the walker spends long periods of time trapped within a particular loop of the chain without being able to explore the remaining parts of the cell exhaustively . 
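the measurement of the recurrence time in the small test box, and its rescaling to the full cell, could be organised as in the sketch below. the chain piece and the exit-probability rule are left out for brevity, the numerical values are placeholders, and the rescaling assumes that the recurrence time grows linearly with the effective volume (an assumption that is consistent with the stated size independence of the specific recurrence time, but the exact equations are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(3)

def recurrence_time_in_test_box(r_a=1.0, box=4.0, h=0.25, n_steps=200_000):
    """Recurrence-time measurement in the small test system: the walker
    starts inside the spherical binding site of radius r_a, which sits in
    a cube of side `box` with reflecting walls; a collision is noted each
    time the walker leaves the sphere, and the clock only advances while
    the walker is inside it.  (The short piece of chain and the
    exit-probability rule of the full model are omitted for brevity.)"""
    pos = np.zeros(3)
    time_inside, n_coll = 0, 0
    for _ in range(n_steps):
        v = rng.normal(size=3)
        new = pos + h * v / np.linalg.norm(v)
        if np.any(np.abs(new) > box / 2):      # reflecting walls: reject step
            continue
        if np.linalg.norm(pos) <= r_a:
            time_inside += 1
            if np.linalg.norm(new) > r_a:      # walker leaves the sphere
                n_coll += 1
        pos = new
    return time_inside / n_coll

def rescaled_recurrence_time(tau_test, v_eff_test, v_eff_large):
    """Rescale to the large cell, assuming the recurrence time scales
    linearly with the effective volume, so that tau / V_eff (the
    'specific recurrence time') is size independent."""
    return tau_test / v_eff_test * v_eff_large

tau_test = recurrence_time_in_test_box()
print("recurrence time in the test box ~", round(tau_test, 1))
print("rescaled to a 100x larger effective volume ~",
      round(rescaled_recurrence_time(tau_test, 1.0, 100.0), 1))
```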
ideally , the affinity has to be chosen so that the walker is occasionally able to dissociate from the chain and associate again after having passed some time in free diffusion .the actual value of the ideal affinity depends on the system parameters and is easily estimated using eq .( [ eq:95 ] ) prior to any simulations .table [ tab : simu ] contains a summary of the simulation results for various system sizes .it appears that the speed - up delivered by the mec approach increased proportional to the square of the cell radius , and gained a significant dimension in the largest of our test systems .whereas a cell as small as was treated within 30 minutes on a pc , including 2000 runs of explicit simulation for 12 different values of the exit probability , the large cell of required more than 5 days for the same set of computations . the mec method reduced that time to less than four hours .in this work , the method of excess - collisions ( mec ) , recently introduced as a technique to speed up the simulation of intramolecular reactions in polymers , is generalized to second order diffusion controlled reactions , and applied to the problem of facilitated diffusion of site - specific dna - binding proteins .this method is based on eq .( [ eq:20 ] ) and ( [ eq:80 ] ) to simulate the much faster back - reaction ( protein starts at the binding site and propagates to the cell - periphery ) instead of .we have demonstrated how mec led to a speed - up of up to two orders of magnitude , depending on protein - dna affinity ( table [ tab : ncoll ] ) , and gaining significance with increasing cell size ( table [ tab : simu ] ) .the cell model employed in this work was perhaps the most simple ansatz that was possible without being trivial , and intentionally so .the simulations had to cover a large range of system sizes in order to verify the efficiency of the mec approach .the chain - lengths span a factor of 64 from the smallest to the largest system .nevertheless , the validity of our results does not depend on the complexity of the model , such as protein - dna potential , which modifies the equilibrium constant in eq .( [ eq:50 ] ) and thereby the sliding parameter ( eq .[ eq:100 ] ) , hydrodynamic interactions , which would lead to effective diffusion coefficients , also modifying , or the introduction of protein orientation and conformation , acting on the effective target size .the speed - up is consistently evaluated in terms of simulation steps , not cpu - time , to ensure invariance on the complexity of the underlying protein / dna model .based on the results presented here , the mec approach can be expected to reduce the numerical effort by orders of magnitude , once more sophisticated ( and time consuming ) simulation techniques are employed to study biochemical reaction times in systems of realistic dimensions .
|
in this paper , a new method to efficiently simulate diffusion - controlled second - order chemical reactions is derived and applied to site - specific dna - binding proteins . the protein enters a spherical cell and searches for its specific binding site in the center of the cell via two competing modes , free diffusion and dna - sliding . a straightforward simulation of this process is not required : an alternative , exact approach is shown to be substantially faster than explicit random - walk simulations , and its speed - up grows rapidly with system size .
|
in a _ set system auction _ there is a single buyer and many vendors that can provide various services .it is assumed that the buyer s requirements can be satisfied by various subsets of the vendors ; these subsets are called the _ feasible sets_. in particular , a widely - studied class of set - system auctions is _ path auctions _ , where each vendor is able to sell access to a link in a network , and the feasible sets are those sets whose links contain a path from a given source to a given destination ; the study of these auctions has been initiated in the seminal paper by nisan and ronen ( see also ) .we assume that each vendor has a privately known cost of providing his services , but submits a possibly larger _bid _ to the auctioneer . based on these bids ,the auctioneer selects a feasible subset of vendors , and makes payments to the vendors in this subset .each selected vendor enjoys a profit of payment minus cost .vendors want to maximise profit , while the buyer wants to minimise the amount he pays .a natural goal in this setting is to design a _ truthful _auction , in which vendors have an incentive to bid their true cost .this can be achieved by paying each selected vendor a premium above his bid in such a way that the vendor has no incentive to overbid .an important issue in mechanism design ( which we address in this paper ) is how much the auctioneer will have to overpay in order to ensure truthful bids . in the context of path auctionsthis topic was first addressed by archer and tardos .they define the _ frugality ratio _ of a mechanism as the ratio between its total payment and the cost of the cheapest path disjoint from the path selected by the mechanism .they show that , for a large class of truthful mechanisms for this problem , the frugality ratio is as large as the number of edges in the shortest path .talwar extends this definition of frugality ratio to general set systems , and studies the frugality ratio of the classical vcg mechanism for many specific set systems , such as minimum spanning trees and set covers .the situations where one has to hire a team of agents to perform a task are quite typical in many domains . in a market - based environment , this goal can be achieved by means of a ( combinatorial ) procurement auction : the agents submit their bids and the buyer selects a team based on the agents ability to work with each other as well as their payment requirements .the problem is complicated by the fact that only _ some _ subsets of agents constitute a valid team : the task may require several skills , and each agent may possess only a subset of these skills , the agents must be able to communicate with each other , etc .also , for each agent there is a cost associated with performing the task .this cost is known to the agent himself , but not to the buyer or other agents .a well - known example of this setting is a _shortest path auction _, where the buyer wants to purchase connectivity between two points in a network that consists of independent subnetworks . in this case, the valid teams are sets of links that contain a path between these two points .this problem has been studied extensively in the recent literature starting with the seminal paper by nisan and ronen ( see also ) .generally , problems in this category can be formalized by specifying ( explicitly or implicitly ) the sets of agents capable of performing the tasks , or _feasible _ sets .consequently , the auctions of this type are sometimes referred to as _ set system auctions_. 
the buyer and the agents have conflicting goals : the buyer wants to spend as little money as possible , and the agents want to maximise their earnings .therefore , to ensure truthful bidding , the buyer has to use a carefully designed payment scheme .while it is possible to use the celebrated vcg mechanism for this purpose , it suffers from two drawbacks .first , to use vcg , the buyer always has to choose a cheapest feasible set .if the problem of finding a cheapest feasible set is computationally hard , this may require exponential computational effort .one may hope to use approximation algorithms to mitigate this problem : the buyer may be satisfied with a feasible set whose cost is _ close _ to optimal and for many np - hard problems there exist fast algorithms for finding approximately optimal solutions. however , generally speaking , one can not combine such algorithms with vcg - style payments and preserve truthfulness .the second issue with vcg is that it has to pay a bonus to each agent in the winning team . as a result, the total vcg payment may greatly exceed the true cost of a cheapest feasible set .in fact , one can easily construct an example where this is indeed the case . while the true cost of a cheapest feasible set is not necessarily a realistic benchmark for a truthful mechanism , it turns out that vcg performs quite badly with respect to more natural benchmarks discussed later in the paper .therefore , a natural question to ask is whether one can design truthful mechnisms and reasonable benchmarks for a given set system such that these mechanisms perform well with respect to these benchmarks .this issue was first raised by nisan and ronen .it was subsequently addressed by archer and tardos , who introduced the concept of _ frugality _ in the context of shortest path auctions .the paper proposes to measure the overpayment of a mechanism by the worst - case ratio between its total payment and the cost of the cheapest path that is disjoint from the path selected by the mechanism ; this quantity is called the _frugality ratio_. the authors show that for a large class of truthful mechanisms for this problem ( which includes vcg and all mechanisms that satisfy certain natural properties ) the frugality ratio is , where is the number of edges in the shortest path .subsequently , elkind et al . showed that a somewhat weaker bound of holds for _ all _ truthful shortest path auctions .talwar extends the definition of frugality ratio given in to general set systems , and studies the frugality ratio of the vcg mechanism for many specific set systems , such as minimum spanning trees or set covers . while the definition of frugality ratio proposed by is well - motivated and has been instrumental in studying truthful mechanisms for set systems , it is not completely satisfactory .consider , for example , the graph of figure [ fig : diamond ] with the costs , .this graph is 2-connected and the vcg payment to the winning path abcd is bounded .however , the graph contains no a d path that is disjoint from abcd , and hence the frugality ratio of vcg on this graph remains undefined . at the same time, there is no _ monopoly _ , that is , there is no vendor that appears in all feasible sets . 
in auctions for other types of set systems ,the requirement that there exist a feasible solution disjoint from the selected one is even more severe : for example , for vertex - cover auctions ( where vendors correspond to the vertices of some underlying graph , and the feasible sets are vertex covers ) the requirement means that the graph must be bipartite . to deal with this problem , karlin et al . suggest a better benchmark , which is defined for any monopoly - free set system .this quantity , which they denote by , intuitively corresponds to the total payoff in a cheapest nash equilibrium of a first - price auction .based on this new definition , the authors construct new mechanisms for the shortest path problem and show that the overpayment of these mechanisms is within a constant factor of optimal . *vertex cover auctions * we propose a truthful polynomial - time auction for vertex cover that outputs a solution whose cost is within a factor of 2 of optimal , and whose frugality ratio is at most , where is the maximum degree of the graph ( theorem [ thm:2delta ] ) .we complement this result by proving ( theorem [ thm : delta/4 ] ) that for any , there are graphs of maximum degree for which _ any _ truthful mechanism has frugality ratio at least .this means that both the solution quality and the frugality ratio of our auction are within a constant factor of optimal . in particular , the frugality ratio is within a factor of of optimal . to the best of our knowledge ,this is the first auction for this problem that enjoys these properties .moreover , we show how to transform any truthful mechanism for the vertex - cover problem into a frugal one while preserving the approximation ratio .* frugality ratios * our vertex cover results naturally suggest two modifications of the definition of in .these modifications can be made independently of each other , resulting in four different payment bounds that we denote as , , , and , where is equal to the original payment bound of in .all four payment bounds arise as nash equilibria of certain games ( see appendix ) ; the differences between them can be seen as `` the price of initiative '' and `` the price of co - operation '' ( see section [ sec : frugality ] ) . while our main result about vertex cover auctions ( theorem [ thm:2delta ] ) is with respect to , we make use of the new definitions by first comparing the payment of our mechanism to a weaker bound , and then bootstrapping from this result to obtain the desired bound . 
inspired by this application , we embark on a further study of these payment bounds .our results here are as follows : 1 .we observe ( proposition [ inequalities ] ) that the payment bounds we consider always obey a particular order that is independent of the choice of the set system and the cost vector , namely , .we provide examples ( proposition [ exvcone ] and corollaries [ exvctwo ] and [ exvcthree ] ) showing that for the vertex cover problem any two consecutive bounds can differ by a factor of , where is the number of agents .we then show ( theorem [ thm : upper ] ) that this separation is almost optimal for general set systems by proving that for any set system .in contrast , we demonstrate ( theorem [ thm : upperpath ] ) that for path auctions .we provide examples ( proposition [ expath ] ) showing that this bound is tight .we see this as an argument for the study of vertex - cover auctions , as they appear to be more representative of the general team - selection problem than the widely studied path auctions .2 . we show ( theorem [ thm : ratios ] ) that for any set system , if there is a cost vector for which and differ by a factor of , there is another cost vector that separates and by the same factor and vice versa ; the same is true for the pairs and .this result suggests that the four payment bounds should be studied in a unified framework ; moreover , it leads us to believe that the bootstrapping technique of theorem [ thm:2delta ] may have other applications .3 . we evaluate the payment bounds introduced here with respect to a checklist of desirable features .in particular , we note that the payment bound of exhibits some counterintuitive properties , such as nonmonotonicity with respect to adding a new feasible set ( proposition [ clm : nm ] ) , and is np - hard to compute ( theorem [ thm : nphard ] ) , while some of the other payment bounds do not suffer from these problems .this can be seen as an argument in favour of using weaker but efficiently computable bounds and .vertex - cover auctions have been studied in the past by talwar and calinescu .both of these papers are based on the definition of frugality ratio used in ; as mentioned before , this means that their results only apply to bipartite graphs .talwar shows that the frugality ratio of vcg is at most .however , since finding the cheapest vertex cover is an np - hard problem , the vcg mechanism is computationally infeasible .the first ( and , to the best of our knowledge , only ) paper to investigate polynomial - time truthful mechanisms for vertex cover is . that paper studies an auction that is based on the greedy allocation algorithm , which has an approximation ratio of .while the main focus of is the more general set cover problem , the results of imply a frugality ratio of for vertex cover .our results improve on those of as our mechanism is polynomial - time computable , as well as on those of , as our mechanism has a better approximation ratio , and we prove a stronger bound on the frugality ratio ; moreover , this bound also applies to the mechanism of .a _ set system _ is a pair , where is the _ ground set _ , , and is a collection of _ feasible sets _ , which are subsets of .two particular types of set systems are of particular interest to us _ shortest path _ systems and _ vertex cover _ systems . in a shortest path system ,the ground set consists of all edges of a network , and a set of edges is feasible if it contains a path between two specified vertices and . 
in a vertex cover system ,the elements of the ground set are the vertices of a graph , and the feasible sets are vertex covers of this graph .we will also present some results for _ matroid _ systems , in which the ground set is the set of all elements of a matroid , and the feasible sets are the bases of the matroid . for a formal definition of a matroid , the reader is referred to . in this paper , we use the following characterisation of a matroid .[ matroid ] a collection of feasible sets is the set of bases of a matroid if and only if for any , there is a bijection between and such that for any . in set system auctions ,each element of the ground set is owned by an independent agent and has an associated non - negative cost .the goal of the buyer is to select ( purchase ) a feasible set .each element in the selected set incurs a cost of .the elements that are not selected incur no costs .the auction proceeds as follows : all elements of the ground set make their bids , then the buyer selects a feasible set based on the bids and makes payments to the agents .formally , an auction is defined by an _ allocation rule _ and a _ payment rule _ .the allocation rule takes as input a vector of bids and decides which of the sets in should be selected .the payment rule also takes as input a vector of bids and decides how much to pay to each agent .the standard requirements are _ individual rationality _ , that the payment to each agent should be at least as high as its incurred cost ( 0 for agents not in the selected set and for an agent in the selected set ) , and _ incentive compatibility _ , or _, that each agent s dominant strategy is to bid its true cost .an allocation rule is _ monotone _ if an agent can not increase its chance of getting selected by raising its bid .formally , for any bid vector and any , if then for any . given a monotone allocation rule and a bid vector , the _ threshold bid _ of an agent is the highest bid of this agent that still wins the auction , given that the bids of other participants remain the same . formally , .it is well known ( see , e.g. ) that any set - system auction that has a monotone allocation rule and pays each agent its threshold bid is truthful ; conversely , any truthful set - system auction has a monotone allocation rule .the vcg mechanism is a truthful mechanism that maximises the `` social welfare '' and pays 0 to the losing agents . for set system auctions, this simply means picking a cheapest feasible set , paying each agent in the selected set its threshold bid , and paying 0 to all other agents .note , however , that the vcg mechanism may be difficult to implement , since finding a cheapest feasible set may be computationally hard. if is a set of agents , denotes .( note that we identify an agent with its associated member of the ground set . )similarly , denotes .we start by reproducing the definition of the quantity from ( * ? ? ?* definition 4 ) .let be a set system and let be a cheapest feasible set with respect to the ( vector of ) true costs .then is the solution to the following optimisation problem .minimise subject to * for all * for all * for every , there is such that and the bound can be seen as an outcome of a hypothetical two - stage process as follows .an omniscient auctioneer knows all the vendors private costs , and identifies a cheapest set .the auctioneer offers payments to the members of .he does it so as to minimise his total payment subject to the following constraints that represent a notion of fairness . 
*the payment to any member of covers that member s cost .( condition 1 ) * is still a cheapest set with respect to the new cost vector in which the cost of a member of has been increased to his offer . ( condition 2 ) * if any member of were to ask for a higher payment than his offer , then some other feasible set ( not containing ) would be cheapest .( condition 3 ) this definition captures many important aspects of our intuition about ` fair ' payments .however , it can be modified in two ways , both of which are still quite natural , but result in different payment bounds .first , we can consider the worst rather than the best possible outcome for the buyer .that is , we can consider the maximum total payment that the agents can extract by jointly selecting their bids subject to ( 1 ) , ( 2 ) , and ( 3 ) .such a bound corresponds to maximising subject to ( 1 ) , ( 2 ) , and ( 3 ) rather than minimising it . if the agents in submit bids ( rather than the auctioneer making offers ) , this kind of outcome is plausible .it has to be assumed that agents submit bids independently of each other , but know how high they can bid and still win .hence , the difference between these two definitions can be seen as `` the price of initiative '' .second , the agents may be able to make payments to each other . in this case , if they can extract more money from the buyer by agreeing on a vector of bids that violates individual rationality ( i.e. , condition ( 1 ) ) for some bidders , they might be willing to do so , as the agents who are paid below their costs will be compensated by other members of the group .the bids must still be realistic , i.e. , they have to satisfy .the resulting change in payments can be seen as `` the price of co - operation '' and corresponds to replacing condition ( 1 ) with the following weaker condition : by considering all possible combinations of these modifications , we obtain four different payment bounds , namely * , which is the solution to the optimisation problem `` minimise subject to , ( 2 ) , and ( 3 ) '' .* , which is the solution to the optimisation problem `` maximise subject to , ( 2 ) , and ( 3 ) '' .* , which is the solution to the optimisation problem `` minimise subject to ( 1 ) , ( 2 ) , and ( 3 ) '' .* , which is the solution to the optimisation problem `` maximise subject to ( 1 ) , ( 2 ) , ( 3 ) '' .the abbreviations tu and ntu correspond , respectively , to transferable utility and non - transferable utility , i.e. , the agents ability / inability to make payments to each other . for concreteness, we will take to be where is the lexicographically least amongst the cheapest feasible sets .we define , , and similarly , though we will see in section [ sec : choices ] that and are independent of the choice of .note that the quantity from is .the second modification ( transferable utility ) is more intuitively appealing in the context of the maximisation problem , as both assume some degree of co - operation between the agents . while the second modification can be made without the first , the resulting payment bound turns out to be too strong to be a realistic benchmark , at least for general set systems . in particular, it can be smaller than the total cost of a cheapest feasible set ( see section [ sec : properties ] ) .however , we provide the definition and some results about , both for completeness and because we believe that it may help to understand which properties of the payment bounds are important for our proofs. 
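since the maximisation bounds drop the tightness condition ( 3 ) , ntumax and tumax reduce to linear programs whenever the feasible sets are listed explicitly . the sketch below computes ntumax this way ; condition ( 2 ) is encoded as " the total bid of the winning set outside a feasible set t is at most the cost of t outside the winning set " , which is how we read the requirement that the winning set stays cheapest when its members ' costs are raised to their bids . the example instance is a diamond - shaped path auction of the kind used as a running example , but the numerical costs are hypothetical and serve only to make the program non - trivial .

```python
from scipy.optimize import linprog

def ntumax(feasible_sets, costs, S):
    """NTUmax for a set system whose feasible sets are listed explicitly.
    Maximise the total bid of the winning set S subject to
      (1) b_e >= c_e for every e in S, and
      (2) for every feasible set T, the bid mass of S outside T does not
          exceed the cost of T outside S (S stays a cheapest set when the
          costs of its members are raised to their bids).
    Condition (3) is redundant for the maximisation bounds, so this is a
    plain linear program."""
    S = sorted(S)
    A_ub, b_ub = [], []
    for T in feasible_sets:
        row = [1.0 if e not in T else 0.0 for e in S]
        if any(row):
            A_ub.append(row)
            b_ub.append(sum(costs[e] for e in T if e not in S))
    res = linprog(c=[-1.0] * len(S), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(costs[e], None) for e in S], method="highs")
    return -res.fun

# winning path {AB, BC, CD} plus two alternative paths (hypothetical costs)
costs = {"AB": 1, "BC": 1, "CD": 1, "AC": 3, "BD": 3}
feasible = [{"AB", "BC", "CD"}, {"AB", "BD"}, {"AC", "CD"}]
print("NTUmax =", ntumax(feasible, costs, {"AB", "BC", "CD"}))
```

replacing the lower bounds by zero gives tumax ; the minimisation bounds additionally carry the tightness condition ( 3 ) and are therefore not plain linear programs .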
another possibility would be to introduce an additional constraint in the definition of ( note that this condition holds automatically for non - transferable utility bounds , and also for , as ) .however , such a definition would have no direct economic interpretation , and some of our results ( in particular , the ones in section [ sec : compare ] ) would no longer hold .[ maxeasy ] for the payment bounds that are derived from maximisation problems , ( i.e. , and ) , constraints of type ( 3 ) are redundant and can be dropped .hence , and are solutions to linear programs , and therefore can be computed in polynomial time as long as we have a separation oracle for constraints in ( 2 ) .in contrast , can be np - hard to compute even if the size of is polynomial ( see section [ sec : properties ] ) . the first and third inequalities in the following observation follow from the fact that condition is weaker than condition ( 1 ) .[ inequalities ] .let be a truthful mechanism for .let denote the total payments of when the actual costs are .a _ frugality ratio _ of with respect to a payment bound is the ratio between the payment of and this payment bound . in particular , we conclude this section by showing that there exist set systems and respective cost vectors for which all four payment bounds are different . in the next section , we quantify this difference , both for general set systems , and for specific types of set systems , such as path auctions or vertex cover auctions .consider the shortest - path auction on the graph of figure [ fig : diamond ] .the minimal feasible sets are all paths from to .it can be verified , using the reasoning of proposition [ expath ] below , that for the cost vector , , , we have * ( with the bid vector , ) , * ( with the bid vector , ) , * ( with the bid vector , ) , * ( with the bid vector , ) .we start by showing that for path auctions any two consecutive payment bounds ( in the sequence of proposition [ inequalities ] ) can differ by at least a factor of 2 . in section[ sec : upper ] ( theorem [ thm : upperpath ] ) , we show that the separation results in proposition [ expath ] are optimal ( that is , the factor of 2 is maximal for path auctions ) .[ expath ] there is a path auction and cost vectors , and for which * , * , * . consider the graph of figure [ fig : diamond ] .for the cost vectors , and defined below , abcd is the lexicographically - least cheapest path , so we can assume that .* let be edge costs , .the inequalities in ( 2 ) are , . by condition ( 3 ), both of these inequalities must be tight ( the former one is the only inequality involving , and the latter one is the only inequality involving ) .the inequalities in ( 1 ) are , , .now , if the goal is to maximise , the best choice is , , so . on the other hand , if the goal is to minimise , one should set , , so .* let be the edge costs be , , .the inequalities in ( 2 ) are the same as before , and by the same argument both of them are , in fact , equalities . the inequalities in ( 1 ) are , , .our goal is to maximise .if we have to respect the inequalities in ( 1 ) , we have to set , , so . otherwise , we can set , , so . * the edge costs are , , .again , the inequalities in ( 2 ) are the same , and both are , in fact , equalities . the inequalities in ( 1 ) are , , .our goal is to minimise .if we have to respect the inequalities in ( 1 ) , we have to set , , so . 
otherwise , we can set , , so .the separation results for path auctions are obtained on the same graph using very similar cost vectors .it turns out that this is not coincidental .namely , we can prove the following theorem .[ thm : ratios ] for any set system , and any feasible set , where the maximum is over all cost vectors for which is a cheapest feasible set .the proof of the theorem follows directly from the four lemmas proved below ; in particular , the first equality in theorem [ thm : ratios ] is obtained by combining lemmas [ useone ] and [ usetwo ] , and the second equality is obtained by combining lemmas [ usethree ] and [ usefour ] .[ useone ] suppose that is a cost vector for such that is a cheapest feasible set and + .then there is a cost vector such that is a cheapest feasible set and .suppose that and where .assume without loss of generality that consists of elements , and let and be the bid vectors that correspond to and , respectively .construct the cost vector by setting for , for .clearly , is a cheapest set under .moreover , as the costs of elements outside of remain the same , the right - hand sides of all constraints in ( 2 ) and ( 3 ) do not change , so any bid vector that satisfies ( 2 ) and ( 3 ) with respect to , also satisfies them with respect to . we will construct two bid vectors and that satisfy conditions ( 1 ) , ( 2 ) and ( 3 ) for the cost vector , and also satisfy , .it follows that and , which implies the lemma .we can set : this bid vector satisfies conditions ( 2 ) and ( 3 ) since does , and we have , which means that satisfies condition ( 1 ). we can set .again , satisfies conditions ( 2 ) and ( 3 ) since does , and since satisfies condition ( 1 ) , we have , which means that satisfies condition ( 1 ) .[ usetwo ] suppose that is a cost vector for such that is a cheapest feasible set and + .then there is a cost vector such that is a cheapest feasible set and .suppose that and where .again , assume that consists of elements , and let and be the bid vectors that correspond to and , respectively .construct the cost vector by setting for , for . as satisfies condition ( 2 ) , is a cheapest set under . as in the previous construction ,the right - hand sides of all constraints in ( 2 ) do not change .let be a bid vector that corresponds to .let us prove that .indeed , the bid vector must satisfy for ( condition ( 1 ) ) .suppose that for some , and consider the constraint in ( 2 ) that is tight for .there is such a constraint , as satisfies condition ( 3 ) .namely , for some not containing , for every appearing in the left - side of this constraint , we have but , so the bid vector violates this constraint .hence , for all and therefore . on the other hand, we can construct a bid vector that satisfies conditions ( 2 ) and ( 3 ) with respect to and has .namely , we can set : as satisfies conditions ( 2 ) and ( 3 ) , so does .as , this proves the lemma .[ usethree ] suppose that is a cost vector for such that is a cheapest feasible set and + .then there is a cost vector such that is a cheapest feasible set and .suppose that and where .again , assume consists of elements , and let and be the bid vectors that correspond to and , respectively .the cost vector is obtained by setting for , for . since satisfies condition ( 2 ) , is a cheapest set under , and the right - hand sides of all constraints in ( 2 ) do not change .let be a bid vector that corresponds to .it is easy to see that , since the bid vector must satisfy for ( condition ( 1 ) ) , and . 
on the other hand ,we can construct a bid vector that satisfies conditions ( 2 ) and ( 3 ) with respect to and has .namely , we can set : as satisfies conditions ( 2 ) and ( 3 ) , so does . as ,this proves the lemma .[ usefour ] suppose that is a cost vector for such that is a cheapest feasible set and + .then there is a cost vector such that is a cheapest feasible set and .suppose that and where .again , assume that consists of elements , and let and be the bid vectors that correspond to and , respectively .construct the cost vector by setting for , for .clearly , is a cheapest set under .moreover , as the costs of elements outside of remained the same , the right - hand sides of all constraints in ( 2 ) do not change .we construct two bid vectors and that satisfy conditions ( 1 ) , ( 2 ) , and ( 3 ) for the cost vector , and have , . as and , this implies the lemma .we can set .indeed , the vector satisfies conditions ( 2 ) and ( 3 ) since does .also , since satisfies condition ( 1 ) , we have , i.e. , satisfies condition ( 1 ) with respect to . on the other hand ,we can set : the vector satisfies conditions ( 2 ) and ( 3 ) since does , and it satisfies condition ( 1 ) , since .in contrast to the case of path auctions , for vertex - cover auctions the gap between and ( and hence between and , and between and ) can be proportional to the size of the graph .[ exvcone ] for any , there is a an -vertex graph and a cost vector for which + .the underlying graph consists of an -clique on the vertices , and an extra vertex adjacent to .see figure [ fig : vc ] .the costs are , .we can assume that ( this is the lexicographically first vertex cover of cost ) .for this set system , the constraints in ( 2 ) are for .clearly , we can satisfy conditions ( 2 ) and ( 3 ) by setting for , .hence , . for , there is an additional constraint , so the best we can do is to set for , , which implies .combining proposition [ exvcone ] with lemmas [ useone ] and [ usethree ] ( and re - naming vertices to make the lexicographically - least cheapest feasible set ) , we derive the following corollaries .[ exvctwo ] for any , there is an instance of the vertex cover problem on an -vertex graph for which for which .[ exvcthree ] for any , there is an instance of the vertex cover problem on an -vertex graph for which .it turns out that the lower bound proved in the previous subsection is almost tight .more precisely , the following theorem shows that no two payment bounds can differ by more than a factor of ; moreover , this is the case not just for the vertex cover problem , but for general set systems .we bound the gap between and .since , this bound applies to any pair of payment bounds .[ thm : upper ] for any set system auction having vendors and any cost vector , let be the size of the winning set .let be the true costs of elements in , let be their bids that correspond to , and let be their bids that correspond to .for , let be the feasible set associated with using ( 3 ) applied to the tumin bids . since , it follows that since the result follows .the final line of the proof of theorem [ thm : upper ] shows that , in fact , the upper bound on + can be strengthened to the size of the winning set , .note that in proposition [ exvcone ] , as well as in corollaries [ exvctwo ] and [ exvcthree ] , , so these results do not contradict each other . 
for path auctions ,this upper bound can be improved to 2 , matching the lower bounds of section [ sec : pathlb ] .[ thm : upperpath ] for any path auction with cost vector , . given a network ,let be the lexicographically - least cheapest path in . to simplify notation , relabel the vertices of as so that .let and be bid vectors that correspond to and , respectively .for let be a path associated with by a constraint of type ( 3 ) applied to ; consequently .we can assume without loss of generality that coincides with up to some vertex , then deviates from to avoid , and finally returns to at a vertex and coincides with from then on ( clearly , it might happen that or ) .indeed , if deviates from more than once , one of these deviations is not necessary to avoid and can be replaced with the respective segment of without increasing the cost of . among all paths of this form ,let be the one with the largest value of , i.e. , the `` rightmost '' one .this path corresponds to an equality of the form .we construct a set of equalities such that every variable appears in at least one of them .we construct inductively as follows .start by setting . at the step ,suppose that all variables up to ( but not including ) appear in at least one equality in .add to .note that for any we have .this is because the equalities added to during the first steps did not cover .see figure [ fig : avoid ] .since , we must also have : otherwise , would not be the `` rightmost '' constraint for .therefore , the variables in and do not overlap , and hence no can appear in more than two equalities in . hence , adding up all of the equalities in ( and noting that the are non - negative ) we obtain on the other hand , each equality has a corresponding inequality based on constraint ( 2 ) applied to , namely . summing these inequalitieswe have .the result follows from this and the previous expression .finally , we show that for matroids all four payment bounds coincide .[ thm : uppermatroid ] for any matroid with cost vector , .let , be the lexicographically - least cheapest base of .we can assume without loss of generality that .let and be bid vectors that correspond to and , respectively .for the bid vector and any , consider a constraint in ( 2 ) that is tight for and the base that is associated with this constraint .suppose , i.e. , the tight constraint for is of the form , . by proposition [ matroid ]there is a mapping such that and for the set is a base .therefore by condition ( 2 ) we have for all .consequently , it must be the case that all these constraints are tight as well , and in particular we have .on the other hand , as , we also have . as this holds for any , we have . since also , the theorem follows .recall that for a vertex - cover auction on a graph , an _ allocation rule _ is an algorithm that takes as input a bid for each vertex and returns a vertex cover of . 
as explained in section [ sec : preliminaries ] , we can combine any monotone allocation rule with threshold payments to obtain a truthful auction .two natural examples of monotone allocation rules are , which finds an optimal vertex cover , and the mechanism that uses the greedy allocation algorithm .however , can not be guaranteed to run in polynomial time unless and has a worst - case approximation ratio of .another approximation algorithm for ( weighted ) vertex cover , which has approximation ratio 2 , is the _ local ratio _ algorithm .this algorithm considers the edges of one by one .given an edge , it computes and sets , .after all edges have been processed , returns the set of vertices .it is not hard to check that if the order in which the edges are considered is independent of the bids , then this algorithm is monotone as well .hence , we can use it to construct a truthful auction that is guaranteed to select a vertex cover whose cost is within a factor of 2 from the optimal .however , while the quality of the solution produced by is much better than that of , we still need to show that its total payment is not too high . in the next subsection, we bound the frugality ratio of ( and , more generally , all algorithms that satisfy the condition of _ local optimality _ , defined later ) by , where is the maximum degree of .we then prove a matching lower bound showing that for some graphs the frugality ratio of any truthful auction is at least . for vertices and , means that there is an edge between and .we say that an allocation rule is _ locally optimal _ if whenever , the vertex is not chosen .note that for any such rule the threshold bid of satisfies .the mechanisms , , and are locally optimal .[ thm:2delta ] any vertex cover auction on a graph with maximum degree that has a locally optimal and monotone allocation rule and pays each agent its threshold bid has frugality ratio . to prove theorem [ thm:2delta ] , we first show that the total payment of any locally optimal mechanism does not exceed .we then demonstrate that . by combining these two results, the theorem follows .[ lem : pay ] let be a graph with maximum degree .let be a vertex - cover auction on that satisfies the conditions of theorem [ thm:2delta ] .then for any cost vector , the total payment of satisfies . first note that any such auction is truthful , so we can assume that each agent s bid is equal to its cost .let be the vertex cover selected by .then by local optimality we now derive a lower bound on ; while not essential for the proof of theorem [ thm:2delta ] , it helps us build the intuition necessary for that proof .[ lem : vctumax ] for a vertex cover instance in which is a minimum - cost vertex cover with respect to cost vector , . for a vertex with at least one neighbour in ,let denote the number of neighbours that has in .consider the bid vector in which , for each , . then .to finish we want to show that is feasible in the sense that it satisfies ( 2 ) .consider a vertex cover , and extend the bid vector by assigning for .then and since all edges between and go to , the right - hand - side is equal to next , we prove a lower bound on ; we will then use it to obtain a lower bound on .[ lem : vcntumax ] for a vertex cover instance in which is a minimum - cost vertex cover with respect to cost vector , . 
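before going through the proofs , here is a sketch of the local - ratio allocation rule described at the beginning of this section , together with threshold payments computed by bisection . the rule processes the edges in a fixed , bid - independent order ; returning the vertices whose residual bid has dropped to zero follows the standard local - ratio construction ( the returned set is left implicit in the text above ) , and the upper bound used in the bisection is an arbitrary placeholder .

```python
def local_ratio_cover(edges, bids):
    """Local-ratio allocation rule: edges are processed in a fixed order
    that does not depend on the bids (this keeps the rule monotone).  For
    each edge the smaller residual bid of its endpoints is subtracted from
    both; the cover returned consists of the vertices whose residual bid
    has dropped to zero."""
    r = dict(bids)                      # residual bids
    for u, v in sorted(edges):          # fixed, bid-independent order
        eps = min(r[u], r[v])
        r[u] -= eps
        r[v] -= eps
    return {v for v in bids if r[v] == 0}

def threshold_bid(v, edges, bids, hi=1e6, tol=1e-6):
    """Threshold bid of a winning vertex v: the largest bid for which v is
    still selected, found by bisection (hi is an assumed upper bound)."""
    lo = bids[v]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if v in local_ratio_cover(edges, dict(bids, **{v: mid})):
            lo = mid
        else:
            hi = mid
    return lo

edges = [("a", "b"), ("b", "c"), ("c", "d")]
bids = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.5}
cover = local_ratio_cover(edges, bids)
print("cover:", cover)
print("payments:", {v: round(threshold_bid(v, edges, bids), 3) for v in cover})
```

on this small path graph the rule happens to return a cheapest cover , and paying each winner its threshold bid is exactly the payment rule assumed in theorem [ thm:2delta ] .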
if , by condition ( 1 ) we are done .therefore , for the rest of the proof we assume that .we show how to construct a bid vector that satisfies conditions ( 1 ) and ( 2 ) such that ; clearly , this implies .recall that a network flow problem is described by a directed graph , a source node , a sink node , and a vector of capacity constraints , .consider a network such that , , where , , .since is a vertex cover for , no edge of can have both of its endpoints in , and by construction , contains no edges with both endpoints in .therefore , the graph is bipartite with parts . set the capacity constraints for as follows : , , for all , .recall that a _ cut _ is a partition of the vertices in into two sets and so that , ; we denote such a cut by . abusing notation, we write if or , and say that such an edge _ crosses _ the cut .capacity _ of a cut is computed as .we have , .let be a minimum cut in , where , .see figure [ fig : cut ] . as , andany edge in has infinite capacity , no edge crosses .consider the network , where , .clearly , is a minimum cut in ( otherwise , there would exist a smaller cut for ) . as , we have .now , consider the network , where , .similarly , is a minimum cut in , . asthe size of a maximum flow from to is equal to the capacity of a minimum cut separating and , there exists a flow of size .this flow has to saturate all edges between and , i.e. , for all .now , increase the capacities of all edges between and to . in the modified network ,the capacity of a minimum cut ( and hence the size of a maximum flow ) is , and a maximum flow can be constructed by greedily augmenting .set for all , for all .as is constructed by augmenting , we have for all , i.e. , condition ( 1 ) is satisfied .now , let us check that no vertex cover can violate condition ( 2 ) . set , , , ; our goal is to show that .consider all edges such that . if then ( no edge in can cross the cut ) , and if then .hence , is a vertex cover for , and therefore .consequently , .now , consider the vertices in . any edge in that starts in one of these vertices has to end in (this edge has to be covered by , and it can not go across the cut ) . therefore , the total flow out of is at most the total flow out of , i.e. , .hence , . finally , we derive a lower bound on the payment bound that is of interest to us , namely , . [lem : vcntumin ] for a vertex cover instance in which is a minimum - cost vertex cover with respect to cost vector , .suppose for contradiction that is a cost vector with minimum - cost vertex cover and .let be the corresponding bid vector and let be a new cost vector with for and for .condition ( 2 ) guarantees that is an optimal solution to the cost vector . now compute a bid vector corresponding to .we claim that for any . indeed , suppose that for some ( for by construction ) .as satisfies conditions ( 1)(3 ) , among the inequalities in ( 2 ) there is one that is tight for and the bid vector .that is , . by the construction of , since for all , implies .but this violates ( 2 ) .so we now know .hence , we have , giving a contradiction to the fact that which we proved in lemma [ lem : vcntumax ] .as satisfies condition ( 1 ) , we have .together will lemma [ lem : vcntumin ] , this implies . combined with lemma [ lem : pay ] , this completes the proof of theorem [ thm:2delta ] . as , our bound of extends to the smaller frugality ratios that we consider , i.e. 
, and .it is not clear whether it extends to the larger frugality ratio .however , the frugality ratio is not realistic because the payment bound is inappropriately low we show in section [ sec : properties ] that can be significantly smaller than the total cost of a cheapest vertex cover. we can also apply our results to monotone vertex - cover algorithms that do not necessarily output locally - optimal solutions . to do so, we simply take the vertex cover produced by any such algorithm and transform it into a locally - optimal one , considering the vertices in lexicographic order and replacing a vertex with its neighbours whenever .note that if a vertex gets added to the vertex cover during this process , it means that it has a neighbour whose bid is higher than s bid , so after one pass all vertices in the vertex cover satisfy .this procedure is monotone in bids , and it can only decrease the cost of the vertex cover . therefore , using it on top of a monotone allocation rule with approximation ratio , we obtain a monotone locally - optimal allocation rule with approximation ratio . combining it with threshold payments ,we get an auction with .since any truthful auction has a monotone allocation rule , this procedure transforms any truthful mechanism for the vertex - cover problem into a frugal one while preserving the approximation ratio .furthermore , our vertex - cover results can be extended to set cover .namely , we can transform a set cover instance into a vertex cover instance as follows .for each set , create a vertex .the vertices and are adjacent iff the intersection of and is nonempty . for this vertex cover instance, the degree of any vertex is at most , where is the maximum set size and is the maximum number of sets containing any ground set element .it is easy to see that an instance of set cover is monopoly - free if a set in the set cover can be replaced with all its neighbours , so we can transform any set cover into a locally optimal one as described above .therefore , any monotone approximation algorithm for set cover yields an auction with . in this subsection , we prove that the upper bound of theorem [ thm:2delta ] is essentially optimal .our proof uses the techniques of , where the authors prove a similar result for shortest - path auctions .[ thm : delta/4 ] for any , there exists a graph with vertices and degree , such that for any truthful mechanism on we have .let be a complete bipartite graph with parts and , , thus has degree .we consider two families of cost vectors for .under a cost vector , has one vertex of cost 1 ; all other vertices cost 0 . under a cost vector ,each of and has one vertex of cost 1 , and all other vertices have cost 0 .clearly , , .we construct a bipartite graph with the vertex set as follows .consider a cost vector ; let its cost-1 vertices be and . by changing the cost of either of these vertices to 0, we obtain a cost vector in .let and be the cost vectors obtained by changing the cost of and , respectively .the vertex cover chosen by must either contain all vertices in or all vertices in . 
in the former case ,we add to an edge from to and in the latter case we add to an edge from to ( if the vertex cover includes all of , contains both of these edges ) .the graph has at least edges , so there must exist an of degree at least .let be the other endpoints of the edges incident to , and for each , let be the vertex of whose cost is different under and ; note that all are distinct .it is not hard to see that : the cheapest vertex cover contains the all-0 part of , and we can satisfy conditions ( 1)(3 ) by allowing one of the vertices in the all-0 part of each block to bid 1 , while all other vertices in the cheapest set bid 0 .on the other hand , by monotonicity of we have for ( is in the winning set under , and is obtained from by decreasing the cost of ) , and moreover , the threshold bid of each is at least 1 , so the total payment of on is at least .hence , .theorem [ thm : delta/4 ] can be extended to apply to graphs with degree of unlimited size : a similar argument applies to any graph made up of multiple copies of the bipartite graph in the proof .the resulting lower bound is still , i.e. , it does not depend on the size of the graph .[ randommechanisms ] the lower bound of theorem [ thm : delta/4 ] can be generalised to randomised mechanisms , where a randomised mechanism is considered to be truthful if it can be represented as a probability distribution over truthful mechanisms . in this case , instead of choosing the vertex with the highest degree , we put both and into , label each edge with the probability that the respective part of the block is chosen , and pick with the highest weighted degree .in this section we consider several desirable properties of payment bounds and evaluate the four payment bounds proposed in this paper with respect to them .the particular properties that we are interested in are the relationship with other reasonable bounds , such as the total cost of the cheapest set ( section [ sec : comparecost ] ) , or the total vcg payment ( section [ sec : comparevcg ] ) . we also consider independence of the choice of ( section [ sec : choices ] ) , monotonicity ( section [ sec : nonmon ] ) , computational tractability ( section [ sec : nph ] ) .the basic property of _ individual rationality _ dictates that the total payment must be at least the total cost of the selected winning set . in this sectionwe show that amongst the payment bounds we consider here , may be less than the cost of the winning set . 
for such set systems , may as a result be too low to be realistic .clearly , and are at least the cost of due to condition ( 1 ) , and so is , since .however , fails this test .the example of proposition [ expath ] ( part ) shows that for path auctions , can be smaller than the total cost by a factor of 2 .moreover , there are set systems and cost vectors for which is smaller than the cost of the cheapest set by a factor of .consider , for example , the vertex - cover auction for the graph of proposition [ exvcone ] with the costs , .the cost of a cheapest vertex cover is , and the lexicographically first vertex cover of cost is .the constraints in ( 2 ) are .clearly , we can satisfy conditions ( 2 ) and ( 3 ) by setting , , which means that .this example suggests that the payment bound is sometimes too strong to be realistic , since it can be substantially lower than the cost of a cheapest feasible set .note , however , that this is not an issue for matroid auctions because , for matroids , all four payment bounds have the same value .the paper shows that if the feasible sets are the bases of a monopoly - free matroid , then .it is not difficult to see that this is also the case for other payment bounds . for any monopoly - free matroid, we have the claim follows immediately from theorem [ thm : uppermatroid ] .alternatively , it is not hard to check that the argument used in for does not use condition ( 1 ) at all and hence it works for as well . to show that is at most , one must prove that the vcg payment is at most .this is shown for in the first paragraph of the proof of theorem 5 in .their argument does not use condition ( 1 ) at all , so it also applies to . on the other hand , since and by proposition 7 of ( and also by proposition [ vcgge1 ] below ) .another measure of suitability for payment bounds is that they should not result in frugality ratios that are less than 1 for well - known truthful mechanisms .if this is indeed the case , the payment bound may be too weak , as it becomes too easy to design mechanisms that perform well with respect to it . in particular , a reasonable requirement is that a payment bound should not exceed the total payment of the classical vcg mechanism .the following proposition shows that , and therefore also and , do not exceed the vcg payment .the proof essentially follows the argument of proposition 7 of .[ vcgge1 ] for any set - system auction , .let be a winning set chosen by vcg ( hence , a cheapest set ) .suppose .the vcg payment is .let be a feasible set which achieves the minimum , so .but constraint ( 2 ) gives for all , so since , , so now by constraint ( 1 ) , , so ( 4 ) gives thus , every winner s payment is at least his bid , so the result follows .proposition [ vcgge1 ] shows that none of the payment bounds , and exceeds the payment of vcg . however , the payment bound can be larger than the total vcg payment . in particular , for the instance in proposition [ exvcone ] ,the vcg payment is smaller than by a factor of .we have already seen that .on the other hand , under vcg , the threshold bid of any , , is 0 : if any such vertex bids above 0 , it is deleted from the winning set together with and replaced with .
similarly , the threshold bid of is 1 , because if bids above 1 , it can be replaced with .so the vcg payment is .this result is not surprising : the definition of implicitly assumes there is co - operation between the agents , while the computation of vcg payments does not take into account any interaction between them .indeed , co - operation enables the agents to extract higher payments under vcg .that is , vcg is not group - strategyproof .this suggests that as a payment bound , may be too liberal , at least in a context where there is little or no co - operation between agents .perhaps can be a good benchmark for measuring the performance of mechanisms designed for agents that can form coalitions or make side payments to each other , in particular , group - strategyproof mechanisms .another setting in which bounding is still of some interest is when , for the underlying problem , the optimal allocation and vcg payments are np - hard to compute . in this case , finding a _ polynomial - time computable _ mechanism with good frugality ratio with respect to is a non - trivial task , while bounding the frugality ratio with respect to more challenging payment bounds could be too difficult . to illustrate this point , compare the proofs of lemma [ lem : vctumax ] and lemma [ lem :vcntumax ] : both require some effort , but the latter is much more difficult than the former . all payment bounds defined in this paper correspond to the total bid of all elements in a cheapest feasible set , where ties are broken lexicographically .while this definition ensures that our payment bounds are well - defined , the particular choice of the draw - resolution rule appears arbitrary , and one might ask whether our payment bounds are sufficiently robust to be independent of this choice . it turns out that is indeed the case for and .the values of and do not depend on the choice of .consider two feasible sets and that have the same cost . in the computation of ,all vertices in would have to bid their true cost , since otherwise would become cheaper than .hence , any bid vector for can only have for , and hence constitutes a valid bid vector for ( in the context of ) and vice versa .a similar argument applies to .however , for and this is not the case .for example , consider the set system with the costs , , .the cheapest sets are and .now , as the total bid of the elements in can not exceed the total cost of . on the other hand , , as we can set .similarly , , because the equalities in ( 3 ) are and . but , since we can set , , .the results in and our vertex cover results are proved for the frugality ratio .indeed , it can be argued that is the `` best '' definition of frugality ratio , because among those payment bounds that are at least as large as the cost of a cheapest feasible set , it is most demanding of the algorithm .however , is not always the easiest or the most natural payment bound to work with . in this subsection , we discuss several disadvantages of ( and also ) as compared with and . in much more detail .while some of our results on are subsumed by their work , we present our results here as we feel that they are relevant in the context of this paper , and furthermore , they also apply to .] the first problem with is that it is not monotone with respect to , in that it may increase when one adds a feasible set to .( it is , however , monotone in the sense that a losing agent can not become a winner by raising its cost . 
) intuitively , a good payment bound should satisfy this monotonicity requirement , as adding a feasible set increases the competition , so it should drive the prices down .note that this is indeed the case for and since a new feasible set adds a constraint in ( 2 ) , thus limiting the solution space for the respective linear program ( recall remark [ maxeasy ] ) .[ clm : nm ] adding a feasible set to can increase and by a factor of , where is the number of agents .let .let , , , , and suppose that .the costs are , , , for .note that is a cheapest feasible set .for , the bid vector , satisfies ( 1 ) , ( 2 ) , and ( 3 ) , so . let .for , is still the lexicographically - least cheapest set .any optimal solution has ( by constraint in ( 2 ) with ) .condition ( 3 ) for implies , so and .as all constraints in ( 1 ) are of the form , we also have . for path auctions, it has been shown that is non - monotone in a slightly different sense , i.e. , with respect to adding a new edge ( agent ) rather than a new feasible set ( a team of existing agents ) .we present that example here for completeness .[ dd ] for shortest path auctions , adding an edge to the graph can increase by a factor of .consider the graph of figure [ fig : diamond ] with the edge costs , . in this graph, is the cheapest path , and it is easy to see that with the bid vector , . now suppose that we add a new edge of cost 0 between and , obtaining the graph of figure [ fig : doublediamond ] .we can assume that the original shortest path is the lexicographically first shortest path in the new graph , so it gets selected .however , now we have a new constraint in ( 2 ) , namely , , so we have with the bid vector , .it is not hard to modify the example of proposition [ dd ] so that the underlying graph has no multiple edges .also , as all constraints in ( 1 ) are of the form , it also applies to .we can also show that and are non - monotone for vertex cover . in this case , adding a new feasible set corresponds to _ deleting _ edges from the graph .it turns out that deleting a single edge can increase and by a factor of ; the construction is based on the graph and the cost vector used in proposition [ exvcone ] .another problem with is that it is np - hard to compute , even if the number of feasible sets is polynomial in .again , this puts it at a disadvantage compared to and ( see remark [ maxeasy ] ) .[ thm : nphard ] computing is np - hard , even when the lexicographically - least cheapest feasible set is given in the input .we reduce exact cover by 3-sets(x3c ) to our problem .an instance of x3c is given by a universe and a collection of subsets , , , where the goal is to decide whether one can cover by of these sets .observe that if this is indeed the case , then each element of is contained in exactly one set of the cover .consider a minimisation problem of the following form : + minimise under conditions * for all * for ; subsets * for each , one of the constraints in ( 2 ) involving it is tight .for any such , one can construct in polynomial time a set system and a vector of costs such that is the optimal solution to .[ lem : reword ] the construction is straightforward : there is an element of cost 0 for each , an element of cost for each , the feasible solutions are , or any set obtained from by replacing the elements indexed by , with . 
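as a concrete point of reference for the reduction that follows , the sketch below is a brute - force decider for x3c ( exponential time , meant only for intuition and for testing small instances of the reduction ) . it assumes the standard x3c convention that the universe has 3q elements and every set in the collection has exactly 3 elements ; since the symbols of the definition above are not reproduced in this text , that convention is an assumption on our part .

    from itertools import combinations

    def is_exact_cover(universe, chosen):
        """True if the chosen 3-sets are pairwise disjoint and cover the universe."""
        covered = set()
        for s in chosen:
            if covered & s:          # overlap -> not an exact cover
                return False
            covered |= s
        return covered == universe

    def x3c_brute_force(universe, collection):
        """Decide exact cover by 3-sets by trying all q-subsets of the collection.

        universe   : set of 3*q elements
        collection : list of frozensets, each of size 3
        Returns a witness cover (list of frozensets) or None.
        """
        assert len(universe) % 3 == 0
        q = len(universe) // 3
        for chosen in combinations(collection, q):
            if is_exact_cover(universe, chosen):
                return list(chosen)
        return None

    if __name__ == "__main__":
        U = set(range(6))                       # 3*q elements with q = 2
        C = [frozenset({0, 1, 2}), frozenset({2, 3, 4}),
             frozenset({3, 4, 5}), frozenset({1, 2, 3})]
        print(x3c_brute_force(U, C))            # the two disjoint triples covering U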
by this lemma , all we have to do to prove theorem [ thm : nphard ] is to show how to solve x3c by using the solution to a minimisation problem of the form given in lemma [ lem : reword ] .we do this as follows . for each , we introduce 4 variables , , , and .also , for each element of there is a variable .we use the following set of constraints : * in ( 1 ) , we have constraints , , , , for all , . * in ( 2 ) , for all , we have the following 5 constraints : + + + + + .+ also , for all we have the constraint .the goal is to minimize .observe that for each , there is only one constraint involving , so by condition ( 3 ) it must be tight .consider the two constraints involving .one of them must be tight ; either or .it follows that .hence , for any feasible solution to ( 1)(3 ) we have . now, suppose that there is an exact set cover .set for .also , if is included in this cover , set , , otherwise set , . clearly , all inequalities in ( 2 ) are satisfied ( we use the fact that each element is covered exactly once ) , and for each variable , one of the constraints involving it is tight .this assignment results in .conversely , suppose there is a feasible solution with .as each addend of the form contributes at least 1 , we have for all , for all .we will now show that for each , either and , or and .for the sake of contradiction , suppose that , .as one of the constraints involving must be tight , we have . similarly , . hence , .this contradicts the previously noted equality .to finish the proof , note that for each we have and , so the subsets that correspond to constitute a set cover . in the proofs of theorem [ thm : nphard ] all constraints in ( 1 ) are of the form .hence , the same result is true for . for shortest - path auctions ,the size of can be superpolynomial .however , there is a polynomial - time separation oracle for constraints in ( 2 ) ( to construct one , use any algorithm for finding shortest paths ) , so one can compute and in polynomial time . on the other hand , recently and independently it was shown that computing for shortest - path auctions is np - hard .we thanks david kempe for suggesting the `` diamond graph '' auction and the cost vector used in the proof of proposition [ expath][(i ) ] . 1 a. archer and e. tardos , frugal path mechanisms . in _ proceedings of the 13th annual acm - siam symposium on discrete algorithms _ , pages 991999 , 2002 .r. bar - yehuda , k. bendel , a. freund , and d. rawitz , local ratio : a unified framework for approximation algorithms . in memoriam : shimon even 1935 - 2004 ., 36(4):422463 , 2004 . r. bar - yehuda and s. even , a local - ratio theorem for approximating the weighted vertex cover problem ., 25:2746 , 1985 .e. clarke , multipart pricing of public goods ., 8:1733 , 1971 .g. calinescu , bounding the payment of approximate truthful mechanisms . in _ proceedings of the 15th international symposium on algorithms and computation _ , pages 221233 , dec .n. chen and a.r .karlin , cheap labor can be expensive , in _ proceedings of the 18th annual acm - siam symposium on discrete algorithms _ , pages 707715 , jan .a. czumaj and a. ronen , on the expected payment of mechanisms for task allocation . in _ proceedings of the 5th acm conference on electronic commerce ( ec04 )_ , 2004 .e. elkind , true costs of cheap labor are hard to measure : edge deletion and vcg payments in graphs . in _ proceedings of the 6th acm conference on electronic commerce ( ec05 ) _ , 2005 .e. elkind , a. sahai , and k. 
steiglitz , frugality in path auctions . in _ proceedings of the 15th annual acm - siam symposium on discrete algorithms _ ,pages 694702 , 2004 .j. feigenbaum , c. h. papadimitriou , r. sami , and s. shenker , a bgp - based mechanism for lowest - cost routing . in _ proceedings of the 21st symposium on principles of distributed computing _, pages 173182 , 2002 .a. fiat , a. goldberg , j. hartline , and a. karlin , competitive generalized auctions . in_ proceedings of the 34th annual acm symposium on theory of computation _ , pages 7281 , 2002 .r. garg , v. kumar , a. rudra and a. verma , coalitional games on graphs : core structures , substitutes and frugality . in _ proceedings of the 4th acm conference on electronic commerce ( ec03 ) _ , 2005 .a. goldberg , j. hartline , and a. wright , competitive auctions and digital goods . in _ proceedings of the 12th annual acm - siam symposium on discrete algorithms _ ,pages 735744 , 2001 .t. groves , incentives in teams ., 41(4):617631 , 1973 .n. immorlica , d. karger , e. nikolova , and r. sami , first - price path auctions . in _ proceedings of the 6th acm conference on electronic commerce ( ec05 )_ , 2005 . a. r. karlin , d. kempe , and t. tamir beyond vcg : frugality of truthful mechanisms . in _ proceedings of the 46th annual ieee symposium on foundations of computer science_ , pages 615624 ,oct . 2005 .n. nisan and a. ronen , algorithmic mechanism design . in _ proceedings of the 31st annual acm symposium on theory of computation _ ,pages 129140 , 1999 .n. nisan and a. ronen , computationally feasible vcg mechanisms . in _ proceedings of the 2nd acm conference on electronic commerce ( ec00 )_ , pages 242252 , 2000 .j. oxley , matroid theory . the clarendon press oxford university press , new york , 1992 .a. ronen and r. talisman , towards generic low payment mechanisms for decentralized task allocation . in _ proceedings of the 7th international ieee conference on e - commerce technology _ ,k. talwar , the price of truth : frugality in truthful mechanisms . in _ proceedings of 20th international symposium on theoretical aspects of computer science _ , 2003 .w. vickrey , counterspeculation , auctions , and competitive sealed tenders ., 16:837 , 1961 .karlin et al . , argue that the payment bound can be viewed as the total payment in a nash equilibrium of a certain game . in this section ,we build on this intuition to justify the four payment bounds introduced above .we consider two variants of a game that differ in how profit is shared between the winning players .we will call these variants the tu game and the ntu game ( standing for `` transferable utility '' and `` non - transferable utility '' respectively ) .we then show that and correspond to the worst and the best nash equilibrium of the ntu game , and and correspond to the worst and the best nash equilibrium of the tu game . corresponds to the payment bound of . in both versions ,the players are the elements of the ground set .each player has an associated cost that is known to all parties .the game starts by the buyer selecting a cheapest feasible set ( with respect to the true costs ) , resolving ties lexicographically .then the elements of are allowed to make bids , and the buyer decides whether or not to accept them .intuitively , ought to be able to win the auction , and we seek bids from that are low enough to win , and high enough that no member of has an incentive to raise his bid ( because that would cause him to lose ) . 
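as an aside on the selection step just described ( the buyer picking a cheapest feasible set with respect to the true costs and resolving ties lexicographically ) , a minimal sketch of one natural reading of that rule is given below ; the representation of feasible sets and costs is ours and is not taken from this text .

    def cheapest_feasible_set(feasible_sets, cost):
        """Pick a cheapest feasible set, breaking ties lexicographically.

        feasible_sets : iterable of collections of agent names
        cost          : dict mapping each agent name to its true cost
        """
        def key(candidate):
            members = tuple(sorted(candidate))
            return (sum(cost[e] for e in members), members)
        return min(feasible_sets, key=key)

    if __name__ == "__main__":
        cost = {"a": 1, "b": 1, "c": 1, "d": 3}
        feasible = [{"b", "c"}, {"a", "c"}, {"a", "d"}]
        # {'b','c'} and {'a','c'} both cost 2; the lexicographically first one wins
        print(sorted(cheapest_feasible_set(feasible, cost)))   # ['a', 'c']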
given that is supposed to win , we modify the game to rule out behaviour such as elements of bidding unnecessarily high and losing .one way to enforce the requirement that wins is via fines .if is not among the cheapest sets with respect to the bids ( where the new cost of a set is the sum of the total cost of and the total bid of ) , the buyer rejects the solution and every element of who bids above its true cost pays a fine of size , while other elements pay 0 .otherwise , members of are paid their bids ( which may then be shared amongst members of ) .this ensures that in a nash equilibrium , the resulting bids are never rejected as a result of not being the cheapest feasible set . in the ntu game , we assume that players can not make payments to each other , i.e. , the utility of each player in is exactly the difference between his bid and his true cost .in particular , this means that no agent will bid below his true cost , which is captured by condition ( 1 ) . in a nash equilibrium, is the cheapest set with respect to the bids , which is captured by condition ( 2 ) . now, suppose that condition ( 3 ) is not satisfied for some bidder .then the vector of bids is not a nash equilibrium : can benefit from increasing his bids by a small amount .conversely , any vector of bids that satisfies ( 1 ) , ( 2 ) and ( 3 ) is a nash equilibrium : no player wants to decrease its bid , as it would lower the payment it receives , and no player can increase its bid , as it would violate ( 2 ) and will cause this bidder to pay a fine . as minimises under conditions ( 1 ) , ( 2 ) , and ( 3 ) , and maximises it , these are , respectively , the best and the worst nash equilibrium , from the buyer s point of view . in the tu game ,the players in redistribute the profits among themselves in equal shares , i.e. , each player s utility is the difference between the total payment to and the total cost of , divided by the size of .we noted in section 6.1 that when is _ required _ to be the winning set , this may result in nash equilibria where members of make a loss collectively , and not just individually as a result of condition ( 1 ) not applying .( recall that we do assume that agents bids are non - negative ; condition . ) thus represents a situation in which `` winners '' are being coerced into accepting a loss - making contract . does not have the above problem , since it is larger than the other payment bounds , so members of will not make a loss . the meaning of conditions ( 2 ) and ( 3 ) remains the same : the agents do not want the buyer to reject their bid , and no agent can improve the total payoff by raising their bid . note that we are not allowing coalitions ( see remark [ no - coalitions ] ) , i.e. , coordinated deviations by two or more players : even though the players share the profits , they can not make joint decisions about their strategies .similarly to the ntu game , it is easy to see that and are , respectively , the worst and the best nash equilibria of this game from the buyer s viewpoint . [ no - coalitions ] allowing payment redistribution within a set is different from allowing players to form coalitions ( as in , e.g. 
, the definition of strong nash equilibrium ) : in the latter case , players are allowed to make joint decisions about their bids , but they can not make payments to each other .the reason why can result in negative payoffs to the `` winners '' is that we artificially required the set to win .let us consider what happens in the ntu game when _ all _ agents are allowed to bid and is not required to be the winning set .suppose bids lower than .that is , hence optimality of means that , hence non - transferable utility implies that , so , hence with the previous inequalities we noted , the above inequality states that some members of are losing the auction while bidding above their costs . for each member of which this strict inequality holds , reduce its bid either to its cost or to the point where the cost of equals the cost of .note that the subset of in the above proof that are bidding above cost and losing the auction , must be of size at least .if it was of size 1 , we would not have a nash equilibrium ; that bidder could unilaterally improve his situation by reducing his bid .observe also that is the outcome of the ntu game provided that losing players do not bid unnecessarily high .
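to make the ntu discussion above more concrete , the sketch below searches a discretised grid of bid vectors for nash equilibria of the ntu game , encoding conditions ( 1 ) - ( 3 ) the way they are described in words above : winners never bid below their true cost , the selected set stays among the cheapest under the bids , and no winner can raise his bid without making the set lose . the equations themselves are not reproduced in this text , so this encoding , the integer bid grid and the toy instance are our own simplifications , not the paper s definitions .

    from itertools import product

    def set_cost_under_bids(T, bids, cost, S):
        """Cost of feasible set T: winners are charged their bids, others their true costs."""
        return sum(bids[e] if e in S else cost[e] for e in T)

    def is_ntu_equilibrium(S, bids, cost, feasible_sets, step=1):
        S = frozenset(S)
        # condition (1): no winner bids below its true cost
        if any(bids[e] < cost[e] for e in S):
            return False
        # condition (2): S stays among the cheapest feasible sets under the bids
        total = sum(bids[e] for e in S)
        if any(set_cost_under_bids(T, bids, cost, S) < total for T in feasible_sets):
            return False
        # condition (3): no winner can raise its bid by `step` and still be accepted
        for e in S:
            raised = dict(bids)
            raised[e] += step
            new_total = sum(raised[x] for x in S)
            if all(set_cost_under_bids(T, raised, cost, S) >= new_total for T in feasible_sets):
                return False
        return True

    def payment_range_of_equilibria(S, cost, feasible_sets, max_bid=5):
        """Smallest and largest total payment over all grid equilibria found."""
        winners = sorted(S)
        totals = []
        for combo in product(range(max_bid + 1), repeat=len(winners)):
            bids = dict(zip(winners, combo))
            if is_ntu_equilibrium(S, bids, cost, feasible_sets):
                totals.append(sum(combo))
        return (min(totals), max(totals)) if totals else None

    if __name__ == "__main__":
        cost = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2}
        feasible = [frozenset("ab"), frozenset("ac"), frozenset("bd"), frozenset("e")]
        S = frozenset("ab")   # cheapest feasible set under the true costs
        print(payment_range_of_equilibria(S, cost, feasible))   # (2, 2) on this toy instance

on this particular toy instance the best and worst equilibria happen to coincide ; richer set systems separate the two , which is exactly the gap between the corresponding payment bounds discussed above .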
|
in _ set - system auctions _ , there are several overlapping teams of agents , and a task that can be completed by any of these teams . the buyer s goal is to hire a team and pay as little as possible . recently , karlin , kempe and tamir introduced a new definition of _ frugality ratio _ for this setting . informally , the frugality ratio is the ratio of the total payment of a mechanism to perceived fair cost . in this paper , we study this together with alternative notions of fair cost , and how the resulting frugality ratios relate to each other for various kinds of set systems . we propose a new truthful polynomial - time auction for the vertex cover problem ( where the feasible sets correspond to the vertex covers of a given graph ) , based on the _ local ratio _ algorithm of bar - yehuda and even . the mechanism guarantees to find a winning set whose cost is at most twice the optimal . in this situation , even though it is np - hard to find a lowest - cost feasible set , we show that _ local optimality _ of a solution can be used to derive frugality bounds that are within a constant factor of best possible . to prove this result , we use our alternative notions of frugality via a bootstrapping technique , which may be of independent interest .
|
pileup , the superposition of many soft proton proton collisions over interesting hard - scattering events , is a significant issue at cern s large hadron collider ( lhc ) and also at possible future hadron colliders .it affects many observables , including lepton and photon isolation , missing - energy determination and especially jet observables .one main technique currently in use to remove pileup from jet observables is known as the area median approach .it makes an event - wide estimate of the pileup level , , and then subtracts an appropriate 4-momentum from each jet based on its area , i.e. its extent in rapidity and azimuth .detector - level information can also help mitigate the effect of pileup : for example , with methods such as particle flow reconstruction , it is to some extent possible to eliminate the charged component of pileup , through the subtraction of contributions from individual charged pileup hadrons .however , even with such charged hadron subtraction ( chs ) , there is always a substantial remaining ( largely ) neutral pileup contribution , which remains to be removed .currently , when chs is used , area median subtraction is then applied to remove the remaining neutral pileup .another approach is to use the information about charged pileup hadrons in a specific jet to estimate and subtract the remaining neutral component , without any reference to a jet area or a global event energy density .its key assumption is that the neutral energy flow is proportional to the charged energy flow and so we dub it neutral - proportional - to - charged ( npc ) subtraction .an advantage that one might imagine for npc subtraction is that , by using _ local _ information about the charged pileup , it might be better able to account for variations of the pileup from point - to - point within the event than methods that rely on event - wide pileup estimates .we understand that there has been awareness of this kind of approach in the atlas and cms collaborations for some time now , and we ourselves also investigated it some years ago .our main finding was that at particle level it performed marginally worse than area subtraction combined with chs . from discussions with colleagues in the experimental collaborations, we had the expectation that there might be further degradation at detector level .accordingly we left our results unpublished .recently ref . ( klsw ) made a proposal for an approach to pileup removal named jet cleansing .one of the key ideas that it uses is precisely the npc method , applied to subjets , much in the way that area median subtraction has in the past been used with filtering and trimming .klsw found that cleansing brought large improvements over area median subtraction . given our earlier findings , klsw sresult surprised us .the purpose of this article is therefore to revisit our study of the npc method and also carry out independent tests of cleansing , both to examine whether we reproduce the large improvements that they observed and to identify possible sources of differences . 
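before going further , a minimal sketch may help fix ideas about the area median estimate mentioned above : in the standard recipe the event is divided into patches of similar area in rapidity and azimuth , the pileup density is taken as the median of the patch transverse momentum divided by the patch area , and each jet is then corrected by that density times its area . the grid spacing , the acceptance and the toy event below are arbitrary choices of ours rather than values taken from this article .

    import math
    from statistics import median

    def estimate_rho(particles, y_max=4.0, cell_size=0.55):
        """Median pT per unit area over a rapidity-azimuth grid.

        particles : list of (pt, y, phi) tuples, phi in [0, 2*pi)
        """
        ny = max(1, int(2 * y_max / cell_size))
        nphi = max(1, int(2 * math.pi / cell_size))
        cell_area = (2 * y_max / ny) * (2 * math.pi / nphi)
        grid = [[0.0] * nphi for _ in range(ny)]
        for pt, y, phi in particles:
            if abs(y) >= y_max:
                continue
            iy = int((y + y_max) / (2 * y_max) * ny)
            iphi = int(phi / (2 * math.pi) * nphi) % nphi
            grid[iy][iphi] += pt
        return median(pt_sum / cell_area for row in grid for pt_sum in row)

    def subtract_jet(jet_pt, jet_area, rho):
        """Area-median correction of a jet's transverse momentum."""
        return jet_pt - rho * jet_area

    if __name__ == "__main__":
        import random
        random.seed(1)
        pileup = [(random.expovariate(1.0), random.uniform(-4, 4),
                   random.uniform(0, 2 * math.pi)) for _ in range(3000)]
        rho = estimate_rho(pileup)
        print(round(rho, 2), round(subtract_jet(100.0, 0.5, rho), 2))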
as part of our study , we will investigate what properties of events can provide insight into the performance of the npc method .we will also be led to discuss the possible value of charged tracks from the leading vertex in deciding whether to keep or reject individual subjets ( as in charged - track based trimming of ref .finally we shall also examine how one might optimally combine npc and area median subtraction .npc subtraction relies on the experiments ability to identify whether a given charged track is from a pileup vertex , in order to measure the charged pileup entering a particular jet . to a good extentthis charged component can be removed , for example as in cms s charged - hadron subtraction ( chs ) procedure in the context of particle flow .the npc method then further estimates and subtracts the neutral pileup component by assuming it to be proportional to the charged pileup component .at least two variants can be conceived of .if the charged pileup particles are kept as part of the jet during clustering , then the corrected jet momentum is where is the four - momentum of the charged - pileup particles in the jet and is the average fraction of pileup transverse momentum that is carried by charged particles .specifically , one can define where the sums run over particles in a given event ( possibly limited to some central region with tracking ) , and the average is carried out across minimum - bias events .if the charged pileup particles are not directly included in the clustering ( i.e. it is the chs event that is provided to the clustering ) , then one does not have any information on which charged particles should be used to estimate the neutral pileup in a given jet .this problem can be circumvented by a clustering an `` emulated '' chs event , in which the charged - pileup particles are kept , but with their momenta rescaled by an infinitesimal factor . in this casethe correction becomes where is the momentum of the jet as obtained from the emulated chs event , while is the summed momentum of the rescaled charged - pileup particles that are in the jet .when carrying out npc - style subtraction , this is our preferred approach because it eliminates any backreaction associated with the charged pileup ( this is useful also for area - based subtraction ) , while retaining the information about charged pileup tracks .there are multiple issues that may be of concern for the npc method .for example , calorimeter fluctuations can limit the experiments ability to accurately remove the charged pileup component as measured with tracks .for out - of - time pileup , which contributes to calorimetric energy deposits , charged - track information may not be available at all . in any case, charged - track information covers only a limited range of detector pseudorapidities .additionally there are subtleties with hadron masses : in effect , is different for transverse components and for longitudinal components . in this workwe will avoid this problem by treating all particles as massless . , rapidity and azimuth . 
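to make the two variants above concrete , here is a minimal sketch of the chs - style correction . it estimates the charged fraction of the pileup as the average per - event charged fraction in minimum - bias events ( one possible reading of the averaging prescription above ) , and treats the neutral pileup in a jet as ( 1/gamma0 - 1 ) times the charged - pileup transverse momentum found in that jet , which is our reading of the correction described in words above ; the data structures and the numerical value in the example are placeholders of ours .

    def estimate_gamma0(minbias_events):
        """Average fraction of pileup pT carried by charged particles.

        minbias_events : list of events, each a list of (pt, is_charged) pairs.
        """
        fractions = []
        for event in minbias_events:
            total = sum(pt for pt, _ in event)
            charged = sum(pt for pt, is_charged in event if is_charged)
            if total > 0:
                fractions.append(charged / total)
        return sum(fractions) / len(fractions)

    def npc_corrected_pt(jet_chs_pt, charged_pileup_pt_in_jet, gamma0):
        """CHS-style NPC correction of the jet pT.

        jet_chs_pt               : jet pT after charged-hadron subtraction
        charged_pileup_pt_in_jet : summed pT of the charged-pileup tracks pointing
                                   into the jet (kept only as markers, e.g. as
                                   infinitesimally rescaled particles)
        """
        neutral_pileup_estimate = (1.0 / gamma0 - 1.0) * charged_pileup_pt_in_jet
        return jet_chs_pt - neutral_pileup_estimate

    if __name__ == "__main__":
        gamma0 = 0.61                                  # placeholder value, not taken from the text
        print(round(npc_corrected_pt(57.0, 12.0, gamma0), 2))   # ~49.33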
]the importance of the above limitations can only be fully evaluated in an experimental context .we will be comparing npc to the area median method .the latter makes a global estimate of pileup transverse - momentum flow per unit area , , by dividing an event into similarly sized patches and taking the median of the transverse - momentum per unit area across all the patches .it then corrects each jet using the globally estimated and the individual jet s area , , like npc , the area median method has potential experimental limitations .they include include questions of non - trivial rapidity dependence and detector non - linearities ( the latter are relevant also for npc ) .these have , to a reasonable extent , been successfully overcome by the experiments .one respect in which npc may have advantages over the area median method is that the latter fails to correctly account for the fact that pileup fluctuates from point to point within the event , a feature that can not be encoded within the global pileup estimate .determination can be adapted to use just the jet s neighbourhood ( e.g. as discussed in the context of heavy - ion collisions ) , however it can never be restricted to just the jet . ]furthermore npc does not need a separate estimation of the background density , which can have systematics related to the event structure ( e.g. events v. dijet events ) ; and there is no need to include large numbers of ghosts for determining jet areas , a procedure that has a non - negligible computational cost .let us now proceed with an investigation of npc s performance , focusing our attention on particle - level events for simplicity .the key question is the potential performance gain due to npc s use of local information . to study this quantitatively, we consider a circular patch of radius centred at and examine the correlation coefficient of the actual neutral energy flow in the patch with two estimates : ( a ) one based on the charged energy flow in the same patch and ( b ) the other based on a global energy flow determination from the neutral particles , . fig .[ fig : correl - central ] ( left ) shows these two correlation coefficients , `` ntr v. chg '' and `` ntr v. '' , as a function of , for two average pileup multiplicities , and .one sees that the local neutral - charged correlation is slightly _ lower _ , i.e. slightly worse , than the neutral- correlation .both correlations decrease for small patch radii , as is to be expected , and the difference between them is larger at small patch radii .the correlation is largely independent of the number of pileup events being considered , which is consistent with our expectations , since all individual terms in the determination of the correlation coefficient should have the same scaling with . .right : the standard deviation of the difference between neutral transverse momentum in a central patch and either the rescaled charged transverse momentum in that patch or the prediction using the area median method , i.e. .the events are composed of superposed zero - bias collisions simulated with pythia 8 , tune 4c , and the number of collisions per event is poisson distributed with average .,title="fig:",scaledwidth=48.0% ] .right : the standard deviation of the difference between neutral transverse momentum in a central patch and either the rescaled charged transverse momentum in that patch or the prediction using the area median method , i.e. 
.the events are composed of superposed zero - bias collisions simulated with pythia 8 , tune 4c , and the number of collisions per event is poisson distributed with average .,title="fig:",scaledwidth=48.0% ] quantitative interpretations of correlation coefficients can sometimes be delicate , as we discuss in appendix [ sec : correlation - coefs ] , essentially because they combine the covariance of two observables with the two observables individual variances .we find that it can be more robust to investigate a quantity , the standard deviation of where the estimate of neutral energy flow , , may be either from the rescaled charged flow or from .the right - hand plot of fig .[ fig : correl - central ] shows for the two methods , again as a function of , for two levels of pileup .it is normalised to , to factor out the expected dependence on both the patch radius and the level of pileup .a lower value of implies better performance , and as with the correlation we reach the conclusion that a global estimate of appears to be slightly more effective at predicting local neutral energy flow than does the local charged energy flow . if one hoped to use npc to improve on the performance of area median subtraction , then figure [ fig : correl - central ] suggests that one will be disappointed . in striving for an understanding of this finding, one should recall that the ratio of charged - to - neutral energy flow is almost entirely driven by non - perturbative effects . inside an energetic jet , the non - perturbative effects are at scales that are tiny compared to the jet transverse momentum .there are fluctuations in the relative energy carried by charged and neutral particles , for example because a leading -quark might pick up a or a from the vacuum .however , because , the charged and neutral energy flow mostly tend to go in the same direction .the case that we have just seen of an energetic jet gives an intuition that fluctuations in charged and neutral energy flow are going to be locally correlated .it is this intuition that motivates the study of npc .we should however examine if this intuition is actually valid for pileup .we will examine one step of hadronisation , namely the production of short - lived hadronic resonances , for example a . the opening angle between the decay products of the is of order . given that pileup hadronsare produced mostly at low , say , and that , the angle between the charged and neutral pions ends up being of order or even larger . as a result , the correlation in direction between charged and neutral energy flow is lost , at least in part .thus , at low , non - perturbative effects specifically tend to wash out the charged - neutral angular correlation . , of the transverse momentum in a central circular patch of radius that is due to charged particles .it is separated into components according to the multiplicity of particles in the patch .the dashed and dotted histograms show the corresponding charged - fraction distributions for each of the two hardest anti- , jets in simulated dijet events , with two choices for the hard generation cut . ]this point is illustrated in fig .[ fig : why - npc - bad ] . 
we consider zero - bias events and examine a circular patch of radius centred at .the figure shows the distribution of the charged fraction , , in the patch ( filled histogram , broken into contributions where the patch contains , or more particles ) .the same plot also shows the distribution of the charged fraction in each of the two leading anti- , jets in dijet events ( dashed and dotted histograms ) . whereas the charged - to - total ratio for a jet has a distribution peaked around , as one would expect , albeit with a broad distribution ,the result for zero - bias events is striking : in about 60% of events the patch is either just charged or just neutral , quite often consisting of just a single particle ( weighting by the flow in the patch , the figure goes down to 30% ) .this is probably part of the reason why charged information provides only limited local information about neutral energy flow in pileup events .these considerations are confirmed by an analysis of the actual performance of npc and area median subtraction .we reconstruct jets using the anti- algorithm , as implemented in fastjet , with a jet radius parameter of .we study dijet and pileup events generated with pythia 8.176 , in tune 4c ; we assume idealised chs , treating the charged pileup particles as ghosts . in the dijet ( `` hard '' ) event alone , i.e. without pileup , we run the jet algorithm and identify jets with absolute rapidity and transverse momentum .then in the event with superposed pileup ( the `` full '' event ) we rerun the jet algorithm and identify the jets that match those selected in the hard event for the matching , we introduce a quantity , the scalar sum of the s of the constituents that are common to a given pair of hard and full jets . for a hard jet ,the matched jet in the full event is the one that has the largest . in principle , one full jet can match two hard jets , e.g. if two nearby hard jets end up merged into a single full jet due to back - reaction effects .however this is exceedingly rare . ] and subtract them using either npc , eq .( [ eq : npc - chs ] ) , or the area median method , eq .( [ eq : rho - subtraction ] ) , with estimated from the chs event .the hard events are generated with the underlying event turned off , which enables us to avoid subtleties related to the simultaneous subtraction of the underlying event . 
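the matching criterion described above ( match a hard jet to the full - event jet that shares with it the largest scalar sum of constituent transverse momenta ) is easy to state as code ; the sketch below assumes that constituents carry stable identifiers across the hard and full clusterings , which is how one would implement it in practice , e.g. with user indices attached to the input particles .

    def shared_pt(hard_jet, full_jet):
        """Scalar sum of pT of constituents common to both jets.

        Each jet is a dict mapping a stable constituent id -> constituent pT.
        """
        return sum(pt for cid, pt in hard_jet.items() if cid in full_jet)

    def match_hard_to_full(hard_jets, full_jets):
        """For each hard jet, pick the full-event jet with the largest shared pT."""
        matches = []
        for hard in hard_jets:
            best = max(full_jets, key=lambda full: shared_pt(hard, full), default=None)
            if best is not None and shared_pt(hard, best) > 0:
                matches.append((hard, best))
            else:
                matches.append((hard, None))
        return matches

    if __name__ == "__main__":
        hard_jets = [{1: 40.0, 2: 30.0}, {3: 25.0, 4: 20.0}]
        # full-event jets contain the same hard constituents plus pileup (ids >= 100)
        full_jets = [{1: 40.0, 2: 30.0, 100: 3.0}, {3: 25.0, 101: 2.5}, {102: 8.0}]
        for hard, full in match_hard_to_full(hard_jets, full_jets):
            print(sorted(hard), "->", sorted(full) if full else None)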
, the average difference in between a jet after pileup addition and subtraction and the corresponding matched jet in the hard sample , .the right - hand plot shows the standard deviation of ( lower values are better ) .npc is shown only for chs events , while area median subtraction is shown both for events with chs and for events without it ( `` full '' ) ., title="fig:",scaledwidth=48.0% ] , the average difference in between a jet after pileup addition and subtraction and the corresponding matched jet in the hard sample , .the right - hand plot shows the standard deviation of ( lower values are better ) .npc is shown only for chs events , while area median subtraction is shown both for events with chs and for events without it ( `` full '' ) ., title="fig:",scaledwidth=48.0% ] figure [ fig : npc - performance ] provides the resulting comparison of the performance of the npc and area median subtraction methods ( the latter in chs and in full events ) .the left - hand plot shows the average difference between the subtracted jet and the of the corresponding matched hard jet , as a function of the number of pileup interactions .both methods clearly perform well here , with the average difference systematically well below even for very high pileup levels .the right - hand plot shows the standard deviation of the difference between the hard and subtracted full jet .a lower value indicates better performance , and one sees that in chs events the area median method indeed appears to have a small , but consistent advantage over npc . comparing area median subtraction in chs and full events , one observes a significant degradation in resolution when one fails to use the available information about charged particles in correcting the charged component of pileup , as is to be expected for a particle - level study .the conclusion of this section is that the npc method fails to give a superior performance to the area median method in chs events .this is because the local correlations of neutral and charged energy flow are no greater than the correlations between local neutral energy flow and the global energy flow .we believe that part of the reason for this is that the hadronisation process for low particles intrinsically tends to produce hadrons separated by large angles , as illustrated concretely in the case of resonance decay .part of the original motivation for our work here was to cross check a method recently introduced by krohn , low , schwartz and wang ( klsw ) and called jet cleansing .cleansing comes in several variants .we will concentrate on linear cleansing , which was seen to perform well across a variety of observables by klsw . , the ratio of the charged from the leading vertex to the total charged ( including pileup ) in the subjet .gaussian is particularly interesting in that it effectively carries out a minimisation across different hypotheses for the ratio of charged to neutral energy flow , separately for the pileup and the hard event .however in klsw s results its performance was usually only marginally better than the much simpler linear cleansing .accordingly we concentrate on the latter . ]it involves several elements : it breaks a jet into multiple subjets , as done for grooming methods like filtering and trimming ( cf .also the early work by seymour ) . 
in its `` linear '' variant ,it then corrects individual subjets for pileup by a method that is essentially the same as the npc approach described in the previous section .cleansing may also be used in conjunction with trimming - style cuts to the subtracted subjets , specifically it can remove those whose corrected transverse momentum is less than some fraction of the overall jet s transverse momentum ( as evaluated before pileup removal ) . for cleansing , reflecting our understanding of the choices made in v1 of ref . , which stated `` [ we ] supplement cleansing by applying a cut on the ratio of the subjet ( after cleansing ) to the total jet .subjets with are discarded .[ ... ] where we do trim / cleanse we employ subjets and take . ''subsequent to the appearance of v1 of our article , the authors of ref . clarified that the results in their fig .4 had used .this is the choice that we adopt throughout most of this version , and it has an impact notably on the conclusions for the jet - mass performance . ] .upper right plot : similarly for the single jet mass .both plots are for a hadronically decaying sample with .decays are included to all flavours except and -hadrons are taken stable .lower - left plot : the correlation coefficient for the dijet - mass , as in the upper - left plot , but with a sample of bosons that decay only to , and quarks .jets are reconstructed , as in ref . , with the anti- algorithm with . for both trimming and cleansing ,subjets are reconstructed with the algorithm with and the value that is applied is .[ fig : f00 ] ] .upper right plot : similarly for the single jet mass .both plots are for a hadronically decaying sample with .decays are included to all flavours except and -hadrons are taken stable .lower - left plot : the correlation coefficient for the dijet - mass , as in the upper - left plot , but with a sample of bosons that decay only to , and quarks .jets are reconstructed , as in ref . , with the anti- algorithm with . for both trimming and cleansing, subjets are reconstructed with the algorithm with and the value that is applied is .[ fig : f00 ] ] + .upper right plot : similarly for the single jet mass .both plots are for a hadronically decaying sample with .decays are included to all flavours except and -hadrons are taken stable .lower - left plot : the correlation coefficient for the dijet - mass , as in the upper - left plot , but with a sample of bosons that decay only to , and quarks .jets are reconstructed , as in ref . , with the anti- algorithm with . for both trimming and cleansing ,subjets are reconstructed with the algorithm with and the value that is applied is . [fig : f00 ] ] the top left - hand plot of fig .[ fig : f00 ] shows the correlation coefficient between the dijet mass in a hard event and the dijet mass after addition of pileup and application of each of several pileup mitigation methods .the results are shown as a function of .the pileup mitigation methods include two forms of cleansing ( with ) , area median subtraction , chs+area subtraction , and chs+area subtraction in conjunction with trimming ( also with ) .the top right - hand plots shows the corresponding results for the jet mass . 
for the dijet masswe see that linear ( and gaussian ) cleansing performs worse than area subtraction , while in the right - hand plot , for the jet mass , we see linear ( and gaussian ) cleansing performing better than area subtraction , albeit not to the extent found in ref .these ( and , unless explicitly stated , our other results ) have been generated with the decaying to all flavours except , and -hadrons have been kept stable .- tagging studies .experimentally , in the future , one might even imagine an `` idealised '' form of particle flow that attempts to reconstruct -hadrons ( or at least their charged part ) from displaced tracks before jet clustering . ]the lower plot shows the dijet mass for a different sample , one that decays only to , and quarks , but not and quarks .most of the results are essentially unchanged .the exception is cleansing , which turns out to be very sensitive to the sample choice . without stable -hadrons in the sample ,its performance improves noticeably and at high pileup becomes comparable to that of area - subtraction . both of the left - hand plots in our fig .[ fig : f00 ] differ noticeably from fig . 4 ( left ) of ref . and in particular they are not consistent with klsw s observation of much improved correlation coefficients for the dijet mass with cleansing relative to area+chs subtraction . given our results on npc in section [ sec : npc ] ,we were puzzled by the difference between the performance of area - subtraction plus trimming versus that of cleansing : our expectation is that their performances should be very similar .decrease with increasing and suggest ( see also , pp .16 and 17 ) that this will improve the determination of this fraction and therefore the effectiveness of a method like cleansing , based on a neutral - proportional - to - charged approach . however , this does not happen because , while relative fluctuations around do indeed decrease proportionally to ( a result of the incoherent addition of many pileup events and of the central limit theorem ) , the absolute uncertainty that they induce on a pileup - subtracted quantity involves an additional factor .the product of the two terms is therefore proportional to , i.e. the same scaling as the area - median method .this is consistent with our observations .note that for area subtraction , the switch from full events to chs events has the effect of reducing the coefficient in front of . ]the strong sample - dependence of the cleansing performance also calls for an explanation .we thus continued our study of the question . according to the description in ref . 
, one additional characteristic of linear cleansing relative to area - subtraction is that it switches to jet - vertex - fraction ( jvf ) cleansing when the npc - style rescaling would give a negative answer .in contrast , area - subtraction plus trimming simply sets the ( sub)jet momentum to zero .we explicitly tried turning the switch to jvf - cleansing on and off and found it had a small effect and did not explain the differences .study of the public code for jet cleansing reveals an additional condition being applied to subjets : if a subjet contains no charged particles from the leading vertex ( lv ) , then its momentum is set to zero .this step appears not to have been mentioned in ref .since we will be discussing it extensively , we find it useful to give it a name , `` _ _ zeroing _ _ '' .zeroing can be thought of as an extreme limit of the charged - track based trimming procedure introduced by atlas , whereby a jvf - style cut is applied to reject subjets whose charged - momentum fraction from the leading vertex is too low .zeroing turns out to be crucial : if we use it in conjunction with chs area - subtraction ( or with npc subtraction ) and trimming , we obtain results that are very similar to those from cleansing .conversely , if we turn this step off in linear - cleansing , its results come into accord with those from ( chs ) area - subtraction or npc - subtraction with trimming . to help illustrate this , fig .[ fig : shifts - dispersions - r1 ] shows a `` fingerprint '' for each of several pileup - removal methods , for both the jet ( left ) and mass ( right ) .the fingerprint includes the average shift ( or ) of the observable after pileup removal , shown in black .it also includes two measures of the width of the and distributions : the dispersion ( i.e. standard deviation ) in red and an alternative peak - width measure in blue .the latter is defined as follows : one determines the width of the smallest window that contains of entries and then scales this width by a factor . for a gaussian distribution, the rescaling ensures that the resulting peak - width measure is equal to the dispersion . for a non - gaussian distributionthe two measures usually differ and the orange shaded region quantifies the extent of this difference .the solid black , blue and red lines have been obtained from samples in which the decays just to light quarks ; the dotted lines are for a sample including and decays ( with stable -hadrons ) , providing an indication of the sample dependence ; in many cases they are indistinguishable from the solid lines . comparing grooming for npc , area ( without zeroing ) and cleansing with zeroing manually disabled , all have very similar fingerprints . turning onzeroing in the different methods leads to a significant change in the fingerprints , but again npc , area and cleansing are very similar .case with zeroing , suggesting that there may be an advantage from combinations of different constraints on subjet momenta . 
] ) in the jet after addition of pileup and removal by a range of methods .it shows the average shift ( in black ) and the peak width ( in blue ) and dispersion ( in red ) of the distribution .the peak width is defined as the smallest window of that contains 90% of the distribution , scaled by a factor such that in the case of a gaussian distribution the result agrees with the dispersion .the right - hand plot shows the same set of results for the jet mass .the results are obtained in a sample of events with the number of pileup vertices distributed uniformly between and .the hard events consist of hadronic decays : for the solid vertical lines the sample is , while for the dotted lines ( sometimes not visible because directly over the solid lines ) , the sample additionally includes with hadrons kept stable .the mass is and jets are reconstructed with the anti- algorithm with .all results in this figure include charged - hadron subtraction by default .the default form of cleansing , as used e.g. in fig .[ fig : f00 ] , is `` zeroing '' ., title="fig:",scaledwidth=48.0% ] ) in the jet after addition of pileup and removal by a range of methods .it shows the average shift ( in black ) and the peak width ( in blue ) and dispersion ( in red ) of the distribution .the peak width is defined as the smallest window of that contains 90% of the distribution , scaled by a factor such that in the case of a gaussian distribution the result agrees with the dispersion .the right - hand plot shows the same set of results for the jet mass .the results are obtained in a sample of events with the number of pileup vertices distributed uniformly between and .the hard events consist of hadronic decays : for the solid vertical lines the sample is , while for the dotted lines ( sometimes not visible because directly over the solid lines ) , the sample additionally includes with hadrons kept stable .the mass is and jets are reconstructed with the anti- algorithm with .all results in this figure include charged - hadron subtraction by default .the default form of cleansing , as used e.g. in fig .[ fig : f00 ] , is `` zeroing '' ., title="fig:",scaledwidth=48.0% ] when used with trimming , and when examining quality measures such as the dispersion ( in red , or the closely related correlation coefficient , cf .appendix [ sec : correlation - coefs ] ) , subjet zeroing appears to be advantageous for the jet mass , but potentially problematic for the jet and the dijet mass .however , the dispersion quality measure does not tell the full story regarding the impact of zeroing .examining simultaneously the peak - width measure ( in blue ) makes it easier to disentangle two different effects of zeroing .on one hand we find that zeroing correctly rejects subjets that are entirely due to fluctuations of the pileup .this narrows the peak of the or distribution , substantially reducing the ( blue ) peak - width measures in fig .[ fig : shifts - dispersions - r1 ] . 
on the other hand ,zeroing sometimes incorrectly rejects subjets that have no charged tracks from the lv but do have significant neutral energy flow from the lv .this can lead to long tails for the or distributions , adversely affecting the dispersion .it is the interplay between the narrower peak and the longer tails that affects whether overall the dispersion goes up or down with zeroing .in particular the tails appear to matter more for the jet and dijet mass than they do for the single - jet mass .note that accurate monte carlo simulation of such tails may be quite challenging : they appear to be associated with configurations where a subjet contains an unusually small number of energetic neutral particles .such configurations are similar to those that give rise to fake isolated photons or leptons and that are widely known to be difficult to simulate correctly .we commented earlier that the cleansing performance has a significant sample dependence .this is directly related to the zeroing : indeed fig .[ fig : shifts - dispersions - r1 ] shows that for cleansing without zeroing , the sample dependence ( dashed versus solid lines ) vanishes , while it is substantial with zeroing .our understanding of this feature is that the lower multiplicity of jets with undecayed -hadrons ( and related hard fragmentation of the -hadron ) results in a higher likelihood that a subjet will contain neutral but no charged particles from the lv , thus enhancing the impact of zeroing on the tail of the or sample .the long tails produced by the zeroing are not necessarily unavoidable .in particular , they can correspond to the loss of subjets with tens of gev , yet it is very unlikely that a subjet from a pileup collision will be responsible for such a large energy .therefore we introduce a modified procedure that we call `` _ _ protected zeroing _ _ '' : one rejects any subjet without lv tracks _ unless _ its after subtraction is times larger than the largest charged in the subjet from any single pileup vertex ( or , more simply , just above some threshold ; however , using times the largest charged subjet could arguably be better both in cases where one explores a wide range of and for situations involving a hard subjet from a pileup collision ) . taking ( or a fixed ) we have found reduced tails and , consequently , noticeable improvements in the jet and dijet mass dispersion ( with little effect for the jet mass ) .this is visible for area and npc subtraction in fig .[ fig : shifts - dispersions - r1 ] . protected zeroing also eliminates the sample dependence .. we thank david miller for exchanges on this point . 
]several additional comments can be made about trimming combined with zeroing .firstly , trimming alone introduces a bias in the jet , which is clearly visible in the no - zeroing shifts in fig .[ fig : shifts - dispersions - r1 ] .this is because the trimming removes negative fluctuations of the pileup , but keeps the positive fluctuations .zeroing then counteracts that bias by removing some of the positive fluctuations , those that happened not to have any charged tracks from the lv .it also introduces further negative fluctuations for subjets that happened to have some neutral energy flow but no charged tracks .overall , one sees that the final net bias comes out to be relatively small .this kind of cancellation between different biases is common in noise - reducing pileup - reduction approaches .( left ) for a jet radius of , subjet radius ( where relevant ) of and a qcd continuum dijet sample generated with pythia 8 .the underlying event is turned off in the sample and hadrons decay .we consider only jets that in the hard sample have and .right : the dispersions for a subset of the methods , shown as a function of the number of pileup events .[ fig : shifts - dispersions - dijet - r0.4 ] ] ( left ) for a jet radius of , subjet radius ( where relevant ) of and a qcd continuum dijet sample generated with pythia 8 .the underlying event is turned off in the sample and hadrons decay .we consider only jets that in the hard sample have and .right : the dispersions for a subset of the methods , shown as a function of the number of pileup events .[ fig : shifts - dispersions - dijet - r0.4 ] ] ( right ) , showing the performance for the jet mass , but now with applied to both trimming and cleansing and in a sample of hadronically - decaying boosted bosons ( ) .the jets reconstructed after addition and subtraction of pileup are compared to trimmed hard jets .jets are reconstructed with a jet radius of and a subjet radius of . only hard jets with and ( before trimming ) are considered and we let hadrons decay . [fig : shifts - dispersions - ww500-r10 ] ] most of the studies so far in this section have been carried out with a setup that is similar to that of ref . , i.e. jets in a sample with trimming .this is not a common setup for most practical applications . for most uses of jets, is a standard choice and pileup is at its most severe at low to moderate .accordingly , in fig .[ fig : shifts - dispersions - dijet - r0.4 ] ( left ) we show the analogue of fig .[ fig : shifts - dispersions - r1 ] s summary for the jet , but now for , with in a qcd dijet sample , considering jets that in the hard event had .we see that qualitatively the pattern is quite similar to that in fig .[ fig : shifts - dispersions - r1 ] .s , the difference between zeroing and protected zeroing might be expected to disappear .this is because the long negative tails are suppressed by the low jet itself . 
quantitatively , the difference between the various choices is much smaller , with about a reduction in dispersion ( or width ) in going from ungroomed chs area - subtraction to the protected subjet - zeroing case .one should be aware that this study is only for a single , across a broad range of pileup .the dispersions for a subset of the methods are shown as a function of the number of pileup vertices in the right - hand plot of fig .[ fig : shifts - dispersions - dijet - r0.4 ] .that plot also includes results from the softkiller method and illustrates that the benefit from protected zeroing ( comparing the solid and dashed blue curves ) is about half of the benefit that is brought from softkiller ( comparing solid blue and black curves ) .these plots show that protected zeroing is potentially of interest for jet determinations in realistic conditions .thus it would probably benefit from further study : one should , for example , check its behaviour across a range of transverse momenta , determine optimal choices for the protection of the zeroing and investigate also how best to combine it with particle - level subtraction methods such as softkiller .( a pattern that is observed for area and npc subtraction alone . ) turning now to jet masses , the use of is a not uncommon choice , however most applications use a groomed jet mass with a non - zero ( or its equivalent ) : this improves mass resolution in the hard event even without pileup , and it also reduces backgrounds , changing the perturbative structure of the jet even in the absence of pileup .( for trimming , the jet structure is unchanged in the absence of pileup . ) accordingly in fig .[ fig : shifts - dispersions - ww500-r10 ] we show results ( with shifts and widths computed relative to trimmed hard jets ) for a hard sample where the hard fat jets are required to have . zeroing , whether protected or not , appears to have little impact . one potential explanation for this fact is as follows : zeroing s benefit comes primarily because it rejects fairly low- pileup subjets that happen to have no charged particles from the leading vertex .however for a pileup subjet to pass the filtering criterion in our sample , it would have to have .this is quite rare .thus filtering is already removing the pileup subjets , with little further to be gained from the charged - based zeroing . as in the plain jet - mass summary plot , protection of zeroing appears to have little impact for the trimmed jet mass .cleansing appears to perform slightly worse than trimming with npc or area subtraction .one difference in behaviour that might explain this is that the threshold for cleansing s trimming step is ( even in the chs - like ` input_nc_separate ` mode that we use ) .in contrast , for the area and npc - based results , it is . in both cases the threshold , which is applied to subtracted subjets , is increased in the presence of pileup , but this increase is more substantial in the cleansing case .this could conceivably worsen the correspondence between trimming in the hard and full samples .for the area and npc cases , we investigated the option of using or and found that this brings a small additional benefit .
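as an illustration of the trimming step just discussed , where the cut is applied to subtracted subjets relative to a configurable reference , the following sketch makes the role of the reference explicit ; the data layout is again an assumed simplification , and the choice of reference is left as a parameter precisely because this is where the cleansing and the area / npc setups differ .

def trim_subtracted(subjets, fcut, reference_pt):
    """keep subjets whose subtracted pt exceeds fcut * reference_pt.
    subjets: list of dicts with key 'pt_sub' (pt after pileup subtraction).
    reference_pt: e.g. the unsubtracted jet pt, the chs jet pt, or the
    subtracted jet pt; different choices shift the effective threshold
    as the pileup level increases."""
    threshold = fcut * reference_pt
    return [sj for sj in subjets if sj['pt_sub'] > threshold]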
does that mean that ( protected ) zeroing has no scope for improving the trimmed - jet mass ?the answer is `` not necessarily '' : one could for example imagine first applying protected zeroing to subjets on some angular scale in order to eliminate low- contamination ; then reclustering the remaining constituents on a scale , subtracting according to the area or npc methods , and finally applying the trimming momentum cut ( while also keeping in mind the considerations of footnote [ footnote : trimming - ref ] ) .we close this section with a summary of our findings . based on its description in ref . and our findings about npc v. area subtraction , cleansing with would be expected to have a performance very similar to that of chs+area subtraction with trimming . however , ref .reported large improvements for the correlation coefficients of the dijet mass and the single jet mass using jets . in the case of the dijet mass we do not see these improvements , though they do appear to be there for the jet mass .the differences in behaviour between cleansing and trimmed chs+area - subtraction call for an explanation , and appear to be due to a step in the cleansing code that was undocumented in ref . and that we dubbed `` zeroing '' : if a subjet contains no charged tracks from the leading vertex it is discarded .zeroing is an extreme form of a procedure described in ref .it can be used also with area or npc subtraction , and we find that it brings a benefit for the peak of the and distributions , but appears to introduce long tails in . a variant , `` protected zeroing '' , can avoid the long tails by still accepting subjets without leading - vertex tracks , if their is above some threshold , which may be chosen dynamically based on the properties of the pileup . in our opinion , a phenomenologically realistic estimate of the benefit of zeroing ( protected or not ) requires study not of plain jets , but instead of jets ( for the jet ) or larger- trimmed jets with a non - zero ( for the jet mass ) . in a first investigation , there appear to be some phenomenological benefits from protected zeroing for the jet , whereas to obtain benefits for large- trimmed jets would probably require further adaptation of the procedure . in any case , additional study is required for a full evaluation of protected zeroing and related procedures .it is interesting to further probe the relation between npc and the area median method , to establish whether there might be a benefit from combining them : the area median method makes a mistake in predicting local energy flow mainly because local energy flow fluctuates from region to region .npc makes a mistake because charged and neutral energy flow are not locally correlated .the key question is whether , for a given jet , npc and the area median method generally make the same mistake , or if instead they are making uncorrelated mistakes .
in the latter case it should be possible to combine the information from the two methods to obtain an improvement in subtraction performance .let be the actual neutral pileup component flowing into a jet , while are , respectively , the estimates for the neutral pileup based on the local charged flow and on .we assume the use of chs events and , in particular , that is as determined from the chs event .concentrating on the transverse components , the extent to which the two estimates provide complementary information can be quantified in terms of ( one minus ) the correlation coefficient , , between and .that correlation is shown as a function of in fig .[ fig : correl - mistakes ] ( left ) , and it is quite high , in the range for commonly used choices . it is largely independent of the number of pileup vertices .let us now quantify the gain to be had from a linear combination of the two prediction methods , i.e. using an estimate where is to be chosen so as to minimise the dispersion of .given dispersions and respectively for and , the optimal is which is plotted as a function of in fig .[ fig : correl - mistakes ] ( right ) , and the resulting squared dispersion for is reading from fig .[ fig : correl - mistakes ] ( left ) for , and from fig .[ fig : correl - central ] ( right ) , one finds . because of the substantial correlation between the two methods , one expects only a modest gain from their linear combination .[ figure caption ( fig : correl - mistakes ) : and , shown as a function of .right : optimal weight for combining npc and area pileup subtraction , eq .( [ eq : best f ] ) , as a function of . ] [ figure caption ( fig : chg+rhoa - improvement ) : and the right - hand one for the jet mass . ] in fig .[ fig : chg+rhoa - improvement ] we compare the performance of pileup subtraction from the combination of the npc and the area median methods , using the optimal value that can be read from fig .[ fig : correl - mistakes ] ( right ) for , both for the jet and the jet mass .the expected small gain is indeed observed for the jet , and it is slightly larger for the jet mass .( we found that the true optimum value of in the monte carlo studies is slightly different from that predicted by eq .( [ eq : best f ] ) .however the dependence on around its minimum is very weak , rendering the details of its exact choice somewhat immaterial . ) given the modest size of the gain , one may wonder how phenomenologically relevant it is likely to be .nevertheless , one might still consider investigating whether the gain carries over also to a realistic experimental environment with full detector effects .one natural approach to pileup removal is to use the charged pileup particles in a given jet to estimate the amount of neutral pileup that needs to be removed from that same jet .
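for concreteness , the optimal weight and the combined dispersion discussed above follow from the standard formula for combining two correlated , unbiased estimates ; in the python sketch below the input dispersions and correlation are placeholder numbers , to be replaced by the values read off the corresponding figures .

import math

def combine(sigma_npc, sigma_area, r):
    """optimal weight f for the estimate f*npc + (1-f)*area-median and the
    dispersion of the combination, assuming both estimates are unbiased and
    have correlation coefficient r."""
    var_npc, var_area = sigma_npc ** 2, sigma_area ** 2
    cov = r * sigma_npc * sigma_area
    f = (var_area - cov) / (var_npc + var_area - 2.0 * cov)
    var_comb = (var_npc * var_area * (1.0 - r ** 2)) / (var_npc + var_area - 2.0 * cov)
    return f, math.sqrt(var_comb)

# placeholder inputs: with a correlation well above 0.5 the gain over the
# better of the two individual estimates is only modest
print(combine(sigma_npc=1.1, sigma_area=1.0, r=0.7))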
in this article , with the help of particle - level simulations , we have studied such a method ( npc ) and found that it has a performance that is similar to , though slightly worse than the existing , widely used area median method .this can be related to the observation that the correlations between local charged and neutral energy flow are no larger than those between global and local energy flow .tentatively , we believe that this is in part because the non - perturbative effects that characterise typical inelastic proton - proton collisions act to destroy local charged - neutral correlation . the absence of benefit that we found from the npc method led us to question the substantial performance gains quoted for the method of cleansing in ref . , one of whose key differences with respect to earlier work is the replacement of the area median method with npc . for the dijet mass , we are unable to reproduce the large improvement observed in ref . , in the correlation coefficient performance measure , for cleansing relative to area subtraction .we do however see an improvement for the jet mass .we trace a key difference in the behaviour of cleansing and area subtraction to the use in the cleansing code of a step that was not documented in ref . and that discards subjets that contain no tracks from the leading vertex .this `` zeroing '' step , similar to the charged - track based trimming introduced by atlas , can indeed be of benefit .it has a drawback of introducing tails in some distributions due to subjets with a substantial neutral from the leading vertex , but no charged tracks . as a result ,different quality measures lead to different conclusions as to the benefits of zeroing .the tails can be alleviated by a variant of zeroing that we introduce here , `` protected zeroing '' , whereby subjets without lv charged tracks are rejected only if their is below some ( possibly pileup - dependent ) threshold .protected zeroing does in some cases appear to have phenomenological benefits , which are observed across all quality measures . given two different methods for pileup removal , npc and area median subtraction , it is natural to ask how independent they are and what benefit might be had by combining them .this was the question investigated in section [ sec : combination ] , where we provided a formula for an optimal linear combination of the two methods , as a function of their degree of correlation .ultimately we found that npc and area median subtraction are quite highly correlated , which limits the gains from their combination to about a percent reduction in dispersion . while modest , this might still be sufficient to warrant experimental investigation , as are other methods , currently being developed , that exploit constituent - level subtraction .a study of the integration of those methods with protected zeroing would also be of interest .code for our implementation of area subtraction with positive - definite mass is available as part of fastjet versions 3.1.0 and higher .public code and samples for carrying out a subset of the comparisons with cleansing described in section [ sec : appraisal ] , including also the npc subtraction tools , are available from ref . 
.our understanding of pileup effects in the lhc experiments has benefited extensively from discussions with peter loch , david miller , filip moortgat , sal rappoccio , ariel schwartzman and numerous others .we are grateful to david krohn , matthew low , matthew schwartz and liantao wang for exchanges about their results .this work was supported by erc advanced grant higgs , by the french agence nationale de la recherche , under grant anr-10-cexc-009 - 01 , by the eu itn grant lhcphenonet , pitn - ga-2010 - 264564 and by the ilp labex ( anr-10-labx-63 ) supported by french state funds managed by the anr within the investissements davenir programme under reference anr-11-idex-0004 - 02 .gps wishes to thank princeton university for hospitality while this work was being carried out .gs wishes to thank cern for hospitality while this work was being finalised .let us first fully specify what we have done in our study and then comment on ( possible ) differences relative to klsw .our hard event sample consists of dijet events from collisions at , simulated with pythia 8.176 , tune 4c , with a minimum in the scattering of and with the underlying event turned off , except for the plots presented in figs .[ fig : f00 ] and [ fig : shifts - dispersions - r1 ] , where we use events with .jets are reconstructed with the anti- algorithm after making all particles massless ( preserving their rapidity ) and keeping only particles with .we have , except for the some of the results presented in section [ sec : appraisal ] and appendix [ sec : correlation - coefs ] , where we use as in ref . .given a hard event , we select all the jets with and absolute rapidity .we then add pileup and cluster the resulting full event , i.e. including both the hard event and the pileup particles , without imposing any or rapidity cut on the resulting jets . for each jet selected in the hard eventas described above , we find the jet in the full event that overlaps the most with it . here, the overlap is defined as the scalar sum of all the common jet constituents , as described in footnote [ footnote : matching ] on p. .given a pair of jets , one in the hard event and the matching one in the full event , we can apply subtraction / grooming / cleansing to the latter and study the quality of the jet or jet mass reconstruction . for studies involving the dijet mass ( cf .[ fig : f00 ] ) we require that at least two jets pass the jet selection in the hard event and use those two hardest jets , and the corresponding matched ones in the full event , to reconstruct the dijet mass .events used for fig .[ fig : f00 ] , this does not exactly reflect how we would have chosen to perform a dijet ( resonance ) study ourselves .one crucial aspect is that searches for dijet resonances always impose a rapidity cut between the two leading jets , such as .this ensures that high dijet - mass events are not dominated by low forward - backward jet pairs , which are usually enhanced in qcd v. resonance production .those forward - backward pairs can affect conclusions about pileup , because for a given dijet mass the jet s in a forward - backward pair are lower than in a central - central pair , and so relatively more sensitive to pileup .also the experiments do not use for their dijet studies : atlas uses , while cms uses with a form of radiation recovery based on the inclusion of any additional jets with and within of either of the two leading jets ( `` wide jets '' ) .this too can affect conclusions about pileup .] 
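a minimal sketch of the hard - to - full jet matching used above , assuming jets are represented as mappings from a unique constituent identifier to its transverse momentum ( an illustrative simplification of the actual constituent - based matching , and assuming at least one full - event jet is available ) :

def match_full_jet(hard_jet, full_jets):
    """return the full-event jet sharing the largest scalar pt sum of
    common constituents with hard_jet.
    jets are dicts mapping a unique constituent id to its pt."""
    def shared_pt(full_jet):
        return sum(pt for cid, pt in hard_jet.items() if cid in full_jet)
    return max(full_jets, key=shared_pt)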
this approach avoids having to additionally consider the impact of pileup on the efficiency for jet selection , which can not straightforwardly be folded into our quality measures .most of the studies shown in this paper use idealised particle - level chs events . in these events , we scale all charged pileup hadrons by a factor before clustering , to ensure that they do not induce any backreaction .the jet selection and matching procedures are independent of the use of chs or full events .when we plot results as a function of the quantity , this corresponds to the actual ( fixed ) number of zero - bias events superimposed onto the hard collision . for results shown as a function of , the average number of zero - bias events , the actual number of zero - bias events has a poisson distribution .clustering and area determination are performed with a development version of fastjet 3.1 ( which for the features used here behaves identically to the 3.0.x series ) and with fastjet 3.1.0 and 3.1.1 for the results in section [ sec : appraisal ] .details of how the area - median subtraction is performed could conceivably matter .jet areas are obtained using active area with explicit ghosts placed up to and with a default ghost area of 0.01 .we use fastjet s ` gridmedianbackgroundestimator ` with a grid spacing of 0.55 to estimate the event background density .the estimation is performed using the particles ( up to ) from the full or the chs event as appropriate .when subtracting pileup from jets , we account for the rapidity dependence of , based on the rapidity dependence in a pure pileup sample ( as discussed in refs .we carry out 4-vector subtraction .a few obviously unphysical situations need special care . for jets obtained from the full event ,if , we set to a vector with zero transverse momentum , zero mass , and the rapidity and azimuth of the original unsubtracted jet ; and if is negative , an unphysical situation since it would lead to an imaginary mass , we replace with a vector with the same transverse components , zero mass , and the rapidity of the original unsubtracted jet .this is essentially equivalent to replacing negative squared masses with zero .the case of chs events is a bit more delicate .let denote the 4-momentum of the charged component of the jet . then ,if , we set , and when , we replace with a vector with the same transverse components , and the mass and rapidity of . for jets with no charged component , whenever the resulting 4-vector has an ill - defined rapidity or azimuthal angle , we use those of the original jet .corresponding tests that the subtracted transverse momentum and mass are non - negative are also applied in our npc subtraction .these safety requirements have little impact on the single - jet , limited impact on the dijet mass , and for the single - jet mass improve the dispersion of the subtraction relative to the choice ( widespread in computer codes ) of taking when .one difference between our study and klsw s is that we carry out a particle - level study , whereas they project their event onto a toy detector with a tower granularity , removing charged particles with and placing a threshold on towers . 
in our original ( v1 ) studies with we tried including a simple detector simulation along these lines and did not find any significant modification to the pattern of our results , though chs+area subtraction is marginally closer to the cleansing curves in this case .cleansing has two options : one can give it jets clustered from the full event , and then it uses an analogue of eq .( [ eq : npc - full ] ) : this effectively subtracts the exact charged part and the npc estimate of the neutrals . or one can give it jets clustered from chs events , and it then applies the analogue of eq .( [ eq : npc - chs ] ) , which assumes that there is no charged pileup left in the jet and uses just the knowledge of the actual charged pileup to estimate ( and subtract ) the neutral pileup .these two approaches differ by contributions related to back - reaction .our understanding is that klsw took the former approach , while we used the latter .specifically , our charged - pileup hadrons , which are scaled down in the chs event , are scaled back up to their original before passing them to the cleansing code , in its ` input_nc_separate ` mode .if we use cleansing with full events , we find that its performance worsens , as is to be expected given the additional backreaction induced when clustering the full event . were it not for backreaction , cleansing applied to full or chs events should essentially be identical .regarding the npc and cleansing parameters , our value of differs slightly from that of klsw s , and corresponds to the actual fraction of charged pileup in our simulated events . in our tests with a detector simulation like that of klsw, we adjusted to its appropriate ( slightly lower ) value . finally , for trimming we use and the reference is taken unsubtracted , while the subjets are subtracted before performing the trimming cut , which removes subjets with below a fraction times the reference . compared to using the subtracted as the reference for trimming , this effectively places a somewhat harder cut as pileup is increased. will be the subtracted one . ] for comparisons with cleansing we generally use unless explicitly indicated otherwise .in this appendix , we discuss some characteristics of correlation coefficients that affect their appropriateness as generic quality measures for pileup studies .suppose we have an observable .define to be the difference , in a given event , between the pileup subtracted observable and the original `` hard '' value without pileup .two widely used quality measures for the performance of pileup subtraction are the average offset of , and the standard deviation of , which we write as .one might think there is a drawback in keeping track of two measures , in part because it is not clear which of the two is more important .it is our view that the two measures provide complementary information : if one aims to reduce systematic errors in a precision measurement then a near - zero average offset may be the most important requirement , so as not to be plagued by issues related to the systematic error on the offset . in a search for a resonance peak ,then one aims for the narrowest peak , and so the smallest possible standard deviation . distribution .for some methods the long tails can affect the relevance of the standard - deviation quality measure . 
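both quality measures just introduced are straightforward to compute from matched pairs of hard and subtracted values ; a short python sketch , assuming the paired values are available as plain lists :

import math

def shift_and_dispersion(hard_values, subtracted_values):
    """average offset and standard deviation of the event-by-event
    difference (subtracted full-event value) - (hard-event value),
    computed over matched pairs."""
    deltas = [s - h for h, s in zip(hard_values, subtracted_values)]
    n = len(deltas)
    mean = sum(deltas) / n
    variance = sum((d - mean) ** 2 for d in deltas) / n
    return mean, math.sqrt(variance)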
the quality measure advocated in is instead the correlation coefficient between and .this has the apparent simplifying advantage of providing just a single quality measure .however , it comes at the expense of masking potentially important information : for example , a method with a large offset and one with no offset will give identical correlation coefficients , because the correlation coefficient is simply insensitive to ( constant ) offsets .[ figure caption ( fig : correls - are - bad ) : pileup events and area median subtraction ( in chs events ) versus the dijet mass in the original hard event . the hard dijet sample and the analysis are as described in appendix [ sec : details ] , with a jet radius of .the right - hand plot is identical except for the following additional condition on the hard event : .note the lower correlation coefficient , even though the lower suggests better typical subtraction in this specific mass bin . ] the correlation coefficient has a second , more fundamental flaw , as illustrated in fig .[ fig : correls - are - bad ] . on the left ,one has a scatter plot of the dijet mass in pu - subtracted events versus the dijet mass in the corresponding hard events , as obtained in an inclusive jet sample .there is a broad spread of dijet masses , much wider than the standard deviation of , and so the correlation coefficient comes out very high , .now suppose we are interested in reconstructing resonances with a mass near , and so consider only hard events in which ( right - hand plot ) .now the correlation coefficient is , i.e. much worse .this does not reflect a much worse subtraction : actually , is better ( lower ) in the sample with a limited window , , than in the full sample , .the reason for the puzzling decrease in the correlation coefficient is that the dispersion of is much smaller than before , and so the dispersion of is now comparable to that of : it is this , and not an actual degradation of performance , that leads to a small correlation .this can be understood quantitatively in a simple model with two variables : let have a standard deviation of , and for a given let be distributed with a mean value equal to ( plus an optional constant offset ) and a standard deviation of ( independent of ). then the correlation coefficient of and is i.e.
it tends to zero for and to for large , in accord with the qualitative behaviour seen in fig .[ fig : correls - are - bad ] .the discussion becomes more involved if has a more complicated dependence on or if itself depends on , for example as is actually the case for the dijet mass with the analysis of appendix [ sec : details ] .the main conclusion from this appendix is that correlation coefficients mix together information about the quality of pileup mitigation and information about the hard event sample being studied .it is then highly non - trivial to extract just the information about the pileup subtraction .this can lead to considerable confusion , for example , when evaluating the robustness of a method against the choice of hard sample .overall therefore , it is our recommendation that one consider direct measures of the dispersion introduced by the pileup and subtraction and not correlation coefficients . in cases with severely non - gaussian tails in the distributions it can additionally be useful to consider quality measures more directly related to the peak structure of the distribution . the atlas collaboration , `` pile - up subtraction and suppression for jets in atlas , '' http://inspirehep.net/record/1260963/files/atlas-conf-2013-083.pdf[atlas-conf-2013-083 ] .cms collaboration , `` jet energy scale performance in 2011 , '' http://inspirehep.net/record/1230033/files/dp2012_006.pdf[cms-dp-2012-006 ] .m. cacciari and g. p. salam , phys .b * 659 * ( 2008 ) 119 [ arxiv:0707.1378 [ hep - ph ] ] .m. cacciari , g. p. salam and g. soyez , jhep * 0804 * ( 2008 ) 005 [ arxiv:0802.1188 [ hep - ph ] ] .cms collaboration , `` particle - flow event reconstruction in cms and performance for jets , taus , and met , '' cms - pas - pft-09 - 001 .g. aad _ et al . _ [ atlas collaboration ] , phys .b * 716 * ( 2012 ) 1 [ arxiv:1207.7214 [ hep - ex ] ] .j. colas _ et al . _[ atlas liquid argon calorimeter collaboration ] , nucl .instrum .a * 550 * ( 2005 ) 96 [ physics/0505127 ] .m. cacciari , g. p. salam and g. soyez , unpublished , presented at cms week , cern , geneva , switzerland , march 2011 , available publicly since then at http://www.lpthe.jussieu.fr/~salam/talks/repo/2011-cms-week.pdf .d. krohn , m. d. schwartz , m. low and l. t. wang , phys .d * 90 * ( 2014 ) 6 , 065020 [ arxiv:1309.4777 [ hep - ph ] ] .cms collaboration , `` pileup jet identification , '' http://cds.cern.ch/record/1581583?ln=en[cms-pas-jme-13-005 ] .atlas collaboration , `` tagging and suppression of pileup jets , '' https://cds.cern.ch/record/1643929?ln=en[atl-phys-pub-2014-001 ] .m. cacciari , j. rojo , g. p. salam and g. soyez , jhep * 0812 * ( 2008 ) 032 [ arxiv:0810.1304 [ hep - ph ] ] .a. altheimer , a. arce , l. asquith , j. backus mayes , e. bergeaas kuutmann , j. berger , d. bjergaard and l. bryngemark _ et al ._ , arxiv:1311.2708 [ hep - ex ] . j. m. butterworth , a. r. davison , m. rubin and g. p. salam , phys .* 100 * ( 2008 ) 242001 [ arxiv:0802.2470 [ hep - ph ] ] .d. krohn , j. thaler and l. -t .wang , jhep * 1002 * ( 2010 ) 084 [ arxiv:0912.1342 [ hep - ph ] ] .m. cacciari , j. rojo , g. p. salam and g. soyez , eur .j. c * 71 * ( 2011 ) 1539 [ arxiv:1010.1759 [ hep - ph ] ] .g. soyez , g. p. salam , j. kim , s. dutta and m. cacciari , phys .lett . * 110 * ( 2013 ) 16 , 162001 [ arxiv:1211.2811 [ hep - ph ] ] .m. cacciari , p. quiroga - arias , g. p. salam and g. soyez , eur .j. c * 73 * ( 2013 ) 2319 [ arxiv:1209.6086 [ hep - ph ] ] .m. cacciari , g. p. salam and g. 
soyez , jhep * 0804 * ( 2008 ) 063 [ arxiv:0802.1189 [ hep - ph ] ] .m. cacciari , g. p. salam and g. soyez , eur .j. c * 72 * ( 2012 ) 1896 [ arxiv:1111.6097 [ hep - ph ] ] .t. sjostrand , s. mrenna and p. z. skands , comput . phys .commun .* 178 * ( 2008 ) 852 [ arxiv:0710.3820 [ hep - ph ] ] .m. h. seymour , z. phys .c * 62 * ( 1994 ) 127 .et al _ , `` study of +jet channel in heavy ion collisions with cms , '' cms - note-1998 - 063 ; + v. gavrilov , a. oulianov , o. kodolova and i. vardanian , `` jet reconstruction with pileup subtraction , '' cms - rn-2003 - 004 ; + o. kodolova , i. vardanian , a. nikitenko and a. oulianov , eur .j. c * 50 * ( 2007 ) 117 .m. cacciari , g. p. salam and g. soyez , eur .j. c * 75 * ( 2015 ) 2 , 59 [ arxiv:1407.0408 [ hep - ph ] ] .d. bertolini , p. harris , m. low and n. tran , jhep * 1410 * ( 2014 ) 59 [ arxiv:1407.6013 [ hep - ph ] ] .m. dasgupta , a. fregoso , s. marzani and g. p. salam , jhep * 1309 * ( 2013 ) 029 [ arxiv:1307.0007 [ hep - ph ] ] . m. dasgupta , a. fregoso , s. marzani and a. powling , eur . phys .j. c * 73 * ( 2013 ) 11 , 2623 [ arxiv:1307.0013 [ hep - ph ] ] .p. berta , m. spousta , d. w. miller and r. leitner , jhep * 1406 * ( 2014 ) 092 [ arxiv:1403.3108 [ hep - ex ] ] .atlas collaboration , `` search for new phenomena in the dijet mass distribution updated using 13 of pp collisions at collected by the atlas detector , '' atlas - conf-2012 - 148 .j. alcaraz maestre _et al . _ [ sm and nlo multileg and sm mc working groups collaboration ] , arxiv:1203.6803 [ hep - ph ] .m. cacciari , g. p. salam and g. soyez , public code for validation of a subset of the results in this paper , https://github.com/npctests/1404.7353-validation .
|
the use of charged pileup tracks in a jet to predict the neutral pileup component in that same jet could potentially lead to improved pileup removal techniques , provided there is a strong local correlation between charged and neutral pileup . in monte carlo simulation we find that the correlation is however moderate , a feature that we attribute to characteristics of the underlying non - perturbative dynamics . consequently , ` neutral - proportional - to - charge ' ( npc ) pileup mitigation approaches do not outperform existing , area - based , pileup removal methods . this finding contrasts with the arguments made in favour of a new method , `` jet cleansing '' , in part based on the npc approach . we identify the critical differences between the performances of linear cleansing and trimmed npc as being due to the former s rejection of subjets that have no charged tracks from the leading vertex , a procedure that we name `` zeroing '' . zeroing , an extreme version of the `` charged - track trimming '' proposed by atlas , can be combined with a range of pileup - mitigation methods , and appears to have both benefits and drawbacks . we show how the latter can be straightforwardly alleviated . we also discuss the limited potential for improvement that can be obtained by linear combinations of the npc and area - subtraction methods . cern - ph - th/2014 - 052 + april 2014 + revised february 2015
|
markov chains ( mcs ) and markov decision processes ( mdps ) are widely used to study systems that exhibit both , probabilistic and nondeterministic choices .properties of these systems are often specified by temporal logic formulas , such as the branching time logic pctl , the linear time logic pltl , or their combination pctl * . while model checking is tractable for pctl , it is more expensive for pltl : pspace - complete for markov chains and 2exptime - complete for mdps . in classical model checking ,one checks whether a model satisfies an ltl formula by first constructing a nondeterministic bchiautomaton , which recognises the models of its negation .the model checking problem then reduces to an emptiness test for the product . the translation to bchiautomata may result in an exponential blow - up compared to the length of , this translation is mostly very efficient in practice , and highly optimised off - the - shelf tools like ltl3ba or spot are available . the quantitative analysis of a probabilistic model against an ltl specification is more involved . to compute the maximal probability that is satisfied in , the classic automata - based approach includes the determinisation of an intermediate bchiautomaton .if such a deterministic automaton is constructed for , then determining the probability reduces to solving an equation system for markov chains , and a linear programming problem for mdps , both in the product .such a determinisation step usually exploits a variant of safra s determinisation construction , such as the techniques presented in .kupferman , piterman , and vardi point out in that `` safra s determinization construction has been notoriously resistant to efficient implementations . '' even though analysing long ltl formulas would surely be useful as they allow for the description of more complex requirements on a system s behaviour , model checkers that employ determinisation to support ltl , such as liquor or prism , might fail to verify such properties . in this paperwe argue that applying the safra determinisation step in full generality is only required in some cases , while simpler subset and breakpoint constructions often suffice .moreover , where full determinisation is required , it can be replaced by a combination of the simpler constructions , and it suffices to apply it locally on a small share of the places .a subset construction is known to be sufficient to determinise finite automata , but it fails for bchiautomata .our first idea is to construct an under- and an over - approximation starting from the subset construction .that is , we construct two ( deterministic ) subset automata and such that where denotes the language defined by the automaton for .the subset automata and are the same automaton except for their accepting conditions .we build a product markov chain with the subset automata .we establish the useful property that the probability equals the probability of reaching some _ accepting _ bottom strongly connected components ( sccs ) in this product : for each bottom scc in the product , we can first use the accepting conditions in or to determine whether is accepting or rejecting , respectively .the challenge remains when the test is inconclusive . 
in this case , we first refine using a breakpoint construction .finally , if the breakpoint construction fails as well , we have two options : we can either perform a rabin - based determinisation for the part of the model where it is required , thus avoiding the construction of the larger complete rabin product .alternatively , a refined multi - breakpoint construction is used .an important consequence is that we no longer need to implement a safra - style determinisation procedure : subset and breakpoint constructions are enough .from a theoretical point of view , this reduces the cost of the automata transformations involved from to for generalised büchi automata with states and accepting sets . from a practical point of view , the easy symbolic encoding admitted by subset and breakpoint constructions is of equal value .we discuss that ( and how ) the framework can be adapted to mdps with the same complexity by analysing the end components .we have implemented our approach ( both explicit and symbolic versions ) in our iscasmc tool , which we applied on various markov chain and mdp case studies .our experimental results confirm that our new algorithm outperforms the rabin - based approach in most of the properties considered .however , there are some cases in which the rabin determinisation approach performs better when compared to the multi - breakpoint construction : the construction of a single rabin automaton suffices to decide a given connected component , while the breakpoint construction may require several iterations .our experiments also show that our prototype can compete with mature tools like prism .to keep the presentation clear , the detailed proofs are provided in the appendix .nondeterministic büchi automata are used to represent -regular languages over a finite alphabet . in this paper , we use automata with trace - based acceptance mechanisms .we denote by } ] .i.e. , if and if . [ def : nondetbuechiaut ] a _ nondeterministic generalised büchi automaton _ ( ngba ) is a quintuple , consisting of * a finite alphabet of input letters , * a finite set of states with a non - empty subset of initial states , * a set of transitions from states through input letters to successor states , and * a family }\,\}} ] , , where .a word is _ accepted _ by if has an accepting run on , and the set of words accepted by is called its _ language_. [ figure fig : examplebuechi : example büchi automaton . ] figure [ fig : examplebuechi ] shows an example of büchi automaton .the number after the label as in the transition , when present , indicates that the transition belongs to the accepting set , i.e. , belongs to .the language generated by is a subset of and a word is accepted if each ( and ) is eventually followed by a ( by a , respectively ) .we call the automaton a _ nondeterministic büchi automaton _ ( nba ) whenever and we denote it by . for technical convenience we also allow for finite runs with . in other words , a run may end with if action is not enabled from .naturally , no finite run satisfies the accepting condition , thus it is not accepting and has no influence on the language of an automaton .
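since the states and labels of the example automaton are not reproduced here , the following python sketch uses a generic encoding of a generalised büchi automaton with transition - based acceptance ; the toy automaton at the end is a made - up placeholder , not the automaton of figure [ fig : examplebuechi ] .

from dataclasses import dataclass

@dataclass
class NGBA:
    """nondeterministic generalised büchi automaton with transition-based
    acceptance: 'accepting' is a list of sets of transitions, and a run is
    accepting if it uses some transition of every set infinitely often."""
    alphabet: set
    states: set
    initial: set
    transitions: set      # set of (state, letter, state) triples
    accepting: list       # list of sets of transitions

    def successors(self, sources, letter):
        """subset-construction step: all states reachable from 'sources'
        via 'letter' (the operation used to build the subset automata)."""
        return {q2 for (q1, a, q2) in self.transitions
                if q1 in sources and a == letter}

# made-up toy automaton over {'a', 'b'} accepting words with infinitely many a's
toy = NGBA(alphabet={'a', 'b'},
           states={'q0'},
           initial={'q0'},
           transitions={('q0', 'a', 'q0'), ('q0', 'b', 'q0')},
           accepting=[{('q0', 'a', 'q0')}])
print(toy.successors({'q0'}, 'a'))   # -> {'q0'}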
to simplify the notation ,the transition set can also be seen as a function assigning to each pair the set of successors according to , i.e. , .we extend to sets of states in the usual way , i.e. , by defining .[ def : nondetparityaut ] a _ ( transition - labelled ) nondeterministic parity automaton _ ( npa ) with priorities is a quintuple where , , , and are as in definition [ def : nondetbuechiaut ] and a function }} ] of priorities .a run of a npa is _ accepting _ if the lowest priority that occurs infinitely often is even , that is if is even .[ def : nondetrabinaut ] a _ ( transition - labelled ) nondeterministic rabin automaton _ ( nra ) with accepting pairs is a quintuple where , , , and are as in definition [ def : nondetbuechiaut ] and },\ { { \mathrm{a}}}_{i } , { { \mathrm{r}}}_{i } \subseteq { { \mathrm{t}}}\,\}} ] . ) a run of a nra is accepting if there exists } ] is a function that labels every node with a number from } ] such that .a _ markov chain ( mc ) _ is a tuple , where is a finite set of states , is a _ labelling function _ , is the _ initial distribution _ , and ] ( generalised bchimarkov chain , gmc ) ; * if , then where and for each } ] , then } ] such that and ; we call each an _ accepting state_. moreover , we call the union of all accepting bsccs the _ accepting region_. essentially , since a bscc is an ergodic set , once a path enters an accepting bscc , with probability it will take transitions from infinitely often ; since is finite , at least one transition from is taken infinitely often .now we have the following reduction : [ thm : biancoa95 ] given a mc and a bchiautomaton , consider .let be the accepting region and let denote the set of paths containing a state of .then , .when all bottom sccs are evaluated , the evaluation of the rabin mc is simple : we abstract all accepting bottom sccs to an absorbing goal state and perform a reachability analysis , which can be solved in polynomial time .thus , the outline of the traditional probabilistic model checking approach for ltl specifications is as follows : 1 .translate the ngba into an equivalent dra ; 2 .build ( the reachable fragment of ) the product automaton ; 3 . for each bscc , check whether is accepting .let be the union of these accepting sccs ; 4 .infer the probability .the construction of the deterministic rabin automaton used in the classical approach is often the bottleneck of the approach , as one exploits some variant of the approach proposed by safra , which is rather involved .the lazy determinisation technique we suggest in this paper follows a different approach .we first transform the high - level specification ( e.g. , given in the prismlanguage ) into its mdp or mc semantics .we then employ some tool ( e.g. , ltl3ba or spot ) to construct a bchiautomaton equivalent to the ltl specification .this nondeterministic automaton is used to obtain the deterministic bchiover- and under - approximation subset automata and , as described in subsection [ subset ] .the languages recognised by these two deterministic bchiautomata are such that .we build the product of these subset automata with the model mdp or mc ( cf .lemma [ lem : productsubsetisomorphicquotientmc ] ) .we then compute the maximal end components or bottom strongly connected components . according to lemma [lem : subset ] , we try to decide these components of the product by using the acceptance conditions and of and , respectively . 
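the reachability analysis mentioned in the last step of the classical approach above ( and reused at the end of the lazy procedure for markov chains ) amounts to solving a linear system ; a small sketch with numpy , assuming an explicit and small product state space :

import numpy as np

def reachability_probability(P, target, initial_dist):
    """probability of eventually reaching 'target' (indices of states in
    accepting bottom sccs) in a markov chain with transition matrix P,
    starting from the distribution 'initial_dist'."""
    n = P.shape[0]
    target = set(target)
    # states that can reach the target at all (backward graph search)
    can_reach = set(target)
    changed = True
    while changed:
        changed = False
        for s in range(n):
            if s not in can_reach and any(P[s, t] > 0 for t in can_reach):
                can_reach.add(s)
                changed = True
    x = np.zeros(n)
    for t in target:
        x[t] = 1.0                      # target states are reached trivially
    unknown = sorted(can_reach - target)
    if unknown:
        # x_s = sum_t P(s,t) x_t for the remaining states
        A = np.eye(len(unknown)) - P[np.ix_(unknown, unknown)]
        b = np.array([sum(P[s, t] for t in target) for s in unknown])
        x[unknown] = np.linalg.solve(A, b)
    return float(np.dot(initial_dist, x))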
for each of those componentswhere over- and under - approximation do not agree ( and which we therefore can not decide ) , we employ the breakpoint construction ( cf .corollary [ cor : breakpoint ] ) , involving the deterministic rabin over- and under - approximation breakpoint automata and , such that . for this, we take one state of the component under consideration and start the breakpoint construction with this state as initial state .this way , we obtain a product of a breakpoint automaton with parts of the model .if the resulting product contains an accepting component ( using the under - approximation ) , then the original component must be accepting , and if the resulting product contains a rejecting component ( using the over - approximation ) , then the original component must be rejecting .the remaining undecided components are decided either by using a safra - based construction , restricted to the undecided component , or only by using , where we start from possibly different states of the subset product component under consideration ; this approach always decides the remaining components , and we call it the multi - breakpoint construction . for the model states that are part of an accepting component , or from which no accepting component is reachable ,the probability to fulfil the specification is now already known to be or , respectively . to obtain the remaining state probabilities ,we construct and solve a linear programming ( lp ) problem ( or a linear equation system when we start with mcs ) .note that , even in case the multi - breakpoint ( or safra - based ) procedure is necessary in some places , our method is usually still more efficient than direct rabin determinisation , for instance based on some variation of .the reason for this is twofold .first , when starting the determinisation procedure from a component rather than from the initial state of the model , the number of states in the rabin product will be smaller , and second , we only need the multi - breakpoint determinisation to decide mecs or bottom sccs , such that the computation of transient probabilities can still be done in the smaller subset product . the following optimisations can be used to speed up the model checking algorithm .* we can compute the graph decomposition on the fly .thus , we first compute one component , then decide it , compute the next component , etc . *if we have shown a state of the subset product to be accepting and , then is accepting .* we can treat all states , from which we find that we can reach an accepting component with probability 1 , as accepting .+ note that , if such a state is part of a mec , this expands to the complete mec , and if the state is initial , we can already terminate the algorithm . *subset and breakpoint products can be effectively represented using bdds . in the remainder of this section ,we detail the proposed approach : we first introduce the theoretical background , and then present the incremental evaluation of the bottom sccs . in order to be able to apply our lazy approach , we exploit a number of acceptance equivalences in the rmc . given the dra and a state of , we denote by the label of the root node of the labelled ordered tree associated to ( cf . 
) .[ thm : only_reach ] given a ngba and , let be an arbitrary state of .then , a word is accepted by if , and only if , it is accepted by .intuitively , a word is accepted by if there is an accepting sequence with and for each ; since each is the set of states reached from via , then in there is a sequence of states such that each is reached from some via ; such a sequence is accepting as well by the way is constructed .a similar argument applies for the other direction .the formal proof is a mild generalisation of the correctness proof of the dra construction .theorem [ thm : only_reach ] provides an immediate corollary . [ cor : same ] given a ngba , a mc , and the dra , a path in that starts from a state is accepted if , and only if , the word it defines is accepted by ; and if , then the probabilities of acceptance from a state and a state are equal , i.e. , .this property allows us to work on quotients _ and _ to swap between states with the same reachability set .if we ignore the accepting conditions , we have a product mc , and we can consider the quotient of such a product mc as follows . given a mc and a dra , the _ quotient mc _ ] where * = { \{\ , ( m,[d ] ) \mid ( m , d ) \in { m}\times { q},\ [ d ] = { \{\ , d ' \in { q}\mid { \mathsf{rchd}}(d ' ) = { \mathsf{rchd}}(d ) \,\ } } \,\}} ] , * (m,[d ] ) = { \mu_{0}}(m , d) ] .by abuse of notation , we define = ( m,[d]) ] .it is easy to see that , for each , ] is well defined : for ] holds .[ thm : bottom_component ] for a mc and dra , it holds that 1 .if is a bottom scc of then ] , 2 .if is a bottom scc of ] .together with definition [ def : acceptingsccofrmc ] and theorem [ thm : only_reach ] , theorem [ thm : bottom_component ] provides : [ cor : quotients ] let be a bottom scc of ] are accepting , or all states of with \in { \mathtt{s}} ] , and * for each } ] .the proof is easy as , in each and , the accepting transitions are over- and under - approximated . with this lemma , we are able to identify some accepting and rejecting bottom sccs in the product .we remark that and differ only in their accepting conditions .thus , the corresponding gmcs and also differ only for their accepting conditions .if we ignore the accepting conditions , we have the following result : [ lem : productsubsetisomorphicquotientmc ] let be a mc , a ngba , , and as defined above ; let be without the accepting conditions .then , and ] of ] ; * is rejecting if holds for some } ] the breakpoint set , where intuitively refers to the set of currently reached states in the root of an extended history tree ; refers to the index of the root ; and is the union of the -labels of the children of the root .that is , , also denoted by , where represents the nodes of that are children of , i.e. , is an abstraction of the tree .we build two dras and , called the _ breakpoint automata _ , as follows . from the breakpoint state ,let and .then an accepting transition with letter reaches if .this corresponds to the equivalence from step [ item : determinisation : accepting ] that determines acceptance .( note that step [ item : determinisation : stealing ] does not affect the union of the children s labels . 
) since step [ item : determinisation : accepting ] removes all children , this is represented by using as label of the child .formally , [ figure fig : examplebreakpoint . ] figure [ fig : examplebreakpoint ] shows the reachable fragment of the breakpoint construction for the ngba depicted in figure [ fig : examplebuechi ] .the double arrow transitions are in while the remaining transitions are in .[ theo : inclusions ] the following inclusions hold : }^{u } ) \subseteq { \mathcal{l}}({\mathcal{bp}}_{\langle d \rangle}^{u } ) \subseteq { \mathcal{l}}({\mathcal{a}}_{d } ) \subseteq { \mathcal{l}}({\mathcal{bp}}_{\langle d \rangle}^{o } ) , { \mathcal{l}}({\mathcal{s}}_{[d]}^{o})\text{.}\ ] ] we remark that the breakpoint construction can be refined further such that it is finer than }^{o}) ] , which is accepting ( i.e. , contains some transition in ) .* is rejecting if there exists a bottom scc in with ] is a state in a bottom scc of the quotient mc and = [ d'] ] or , and * if )}^{{\mathcal{m}}{\otimes}{\mathcal{s}}_{[d']}^{o}}({\mathcal{b } } ) < 1 ] ; 3 .there exist and such that reaches with probability an accepting scc of .theorem [ thm : acceptanceofsubsetautomaton ] provides a practical way to check whether an scc of is accepting : it is enough to check whether some state of has for some in the accepting region of , or whether , for a state , reaches with probability the accepting region .we remark that , by construction of , if we change the initial state of to , i.e. , if we consider the run can only visit breakpoint states ; i.e. , it is actually a run of .based on the theorem [ thm : biancoa95 ] , the classical approach for evaluating mcs for ltl specifications is sketched as follows : 1 .translate the ngba into an equivalent dpa ; 2 .build ( the reachable fragment of ) the product automaton ; 3 .
for each bottom scc , check whether is accepting .let be the union of these accepting sccs ; 4 .abstract all accepting bottom sccs to an absorbing goal state and perform a reachability analysis to infer , which can be solved in polynomial time .the classical approach is to construct a deterministic rabin automaton in step 1 and thus to evaluate rabin acceptance conditions in step 3 .the size of such deterministic rabin automaton is where and are the number of states and accepting sets of , respectively , and the number of states of . by using the isomorphism between the product mc of and and the quotient mc ] for a set with .the transition is defined as follows : * update of subset part : ; * updating breakpoint states : let } \to r' ] ; * minimal acceptance number : let be the minimal integer such that is an accepting transition if such an integer exists , and otherwise ; * removing duplicate breakpoint states : let } \to r' ] such that and , by ; * minimal rejecting number : let be the minimal integer with if such an integer exists , and otherwise ; * removing blanks : let } \to r'' ] is a bijection with for all , , and ; and * transition priority : the priority of this transition is if and if .note that in the above definition , for the assignment of numbers with to elements of is arbitrary as long as is a bijection .we denote by the dpa constructed as above from . given a ngba , we write to denote the automaton and for a state of , we denote by the states reached in , i.e. , .the parity automaton follows the initial subset part of the semi - deterministic automaton in the part of a state .it simulates the final breakpoint part via the function that stores the nondeterministic choice of where to start in the breakpoint part by assigning them to the entries while preserving the previous choices .[ figure fig : exampleparity , with table tab : exampleparityfunctions . ] figure [ fig : exampleparity ] shows the parity automaton obtained by applying the above determinisation to the semi - deterministic automaton depicted in figure [ fig : examplesemidet ] .table [ tab : exampleparityfunctions ] shows the functions , , , and we compute and whether is accepting ( i.e.
, ) .as we can see , for the transition from to via action we have that both and have value since and , so the resulting transition has priority as . instead , for the transition from to via action , we have that both and have value since there is no accepting transition and no blank in , so the resulting transition has priority .note that in only positions and are determined by ; the remaining positions are again arbitrary and having is the result of a deliberate choice . in fact , a different choice would just make the resulting parity automaton larger than while accepting the same language .consider a word and the associated run : if , then the corresponding run of has as limiting minimum priority since means that there exists such that either , or and , or and , thus the state is reached via the transition .since enables only self - loops each one with priority , this is also the minimum priority appearing infinitely often . now, suppose that .this means that either , or ; in the former case , the automaton repeatedly switches between states and , and in the latter case between states and . in both cases ,it is immediate to see that the minimum priority appearing infinitely often is that is odd , thus is rejected .the semi - deterministic construction we presented in definition [ def : semi - determinisation ] preserves the accepted language , that is , a ngba and the resulting semi - deterministic automaton accept the same language ; moreover , the language accepted by starting from a state depends only on , the subset component .[ pro : buechilangequalsemidetlang ] given a ngba , let be constructed as above .then , .[ pro : semidetlanguageignorebandi ] given a ngba and two states of , .similarly , for a given ngba , also and the corresponding parity automaton are language equivalent , thus .[ pro : semidetlangeqparitylang ] given a sdba and , holds . given a semi - deterministic bchiautomaton , , and a state of , we remark that for we have .since is semi - deterministic , by definition [ def : semidetaut ] the reachable fragment of is a deterministic automaton so we can consider the product that is a mc extended with accepting conditions . in particular , the accepting sccs of and are strictly related by the function of states and the smallest priority occurring in the considered scc . in the following ,we say that ( or ) is accepting if the probability to eventually being trapped into an accepting scc is .[ lem : parityofsccandacceptingsccofsemidet ] given a sdba and , if forms an scc where the smallest priority of the transitions in the scc is , then is accepting .it is known by lemma [ lem : productsubsetisomorphicquotientmc ] that and are strictly related , so we can define the accepting scc of by means of the accepting states of .[ def : acceptingsccofsubsetapp ] given a mc and a ngba , for and , we say that a bottom scc of is accepting if , and only if , there exists a state in an accepting bottom scc of such that .note that corollary [ cor : same ] ensures that the accepting sccs of are well defined .[ thm : acceptanceofsubsetautomatonapp ] given a mc and a ngba , for and , the following facts are equivalent : 1 . is an accepting bottom scc of ; 2 . 
there exist and such that belongs to an accepting scc of for some } ] .then )}^{{\upsilon}}({\mathcal{s}}^{u}_{[d ] } ) \leq \sup_{{\upsilon}}{\mathfrak{p}}_{(m,\langle d \rangle)}^{{\upsilon}}({\mathcal{bp}}^{u}_{\langle d \rangle } ) \leq \sup_{{\upsilon}}{\mathfrak{p}}_{(m , d)}^{{\upsilon}}({\mathcal{a}}_{d } ) = \sup_{{\upsilon}}{\mathfrak{p}}_{(m , d')}^{{\upsilon}}({\mathcal{a}}_{d ' } ) \leq \sup_{{\upsilon}}{\mathfrak{p}}_{(m,\langle d \rangle)}^{{\upsilon}}({\mathcal{bp}}^{o}_{\langle d \rangle } ) , \\sup_{{\upsilon}}{\mathfrak{p}}_{(m,[d])}^{{\upsilon}}({\mathcal{s}}^{o}_{[d]}) ] is an ec of and is an accepting ec of . contains a state with ] , then }\,\}} ] , then } ] , then }\,\}} ] is the mdp , [ { l } ] , { \mathit{act } } , [ { \mu_{0 } } ] , [ { { \mathrm{p}}}]) ] , * (m,[d ] ) = { l}(m , d) ] , and * \big((m,[d ] ) , a , ( m',[d'])\big ) = { { \mathrm{p}}}\big((m , d ) , a , ( m',d')\big) ] and that for each ] . for the reader s convenience , we recall the reduction of to probabilistic reachability in the rmdp . first , we introduce some concept and corresponding notation , starting with the concept of maximal end component ( mec ) that is the mdp counterpart of the scc for a mc ; the formal definition is not so immediate as we have to take care of the role of the actions . given an mdp , a sub - mdp is a pair such that and satisfying : for each where denotes the enabled actions of , and and implies .an _ end component _ of is a sub - mdp such that the digraph induced by is strongly connected .an end component is a _maximal end component _ ( mec ) if it is not contained in some other end component with and for all . as for sccs, we define the transitions of the mec of the mdp as .given an mdp and a dra , let be a mec of the product rmdp .we say that is accepting if there exists an index } ] be the accepting region and . then , reduces to a probabilistic reachability : obviously , if a mec contains a state from the accepting region , then the probability of accepting the language of is the same for all of them , so it does not matter the particular state we reach when we enter .as argued in the body of the paper , the likelihood does not depend on itself , but only on ] ; is rejecting if does not contain transitions with for some } ] the set of variables encoding } ] to .this idea resembles , but there additional variables are introduced to enumerate states of subset ( or breakpoint ) automata . for our purposesthis is not needed .the construction of the single - breakpoint and multi - breakpoint automata is almost identical in terms of their bdd representations . according to , rabin automataare not well suited to be be constructed using bdds directly .it is however possible to construct them in an explicit way and convert them to a symbolic representation afterwards . for this , we assign a number to each of the explicit states of the rabin automaton .afterwards , we can refer to the state using bdd variables encoding this number .we emphasise that we can still compute rabin automata on - the - fly when using the bdd - based approach , so as to avoid having to construct parts of the rabin automaton which are not needed in the product with the mc or mdp .it might happen that in the symbolic computation of the reachable states of the product we note that a certain state of the rabin automaton is required . 
in this case, we compute this successor state in the explicit representation of the automaton , assign to it a new number , encode this number using bdds and then use this bdd as part of the reachable states .mdps can be represented similarly . to represent exact transition probabilities, one can involve multi - terminal bdds ( mtbdds ) .if they are not required , bdds are sufficient .products of symbolic model and automata can then be computed using ( mt)bdd operations , allowing for effective symbolic analyses . to compute the ( bottom ) sccs , we employ a slight variantion of . for mdps , we then employ a symbolic variant of the classical algorithm to obtain the set of mecs from the set of sccs .the acceptance of an scc / mec can be decided by a few bdd operations .to establish theorem [ thm : only_reach ] , we show inclusion in both directions .the proof is the same as the correctness proof for the determinisation construction in , but the claim is different , and the proof is therefore included for completeness .the difference in the claim is that is shown for with a singleton set that contains only the root and .the proof , however , does not use either of these properties . for an -word and ,we denote with the word .we denote with for a finite word that there is , for all a sequence with and for all . if one of these transitions is guaranteed to be in , we write . for an input word ,let be the run of the dra on .a node in the history tree is called _ stable _ if and _ accepting _ if it is accepting in the transition .* is stable for all with , and * the chain contains exactly those indices such that is accepting ; this implies that is updated exactly at these indices . exploiting knig s lemma ,this provides us with the existence of a run that visits all accepting sets of infinitely often .( note that the value of is circulating in the successive sequences of the run . )this run is accepting , and therefore belongs to the language of .let and be the run of on the input word ; let be the run of on .we then define the related sequence of host nodes .let be the shortest length of these nodes of the trees hosting that occurs infinitely many times .we follow the run and see that the initial sequence of length of the nodes in eventually stabilises .let be an infinite ascending chain of indices such that the length of the -th node is not smaller than for all , and equal to for all indices in this chain .this implies that , , , is a descending chain when the single nodes are compared by lexicographic order .as the domain is finite , almost all elements of the descending chain are equal , say . in particular , is eventually always stable .let us assume for contradicting that this stable prefix is accepting only finitely many times .we choose an index from the chain such that is stable for all .( note that is the host of for , and holds for all . ) as is accepting , there is a smallest index such that .now , as is not accepting , must henceforth be in the label of a child of , which contradicts the assumption that infinitely many nodes in have length . 1 .if there is a path from to in , then there is a path from ] in ] to ] and is reachable in , then there is a path from to some with in . to show ( 1 ), we have to run through the properties of a bottom scc .first , all states in ] is the successor of any state in the quotient mc .let us assume for contradiction that there is a state such that ] by lemma [ lem : paths ] , this implies that there is a state with reachable from . 
as is a bottom scc , holds , and = [ ( m',d '' ) ] \in [ { \mathtt{s}}] ] . = { \mathtt{s}} ] is a successor of ] .for the bottom scc , the construction of then implies , which is a contradiction .as is closed under successors , it contains some bottom scc , and we select to be such a bottom scc .we have shown that ] is a bottom scc that is contained in , and hence = { \mathtt{s}} ] . the subset automata , , and the breakpoint automata , are defined in section [ subset ] .let us assume for contradiction that is an infinite word such that the run of }^{u} ] . )but then , transition in is accepting , which is a contradiction .hold for all , and that if is accepting , then is accepting with accepting pair with index . as the root can not be rejecting , this implies that is accepting , too .hold for all , and that if is accepting with accepting pair with index ( recall that the root can not be rejecting ) , then is accepting . if is accepting , but not with index , then the node with position in the history is eventually always stable ( note that , whenever is not stable , no other node than is ) say from position onwards .but then can not be empty for any , thus .recall that is obtained by determinisation of the ngba , i.e. , .let be the run of a word that is rejected by }^{o} ] and the claim follows with since } ) = { \mathcal{l}}({\mathcal{a}}_{d}) ] that are reached after some accepting transition has been performed .* is included in the tree , * it is an initial sequence of a run of , * , and * for all indices ( with ) from the chain of breakpoints there has to be a accepting transition for some . as usual with the breakpoint construction, it is easy to show that there exists a run for all and .thus , we are left with an infinite and finitely branching tree .invoking knig s lemma , this tree contains an infinite path , which is an accepting run of by construction .let be a run of on an -word .for this run , we denote by ] .moreover , let be the sequence where .\ { \rho}'(i ) = q \,\}} ] and each , we have the following relations between the widths : and .in fact , by definition of , we have that , and similarly for . since holds for each } ] . for doing this , we select an such that holds .note that such an exists due to the monotonicity of in and the fact that has as lower bound .moreover , holds for each . for a given , we can now choose an arbitrary such that .( as is accepting , arbitrarily large such exists . )we have and .now , implies , which together with provides .a simple inductive argument thus implies for all , and thus .together with , this provides . as can be chosen arbitrarily large , this implies . with this observation, we can construct an accepting run of as follows .we start with an initial sequence in where by definition . note that this sequence is well defined and deterministic ; moreover , for each , holds . since , we have that , thus and we use such transition to extend the sequence to where .again , . note that the choice of using the accepting set is arbitrary .the remainder of the run , is well defined , as this second part is again deterministic , and still for each . 
since for each , the deterministic automaton in the second part does not block .moreover , holds by a simple inductive argument .let .a simple inductive argument provides that holds for all .thus , there is a with .but since , we have that , which contradicts .let us assume for contradiction that is the accepting run of while is the rejecting run of on an input word .note that since ; similarly , and as well as and .we can first establish with a simple inductive argument that for all ( and that has a run on ) .in fact , for , we have already noted that ; suppose that ; by construction of , it follows that .since is accepting , for each and this implies that , i.e. , has a run on .as is rejecting , there exists an such that , for each , we have that , otherwise would be accepting ; note that since we have that , this implies that . by definition of , it follows that and , thus for each we have .as is accepting , transitions from are taken infinitely often , thus the indices } ] , this implies that there exists an such that , as effect of the transition .however , since is rejecting by assumption , we know that . moreover 1 . by definition of ,the breakpoints sets are reset to after an accepting transition .2 . for non - accepting transitions ,the breakpoint construction is monotonic in the sense that for each , , , , and such that and , it follows that is defined and , for , we have .3 . as is rejecting , its breakpoint sets are never reset after position ( and remain to be ) . from 1.)-3 . ) , it follows by induction that , thus for each .this , however , implies due to the definition of that in no further accepting transitions follow , thus .this implies that is rejecting , contradicting the initial assumption . as for proposition[ pro : buechilangequalsemidetlang ] , we split the proof of proposition [ pro : semidetlangeqparitylang ] into two lemmas , stating that for each , and hold , respectively . as notation , for states , , and , we write if , and if there exists with .[ lem : prerun ] given a semi - deterministic bchiautomaton and , for each , each with , and each input word , there is a pre - run of if , and only if , there is a pre - run of with .let be an accepting run of on a word with dominating priority ; let be a natural number such that holds for all ; and let for all . by lemma [ lem : prerun ], there is a pre - run with of .the observation that no priority less than occurs from the -th transition onwards in provides with the construction of that this pre - run can be continued to a unique run with for all .further , for all with , we have . as there are infinitely many such , is accepting .let with be an accepting run of on an input word .then there is a minimal such that thus for all and for all . by a simple inductive argumentwe can show that has a run on , such that for all .moreover , there is a descending chain of indices such that .this chain stabilises at some point to .consequently , we have that is even _ or _ no smaller than for all .( assuming that the priority is an odd number less than would imply that there is a sign in at a position less than or equal to , which would contradict that the index has stabilised . ) for all positions with , is an even number less than or equal to .the smallest priority occurring infinitely often in the transitions of is therefore an even number less than or equal to . 
to prove the lemma, we first note that no transition of any run of the product ( which is an scc ) can see a priority smaller than .thus , for all such runs , the sequence is a run , and a transition like is accepting if , and only if , the priority is minimum and even ; more precisely , we have that . : : let be an accepting bottom scc of , be a run of trapped into , and be the associated word . by definition [ def : acceptingsccofsubset ] , it follows that there exists an scc of ] for each and that is trapped into an of with \subseteq { \mathtt{s}}' ] . since is accepting , by construction of , proposition [ pro : semidetlangeqparitylang ] , and lemma [ lem : parityofsccandacceptingsccofsemidet ] , it follows that there exists and integer such that and is accepting .if is already in an accepting scc of , then by definition of accepting scc for , by theorem [ thm : bottom_component ] and corollary [ cor : quotients ] we have that ] .let such and is even and minimum , say .this implies by construction of that where for some .since and is accepting , by corollary [ cor : quotients ] it follows that all states in are accepting , thus by definition [ def : acceptingsccofsubsetapp ] , is accepting as well . : : this equivalence follows directly from a combination of the proofs of propositions [ pro : buechilangequalsemidetlang ] and proposition [ pro : semidetlangeqparitylang ] ; in particular , the proof of lemma [ lem : buechilangsubseteqsemidetlang ] ( stating that ) provides the singleton needed for the implication .
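the appendix above establishes the correctness of the subset / breakpoint / parity constructions behind the four - step procedure recalled at the beginning of this section ( build the product , compute its bottom sccs , mark the accepting ones , then solve a reachability problem ) . as a purely illustrative companion , the python sketch below carries out the last three steps for an explicit - state product markov chain , using the parity criterion stated above ( a bottom scc is accepting when the minimal priority of its internal transitions is even ) . the function and variable names are mine ; this is not the paper's implementation , which is symbolic ( bdd - based ) and operates on the quotient structures defined in the text .

```python
import networkx as nx
import numpy as np

def bottom_sccs(graph):
    # a bottom (closed) scc has no edge leaving it
    return [scc for scc in nx.strongly_connected_components(graph)
            if all(v in scc for u in scc for v in graph.successors(u))]

def acceptance_probability(P, prio, initial):
    """P: dict state -> {successor: probability} of the product markov chain,
    prio: dict (state, successor) -> transition priority.
    Returns the probability of reaching an accepting bottom scc from `initial`."""
    g = nx.DiGraph()
    g.add_nodes_from(P)
    g.add_edges_from((u, v) for u, succ in P.items() for v in succ)

    goal, sink = set(), set()
    for scc in bottom_sccs(g):
        internal = [prio[u, v] for u in scc for v in g.successors(u) if v in scc]
        if internal and min(internal) % 2 == 0:   # parity criterion: minimal priority even
            goal |= scc
        else:
            sink |= scc

    # abstract the accepting bottom sccs into a goal and solve the linear
    # reachability system for the remaining (transient) states
    transient = [s for s in P if s not in goal and s not in sink]
    idx = {s: i for i, s in enumerate(transient)}
    A = np.zeros((len(transient), len(transient)))
    b = np.zeros(len(transient))
    for s in transient:
        for t, p in P[s].items():
            if t in goal:
                b[idx[s]] += p
            elif t not in sink:
                A[idx[s], idx[t]] += p
    x = np.linalg.solve(np.eye(len(transient)) - A, b) if transient else np.array([])

    if initial in goal:
        return 1.0
    if initial in sink:
        return 0.0
    return float(x[idx[initial]])
```

for mdps the same skeleton applies , with maximal end components in place of bottom sccs and a linear program or value iteration in place of the linear system , as recalled in the reduction above .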
|
the bottleneck in the quantitative analysis of markov chains and markov decision processes against specifications given in ltl or as some form of nondeterministic büchi automata is the inclusion of a determinisation step of the automaton under consideration . in this paper , we show that full determinisation can be avoided : subset and breakpoint constructions suffice . we have implemented our approach , in both explicit and symbolic versions , in a prototype tool . our experiments show that our prototype can compete with mature tools like prism .
|
in historical perspective , light doubtlessly is among the most classical phenomena of physics .the oldest known treatise on this subject , optics " by euclid , dates back to approximately 300 b.c . yet in contemporary physics , light is one of the strongest manifestations of quantum .electromagnetic field quanta , the _ photons _ , are certainly real : they can be emitted and detected one by one , delivering discrete portions of energy and momentum , and in this sense may be viewed as particles of light . at the same time ,wave properties of light are most readily observed in diffraction and interference experiments . before the quantum mechanical principle of _ duality _was realized , this twofold nature of light has lead to curious oscillations in its understanding by scientists and philosophers starting from antiquity .pythagoras believed light to consist of moving particles , but aristotle later compared it to ocean waves .much later sir isaac newton has revived the concept of corpuscles , which again yielded the ground to the wave theory when interference and diffraction were discovered .then the sum of evidence for each , the wave and the corpuscular nature of light , became undeniable and quantum optics emerged .so what is classical and what is non - classical light ?it appears logical to call classical " the phenomena that can be quantitatively described without invoking quantum mechanics , e.g. in terms of maxwell s equations .interference and diffraction are obvious examples from classical optics .somewhat less obvious examples are photon bunching " , hanbury brown and twiss type interference of thermal light , and a few other phenomena that occasionally raise the quantum - or - classical debate in conference halls and in the literature . conversely , non - classical ( quantum ) are those phenomena that can _ only _ be described in quantum mechanics . it should be noted that in many cases it is _ convenient _ to describe classical light in terms of quantum optics , which , however , does not make it non - classical in the sense mentioned above .this is done in order to use the same language for classical and quantum phenomena analysis .it is worth mentioning , that nature does not make this distinction .it is our choice to employ a classical model with limited applicability as much as possible .one of the most useful quantum vs. classical distinction criteria is based on the various correlation functions of optical fields .such correlation functions are computed by averaging the observables over their joint probability distribution . for a simple example let us consider a normalized auto - correlation function of light intensity : using the cauchy - schwarz inequality , it is easy to see that .smaller values for this observable are impossible in classical optics , but they do occur in nature , e.g. for photon number states and for amplitude - squeezed light . therefore _ antibunching _ can be taken as a sufficient but not necessary criterion for non - classical light .a similar argument can be made for the intensity correlation ( [ g2 ] ) as a function of spatial coordinates instead of time . in this caseinequalities similar to cauchy - schwarz lead to such non - classicality criteria as bell s inequalities violation and negative conditional von neumann entropy .another criterion , likewise sufficient but not necessary , is the negativity of the phase space distribution function . 
in quantum mechanicssuch distribution functions can be introduced with limiting cases being the wigner function , mandel s -function ( also referred to as the husimi function ) , or the glauber - sudarshan -function .negative , complex or irregular values of these functions can also be used as indications of non - classical light .both criteria are sufficient but not necessary as can be seen by the following examples : photon number states are non - classical according to both criteria ; amplitude squeezed states are non - classical according to the first but not the second criterion ; superpositions of coherent states , so - called cat states ) coherent states , e.g. .] , are non - classical according to the second but not the first criterion .the qualifier not necessary " in the above criteria is essential . currently we know of no simple general criterion , which is sufficient _ and _ necessary .but we can make the following statement : classical states can involve either no fluctuations at all , or only statistical fluctuations . in quantum physics such states either do not exist , or they are described as mixed states .pure quantum states exhibit a so called quantum uncertainty , which is the result of the projection noise due to the measurement .some aspects of a quantum uncertainty can be described by a classical stochastic model . butsuch models are always limited .allowing for all possible experimental scenarios , a pure quantum state can never be described by one and the same classical stochastic model . in this sensecoherent states , being pure quantum states in their own right , have to be classified as non - classical .this statement calls for a more detailed justification which is provided in the appendix in the end of this chapter .a key concept in the following discussion will be an optical mode .this concept is fundamental to electromagnetic field quantization : the spatio - temporal character of a mode is decribed by a real - valued mode function , which is a function of position and time and is normalized . in classical physicsthis amplitude function is multiplied by a complex number describing amplitude and phase , or alternatively two orthogonal field quadratures . in quantum opticsthese amplitudes are described by operators ; superpositions of the photon creation and annihilation operators , to be precise .quantum - statistical properties of single - mode light are conveniently illustrated by phase space diagrams where the optical state wigner function is plotted against the canonical harmonic oscillator coordinates and .we recall that for an optical mode with central frequency the corresponding quadrature operators are related to the photon creation and annihilation operators and as in fig .[ fig : pq ] we show some examples of classical and non - classical light phase diagrams . here( a ) is the vacuum state , and ( b ) , ( c ) are the thermal and coherent states with the mean photon number , respectively ( the plots are scaled for ) .the diagram ( d ) represents squeezed vacuum with . for the squeezed vacuum statesthe mean photon number is uniquely related to squeezing ; our example requires db of squeezing .the diagrams ( e ) and ( f ) show the quadrature and amplitude ( or photon - number ) squeezed states , respectively , with the same squeezing factor as in ( d ) and the same mean photon number as in ( b ) and ( c ) ., for ( d ) .squeezing parameter for states ( d)-(f ) is 7.66 db . 
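the phase - space portraits referred to in fig . [ fig : pq ] can be reproduced numerically . the sketch below , assuming the open - source qutip package is available , builds a coherent state and a squeezed vacuum state in a truncated fock space and evaluates their wigner functions on a quadrature grid ; the truncation dimension and the squeezing value are illustrative choices of mine ( r is picked so that the quadrature squeezing is close to the 7.66 db quoted in the caption ) , not parameters taken from the source .

```python
import numpy as np
from qutip import basis, coherent, squeeze, wigner

N = 40                              # fock-space truncation (illustrative choice)
alpha = 2.0                         # coherent amplitude, mean photon number |alpha|**2 = 4
r = 0.88                            # squeezing parameter, about 7.6 dB of quadrature squeezing

coh = coherent(N, alpha)            # displaced vacuum
sqz = squeeze(N, r) * basis(N, 0)   # squeezed vacuum

q = np.linspace(-5, 5, 201)
W_coh = wigner(coh, q, q)           # wigner functions on the (q, p) grid
W_sqz = wigner(sqz, q, q)

# the fock state |1> has a manifestly negative wigner function (about -1/pi at the
# origin), while the two gaussian states above are non-negative everywhere
print(wigner(basis(N, 1), q, q).min())
```

as the last line illustrates , the gaussian states of fig . [ fig : pq ] keep a non - negative wigner function , whereas the fock state of fig . [ fig : pq1 ] does not , in line with the negativity criterion discussed above .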
]the thermal state represented in fig .[ fig : pq](b ) is clearly classical .the vacuum state ( a ) and the coherent state ( c ) , which is also called displaced vacuum state , are said to be at the quantum - classical boundary .they do not violate either of the two criteria formulated above , but the boundary value implies that the optical field does not fluctuate , see eq.([g2 ] ) .then the poissonian statistics of photocounts , observed with a coherent field , must be attributed to stochastic character of the detection process .but if this were the case , it would not be possible to observe sub shot noise correlation of two independent detectors signals ( such as measured in two - mode squeezing experiments ) even with 100%-efficient photodetectors .therefore one is lead to a conclusion that detection of coherent light can not be fully described in the semiclassical approximation , and in this sense such light is non - classical . despite these notational difficulties coherent states as any other pure quantum state can be used as resource for optical quantum engineering , such as in quantum key distribution .wigner functions for the classical and non - classical states shown in fig .[ fig : pq ] are symmetric or distorted gaussian .other non - classical states may be non - gaussian , and clearly displaying wigner function negativity .such are the fock states and the already mentioned cat states , shown in fig . [fig : pq1 ] .their wigner functions are respectively where is the -th order laguerre polynomial .it is interesting to observe that the wigner function of a photon - number eigenstate indicates a non - zero quasi - probability for the mode to be found in a _different _ number - state , and in fact reaches the maximum for the vacuum state .this reminds us to be cautious with physical interpretations of quasi - probability functions such as the wigner function .coherent states such as shown in fig .[ fig : pq](c ) . ]the standard way of measuring and gauging the intensity fluctuations is to split the optical beam with a 50/50 beam splitter and either subtract or multiply the photocurrents of two detectors placed at each output .time dependence may be obtained by introducing a variable optical or electronic delay in one channel .this approach would identify the photon - number squeezed state shown in fig .[ fig : pq](f ) as non - classical , but would not reveal the non - classical properties of the squeezed states in diagrams ( d ) and ( e ) . in fact , an ensemble of measurements on such identically prepared states ( which is often equivalent to a time - sequential measurement on one system ) would show excessive photon - number fluctuation , above the shot noise limit . to measurethe quadrature squeezing one has to set up a measurement sensitive to the wigner function projection on the squeezed quadrature , rather than in the radial direction .this can be achieved in a heterodyne measurement , when a coherent local oscillator field is injected into the unused port of the beam splitter .changing the local oscillator phase one can chose the projection direction .the right choice of the phase leads to a sub shot noise measurement revealing the non - classicallity : .the same situation can be described in a different language , by saying that the beam splitter transforms the input ( e.g. 
quadrature - squeezed and coherent ) modes to output modes , both of which are photon - number squeezed , or anti - bunched .non - classical phenomena in two or more optical modes are usually associated with the term _entanglement_. a quantum optical system comprising modes labeled and is said to be entangled if its wave function does not factorize : .this concept can be applied to systems of more than two modes , in which case one has multipartite entanglement .entangled states can also be described in density operators notation , which allows to consider the states that are not quantum - mechanically pure .perhaps the most common examples of entangled states in optics are so - called bell states of a polarization - entangled photon pair and one of the two spatial modes .hence we work in four - dimensional hilbert space where single - photon base states can be mapped as follows : , , , . ] where and designate spatial modes .it is also possible to have a photon pair simultaneously entangled in _ both _ polarization and frequency ( or equivalently , time ) .not only a single pair of photons may be entangled .it is also possible to create an entangled state with larger certain or uncertain photon numbers .one of the examples is vacuum entangled with a fock state , dubbed noon - state " .macroscopical states can be entangled not only in photon numbers , but also in the canonical coordinates ( quadratures ) and . to distinguish it from the entanglement in the discrete photon numbers , quadrature entanglementis also called continuous - variable entanglement .it can be generated e.g. by combining two squeezed vacuum states on a beam splitter , and forms a foundation for continuous - variable quantum information processing and quantum state teleportation .the discrete and continuous variable descriptions correspond to expanding the wave function in two different bases .one and the same state can be represented by either one of them . on the practical side : photon number resolving detectors measure in terms of the discrete fock state basis and homodyne detection measures in terms of quadrature basis .graphic representation of two- or multi - mode non - classical states on a phase diagram is more complicated than for a single mode . in general, it requires as many diagrams as there are modes , with a color coding indicating the quantum - correlated sub - spaces _ within _ each diagram .this appears to allow for a better photon localization in phase space than is permitted by heisenberg uncertainty .however the uncertainty principle in not really violated , because the localization occurs in superpositions of quadratures of different field operators that do commute , e.g. =0 $ ] , giving rise to einstein - podolski - rosen correlations .the quantum state of any one mode of a system comprising many modes and described by a multi - partite state can be found by taking a trace over the unobserved modes .if initially the entire system was in a pure entangled state , the single mode sub - system will be found in a mixed state , as can e.g. 
be seen starting from equations ( [ bellstates ] ) and ( [ freqent ] ) .this also can be understood following a von neumann entropy analysis .indeed , if a bipartite system is in a pure state with , and the conditional entropy is negative because of entanglement , then the entropy of a sub - system is positive , , which means that it is in a mixed state .remarkably , in some cases this does not preclude this mode from being in a non - classical state .for example , the twin beams of an optical parametric oscillator ( opo ) that are well - known to be quantum - correlated ( or two - mode squeezed ) , are predicted and demonstrated to be also single - mode squeezed when the opo is well above the threshold . in this case one finds a mixed squeezed state which occupies a larger area in phase space than required by the uncertainty relation .it should be noted that two - mode quantum correlation and single spatial mode non - classical photon statistics are often viewed as two sides of the same coin .this affinity , emphasized by the use of the term two - mode squeezing " in analogy with the two - mode entanglement " , arises from the simplicity of conversion between these types of photon statistics .the conversion is performed with a linear beamsplitter , and can be elegantly described by an su(2 ) operator converting two input states to two output states .this operation leads to a conversion of phase fluctuations into amplitude fluctuations , and of two - mode entanglement into single - mode squeezing . a special case of two - mode squeezing is realized when the modes are associated with orthogonal polarizations of the same optical beam .just like a spatial mode can be associated with any function from an orthogonal set of helmholtz equation solutions ( e.g. laguerre - gauss modes ) , here we are free to chose any polarization basis to designate polarization modes .it is often convenient to chose a linear basis . in this case polarizationstokes operators are introduced as like the canonical coordinate or quadrature operators ( [ pq ] ) , stokes operators do not commute .they too span a phase space ( three - dimensional instead of two - dimensional , since we now have two independent polarization modes ) where a pure state occupies the minimum volume allowed by the uncertainty relations .its shape , however , can be distorted - squeezed .for example , squeezing in the quadrature can be observed as sub shot noise fluctuations of the difference of the currents generated by two photo detectors set to measure optical powers in the and linear polarizations . with increasing the number of modes , which in optics may be associated with the hilbert space dimension, the list of possible non - classical states rapidly grows .some examples are the entangled states of multiple photons in different modes , such as optical ghz states , w states , as well as cluster and graph states , smolin states and others . in quantum communications ,higher - dimensional entanglement provides a higher information capacity . 
from a fundamental point of view, higher - dimensional entanglement leads to stronger violations of generalized bell s inequalities .this has been experimentally demonstrated in a 16-dimentional hilbert space spanned by the optical polarization states and in a 12-dimensional hilbert space spanned by the optical orbital angular momentum ( oam ) states .entanglement in the hilbert space spanned by oam states is a relatively novel and very promising approach to generating multi - mode entanglement .two - photon entanglement in 100 - dimensional space was demonstrated following this approach , as was the _ four - photon _entanglement .entanglement of a 100 , and with certain allowances of even a 1000 optical modes based on polarization , rather than spacial , variables has also been theoretically discussed and shown to be within reach with the existing technology . however applying the entanglement metrics such as negativity or concurrence shows that such states are very close to classical light .let us now review some of the practical applications that make non - classical light such an important topic in optics .the fact that light can posses non - classical properties that can only be explained in the framework of quantum mechanics is remarkable and important for our understanding of nature . besides that , nonclasical light can have useful technological applications . [[ absolute - calibration - of - light - detectors - and - sources . ] ] absolute calibration of light detectors and sources .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + perhaps the oldest application of non - classical light , proposed back in 1969 - 1970 and further developed by david n. klyshko in 1980 , is the absolute calibration of the quantum efficiency of photon counting detectors .the concept underlying this method is very simple .suppose a process generating photon pairs , such as spontaneous parametric down conversion ( spdc ) , produces signal and idler photon pairs per second .the photons are sent into photon counting detectors with quantum efficiencies for the signal channel and for the idler channel .imperfect detection leads to _ random _ loss of photons in both detectors .then the mean values for the number of photocounts and for coincidence counts are found as and .therefore both quantum efficiencies can be inferred by counting the individual and coincidence detections : . in practical applicationsone needs to account for multiple pairs occasionally generated in spdc during a coincidence window , dark noise and dead time of the detectors , and other factors that make the calibration formula and procedure more complicated .a single - detector implementation of this technique was also discussed in .this requires a photon number resolving detector collecting all of spdc light ( both the signal and the idler components ) near degeneracy .this technique is based on comparing the single- and double - photon detection probabilities . like the two - detector method, it also received further development .another possibility that was mention in is calibration of photo detectors operating in the photo current ( continuous ) regime instead of photon counting ( geiger ) regime . in this case, a correlation function of two photo currents is used instead of the coincidence counting rate . 
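a minimal numerical sketch of the two - detector ( klyshko ) calibration described above may be useful ; the function name , the accidental - coincidence handling and the example numbers are mine and purely illustrative .

```python
def klyshko_efficiencies(N_s, N_i, N_c, accidentals=0.0):
    """two-detector calibration: eta_s = N_c / N_i and eta_i = N_c / N_s,
    optionally subtracting an estimate of accidental coincidences."""
    return (N_c - accidentals) / N_i, (N_c - accidentals) / N_s

# made-up example: 1e5 pairs per second, 60% and 45% efficient detectors
R = 1e5
print(klyshko_efficiencies(N_s=0.60 * R, N_i=0.45 * R, N_c=0.60 * 0.45 * R))
# -> (0.6, 0.45): both efficiencies are recovered without a calibrated reference
```

the photocurrent - correlation variant mentioned in the last paragraph replaces these count rates by continuous photocurrent correlations , but the logic of inferring the efficiencies without a calibrated reference is the same .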
note that since the discrete character of photo detections is no longer required , this method allows for using the two - mode squeezed light instead of two - photon spdc light .a multimode version of this method was used for calibration of ccd cameras . similarly to spontaneous emission by excited atoms , spdc can be viewed as amplification of vacuum unsertainty of the optical field .this vacuum uncertainty is often referred to as _ vacuum fluctuations_. but strictly speaking this is a time independent uncertainty which is stochastically projected on a single value when measured . when repeating the process of state preparation and measurement many times , the uncertainty is transformed into an apparent fluctuation .note that a measurement does not necessarily involve the action of a human experimenter .coupling the system under study to some environment which then looses coherence ( i.e. which decoheres ) has the same effect .the vacuum uncertainty has a spectral brightness of . since parametric amplification of weak signals is linear , it is possible to perform absolute calibration of a light source directly in the units of by seeding its light into a parametric amplifier and comparing the emitted parametric signals with and without seeding .[ [ sub - shot - noise - measurements . ] ] sub shot noise measurements .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + we already noted that the power fluctuations , or noise of non - classical light may well be reduced below the classical shot noise limit .this effect may be used for low - noise measurements of a variable of interest .first application of squeezed vacuum for sub shot noise interferometric phase measurements has been demonstrated already in 1987 , followed by another publication from a different group . in these workssqueezed vacuum was generated in a degenerate - wavelength opo pumped below the threshold by the second harmonic of the coherent laser light used in the interferometer .this technique , now commonly used in the field , fixes the frequency and phase relation between the coherent signal and squeezed vacuum .injecting the squeezed vacuum into a dark port of an interferometer reduces the signal fluctuations below the shot noise by an amount which depends on the degree of squeezing .a reduction figure of 3.5 db was reached with this approach in the geo 600 setup of the ligo project , see fig .[ fig : ligo ] . in this casethe state - of - the - art 10 db squeezed vacuum resource was used . however , imperfect transmission of the complex multi - path interferometer increased the observed signal variance ( i.e. , noise ) from the squeezed vacuum source value to .this calculated variance agrees well with the reported 3.5 db of shot noise suppression . 
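the drop from 10 db of source squeezing to 3.5 db of observed noise reduction is what a simple linear - loss model predicts . the sketch below applies the standard relation for a squeezed variance passing through a beam splitter of transmission eta , with the vacuum ( shot - noise ) variance normalised to 1 ; the transmission values scanned here are hypothetical choices of mine , not figures reported for geo 600 .

```python
import numpy as np

def observed_squeezing_db(source_db, eta):
    # linear loss: V_out = eta * V_in + (1 - eta), shot-noise variance normalised to 1
    V_in = 10 ** (-source_db / 10)
    V_out = eta * V_in + (1 - eta)
    return -10 * np.log10(V_out)

for eta in (1.0, 0.9, 0.75, 0.62, 0.5):      # hypothetical overall efficiencies
    print(f"eta = {eta:.2f}: {observed_squeezing_db(10, eta):.2f} dB observed")
# an overall efficiency around 0.6 brings a 10 dB source down to roughly 3.5 dB
```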
besides interferometry, non - classical light can facilitate sub shot noise measurements in spectroscopy and in biological research .on the other hand , strong intensity fluctuations can enhance the two - photon absorption in atoms and other systems , compared with light of the same average intensity but poisson or sub - poisson fluctuations .theoretical analysis of this phenomenon in two - phoon and squeezed light predicts a linear ( rather than quadratic ) dependence of the absorption rate on the optical intensity for weak fields , the possibility of a decreasing absorption rate with increasing intensity , and a significant differences between absorption rates for the pase- and amplitude - squeezed beams of the same intensity .further theoretical analysis including the second harmonic generation is provided in .two - photom absorption of non - classical light has been observed with cesium and rubidium atoms . in both casesatomic two - photon transitions were excited by non - degenerate squeezed light generated in an opo cavity .excitation rate scaling as the power 1.3 ( instead of 2 ) of the light intensity was observed in .conversely , it is possible to characterize photon bunching by observing two - photon response in semiconductors .speaking of spectroscopy , we must mention yet another application of non - classical light , not related to noise reduction but remarkable nonetheless . in this applicationstrongly non - degenerate spdc light propagates in a nonlinear interferometer filled with a sample of refractive material .as expected , a strong dispersion in e.g. infrared range is indicated by the characteristic distortion patterns of interference fringes in the infrared ( idler ) port .however it also leads to similar distortions arising in the signal port , which allows for performing infrared spectroscopy using visible light optics and detectors .[ [ high - resolution - imaging . ] ] high - resolution imaging .+ + + + + + + + + + + + + + + + + + + + + + + + the term imaging " may refer to both creating and reading of patterns , as well as to optical detection of small displacements .all these functionalities have been shown to benefit from applications of non - classical light . creating lithographic images with higher than diffraction - limited resolution has been proposed in year 2000 .this proposal is based on using photo - polymers sensitive to -photon absorption in conjunction with already mentioned entangled noon states .it was theoretically shown that using these states in a mach - zehnder interferometer can generate -spaced fringes of the order intensity distribution that would imprint in the polymer .it should be noted that even with classical light the -photon material response by itself provides a reduction of the optical point - spread function . with special modulation techniques this reduction factorcan be further pushed to reach the quantum limit of .therefore the practical benefit of the quantum lithography proposal turned out to be limited .however its originality and intellectual value have stimulated a number of follow - up works .particularly for , it was theoretically proven that not only faint two - photon light , but also stronger two - mode squeezed light can be used for this purpose . 
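the fringe narrowing at the heart of the quantum - lithography proposal can be stated in a few lines of code : an n00n state acquires phase n times faster than a single photon , so the n - photon detection pattern oscillates n times faster . the sketch below is only a schematic of this scaling ( the function and the chosen phase are mine ) , not a model of the photo - polymer response .

```python
import numpy as np

def noon_fringe(n, phi):
    # n-photon detection rate after a mach-zehnder fed with an n00n state:
    # the state picks up the phase n*phi, so the fringes go as cos(n*phi)
    return 0.5 * (1 + np.cos(n * phi))

# at phi = pi the ordinary (n = 1) fringe sits at a minimum while the n = 2
# fringe has already completed a full period: the fringe spacing is divided by n
print(noon_fringe(1, np.pi), noon_fringe(2, np.pi))   # -> ~0.0 and 1.0
```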
on the experimental side, we would like to acknowledge the success in driving coherent and incoherent two - photon processes with spdc light .discerning the objects features with resolution exceeding the rayleigh diffraction limit is possible in setups similar to two - photon _ ghost imaging _setup but relying on multi - photon entangled states such as ghz or w states .alternatively , axial resolution can be enhanced by a factor of two realizing a quantum version of optical coherence tomography measurement with two - photon light . in this case onemakes use of the signal - idler intensity correlation time being much shorter that their individual coherence times . and designate a formerly mode modified by split phase plates .dashed lines show 532 nm light ; solid lines show 1064 nm light .reprinted from .,scaledwidth=80.0% ] the resolution of small lateral displacement measurements is limited by the shot noise to the value where is the gaussian width of a probe beam focused onto a split - field detector , and is the number of detected photons .it has been shown that by composing the probe beam out of coherent and squeezed optical beams as shown in fig .[ fig : point ] , the shot - noise resolution limit ( [ d0 ] ) can be improved by approximately a factor of two .[ [ quantum - information - processing . ] ] quantum information processing .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the concept of quantum information processing , or quantum computing , was conceived in 1982 by richard feynman . at the heart of this conceptis a notion that a quantum superposition principle can be utilized to implement a large number of computations in parallel . to implement such quantum parallelism , logic operations of a quantum computermust be performed by quantum systems .it should be noted , however , that in order to access the results of this parallel computation one has to perform a measurement which is equivalent to a projection onto just one result .therefore , one benefits from this parallelism only if the single measurement already provides an advantage , such as in the shor algorithm where a quantum interference phenomenon is utilized to find prime factors of a large number faster that it is possible by the classical search .note that the _ classical _ optical interference can be used in a similar way . instead of encoding information in bits that take on binary values 0 or 1 ,these systems encode it in _ qubits _ , allowing any superposition of the binary values .a qubit may be implemented in various two - level physical systems , such as an atom , ion , spin-1/2 particle , and many others . to distinguish such systems from photons, we will call them _massive_. 
polarization of a photon , as well as its localization in two spatial or frequency modes , also can be used as a qubit .the advantage of optical qubits over massive ones is slow decoherence of the former : photons hardly interact with ambient electromagnetic or gravity fields .this advantage however turns into a disadvantage when it comes to implementation of quantum logical operations that require photon - photon interaction .such interaction can be facilitated using optical nonlinearity at the single photon level .several approaches to building quantum gates based on nonlinear response of optical media have been theoretically discussed .one of these approaches is the quantum zeno blocade which can be realized based on two - photon absorption , electromagnetically induced transparancy , or on the second - order polarizability of optical nonlinear crystals enhanced by high- cavities .several experimental demonstrations of these techniques have been performed with multi - photon ( typically , weak coherent ) states , however functional photonic quantum gates so far remain beyond the reach .this difficulty has lead to the concept of quantum network , where transmission of information is performed by photonic qubits , while its processing is performed by massive qubits .various types of massive qubits have been successfully coupled to single photons , including atoms and quantum dots .nitrogen vacancy centers in diamonds have been also proposed for this application .building a quantum network requires non - classical light sources whose central wavelength and optical bandwidth are compatible with the massive qubits . in the most straightforward way this can be achieved by using the same atomic transition for the generation of non - classical light ( see discussion in section [ sec : atoms ] ) , and then for transferring quantum information to atomic qubits .alternatively , narrow - line parametric light sources discussed in section [ sec : spdc ] can be used . note that while generating narrow - band squeezed light or squeezed vacuum is relatively easy by opertating an opo source above the threshold , generation of equally narrow - band photon pairs below the threshold is more difficult , as it requires tunable resonators with very high -factor . for many quantum information applicationssuch sources also need to be strictly single - mode , which has been recently achieved using whispering gallery mode ( wgm ) and waveguide resonators . using massive qubitsoften requires low temperatures , very low pressure vacuum , thorough shielding of ambient fields , and entails other serious technical complications .the concept of _ linear _ quantum computing strives to avoid these complications .there are no massive qubits in a linear quantum computer , but there are also no photon - photon interactions .this interaction is replaced by a measurement process followed by feed - forward to or post - selection of the remaining photons .this procedure is certainly nonlinear ( and even non - unitary ) , and can be used to implement quantum logic operations over a sub - space of a larger hilbert space . in higher - dimensional hilbert spaces photonic _ qutrits _ and even _ ququarts _ can be introduced as useful notions . 
as an example , a photon qutrit encoded in polarization has three basis states : , , and .a ququart basis consists of four states and can be easily envision if we further lift the frequency degeneracy , or couple the photon pair into different spatial modes .usually these states are discussed in the context of quantum secure communications using alphabets with higher than binary basis .transmission of information by photonic qubits presents sufficient interest by itself , besides being a quantum computer building block .the fundamental property of a qubit is that it can not be cloned , or duplicated .such cloning would be incompatible with the linearity of quantum mechanics .therefore , the information encoded in qubits can be read only once ; in other words , it can not be covertly intercepted .this property of qubits served as a foundation for the original quantum key distribution ( qkd ) protocol bb84 , and for numerous and diverse qkd protocols that emerged later .qkd is the least demanding application of non - classical light reviewed in this chapter , and the only quantum optics application known to us that has been relatively broadly commercialized to - date .discrete variables qkd can be successfully implemented even with weak coherent light , e.g. strongly attenuated laser pulses , which adequately approximate single - photon states .similarly , non - orthogonal coherent states of light can be successfully used in continuous variables qkd .coherent states are pure quantum states unlike thermal states and thus qualify as non - classical states ( see appendix for discussion ) . for some quantum protocolscoherent states suffice , for others they do not .furthermore , it is often argued that much of their properties can be described by classical models .for all these reasons we concentrate the discussion on states which are more non - classical than coherent states .some proposed quantum information protocols relying on non - classical light fall between the qkd and quantum computing in terms of architecture and complexity .one of such protocols is the _quantum commitment_. it is designed to allow alice to fix ( commit " ) an observable value in such a way that bob can not learn this value until alice reveals it .alice , on the other hand , can not change her commitment after it has been made . originally proposed in 1988 , this protocol has been experimentally demonstrated in 2013 with an added benefit of closing a loophole present in the original proposal .other protocols proposed for implementing quantum secret sharing among multiple parties and quantum digital signatures may be used in the context of quantum money , quantum voting , and other visionary applications . 
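since bb84 is invoked above only by name , a toy sketch of its sifting step may help ; the uniformly random basis choices , the noiseless lossless channel and the absence of any eavesdropping test are simplifying assumptions of mine , not part of the protocol descriptions cited in the text .

```python
import secrets

# toy bb84 sifting round over a perfect channel, no eavesdropping test
n = 16
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0: rectilinear, 1: diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# when the bases coincide bob recovers alice's bit, otherwise his outcome is random
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# public comparison of bases keeps only the matching positions (about half of them)
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
key_alice = [alice_bits[i] for i in keep]
key_bob   = [bob_bits[i] for i in keep]
print(len(keep), "sifted bits, keys agree:", key_alice == key_bob)
```

in a real implementation the sifted key would still undergo error estimation , error correction and privacy amplification before use .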
in the following section we review the sources of non - classical light , which is the main objective of this chapter .[ [ atoms ] ] atoms + + + + + the early interest in non - classical , and in particular entangled , optical states was stimulated by the quest for experimental violations of bell s inequalities .the first successful and statistically reliable violation was reported in 1972 by freedman and clauser .they used a cascade two - photon transition in calcium beam producing a polarization - entangled pair of blue and green photons , and performed a polarization - based bell measurement which has shown a six standard deviations violation .therefore the conceptually more advanced two - photon entanglement was observed with atomic sources prior to a more straightforward antibunching effect .photon antibunching in resonance fluorescence from a coherently driven two - level atom is easy to understand .once the atom emits a photon , it occupies the ground state and can not emit another photon for a period of time of the order of the excited state lifetime ( in the weak excitation regime ) , until the interaction dynamics drives the atom back to the excited state . hence the poissonian statistics of the coherent pump photons is converted to sub - poissonian statistics of fluorescence photons , leading to a state whose photon - number fluctuation is reduced below the shot noise limit typical for coherent light , such as shown in fig . [fig : pq](f ) .antibunching in resonance atomic fluorescence was predicted back in 1976 by carmichael and walls and observed in 1977 - 78 by two different research groups using beams of sodium atoms , see the review for details .more recently , four - wave mixing in a rubidium vapor cell was used to produce and characterize heralded fock - basis qubits .a sodium atomic beam passing through an optical cavity was also used for the first demonstration of squeezed light in 1985 . soon after that the first magneto - optical traps were implemented .they allowed to suppress the thermal motion of atoms and - associated with it - the dephasing , which increased the observed squeezing form 0.3 db to 2.2 db .even stronger was the two - mode squeezing observed by seeding one or both of these modes with weak coherent light . in these experimentsthe squeezing was measured to be 3.5 db ( 8.1 db corrected for losses ) and 3 db ( over 3.5 db corrected for losses ) , respectively .this technique has a potential for tailoring the spatial structure of multimode non - classical light , e.g. generating twin beams carrying orbital angular momentum .a single pump laser was used in experiments . 
to suppress the effect of thermal motion ,the four - wave mixing process can be driven by two _ different _ , conterpropagating , laser beams in a configuration typical for saturation absorption spectroscopy .this technique has allowed for generation of very high flux of photon pairs with controllable waveform , see and references therein .such pairs can be used for heralded preparation of nearly single - photon pulses .moreover , the gound - state coherence in cold atomic ensembles is sufficiently long - lived to allow the read " laser pulse to arrive with a substantial delay after the write " pulse , which allows one to control the delay between the emitted heralding and the heralded photons .a controlled delay is in fact just a special case of temporal shaping of the biphoton correlation function , which can be achieved with the read " pulse profile manipulation .configuration of transitions involved in the four - wave mixing process generating non - classical light and the experimental diagram . reprinted from .,scaledwidth=90.0% ]quantum optics researchers favored alkali atomic gases because of their strong resonant kerr response .a typical energy diagram of this process , called a double- configuration , is shown in fig .[ fig : levels ] .this diagram is drawn specifically for d1 and d2 manifolds , but its analogues can be realized in various atomic species .strong pump and control optical fields have frequencies and , corresponding to d2 and d1 transition wavelengths , respectively . generated quantum ( two mode squeezed ) light has the stokes and anti - stokes frequencies and , respectively .the energy and momentum conservation requires and , where the approximations arise from neglecting the momentum recoil and kinetic energy that maybe carried away by the atom .note that the momentum conservation allows for a very broad angular spectrum of the emitted light in the case of counter - propagating ( ) beams .another important feature of atomic kerr media is that its response may be sensitive to light polarization .this can lead to nonlinear phenomena such as polarization self - rotation ( see and references therein ) , where one polarization is amplified while the orthogonal polarization is deamplified .if the input light is polarized linearly or circularly , the vacuum field in the orthogonal polarization becomes squeezed .coupling atomic media with optical cavities opens up the field of cavity quantum electrodynamics ( cqed ) , rich with non - classical phenomena .even a single atom strongly interacting with an optical mode can generate squeezed light .it can also be used to implement a photonic blockade , leading to a photon turnstile capable of generating single photons on demand . 
in terms of quantum systems engineering , this can be considered as a next step after delayed heralded single - photon generation , and two steps after single photons generated at random times .a real or artificial atom strongly coupled to a cavity mode is also predicted to be capable of generating the n - photon bundles " , arguably equivalent to flying fock states .once generated , the non - classical states need to be routed in a decoherence - free manner towards the information - processing nodes or to detectors .the single photon routing controlled by other single - photon states would enable quantum logic operations on photons , and make an optical quantum computer possible .serious efforts have been made in this direction .an optical transistor was reported , in which a single control photon induced a ground - state coherence in a cold cesium cloud , affecting the transmission of a dealyed probe pulse . in amore recent work , a single - photon switch based on a single rubidium atom interacting with the evanescent field of a fused silica microsphere resonator was demonstrated .this system was shown capable of switching from a high reflection ( 65% ) to a high transmission ( 90% ) state triggered by as few as three control photons on average ( 1.5 photons , if correction for linear losses is made ) .[ [ artificial - atoms ] ] artificial atoms + + + + + + + + + + + + + + + + discrete level spectra are available not only in atoms but also in solid - state nanosystems , such as quantum dots or nitrogen vacancy ( nv ) centers in diamond . because of this property such systems are often referred to as artificial atoms " .they too have been actively utilized as sources of non - classical light .the physical mechanism regulating the photons statistics of an aritificial atom emission is very similar to that of real atoms .while an optical photon absorption by an atom causes an electron transition from the ground to an excited state , in quantum dot it causes generation of an electron - hole pair , called an exciton .the recombination of this exciton is responsible for the resonance fluorescence of the quantum dot .applications of this process for single - photon sources are reviewed in .such sources often require liquid helium cooling , although the first demonstration of non - classical light emitted from a quantum dot was done in year 2000 by michler __ at room temperature . in this experimenta single cdse / zns quantum dot was driven by a resonant constant wave ( cw ) pump laser .its fluorescence had sub - poissonian photon - number distribution with .more recent quantum dot based sources also can operate at room temperature exhibiting non - classical anti - bunched photon statistics in pulsed regime , although their anti - bunching is significantly stronger at liquid helium temperatures .in contrast with quantum dots , nv centers in diamond provide the most stable quantum emitters at room temperature . in , a cw emission from a single nv center in a diamond nanocrystalwas coupled to a 4.84 m in diameter polystyrene microspherical resonator .the non - classical character of the single quantum emitter was verified by measuring , while the coupling to the wgms was evident from a discrete spectrum of the emission .nv - center based pulsed single - photon sources also operate at room temperature reaching nearly the same anti - bunching figure .the power fluctuation measurements carried out in allowed to probe the wigner function only in the radial direction ( c.f . 
A more advanced measurement, also providing access to the orthogonal quadratures, was carried out by Schulte et al., who used a local oscillator with variable phase in a heterodyne setup. They also studied the amount of squeezing as a function of the excitation power.

Just as with real atoms, coupling quantum dots to microcavities provides access to the benefits of cQED. One of these benefits is the improved collection efficiency. Because of the high Purcell factor of the microcavities, quantum dot fluorescence is preferentially radiated into the cavity modes and can be conveniently collected. Micro-pillar structures have been used for this purpose. A micro-pillar resonator, shown in fig. [fig:pillar](a), measures about a micron in diameter and five microns in height. It is complete with Bragg mirrors at both ends, each consisting of approximately 30 pairs of AlAs/GaAs layers. A layer of InGaAs quantum dots is grown in the central anti-node of the cavity. The structure is cooled to 6-40 K and pumped by a pulsed mode-locked laser. Photons collected from the cavity were antibunched. A different design, shown in fig. [fig:pillar](b), uses a layered structure where a pillar cavity is defined by cutting trenches of various shapes. This shape allows one to control the polarization dispersion of the resonator and to generate single photons with a desired polarization. Quantum dots have been coupled not only to pillar or planar cavities, but also to WGM resonators. For example, the strong coupling regime was achieved with single GaAs and InAs quantum dots.

Instead of a cavity, a quantum dot can be coupled to a single-mode on-chip waveguide. This approach not only allows the generation of strongly non-classical light, but also leverages scalable on-chip photonic technology. Operating these systems in pulsed mode gives them the much desired "single photon on demand" quality.

Quantum dots can support not only single excitons, but also biexcitons. Recombination of a biexciton leads to the emission of a photon pair, similarly to the two-photon emission from an atom in the Freedman and Clauser experiment. In the experiment carried out by Akopian et al., this process can go through two different intermediate states, realizing two quantum-mechanical paths for biexcitonic recombination. The photons of a pair emitted along one path are both polarized vertically; along the other, horizontally. Thus recombination of such a biexciton should create an optical Bell state introduced in ([bellstates]), provided that the polarization terms are not "tagged" by either the final (ground) state of the quantum dot or the optical wavelength. It turned out that they were in fact distinguishable based on a wavelength measurement. However, the recombination process is broadband enough to provide a significant wavelength overlap. Akopian et al. leveraged this circumstance to erase the wavelength distinguishability by spectral filtering, and realized a polarization-entangled state.
With this state they demonstrated a violation of Bell's inequality by more than three standard deviations. Generation of entangled photon pairs by quantum dots is unique in that the pairs themselves have sub-Poissonian statistics, which makes it possible to generate single photon pairs using a pulsed pump. This aspect of quantum dot entangled light sources was highlighted by Young et al., who demonstrated the triggered emission of polarization-entangled photon pairs from the biexciton cascade of a single InAs quantum dot embedded in a GaAs/AlAs planar microcavity. They also showed that quantum dot engineering can reduce the energy gap between the intermediate states, minimizing or removing the need for thorough spectral filtering. Deterministically exciting biexcitons by an optical π-pulse, Müller et al. have demonstrated true "polarization entangled photon pair on demand" operation with an unprecedented antibunching parameter and high entanglement fidelity.

Spontaneous parametric down conversion (SPDC), optical parametric amplification (OPA) and oscillation (OPO) are among the most important sources of non-classical light. All these closely related processes are enabled by the second-order nonlinear response of non-centrosymmetric optical crystals, characterized by the quadratic susceptibility. This process, originally called parametric scattering or parametric fluorescence, was first observed in 1965 and widely studied later. From the quantum point of view, i.e. in terms of photon pair emission, this process was first discussed in 1969 by Klyshko. One year later, the "simultaneity" of these photons (called the _signal_ and _idler_) was observed by Burnham and Weinberg. We now know that the reported "simultaneity" reflected the resolution of the time measurements rather than the physical nature of the biphoton wavefunction. The signal-idler correlation time is finite, and is closely related to their optical spectra and the group velocity dispersion (GVD) of the parametric nonlinear crystal. The temporal correlation function can take on different forms for different types of phase matching, with its width ranging over six orders of magnitude: from 14 femtoseconds for free-space SPDC in a very short crystal to 10-30 nanoseconds for SPDC in high-finesse optical resonators. Shaping this correlation function is an important problem in quantum communications. With SPDC lacking the photon-storage capability available to the atomic sources, this problem is quite challenging. One possible approach is interferometric tailoring of the SPDC spectra using two or possibly more coherently pumped crystals. Another approach is based on using dispersive media. There is also a proposal for leveraging _temporal ghost imaging_, which is similar to spatial ghost imaging but relies on temporal rather than spatial masks (implemented e.g. by electro-optical modulators).

Parametric down conversion has been described in great detail in many books and papers, which spares us the necessity of reproducing all the analysis and derivations here. Let us just list the most fundamental facts. Energy and momentum conservation for the pump, signal and idler photons impose the phase matching conditions ([pmom]) and ([pmk]), where the frequencies are related to the wave numbers by the dispersion relations. It is the combination of these three constraints that is responsible for the free-space SPDC light appearing as a set of colorful rings.
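The displayed forms of these conditions did not survive in the text; a standard reconstruction, consistent with the labels ([pmom]) and ([pmk]) used below and with the dispersion relation, is

\[
\omega_p = \omega_s + \omega_i \quad \mathrm{([pmom])}, \qquad
\mathbf{k}_p = \mathbf{k}_s + \mathbf{k}_i \quad \mathrm{([pmk])}, \qquad
|\mathbf{k}_j| = \frac{n(\omega_j)\,\omega_j}{c} .
\]

For a fixed pump, a given signal wavelength can satisfy all three constraints only on a cone around the pump direction, which is what produces the rings.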
In most materials, normal chromatic dispersion of the refractive index prohibits parametric phase matching by making ([pmom]) and ([pmk]) incompatible. However, it can be compensated by polarization dispersion in birefringent materials. For example, the pump polarization can be made orthogonal to both the signal and idler polarizations, which is known as the type-I PDC configuration. Alternatively, either the signal or the idler polarization can be parallel to that of the pump in type-II PDC. Type-0 PDC, when all three fields are polarized in the same plane, can be attained by using various periodic poling techniques, which modify ([pmk]) by adding or subtracting a multiple of the poling structure wave vector K = (2π/Λ)ẑ, where Λ is the poling period and ẑ is its direction.

A pair of coupled signal and idler modes with photon annihilation operators a_s and a_i, respectively, is governed by the evolution operator ([uparam]). This is an approximation assuming that the pump field can be treated classically, i.e. that one can neglect the annihilation of one pump photon for every creation of a signal/idler photon pair. The function ξ in ([uparam]) describes the parametric interaction, where the coupling constant is the scalar product of the nonlinear susceptibility tensor with the unit vectors of the interacting fields. The overlap integral ([ovlp]) is calculated for the normalized mode eigenfunctions and the pump field envelope. This integral enforces the momentum conservation ([pmk]) for plane-wave modes. The time integral is called the parametric gain G; here the interaction time is determined by the crystal length. The effective interaction length, however, can be shorter than the crystal length for short pump pulses, when a significant longitudinal walk-off between the pump and parametric pulses occurs due to GVD. Note that, depending on the pump phase, the gain may take on negative values, leading to de-amplification.

Spontaneous parametric down conversion

Vacuum-seeded parametric down conversion, or SPDC, is probably the most widely used non-classical light source, made famous by Bell's inequality violations, early QKD demonstrations, quantum teleportation, and a number of other remarkable achievements PDC has made possible. This process has been realized in low- and high-gain regimes, in free space, in single transverse mode waveguides, and in optical resonators. In the low-gain regime, this process is adequately described by expanding the evolution operator ([uparam]) into a power series. The leading term of the expansion represents the two-mode vacuum, the next term is a signal-idler photon pair, the third term represents two such pairs, and so on. The amplitudes of these terms form the same power series as for a thermally populated mode, which determines the peak value of the Glauber correlation function for a weakly populated SPDC mode: g^(2)(0) = 2. This also allows one to introduce an _effective temperature_ for the SPDC emission.

Free-space SPDC provides a multimode source of spatially-entangled biphotons, which can be used in the two-photon imaging discussed in section [sec:aps]. This type of entanglement arises from the momentum conservation ([pmk]). Indeed, even with strictly constrained (e.g., by band-pass filters) optical wavelengths, there are many indistinguishable ways the transverse momentum conservation can be achieved.
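The displayed equations referred to in this subsection as ([uparam]), ([omega]) and ([ovlp]) were also lost; a standard form consistent with the surrounding description (a reconstruction, with mode operators, gain G and pump phase φ_p labeled as in the text; prefactor and normalization conventions vary between books) is

\[
\hat U = \exp\!\big(\xi\,\hat a_s^\dagger \hat a_i^\dagger - \xi^*\,\hat a_s \hat a_i\big), \qquad
\xi = G\,e^{i\phi_p} \;\propto\; \chi^{(2)}_{\rm eff} \int\! dt\; \Omega\, E_p(t),
\]
\[
\Omega = \int\! d^3r\; \psi_s^*(\mathbf r)\, \psi_i^*(\mathbf r)\, u_p(\mathbf r),
\]

where ψ_s, ψ_i are the normalized signal and idler mode functions and u_p is the pump spatial envelope. For plane-wave modes the overlap integral Ω reduces to a delta function of k_p - k_s - k_i, which is how it enforces ([pmk]).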
If, on the other hand, one selects a single transverse signal mode, and the idler mode paired with it, one obtains a frequency-entangled state ([freqent]). Type-II SPDC offers an interesting configuration wherein the same pair of spatial modes can be populated by orthogonally polarized signal and idler in both possible permutations, leading to a polarization-entangled state such as in ([bellstates]). A closer look shows that this state is also frequency-entangled as in ([freqent]). States that are entangled in more than one degree of freedom at once are called hyperentangled. Polarization-entangled photon pairs can also be generated in type-I SPDC, in a clever configuration of two crystals whose optical axis planes are perpendicular to each other. This configuration provides even more flexibility than the polarization entanglement generation in a type-II crystal: by varying the phase between the pump field projections on the two crystals' axes (e.g., varying the pump polarization ellipticity), as well as by manipulating the polarization and phase of the signal and idler photons between the crystals, one can generate any polarization-entangled state in the Hilbert space spanned by the Bell-state basis ([bellstates]), as well as some mixed states.

Parameters of SPDC biphoton sources such as their wavelengths, bandwidth and pair production rate may vary considerably. Because of the accidental generation of multiple photon pairs, the ultra-high pair rate associated with a large gain G is not always desirable. It is often more important to minimize the chance of accidentally generating a second pair during the measurement. In the limit of very fast measurements it is also important to generate sufficiently few (much less than one on average) photons per _coherence time_, i.e. per longitudinal mode. If this number exceeds unity, then the gain is of order unity or larger as well, and the power series expansion of the evolution operator does not converge. This means that the already generated parametric photons make a stronger contribution to the further PDC process than the vacuum photons, i.e. we enter the regime of parametric superluminescence. This is accompanied by a transition from thermal (Gaussian) photon number statistics to Poissonian statistics, typical for laser light.

However, the parametric light remains non-classical even in the high-gain regime. When the signal and idler are distinguishable, the light is two-mode squeezed, which can be established by measuring the difference of the photocurrents of the signal and idler detectors and finding it below the shot noise level. When the signal and idler are indistinguishable, we have the squeezed vacuum state such as shown in fig. [fig:pq](d), whose photon-number basis expansion consists of only even terms. Let us recall that if the signal and idler have the same frequency and the distinguishability is only based on polarization or emission direction, a conversion between two-mode squeezing and squeezed vacuum is trivially accomplished with a polarizing or non-polarizing beam splitter, respectively. In these cases the terms "two-mode squeezing", "squeezed vacuum" and even "two-mode squeezed vacuum" are often used interchangeably. The parametric gain determines the mean photon number in a mode as well as the squeezing parameters. We have used these relations in calculating the Wigner function shape in fig. [fig:pq](d).
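The explicit relations alluded to in the last two sentences were lost from the text; in the notation introduced above (gain G, pump phase φ_p), the standard expressions are (a reconstruction, with the vacuum quadrature variance taken as 1/4; conventions vary):

\[
\hat U\,|0,0\rangle = \frac{1}{\cosh G}\sum_{n=0}^{\infty}\big(e^{i\phi_p}\tanh G\big)^n\,|n\rangle_s|n\rangle_i ,
\qquad \bar n_s = \bar n_i = \sinh^2 G ,
\]

and, in the degenerate (indistinguishable) case, the quadrature variances of the squeezed vacuum are e^{∓2G}/4. Each beam taken alone therefore has thermal statistics with mean sinh² G, while the gain acts exponentially: G = 10, for instance, would give roughly 1.2 × 10^8 photons per mode.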
Even in strongly pumped parametric processes, G is typically less than ten. The record value of G is reported in . But let us not be deceived by these small numbers. Unlike the gain of a common amplifier, parametric gain is exponential, see ([uparam]), so SPDC with a gain of this order produces on the order of 10^8 photons per mode, as estimated above. Therefore multimode light generated in parametric down conversion can be quite strong in terms of optical power, see fig. [fig:bright], but still non-classical.

Multimode SPDC light is useful for imaging and similar applications. Here the number of modes can be compared to the number of pixels, and directly translates into the spatial resolution. Single-mode SPDC, on the other hand, is often desirable for quantum communication applications, where the presence of multiple mutually incoherent modes is equivalent to a loss of phase information, or decoherence. Spatial and frequency filtering can be employed to purify the SPDC mode structure, but this approach is not power-efficient if the initial source has too many modes. The number of excited transverse modes can be reduced, even to one, by using waveguides instead of bulk crystals. This provides a dramatic benefit over the filtering approach in terms of the useful photon pair rate. For example, it is possible to generate and collect about 100,000 photon pairs per second with only 0.1 mW of pump. The number of frequency or temporal modes can be controlled by matching the SPDC linewidth, determined by the source length, geometry and GVD, with the transform-limited spectrum of the pump pulse. This can be done e.g. by adjusting the pump pulse duration. Combining these two techniques, nearly single-mode parametric sources can be realized. Let us also mention that the birefringent properties of parametric crystals can make the gain so selective that in the superluminescence regime even free-space parametric sources can approach single-mode operation.

Multipartite multiphoton states can be prepared in the SPDC process by combining two or more identical coherently pumped sources, or by splitting multiphoton states from a single source. These experiments are difficult because of the thermal statistics of SPDC pairs. Although higher photon-number states are less likely to emerge, they are more likely to cause a detection event with imperfect (η < 1) detectors. Suppressing such events requires limiting the overall photon flux, which leads to a very low data rate, typically of the order of 1/second for four-photon measurements and 1/hour for six-photon measurements.

Optical parametric amplification

If a degenerate or non-degenerate parametric process has non-vacuum inputs in the signal and idler modes, it may amplify or de-amplify the input beam(s) depending on the relation between the sum of their phases and the phase of the pump, which determines the sign of the gain. If one of the inputs, e.g. the idler, is in the vacuum state, for which the phase is not defined, then the signal will always be amplified.
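This phase dependence can be made explicit. For a degenerate OPA, a standard input-output relation consistent with ([uparam]) (a reconstruction, not quoted from the text; factor-of-two conventions for the degenerate case vary) reads

\[
\hat a_{\rm out} = \hat a\,\cosh G + e^{i\phi_p}\,\hat a^\dagger\,\sinh G ,
\]

so a coherent seed of amplitude |α|e^{iφ} leaves with amplitude |α|\,(\cosh G + e^{i(\phi_p - 2\varphi)}\sinh G)\,e^{i\varphi}: it is amplified by e^{G} when 2φ - φ_p = 0 and de-amplified by e^{-G} when 2φ - φ_p = π, while a vacuum seed acquires no displacement and exits as squeezed vacuum.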
On the phase space diagram this will appear as both displacement and quadrature squeezing. Like SPDC, OPA is a common technique for generating non-classical light. This technique is most suitable for squeezing coherent light pulses seeding the OPA. A 2 dB and then 5.8 dB squeezing of 270 ns long pulses in a degenerate type-II parametric amplifier was demonstrated. Squeezed vacuum pulses a thousand times shorter (250 fs, 1.7 dB squeezing) were generated in a Sagnac interferometer configuration using a periodically poled lithium niobate crystal. Continuous-wave coherent states can also be used for seeding the OPAs, which allows for precise control of the local oscillator phase. This technique has been used to generate quadrature-squeezed light by injecting fundamental laser light into a degenerate OPA waveguide made from periodically poled KTP and pumped by the second harmonic of the fundamental laser light, reaching 2.2 dB of squeezing. Realizing a similar process in a monolithic cavity with highly reflective coatings on the parametric crystal facets, 6 dB of squeezing has been reached. Using a type-II OPA in a bow-tie cavity yielded 3.6 dB of polarization squeezing, which corresponds to a reduction of the quantum uncertainty of the observables associated with the Stokes operators ([stokesops]).

Often the OPA seed signal itself is generated in another SPDC process taking place in a similar crystal and pumped by the same pump. This configuration is sometimes called a nonlinear interferometer. We have already encountered it when discussing the spectroscopy applications in section [sec:aps]. The high mode selectivity of such interferometers has made it possible to implement a nearly single-mode squeezed vacuum source without a significant decrease in the output brightness. It is also possible to cascade more than two OPAs. A system of three OPAs reported in has boosted the two-mode squeezing from 5.3 dB measured after the first OPA to 8.1 dB after the third one.

Parametric processes in cavities

An amplifier can be turned into an oscillator by providing positive feedback, e.g. by placing the amplifying medium into an optical cavity. Such a setup was used in the first demonstration of parametric squeezing in 1986 by Wu et al. In this experiment, frequency-doubled 1064 nm laser light pumped a degenerate OPO system consisting of a lithium niobate crystal inside a Fabry-Perot resonator. The same fundamental laser light was used as a local oscillator in homodyne detection of the squeezed vacuum. 3.5 dB of squeezing was measured. In 1992 this result was slightly improved to 3.8 dB with a bow-tie cavity. This configuration was further improved by using periodically poled KTP crystals, which reduced the linear and pump-induced absorption and eliminated the transverse walk-off. 7.2 dB of squeezing was demonstrated in 2006, and 9 dB in 2007. Thorough stabilization of the cavity allowed generation of a narrow-band, 5 dB squeezed vacuum matching the rubidium D1 line. Using a monolithic cavity boosted the squeezing to 10 dB in 2008 and to 12.7 dB in 2010. Most recently, a new record, 15 dB of squeezing, was reported. These experiments were carried out below the OPO threshold. This means that the mean photon number per mode was below unity, or in other words, the process was predominantly vacuum-seeded, in contrast to the case of self-sustained oscillations.
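The squeezing levels quoted in this section translate into noise-variance ratios via S_dB = -10 log10(V_sq / V_vac). A minimal illustrative sketch of that conversion (not code from any of the cited works; the listed dB values simply repeat numbers quoted above):

```python
import math

def squeezing_db_to_variance_ratio(s_db: float) -> float:
    """Return V_squeezed / V_vacuum for a quoted squeezing level in dB."""
    return 10 ** (-s_db / 10.0)

def min_squeezing_parameter(s_db: float) -> float:
    """Smallest squeezing parameter r compatible with s_db in the lossless case,
    using V_squeezed / V_vacuum = exp(-2 r)."""
    return 0.5 * math.log(10 ** (s_db / 10.0))

for s_db in (3.5, 9.0, 12.7, 15.0):
    print(f"{s_db:5.1f} dB -> variance ratio {squeezing_db_to_variance_ratio(s_db):.3f}, "
          f"r >= {min_squeezing_parameter(s_db):.2f}")
```

The 15 dB record thus corresponds to a quadrature noise variance roughly 30 times below the vacuum level.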
In this sense a sub-threshold OPO, being predominantly vacuum-seeded, can be compared to a very long crystal in an SPDC experiment. By contrast, operating an OPO _above_ the threshold turns it into a laser. This is not an ordinary classical laser, however. A non-degenerate OPO laser emits two beams that are quantum-correlated, or two-mode squeezed. This was demonstrated as early as 1987 by Heidmann et al., who used a type-II OPO to generate a few milliwatts in each of the near-degenerate signal and idler beams. Within a few years the same approach yielded 8.5 dB of two-mode squeezing, which remained the squeezing world record perhaps for the longest time.

The photon number correlation between the signal and idler beams can be used to prepare sub-Poissonian light in either one of these beams. This was demonstrated in 1988 by Tapster et al., who detected the fluctuations of the signal beam power emitted in a type-I SPDC process from a KDP crystal in a frequency-degenerate but non-collinear configuration, and fed them back to the pump power, thereby achieving photon-number squeezing in the idler beam. A variation of this experiment was performed later with a sub-threshold OPO, in which case the signal measurement was _fed forward_ to a fast intensity modulator placed in the idler beam. An extensive theoretical analysis of both feedforward and feedback techniques applied to preparing sub-Poissonian light in PDC can also be found in . In section [sec:lasers] we will see how both of these techniques can be applied to other types of lasers to generate non-classical light. This approach received an interesting development in 2003 when, instead of actively using the signal power fluctuations in a feedback or feedforward loop, Laurat et al. used them for conditioning the signal-idler squeezing measurement. Only those measurements were retained for which the fluctuations were below a certain threshold. Thereby a continuous-variable post-selective measurement was implemented, making it possible to observe 7.5 dB of squeezing.

Discussing the quantum information applications of non-classical light, we have mentioned the importance of making the source narrow-band enough to match the optical transition widths in gas-phase ensemble quantum memories, often implemented with atoms or ions. An OPO provides such an opportunity. Above the threshold, its line can be considerably narrower than the cold cavity linewidth due to the Schawlow-Townes effect. Thus even with modest cavities OPO light can match the narrow atomic transitions. This approach was used to observe spin squeezing of a cold atomic ensemble induced by interaction with squeezed vacuum. Later it was shown that such a spin-squeezed atomic state can regenerate the squeezed vacuum, thereby verifying its storage. It is more difficult to achieve narrow-line OPO operation below the threshold. Usually it requires external high-Q filter cavities or post-selection techniques that considerably reduce the signal rate, as well as introduce inevitable losses at the edges of the filter windows. It would be desirable to generate photon pairs directly into a single or a few easily separated modes. This became possible by using WGM micro-resonators.
In WGM resonators light is guided along a smooth optical surface of rotation by continuous total internal reflection, similarly to how sound is guided in their namesake acoustical analogues. WGM resonators defy the postulate of light propagating in a straight line in the most profound way: here the light ray bends at every point. The WGM eigenfunctions inside a spherical resonator are E(r, θ, φ) = E_0 j_ℓ(kr) P_ℓ^m(cos θ) e^{imφ}, where (r, θ, φ) are the usual spherical coordinates, j_ℓ is the spherical Bessel function of order ℓ, P_ℓ^m are the associated Legendre polynomials, and E_0 is the amplitude. The eigenvalue k for a given radial mode number q is found by matching the internal Bessel and external Hankel eigenfunctions according to the boundary condition at the resonator rim. For relatively large WGM resonators with a small evanescent field, the approximation of a vanishing field at the rim yields quite accurate results. It is convenient to introduce p = ℓ - |m|, which gives the mode order in the polar direction, similarly to how q gives it in the radial direction. Intensity distributions in the fundamental and three higher-order WGMs are shown in fig. [fig:wgmr](a)-(d). Coupling of WGM resonators to external optical beams is usually done via frustrated total internal reflection, which is achieved by placing a higher-index waveguide or prism in the evanescent field of the resonator, see fig. [fig:wgmr](e).

Figure [fig:wgmr]: cross section of a WGM resonator for the fundamental mode (a) and higher-order modes (b), (c), (d); and the top view of a resonator with the coupling prism (e). Optical beams, visible inside the prism because of fluorescence, are focused at the coupling region where the total internal reflection is locally frustrated.

A more detailed discussion of WGM resonators and their properties can be found in review papers. Here we only make two comments regarding WGM resonators that are relevant to our topic. First, the quality factor of WGM resonators made from optically nonlinear crystals typically ranges from 10 to 100 million. For a resonator with a 1 mm circumference and a 1 μm wavelength this translates to a very high finesse. Limited by absorption of the material, the high Q persists within the entire transparency window of the material, which for a good optical crystal may well exceed an octave. Therefore the pump, signal and idler are all high-finesse modes, which increases the nonlinear optical conversion efficiency by a large factor compared to the same millimeter-long crystal without a cavity. This is a very strong enhancement which makes it possible to seriously discuss the prospects of doing nonlinear and quantum optics with a few or even single photons, in particular implementing optical quantum logic gates. The second note concerns the SPDC phase matching. While the formalism ([uparam]), ([omega]), ([ovlp]) still applies, the overlap integral ([ovlp]) leads to selection rules that are much less restrictive than the usual phase matching ([pmk]). In fact the angular part of this integral yields the Clebsch-Gordan coefficients, reminding us that in spherical geometry the orbital momenta are conserved, rather than the linear momenta.
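Before moving on to the radial part, the finesse mentioned in the first remark can be checked with a back-of-the-envelope estimate, F = FSR/linewidth = Qλ/(nL). The refractive index n ≈ 2 is an assumed, typical value for nonlinear optical crystals; the other numbers repeat the ones quoted above (illustrative only):

```python
# Rough finesse estimate for a crystalline WGM resonator.
c = 3.0e8            # speed of light, m/s
wavelength = 1.0e-6  # 1 um
L = 1.0e-3           # 1 mm circumference
n = 2.0              # assumed refractive index of the nonlinear crystal

for Q in (1e7, 1e8):
    nu = c / wavelength          # optical frequency
    fsr = c / (n * L)            # free spectral range
    linewidth = nu / Q           # cold-cavity linewidth
    finesse = fsr / linewidth    # equivalently Q * wavelength / (n * L)
    print(f"Q = {Q:.0e}: FSR = {fsr / 1e9:.0f} GHz, "
          f"linewidth = {linewidth / 1e6:.1f} MHz, finesse ~ {finesse:.0e}")
```

Under these assumptions the finesse falls in the 10^3-10^5 range, which is what makes the simultaneous resonant enhancement of pump, signal and idler so effective.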
The radial part of the overlap integral ([ovlp]), on the other hand, leads to no strict selection rules, but it favors certain combinations of the radial mode numbers. SPDC was observed in WGM resonators made from various optically nonlinear crystals and at various pump wavelengths, both above and below the OPO threshold, which for such resonators can be as low as several microwatts. Two-mode squeezing above the threshold was reported by Fürst et al. The emitted signal and idler wavelengths can be tuned over a very wide range, but at the same time with great precision, using a combination of temperature tuning, pump mode selection and evanescent field manipulation. Adjusting these parameters, Schunk et al. have been able to tune the signal wavelength to an atomic transition and observe fluorescence induced by single heralded photons. In this experiment both the cesium and rubidium D1 transitions were accessed using the same laser and the same resonator, with the resonator temperature changed by less than 2 degrees.

The narrow linewidth of WGMs leads to a relatively sparse spectrum. Leveraging the selection rules, this can be used for engineering a single-mode parametric light source. Strictly single-mode operation, attested to by a Glauber correlation function measurement on the signal beam, was demonstrated by Förtsch et al. with only minimal spectral filtering. In this experiment the spectral width of the pulsed pump was transform-limited to approximately 20 MHz, exceeding the signal and idler spectral widths (both equal to the resonator linewidth) by more than a factor of two. Hence even a very careful measurement of the signal frequency would not make it possible to identify its idler twin photon among the others using relation ([pmom]), and a true single-mode regime is achieved. By the same argument, single-mode operation should not be expected with a cw pump having a linewidth smaller than that of a resonator mode. However, an experiment using a sub-kHz wide cw laser pumping a WGM resonator with a several-MHz linewidth showed surprisingly few (approximately three, where it should be thousands) SPDC modes, consistent with . Note that in this experiment the "parasitic" SPDC into wrong families of signal and idler WGMs has not been filtered out. Such filtering has improved the signal-beam g^(2)(0) from 1.5 to 2 in the pulsed-light experiment, see above. Therefore the extra modes observed in are more likely to be associated with different mode families than with photon distinguishability within a single WGM. The apparent paradox is resolved if we contemplate the fact that the limited observation time prevents us, even in principle, from performing a frequency measurement of the signal photon with the resolution required to localize the idler photon within a WGM linewidth. In this respect, gating a photon-detection measurement is equivalent to pulsing the pump. In both experiments the measurement time was defined by the resolution of the instrument recording the signal-idler coincidences, 1 ns and 162 ps respectively, much too short for resolving the WGM linewidth.
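This last argument can be made quantitative with a Fourier-limit estimate (the numbers are the ones quoted above): a coincidence window Δt bounds the achievable frequency resolution at roughly

\[
\Delta\nu \gtrsim \frac{1}{2\pi\,\Delta t} \approx
\begin{cases}
160\ \mathrm{MHz}, & \Delta t = 1\ \mathrm{ns},\\[2pt]
1\ \mathrm{GHz}, & \Delta t = 162\ \mathrm{ps},
\end{cases}
\]

which is two to three orders of magnitude broader than a few-MHz WGM linewidth, so such a measurement indeed cannot localize the idler photon within a single mode.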
Closing this section, we would like to make two remarks regarding cavity-assisted nonlinear optical processes. The first one is that squeezing can be attained not only in PDC but also in other such processes. For example, both the second harmonic and the fundamental pump wavelength in frequency-doubling processes may be squeezed. But in such processes the amount of squeezing is inherently limited and most likely does not present significant practical interest. The second remark is that parametric down-conversion near degeneracy may populate multiple pairs of quantum-correlated signal and idler modes, leading to an optical comb. Such quantum-correlated optical combs may be used for creating multipartite entangled states, highly desired in many quantum information applications, e.g. in linear quantum computing. Finally, we would like to point out that the WGM resonator is not the only type of optically nonlinear monolithic resonator based on total internal reflection. An OPO based on square-shaped monolithic resonators has recently been implemented to generate 2.6 dB of vacuum squeezing.

A monochromatic wave propagating in a Kerr medium experiences self-phase modulation (SPM) that can be described by the Kerr Hamiltonian and the associated time evolution operator (a standard form is sketched below). If the nonlinear phase shift is small enough, this interaction can be approximated by a dependence of the index of refraction on the intensity, n = n_0 + n_2 I ([n2]). Relation ([n2]) is applicable to classical as well as quantum fluctuations of the intensity. Expanding e.g. a coherent state in the photon-number basis, we observe that the SPM advances a higher-number state further in phase space than a lower-number state. As a result, a characteristic shearing of the Wigner function occurs, as illustrated in fig. [fig:kerr], eventually leading to a crescent shape similar to fig. [fig:pq](f) and indicating number-phase squeezing. The direction of shearing is opposite for materials with self-focusing (n_2 > 0) and self-defocusing (n_2 < 0). Note that SPM broadens the optical spectrum, leading to the generation of frequency-shifted fields, but preserves the initial field energy. This process can also be described as degenerate four-wave mixing; in continuous-spectrum systems there is no clear boundary between these two processes.

The broad-band Kerr response in transparent dielectrics is much weaker than the resonant Kerr response in atoms, or the quadratic response in optical crystals. However, the Kerr nonlinearity in dielectrics has an important advantage: it is also present in amorphous materials such as fused silica, which can be shaped into long single-mode fibers with very low loss. This advantage allowed Shelby et al. to observe Kerr squeezing in a fiber as early as 1986, the same year as the first OPO squeezing was reported and a year after the first demonstration of squeezing in a sodium beam. They used 114 m of liquid-helium-cooled single-mode optical fiber pumped with cw 647 nm laser light.
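The Kerr Hamiltonian and evolution operator mentioned at the start of this discussion did not survive in the text; a standard single-mode form consistent with the description (a reconstruction; the nonlinear coupling χ and the factor conventions are assumptions) is

\[
\hat H_{\rm Kerr} = \frac{\hbar\chi}{2}\,(\hat a^\dagger\hat a)^2 , \qquad
\hat U(t) = \exp\!\Big[-\frac{i\chi t}{2}\,(\hat a^\dagger\hat a)^2\Big] ,
\]

so a number state |n⟩ acquires a phase proportional to n², which is precisely the number-dependent phase-space rotation that shears the Wigner function; in the mean-field limit this reduces to the intensity-dependent refractive index ([n2]).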
Reflecting the output light off a single-ended cavity, Shelby and co-workers varied the phase between the pump (also serving as the local oscillator) and the squeezed sideband to observe 0.6 dB of squeezing. Liquid helium had to be used to suppress stimulated Brillouin oscillations and spontaneous guided acoustic-wave Brillouin scattering (GAWBS), the acousto-optic phenomena presenting the main obstacles to cw Kerr squeezing in fibers.

These obstacles can be circumvented by using short pulses and high peak intensities. Because of the different power dependences of the Kerr and Brillouin responses, this effectively minimizes the latter. Bergman and Haus observed 5 dB of squeezing with 100-ps pulses propagating in a 50 m fiber-loop Sagnac interferometer. While alleviating the problem with GAWBS, short pulses bring about a difficulty of their own: GVD causes them to spread, losing the advantage of high peak power. This problem can be solved using optical solitons. Rosenbluh and Shelby detected a modest (1.1 dB) squeezing of 200-fs soliton pulses propagating at room temperature in a symmetric Sagnac interferometer made of 5 m of optical fiber. Asymmetric Sagnac interferometers were later used to produce stronger amplitude squeezing of solitons: 3.9 dB (6.0 dB corrected for losses) with 126-fs pulses, and 5.7 dB (6.2 dB corrected for losses) with 182-fs pulses. Sagnac loops are convenient because they naturally facilitate a homodyne measurement. However, detecting the photon-number squeezing in a direct measurement is also possible. This was accomplished in a unidirectionally pumped 1.5 km fiber, yielding 2.3 dB (3.7 dB corrected for losses) squeezing of 2.3-ps soliton pulses. In combination with propagation-length-dependent spectral filtering, this technique has led to even stronger (3.8 dB) squeezing of 130-fs pulses. The squeezing bandwidth in this experiment was shown to be at least 2 GHz. Even higher bandwidths are theoretically possible. Research aiming at ultra-high-bandwidth squeezing of individual multi-THz pulses is currently in progress. The benefit of squeezing solitons does not come entirely for free: solitonic propagation requires a specific input pulse shape and area, which makes the squeezing depend on the pulse energy. But stabilizing the pulse energy is a much more tractable problem than suppressing GAWBS and managing GVD. In addition, if the input energy is large enough for the given pulse parameters, the nonlinear dynamical evolution of the pulse will lead to a soliton solution.

Polarization squeezing can be prepared from the quadrature squeezing of two orthogonally polarized modes by projecting them onto a new polarization basis. Polarization-maintaining (PM) optical fibers in a Sagnac configuration were used for this purpose, producing about 1 dB of squeezing. Better results were obtained with a unidirectionally pumped 13.3 m PM fiber, in which case 130-fs soliton pulses were squeezed to 5.1 dB. This result was later improved to 6.8 dB (10.4 dB corrected for losses), but Raman scattering was found to become a limiting factor at that level. An interesting approach was taken by Margalit et al., who used _off-diagonal_ components of the χ^(3) tensor to cross-phase modulate orthogonal polarizations.
In this case linearly polarized 1-nJ, 150-fs pulses propagating unidirectionally in a non-PM fiber induced 3 dB of vacuum squeezing in the orthogonal polarization.

The invention of microstructured, hollow-core and photonic crystal fibers opened new opportunities in Kerr squeezing. In microstructured fibers, light is confined primarily in a thin solid core, which concentrates the optical field in a smaller volume and increases the Kerr interaction strength. Furthermore, the GVD in such fibers can be engineered by designing the structure around the core. Pumping a microstructured fiber near its zero GVD with 38-fs pulses, Hirosawa et al. observed a spectrally broadened optical signal with up to 4.6 dB (10.3 dB corrected for losses) squeezing for some sidebands. Another experiment observed 3.9 dB of squeezing and a reduction of excess noise, i.e. an increase in purity, as compared to standard fiber squeezing experiments. Four-wave mixing in microstructured fibers has also been used to create pulsed photon pairs at a rate rivaling the best SPDC sources.

Another opportunity lies in combining the benefits of the strong Kerr response of an atomic transition with the field confinement and GVD engineering accessible in hollow-core optical fibers, in particular those with a cross section resembling a traditional Japanese woven basket, which earned them the nickname "kagome" fibers, see fig. [fig:kagome]. In kagome fibers, light propagates mainly inside the central hollow channel, which can be filled with a Kerr medium of choice. The GVD can still be tailored by designing the fiber microstructure surrounding the channel, but it can furthermore be dynamically fine-tuned by changing the gas pressure, literally inflating the kagome fiber during the drawing process or even during the measurement. At the same time, Brillouin and Raman processes in the fiber material are virtually avoided. First results have demonstrated squeezing in fibers filled with high-pressure argon and mercury vapor. Filling kagome fibers with alkali atom vapors has been proposed and attempted, but has not yet led to success because of the chemically aggressive properties of such vapors.

Extended interaction of strongly confined optical fields can be achieved not only in fibers, but also in resonators. In contrast to waveguides, resonators have discrete spectra consisting of nearly equidistant modes. In this case the SPM, cross-phase modulation (XPM) and four-wave mixing processes are clearly distinct. All these processes play their roles in the formation of Kerr combs in crystalline WGM resonators, such as shown in fig. [fig:comb]. WGM combs have been extensively discussed recently, see e.g. and references therein. The aspect that is directly relevant to our discussion is the photon-number correlation between multiple pairs of sidebands placed symmetrically on both sides of the pump wavelength labeled in fig. [fig:comb]. This correlation arises from the degenerate four-wave mixing (or hyperparametric) process of annihilation of two pump photons and creation of a photon pair in two symmetric modes. Below the oscillation threshold this process leads to the generation of entangled photon pairs. A number of experimental demonstrations of such pairs have emerged recently using on-chip fabricated silicon microring resonators. The time-energy entanglement was proved by violating Bell's inequality in , and time-energy and polarization hyper-entanglement, also confirmed by a Bell inequality violation, has been demonstrated as well.
Above the threshold, hyperparametric conversion leads to two-mode squeezing in a multitude of mode pairs. Such squeezing was demonstrated in a microfabricated Si ring, which is not, strictly speaking, a WGM resonator, but is closely related. The free spectral range of this resonator was large enough to allow selection of a single pair of squeezed modes by spectral filtering. These modes were found to be squeezed at the level of 1.7 dB (5 dB corrected for losses). Broadband quadrature squeezing in a similar resonator has been theoretically predicted.

Closing this section, we would like to mention that the interaction of light with mechanical vibrations is not always harmful for the preparation of non-classical light, as it is in the case of GAWBS. It can be used to one's advantage. Recently, it was shown that squeezed light can be created by coupling light to a mechanical oscillator. Here the radiation pressure quantum fluctuations induce the resonator motion, which in turn imparts a phase shift to the laser light. An intensity-dependent phase shift leads to optical squeezing in close analogy to the Kerr effect. In this way squeezing of 1.7 dB was demonstrated in a bulk cavity setup containing a thin, partially transparent mechanical membrane.

Laser light is commonly believed to be the best real-world approximation of a coherent state of an optical mode. However, this is not always the case. The nonlinear response of a laser cavity can lead to sub-Poissonian statistics of the emitted light, i.e. the photon-number squeezing illustrated in fig. [fig:pq](f). To understand the physical mechanisms of intensity fluctuation suppression in lasers, consider an experiment with a vacuum tube filled with mercury vapor, carried out in 1985. In this experiment, a constant current flowing through the tube caused fluorescence with photon-rate fluctuations below the shot noise. While the electrons emitted from the cathode have Poissonian statistics, their flow through the vacuum tube is regulated by both the anode potential and the space charge of the electron flow. If the current increases, so does the negative space charge, which leads to current fluctuation suppression. In other words, the space charge acts as a compressible buffer, smoothing out these fluctuations below the classical limit, which is reflected in the emitted photon statistics. This is the same mechanism that allowed Schottky and Spenke to observe a sub-shot-noise electron current in a vacuum tube in 1937. A similar mechanism is present in semiconductor lasers operating in the constant-current (but not the constant-voltage) regime, where the junction voltage provides a negative feedback regulating the current in the recombination region. This experiment was carried out using laser diodes at room temperature and at 77 K.
In both cases approximately 1.7 dB amplitude squeezing (corrected for detector efficiency) was detected in a very broad frequency range. Evidently, the squeezing measurement in these experiments was impeded by the low collection efficiency. Improving this efficiency by "face-to-face" coupling of the laser diode and the photodiode, and cooling the assembly down to 66 K, the same group was able to boost the squeezing to 8.3 dB. Considering the 89% quantum efficiency of the photodiode, this corresponds to 14 dB of inferred squeezing. However, neither this group nor others were later able to reproduce this large squeezing in a semiconductor laser, showing that there must be parameters that were not well understood and controlled in the initial experiment. Nevertheless, their experiment initiated work in other groups, which eventually led to a better understanding.

Although the space charge model gives a qualitative understanding of the phenomenon, it does not capture many important details. In 1995, Marin et al. conceded that "the very mechanisms capable of explaining why some laser diodes and not others were generating sub-shot-noise light remained unclear". They came to the conclusion that one of these mechanisms is the cross-talk between the main mode and other weakly excited modes, which should lead to their anti-correlation, i.e. two-mode or even multipartite squeezing. Later, the same group developed a theoretical understanding by identifying two excess noise sources, the Petermann excess noise and the leakage current noise, to explain the limitations of the observed squeezing. Another relevant factor is optical injection into the laser cavity. The effects of external laser injection at 10 K and self-injection at room temperature were studied in quantum-well lasers. Over 3 dB and 1.8 dB photon-number squeezing was observed, respectively. Weak squeezing in a free-running quantum-well laser was also observed at room temperature.

The negative feedback suppressing the current (and hence the optical power) fluctuations does not necessarily have to be facilitated by the laser cavity. In section [sec:spdc] we already discussed an example of electronic feedback derived from the signal measurement to control the idler photon statistics in PDC. A similar technique was applied to a semiconductor laser in 1986 by Yamamoto et al. Because the laser beam lacks a quantum-correlated twin, Yamamoto employed an XPM-based quantum nondemolition (QND) measurement to monitor the output laser power. The power fluctuations of the laser beam were imprinted onto the phase of a probe beam, recovered in a heterodyne measurement, and fed back to the laser current.
As a result, amplitude squeezing ranging from 5 dB at 16 MHz to 10 dB below 2 MHz was observed.

It might seem that a linear beam splitter could provide a simpler alternative to a QND measurement in the preparation of non-classical light with the feedback technique. This approach indeed leads to very interesting field dynamics known as _squashing_. The term "squashing" pertains to the fields propagating inside the loop, and is fundamentally different from squeezing. The most remarkable property of the in-loop squashed optical field, theoretically shown by Shapiro et al., is that such a field does not obey the usual commutation relations. Therefore it is not subject to the Heisenberg uncertainty principle, and its photon-number uncertainty can be reduced below the classical limit without the phase noise penalty. It is worth noting that not only the state of an optical mode, but also the motional state of a trapped ion can be squashed in a feedback loop. In the context of non-classical light applications, the possibility of generating optical fields not constrained by the Heisenberg uncertainty relations appears too good to be true. And indeed, it has been shown that out-coupling the squashed field from the loop destroys its remarkable properties. In fact, it has been pointed out that even fully characterizing these properties, which is only possible within the loop, is a highly nontrivial experimental problem that requires a QND measurement. Therefore, using electronic feedback systems for generating non-classical light has not attracted much practical interest. Using feedforward, on the other hand, is quite common in commercial optical devices known as "noise eaters" that can suppress power fluctuations within the classical limit.

It would seem that diode lasers offer the most robust and easily scalable technology for generating non-classical (photon-number squeezed, or sub-Poissonian) light. They have also shown promise in generating strongly squeezed states. However, interest in this field apparently waned in the first decade of the 21st century. The reason for this skepticism could be that the discovery of the excess noise sources by Maurin et al. made it clear that it is difficult to fabricate a laser that would _predictably_ generate strong squeezing. If this is the case, a new advance in the field may be expected from improvements in semiconductor technology.

Non-classical light has played an important role in the development of quantum theory, starting from the early tests of local realism performed with entangled photons in 1972. Following this pioneering experiment, many striking quantum phenomena have been discovered via non-classical optics research. Fluctuations of the optical field intensity have been suppressed below the shot-noise limit, which in classical notation requires negative probabilities. The concept of a _biphoton_, and later of a multipartite entangled state, was proven to be tangible. Thus physicists gained hands-on experience with a system that may consist of space-like separated parts and yet constitute a single physical entity. _Quantum teleportation_ has been made possible with such systems. Not only fundamental, but also applied science and technology have a lot to benefit from non-classical light. The sub-shot-noise characteristics of squeezed light directly point to one group of such applications: high resolution metrology.
The optical phase in an interferometer, optical beam displacement, and sub-wavelength image discerning and recording are just a few topics from this group. Information encoded in non-orthogonal single photon states, or in any other non-orthogonal pure quantum states, is protected from copying by fundamental laws of physics, which gives rise to another large group of applications concerned with information security. Furthermore, this information can be processed using mind-blowing quantum logic operations (such as, e.g., a gate), allowing, in prospect, the realization of a quantum computer and the quantum internet.

But how is this wonderful non-classical light generated? The purpose of this chapter has been to provide a brief introductory tour of the most common sources of quantum light. The variety of physical systems capable of generating non-classical light is very broad. We encountered atomic beams, vapor cells, laser-cooled atomic clouds and even individual trapped atoms or ions; optical crystals and fibers; semiconductor nanoparticles and diode lasers. With such a great variety of physical systems to discuss, we did not have an opportunity to provide much detail regarding each system and its performance. Instead, we rely on references that are strategically placed so that an interested reader would be able to easily "zoom in" on any part of our review by downloading the appropriate publications.

* _Optical nonlinearity._ This is the driving mechanism for generating non-classical light. Strongly nonlinear optical systems require less pump power and as a consequence are less noisy and more technologically acceptable. The resonant nonlinearity of natural or artificial atoms, and the broad-band nonlinearity of laser gain media, are two examples that may surpass other systems by far.

* _Optical loss._ When photons are randomly removed from the system, the statistics of the remaining photons become more and more Poissonian. For many (but not all) quantum states this results in diminishing their non-classical characteristics, such as e.g. squeezing. For quantum states with zero displacement in phase space, such as Fock states, squeezed vacuum states, and cat states (i.e. superpositions of coherent states), the loss of, on average, a single photon is enough to largely destroy the non-classical properties. Good examples of low-loss systems are nonlinear optical crystals and fibers.

* _Mode structure._ While imaging applications require multiple transverse modes, applications concerned with sensing and information processing may require strictly single-mode light. Bulk nonlinear crystals are natural sources of multimode light; on the other hand, waveguides, fibers and optical cavities can be used to achieve single-mode operation.

* _Wavelength and bandwidth._ Using non-classical light in conjunction with "massive" qubits, as suggested by the quantum network paradigm, requires matching their central wavelengths and bandwidths. Therefore these source parameters either need to be precisely engineered, which is possible with atom-based sources, or tunable. Wavelength tuning is readily available in bulk crystals, but their emission is usually very broad-band. Fine-tuning of parametric light to atomic transitions in both central wavelength and bandwidth has been achieved with sub- or above-threshold OPOs.
* _Practical utility._ It is generally desirable to avoid cryogenic temperatures and other stringent environmental requirements. Unfortunately, many quantum-dot, quantum-well and trapped-atom sources of non-classical light fail this requirement. Therefore, progress may come via two different routes: (1) improving room-temperature systems, or (2) developing compact sources and low-cost cryogenic fridges.

In general, we see that progress in quantum optics comes from developing: (1) light sources, (2) light confinement strategies, and (3) materials with a strong optical nonlinear response. For a particular goal, one achieves the best result if one optimizes the combination of items from these three categories. We have already discussed one such example, a hollow-core fiber filled with an atomic medium, and it appears plausible that more such examples may emerge in the future.

Let us consider the following series of thought experiments. The toolbox we need contains a source of laser light, a beam splitter, two time-resolving detectors of high bandwidth, and electronic equipment to analyze the detector signals. In the first experiment (1), measuring the intensity correlations after splitting the laser light with the beam splitter yields a g^(2) which is independent of time. This can be described by a classical model, namely classical light fields without fluctuations. Fine. Now the second experiment (2) is to measure the intensity of the laser light as a function of time. The result is a fluctuating detector signal (corresponding to the Poissonian statistics of the photons in quantum language). This can also be described by a classical model, one in which the classical electric fields fluctuate. This is also fine, but note that the two models required are not compatible. You may not be satisfied and argue that the fluctuation observed in experiment (2) may well come from the detector itself contributing noise; this would average out in experiment (1) because the noise introduced by the two detectors is of course not correlated. But suppose the lab next door happens to have amplitude-squeezed light, with intensity fluctuations suppressed by 15 dB below the shot noise. Measuring the squeezed light intensity noise, you easily convince yourself that the detector does not introduce enough noise to explain experiment (2). Note that this test should convince you even if you have no clue what the squeezed light is. But you do not want to give up so easily, and you say: "what if a classically noisy light field enters the second input port, uncorrelated with the laser light but likewise modeled by classical stochastic fluctuations". And you are right, this more involved classical model would explain both experiments (1) and (2). Yet there is (3) a third experiment we can do. We can check the intensity of the light arriving at this second input port of the beam splitter, and no matter how sensitive the intensity-measuring detectors are, they will detect no signal. But this is not compatible with a classical model: classical fluctuations always lead to measurable intensity noise. We conclude by noting that, obviously, coherent states are non-classical because there is no single classical stochastic model which describes all possible experiments with laser light. But as we have seen, it is tedious to go through these arguments, and no simple measure of nonclassicality has been found so far that qualifies a coherent state as nonclassical.
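A minimal numerical sketch of the photon statistics behind experiments (1) and (2): Poissonian counts of an ideal laser give g^(2) = 1 and shot-noise-limited intensity fluctuations, while a single-mode thermal beam (e.g. one arm of SPDC) gives g^(2) = 2. The mean photon number, sample size and function names are illustrative assumptions, not taken from any experiment discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_n, samples = 5.0, 200_000   # mean photons per counting window, number of windows

# Coherent (laser) light: Poissonian photocounts.
n_coh = rng.poisson(mean_n, samples)

# Single-mode thermal light: Bose-Einstein (shifted geometric) photocounts.
p = 1.0 / (1.0 + mean_n)              # parameter of the geometric law
n_th = rng.geometric(p, samples) - 1  # shift so that counts start at 0

def g2(n):
    """Zero-delay second-order correlation estimated from photocount statistics."""
    n = n.astype(float)
    return (n * (n - 1)).mean() / n.mean() ** 2

def fano(n):
    """Fano factor: variance/mean, equal to 1 at the shot-noise (Poisson) level."""
    n = n.astype(float)
    return n.var() / n.mean()

print(f"coherent: g2 = {g2(n_coh):.3f}, Fano = {fano(n_coh):.3f}")
print(f"thermal : g2 = {g2(n_th):.3f}, Fano = {fano(n_th):.3f}")
```

A sub-Poissonian (photon-number squeezed) beam would show a Fano factor below one, which no classical intensity-fluctuation model can reproduce.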
on the other hand ,the nonclassical nature of a coherent state is used in some quantum protocols . having gone through this lengthy argument it seems important to note that also thermal states , i.e. mixed quantum states , can still be somewhat nonclassical in nature if the classical excess noise is not too much larger than the underlying quantum uncertainty .b. vlastakis , g. kirchmair , z. leghtas , s. e. nigg , l. frunzio , s. m. girvin , m. mirrahimi , m. h. devoret , and r. j. schoelkopf .deterministically encoding quantum information using 100-photon schrdinger cat states ., 342:607610 , nov 2013 .k. y. spasibko , f. tppel , t. s. iskhakov , m. stobiska , m. v. chekhova , and g. leuchs .interference of macroscopic beams on a beam splitter : phase uncertainty converted into photon - number uncertainty ., 16:013025 , jan 2014 .g. brida , m. genovese , i. ruo - berchera , m. chekhova , and a. penin .possibility of absolute calibration of analog detectors by using parametric downconversion : a systematic study ., 23:21852193 , oct 2006 . h. vahlbruch , m. mehmet , k. danzmann , and r. schnabel .detection of 15 db squeezed states of light and their application for the absolute calibration of photoelectric quantum efficiency . , 117:110801 , sep 2016 .g. brida , i. p. degiovanni , m. genovese , m. l. rastello , and i. ruo - berchera .detection of multimode spatial correlation in pdc and application to the absolute calibration of a ccd camera ., 18:2057220584 , sep 2010 . a. n. boto , p. kok , d. s. abrams , s. l. braunstein , c. p. williams , and j. p. dowling . quantum interferometric optical lithography : exploiting entanglement to beat the diffraction limit . , 85:27332736 , sep 2000 .s. m. hendrickson , c. n. weiler , r. m. camacho , p. t. rakich , a. i. young , m. j. shaw , t. b. pittman , j. d. franson , and b. c. jacobs .all - optical - switching demonstration using two - photon absorption and the zeno effect ., 87:23808 , feb 2013 .t. aoki , a. s. parkins , d. j. alton , c. a. regal , b. dayan , e. ostby , k. j. vahala , and h. j. kimble .efficient routing of single photons by one atom and a microtoroidal cavity ., 102:083601 , feb 2009 .j. beugnon , m. p. a. jones , j. dingjan , b. darqui , g. messin , a. browaeys , and p. grangier .quantum interference between two single photons emitted by independently trapped atoms ., 440:779782 , apr 2006 .v. leong , s. kosen , b. srivathsan , g. k. gulati , a. cer , and c. kurtsiefer .hong - ou - mandel interference between triggered and heralded single photons from separate atomic systems ., 91:063829 , jun 2015 .g. schunk , u. vogl , d. v. strekalov , m. frtsch , f. sedlmeir , h. g. l. schwefel , m. gbelt , s. christiansen , g. leuchs , and c. marquardt . interfacing transitions of different alkali atoms and telecom bands using one narrowband photon pair source . , 2:773778 , sep 2015 .g. schunk , u. vogl , f. sedlmeir , d. v. strekalov , a. otterpohl , v. averchenko , h. g. l. schwefel , g. leuchs , and c. marquardt .frequency tuning of single photons from a whispering - gallery mode resonator to mhz - wide transitions ., 63:20582073 , jan 2016 .m. frtsch , g. schunk , j. u. frst , d. strekalov , t. gerrits , m. j. stevens , f. sedlmeir , h. g. l. schwefel , s. w. nam , g. leuchs , and c. marquardt .highly efficient generation of single - mode photon pairs from a crystalline whispering - gallery - mode resonator source ., 91:023812 , feb 2015 .luo , h. herrmann , s. krapick , b. brecht , r. ricken , v. quiring , h. suche , w. 
sohler , and c. silberhorn .direct generation of genuine single - longitudinal - mode narrowband photon pairs ., 17:073039 , jul 2015 . c. h. bennett and g. brassard .quantum cryptography : public key distribution and coin tossing . in _ proceedings of ieee international conference on computers , systems andsignal processing _ , volume 175 , page 8 , 1984 .t. lunghi , j. kaniewski , f. bussires , r. houlmann , m. tomamichel , a. kent , n. gisin , s. wehner , and h. zbinden .experimental bit commitment based on quantum communication and special relativity ., 111:180504 , nov 2013 .s. bounouar , m. elouneg - jamroz , m. d. hertog , c. morchutt , e. bellet - amalric , r. andr , c. bougerol , y. genuist , j .-poizat , s. tatarenko , and k. kheng .ultrafast room temperature single - photon source from nanowire - quantum dots ., 12:29772981 , jun 2012 .d. press , s. gtzinger , s. reitzenstein , c. hofmann , a. lffler , m. kamp , a. forchel , and y. yamamoto .photon antibunching from a single quantum - dot - microcavity system in the strong coupling regime ., 98:117402 , mar 2007 .m. n. makhonin , j. e. dixon , r. j. coles , b. royall , i. j. luxmoore , e. clarke , m. hugues , m. s. skolnick , and a. m. fox .waveguide coupled resonance fluorescence from on - chip quantum emitter ., 14:69977002 , dec 2014 .m. frtsch , j. u. frst , c. wittmann , d. strekalov , a. aiello , m. v. chekhova , c. silberhorn , g. leuchs , and c. marquardt . a versatile source of single photons for quantum information processing . , 4:1818 , jan 2013 .a. v. burlakov , m. v. chekhova , d. n. klyshko , s. p. kulik , a. n. penin , y. h. shih , and d. v. strekalov .interference effects in spontaneous two - photon parametric scattering from two macroscopic regions . , 56:32143225 , oct 1997 .m. rdmark , m. zukowski , and m. bourennane .experimental test of fidelity limits in six - photon interferometry and of rotational invariance properties of the photonic six - qubit entanglement singlet state ., 103:150501 , oct 2009 .k. hirosawa , y. ito , h. ushio , h. nakagome , and f. kannari .generation of squeezed vacuum pulses using cascaded second - order optical nonlinearity of periodically poled lithium niobate in a sagnac interferometer ., 80:043832 , oct 2009 .g. htet , o. glckl , k. a. pilypas , c. c. harb , b. c. buchler , h .- a .bachor , and p. k. lam . squeezed light for bandwidth - limited atom optics experiments at the rubidium d1 line ., 40:221226 , jan 2007 .h. vahlbruch , m. mehmet , s. chelkowski , b. hage , a. franzen , n. lastzka , s. gossler , k. danzmann , and r. schnabel . observation of squeezed light with 10-db quantum - noise reduction ., 100:033602 , jan 2008 .t. eberle , s. steinlechner , j. bauchrowitz , v. hndchen , h. vahlbruch , m. mehmet , h. mller - ebhardt , and r. schnabel . quantum enhancement of the zero - area sagnac interferometer topology for gravitational wave detection ., 104:251102 , jun 2010 .j. laurat , t. coudreau , n. treps , a. matre , and c. fabre .conditional preparation of a quantum state in the continuous variable regime : generation of a sub - poissonian state from twin beams ., 91:213601 , nov 2003 .j. u. frst , d. v. strekalov , d. elser , a. aiello , u. l. andersen , c. marquardt , and g. leuchs .low - threshold optical parametric oscillations in a whispering gallery mode resonator . , 105:263904 , dec 2010 .t. beckmann , h. linnenbank , h. steigerwald , b. sturman , d. haertle , k. buse , and i. 
breunig .highly tunable low - threshold optical parametric oscillation in radially poled whispering gallery resonators . , 106:143903 , apr 2011 .m. frtsch , t. gerrits , m. j. stevens , d. strekalov , g. schunk , j. u. frst , u. vogl , f. sedlmeir , h. g. l. schwefel , g. leuchs , s. w. nam , and c. marquardt . near - infrared single - photon spectroscopy of a whispering gallery mode resonator using energy - resolving transition edge sensors . , 17:065501 , jan 2015 .a. brieussel , y. shen , g. campbell , g. guccione , j. janousek , b. hage , b. c. buchler , n. treps , c. fabre , f. z. fang , x. y. li , t. symul , and p. k. lam .squeezed light from a diamond - turned monolithic cavity ., 24:4042 , feb 2016 .t. d. bradley , y. wang , m. alharbi , b. debord , c. fourcade - dutin , b. beaudou , f. gerome , and f. benabid .optical properties of low loss ( 70 db / km ) hypocycloid - core kagome hollow core photonic crystal fiber for rb and cs based optical applications . , 31:27522755 , jan 2013 .j. nold , p. hlzer , n. y. joly , g. k. l. wong , a. nazarkin , a. podlipensky , m. scharrer , and p. s. j. russell .pressure - controlled phase matching to third harmonic in ar - filled hollow - core photonic crystal fiber ., 35:29222924 , sep 2010 .m. a. finger , t. s. iskhakov , n. y. joly , m. v. chekhova , and p. s. j. russell .raman - free , noble - gas - filled photonic - crystal fiber source for ultrafast , very bright twin - beam squeezed vacuum ., 115:143602 , oct 2015 .w. liang , a. a. savchenkov , z. xie , j. f. mcmillan , j. burkhart , v. s. ilchenko , c. w. wong , a. b. matsko , and l. maleki .miniature multioctave light source based on a monolithic microcavity ., 2:40 , jan 2015 .e. engin , d. bonneau , c. m. natarajan , a. s. clark , m. g. tanner , r. h. hadfield , s. n. dorenbos , v. zwiller , k. ohira , n. suzuki , h. yoshida , n. iizuka , m. ezaki , j. l. obrien , and m. g. thompson . ., 21(23):27826 , november 2013 .i. maurin , i. protsenko , j .-hermier , a. bramati , p. grangier , and e. giacobino .light intensity - voltage correlations and leakage - current excess noise in a single - mode semiconductor laser . , 72:033823 , sep 2005 .
|
non - classical concerns light whose properties can not be explained by classical electrodynamics and which requires invoking quantum principles to be understood . its existence is a direct consequence of field quantization ; its study is a source of our understanding of many quantum phenomena . non - classical light also has properties that may be of technological significance . we start this chapter by discussing the definition of non - classical light and basic examples . then some of the most prominent applications of non - classical light are reviewed . after that , as the principal part of our discourse , we review the most common sources of non - classical light . we will find them surprisingly diverse , including physical systems of various sizes and complexity , ranging from single atoms to optical crystals and to semiconductor lasers . putting all these dissimilar optical devices in the common perspective we attempt to establish a trend in the field and to foresee the new cross - disciplinary approaches and techniques of generating non - classical light .
|
n - body simulations are a key tool in astrophysics .applications range from cosmological problems involving dark matter to stellar systems and dynamics of galaxies . in many astrophysical problems, can be very large . for precision calibration of statistical weak lensing ,large dynamic range is required and this provides a challenge to existing computational resources .a recent development has been the move towards large massively parallel computers with cheap commodity components and relatively slow interconnects .the burden of coding in the presence of a large memory hierarchy ( commonly several layers of cache , local memory , remote memory , and secondary storage ) , and distributed message passing libraries is now placed on the scientist who wishes to utilize the large machines .our goal is to provide the community with a generic n - body code which runs close to optimally on inexpensive clusters . in this paperwe describe the design and implementation of the algorithm , and performance numbers for cosmological applications .most real world applications on modern microprocessors achieve a small fraction of theoretical peak speed , often only a few percent .an order of magnitude in speedup is available through the use of assembly coded libraries .these include routines such as fft s that have been optimized to take advantage of the particular benefits that a given hardware manufacturer can offer in terms of instruction set and processor architecture developments .a second limiting factor is the amount of physical memory .most n - body codes are not very efficient in memory use . in principle , one only requires 6 numbers per particle to store the positions and velocities . in practice , other data structures such as density fields and force fields dominate memory usage .in this paper we present an algorithm that approaches minimal memory overhead , using only seven numbers per particle , plus temporary storage which is small .the computation is off - loaded onto highly optimized fft s , and the communication cost on parallel machines is mitigated by a two level mesh hierarchy .in this section we describe the physical decomposition of our algorithm .gravity is a long range force , and every particle interacts pairwise with every other particle .the use of a mesh allows a reduction in computational cost from to .unfortunately , fft s are highly non - local , and would in principle require global transposes that move large amounts of data between processors. this can be costly in terms of network resources , especially in economical parallel clusters that employ long latency slow ethernet .a two level mesh can circumvent this drastic demand on communication hardware resources .we follow the lines of hydra , which decomposes the gravitational force into long and short range components .several authors have described parallel implementations of particle mesh algorithms .tpm and gotpm merge particle mesh and tree algorithms , and have full implementations of the particle - mesh ( pm ) algorithm if one turns off the trees .these codes are not publicly available , and they were not designed as optimal pm codes . 
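to put the memory claim in perspective, the sketch below converts a particle count into the storage needed for six phase-space numbers plus one linked-list integer per particle; the 4-byte word size and the python form are assumptions of this illustration, not statements about the actual code.

    def particle_memory_gb(n_particles, bytes_per_float=4, bytes_per_int=4):
        # six phase-space coordinates plus one integer per particle;
        # mesh, kernel and buffer storage are not included here.
        per_particle = 6 * bytes_per_float + bytes_per_int
        return n_particles * per_particle / 1024**3

    print(particle_memory_gb(6.4e9))    # the 6.4 billion particle run shown later: ~167 gb
    print(particle_memory_gb(1024**3))  # a 1024^3 particle run: ~28 gb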
others have also implemented a distributed memory pm scheme, but one which requires significant bandwidth. the long range components can be computed on a coarse mesh. we use a global coarse mesh which is four times coarser in each dimension than the fine mesh, resulting in a 64-fold savings in global mesh communications. the fine mesh does not need to be globally stored all the time, as we only need to store the tiles that are being worked on. for coarse mesh fourier transforms, we used the freely available parallel fftw library. this library is based on a slab decomposition, which our current code adheres to. in order to obtain optimal performance on shared memory multiprocessor nodes within a clustered environment, the fine mesh is computed on independent cubic sections of the slab. this allows multiple processors to update fine mesh forces in parallel and reduces memory overhead by only requiring a fraction of the mesh and its associated structures to exist in memory at a given time. coarse mesh calculations and particle indexing are also parallelized through shared memory at the loop level to maintain a high processor load. thread-level parallelization has thus been implemented through the use of openmp on the majority of the code, with the only exception being the particle passing routine. due to the lack of freely available thread-safe mpi implementations we have limited message passing to single-thread executed regions of the code, a design choice that maximizes portability. first, we describe the two-level mesh gravity solver as covered in earlier work. the method is based on a spherically symmetric decomposition of the potential and force laws. here, we consider the decomposition of the gravitational potential, although the method applies equally well to the direct gravitational force. the gravitational potential is obtained through a convolution of the density field with a kernel. in order to solve this on a two-level mesh we separate the kernel into a short-range component and a long-range component, where the short-range cutoff is a free parameter that dictates the size of the buffer used between fine mesh tiles and consequently the number of particles required for passing between nodes. the cutoff function is chosen to be a polynomial whose coefficients are determined from conditions ensuring that the long-range kernel smoothly turns over near the cutoff and that the short-range term smoothly goes to zero at the cutoff. the long-range potential is computed by performing the convolution over the coarse-grained global density field; a superscript denotes that the discrete fields are constructed on a coarse grid. mass assignment onto the coarse grid is accomplished using the cloud-in-cell (cic) interpolation scheme with the cloud shape being the same as a coarse cell. the long-range force field is obtained by finite differencing the long-range potential, and force interpolation is carried out using the same cic scheme to ensure no fictitious self-force. since the two-level mesh scheme uses grids at different resolutions, the decomposition given by equations ([eqn:ws]) and ([eqn:wl]) needs to be modified. in fourier space, we can write the long-range potential as \tilde{\phi}_l({\boldsymbol{k}}) = [\tilde{\rho}({\boldsymbol{k}})\tilde{s}_m({\boldsymbol{k}})][\tilde{w}_l({\boldsymbol{k}})\tilde{s}_w({\boldsymbol{k}})], where \tilde{s}_m and \tilde{s}_w are the fourier transforms of the mass smoothing window and kernel sampling window, respectively.
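the cic mass assignment mentioned above is compact enough to sketch directly. the numpy version below is an illustration (unit particle masses, a cell-centred grid convention and periodic wrapping are assumptions of the sketch; the production code performs this step in fortran with openmp threads):

    import numpy as np

    def cic_assign(pos, ngrid, boxsize):
        # deposit unit-mass particles onto a periodic ngrid^3 mesh with cloud-in-cell weights
        rho = np.zeros((ngrid, ngrid, ngrid))
        cell = pos / boxsize * ngrid            # positions in grid units
        i0 = np.floor(cell - 0.5).astype(int)   # lower of the two cell centres bracketing each particle
        f = cell - 0.5 - i0                     # fractional distance to that centre, in [0, 1)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = (np.where(dx, f[:, 0], 1 - f[:, 0]) *
                         np.where(dy, f[:, 1], 1 - f[:, 1]) *
                         np.where(dz, f[:, 2], 1 - f[:, 2]))
                    ix, iy, iz = ((i0 + (dx, dy, dz)) % ngrid).T
                    np.add.at(rho, (ix, iy, iz), w)     # accumulate, handling repeated indices
        return rho

    pos = np.random.default_rng(0).random((1000, 3)) * 100.0
    print(cic_assign(pos, ngrid=32, boxsize=100.0).sum())   # equals the particle count: mass is conserved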
the mass smoothing window takes into account the cic mass assignment scheme for constructing the coarse density field. the kernel sampling window corrects for the fact that the long-range kernel given by equation ([eqn:wl]) is sampled on a coarse grid. in fourier space, the corrected short-range potential kernel is obtained by subtracting this coarse-sampled long-range part from the full kernel, and can be slightly anisotropic, particularly near the short-range cutoff. in figure [force_res] we display the contribution to the short and long range force for randomly placed particle pairs on the mesh using the spherically symmetric force matching method. the errors associated with this data set are shown in figure [sph_force_comp_err]. in this section we present a general procedure that we use to minimize the error from the two level mesh. the basic strategy is to minimize the error variance in the total force. since a variance is a quadratic quantity in the linear sum of two kernels, the minimization is a linear problem in the value of the kernel at each point. we will formulate the problem and show its solution. this generalizes the standard procedure of matching spherically symmetric kernels as described in section [sec:sph]. since our fine grid is cubical and not spherical, we can utilize its anisotropy to minimize the force matching error. we also discuss some residual freedom in the error weights. we define the error variance of the true force relative to the grid force as e = \sum_i [\boldsymbol{f}(\boldsymbol{r}_i) - \boldsymbol{f}_{\rm grid}(\boldsymbol{r}_i)]^2 w_i. each error term is given a weight w_i, which may be chosen to give constant fractional error, constant radial error, or any other prescription. our goal is to find a grid force law which minimizes the error given by equation ([eqn:err]). for a two level grid, we decompose the grid force into a fine-grid part and a coarse-grid part, where the fine-grid part is given as the numerical gradient of a potential. the forces are cic interpolated from the nearest grid cell. so at each fine grid cell one has a unique linear coarse force, and the force is also defined at arbitrary separations. the primary source of grid error arises from the inhomogeneity of the cic interpolation: the force between particles depends not only on their separation, but also on their position relative to the grid cells. intuitively, one expects the force error to be minimized when the coarse force is smoothly varying, since the inhomogeneity of the coarse interpolation is the largest. in the potential and force calculation, we perform a convolution over the density field. to restrict the communication overhead, we require the fine grid force to be short range, in our case 16 fine grid cells. the number of non-redundant entries in the force kernel is then modest; the exact counts are given below. since equation ([eqn:err]) is quadratic in both the short range potential and the long range force, the exact solution is given by the solution of a linear equation. we evaluate the sum in equation ([eqn:err]) by placing particles at all integral fine grid cell separations. the minimization may not be unique, so we use an eigenvalue decomposition and discard zero eigenvectors. we note that the pair weighting in equation ([eqn:err]) gives more weight to large separations, since there are more wide-separation pairs. also, as written it minimizes the total force error, not the fractional error. we therefore use a weight function that weights pairs depending on their separation. in our implementation, each pair is weighted by the actual euclidean separation, which corresponds to constant fractional error.
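the weighted least-squares fit described here reduces to a standard linear-algebra problem, sketched below in generic numpy (the matrix a, data b and weights w stand in for the force-matching equations, target forces and pair weights; the rcond threshold is an assumed value). near-zero singular values are dropped, which mirrors the treatment of the undetermined directions discussed next.

    import numpy as np

    def weighted_lsq_svd(a, b, w, rcond=1e-8):
        # solve  min_x  sum_i w_i * (a[i] @ x - b[i])**2  via a truncated-svd pseudo-inverse
        sw = np.sqrt(w)
        aw = a * sw[:, None]          # row scaling turns the weighted problem into an ordinary one
        bw = b * sw
        u, s, vt = np.linalg.svd(aw, full_matrices=False)
        keep = s > rcond * s[0]       # discard (near-)zero singular values: undetermined directions
        return vt[keep].T @ ((u[:, keep].T @ bw) / s[keep])

    rng = np.random.default_rng(0)
    a = rng.normal(size=(200, 5))
    a[:, 4] = a[:, 3]                              # a degenerate column, handled by the truncation
    b = a @ np.array([1.0, -2.0, 0.5, 3.0, 0.0])
    x = weighted_lsq_svd(a, b, np.ones(200))
    print(np.allclose(a @ x, b))                   # true: the fit reproduces the data despite the degeneracy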
the coarse grid force at separations of zero and one coarse grid cell is also set to zero. in the actual implementation, we generate a vector of 816 variables for the non-redundant entries of the fine grid potential, and a vector of 360 variables to represent the non-redundant three components of the coarse grid force on the coarse grid. we call this combined vector of 1176 unknowns the solution vector. we then produce a list of 12320 equations, which over-constrains the solution. for each fine grid cell, we have two sets of three equations, one for each of the three force components. we generate equations on an extended grid of fine grid cells, zero padding the fine grid entries beyond the cutoff; this results in the full set of 12320 equations. the least squares solution yields the kernel entries. the square matrix may not always be invertible, so we perform a singular value decomposed solution. apart from the zero singular values, the condition number of the system is manageable. double precision is useful to see the spectrum, where one sees a clear break of eight orders of magnitude between the zero eigenvalues and the non-zero ones. despite the large amount of over-determinacy, there are 255 singular values which are left undetermined, and set to zero. most of them probably correspond to coarse grid entries that are at too large separations to be constrained. the actual solution took less than one minute on a laptop. in contrast, storing the full grid of kernel entries (which would allow one to sidestep the symmetry bookkeeping) would result in 64 times more unknowns, and would require a supercomputer to solve the correspondingly more expensive problem. since we probe only one eighth of one octant of the force kernel, we need to explicitly enforce boundary conditions on the minimization. this is done by requiring the long range force to have zero transverse component along the axes, and to be symmetric along the diagonals. this optimal force matching results in an anisotropic short range kernel with cubic support. this differs from most approaches, which usually impose spherical symmetry on the decomposition. the resulting errors, shown in figure [lsq_force_comp_err], have a smaller scatter than those in figure [sph_force_comp_err]. the basic logic of the 2-level particle mesh algorithm is presented in figure [algor]. in this method, particles local to each node are stored in a non-ordered list. to reduce the time spent organizing the particles based on their locations within the mesh, a linked list that associates the particles contained within each cell of the coarse mesh is constructed by threads in parallel. this is achieved by storing the tail of each thread's chain as well as the head, allowing the individual lists to be merged. since the linked list is used for determining which particles are to be passed to adjacent nodes, it is generated at the beginning of the program execution, as well as after particle passing each time-step.

    subroutine particle_mesh(code)
      if (first_step) call link_list
      call position_update
      call particle_pass
      call link_list
      !$omp end parallel
      call coarse_mesh
      call particle_deletion
    end subroutine particle_mesh

density attribution and velocity updating are implemented with the cic interpolation scheme on both mesh levels.
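the coarse-mesh linked list described above can be sketched in a few lines. the serial numpy version below is an illustration only (the array names and the head/next layout are assumptions; the fortran code builds per-thread chains and merges them using stored heads and tails):

    import numpy as np

    def build_link_list(pos, ncoarse, boxsize):
        # head[i, j, k]: index of the first particle chained to coarse cell (i, j, k), or -1
        # nxt[p]: index of the next particle in the same cell, or -1 at the end of the chain
        cell = (np.floor(pos / boxsize * ncoarse).astype(int)) % ncoarse
        head = -np.ones((ncoarse, ncoarse, ncoarse), dtype=int)
        nxt = -np.ones(len(pos), dtype=int)
        for p in range(len(pos)):
            i, j, k = cell[p]
            nxt[p] = head[i, j, k]     # push particle p onto the front of its cell's chain
            head[i, j, k] = p
        return head, nxt

    pos = np.random.default_rng(2).random((1000, 3)) * 50.0
    head, nxt = build_link_list(pos, ncoarse=8, boxsize=50.0)
    print((head >= 0).sum(), (nxt >= 0).sum())   # occupied cells, and chained (non-head) particles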
on the fine mesh we use a potential kernel and calculate the force by finite - differencing the potential ,while on the coarse mesh we directly calculate the force utilizing a force kernel .direct calculation of the force requires extra fourier transforms per coarse time - step , however , this prevents the loss of accuracy associated with finite - differencing and only incurs a minor overhead relative to the fine mesh calculations since there is less data to process .the code can support any type of force kernel that one would like to construct and is easily interchangeable .kernels generated with the methods illustrated in sections [ sec : sph ] and [ sec : lsm ] are included with the code .each fine mesh cube requires a buffer density region along its surface area to calculate fine mesh forces within the fine range force cut - off . for the dimensions perpendicular to the decompositionthis can be readily obtained using the particles in the slab , however , additional information is required about the density in the dimension along the decomposition from adjacent nodes .this layout is presented in figure [ decomp ] .we communicate buffer particles from adjacent nodes to locally calculate the density in the buffer region .this approach removes the communication dependency from mesh calculations and allows it to be done in tandem with the passing of migratory particles , minimizing local processing cost as well as removing potentially complicated communication patterns that could lead to deadlock .the slab decomposition of the physical volume guides the approach that is used to pass particles between nodes .referencing of particles based on their location within the mesh is implemented through a linked list spanning the coarse mesh .the interface between nodes is searched using the linked list to determine if particles lie within the region for buffer construction or migration .particles that migrate are indexed in an additional deletion list and all of the particles to be passed are included in a buffer for passing .the passing then occurs over all nodes synchronously and is repeated in the other direction , re - using the buffer .this is a suitable approach for cosmological applications as the particle flux is relatively balanced between nodes and is dominated by the buffer region . rather than an additional loop at the end of the step for deletion of buffer particles and particles that exited the node , this process is done during the passing routine andis illustrated in figure [ partlist ] .particles that are to be deleted are shuffled to the end of the particle list using the deletion list .the incoming buffer region is then searched for new local particles and these are swapped to the end of the now contiguous local particle list . in this fashion one need only change the index of the total number of particles in the list at the end of the step to delete particles that lie outside of the local mesh . in an effort to maintain a high processor loadwe have developed a file transfer interface for the mpi fftw library .the mpi fftw routines currently are not thread - safe and rather than having only one thread per node participate in the calculation of the coarse mesh fourier transform we execute a separate program which runs an fftw process for each processor on each node .while this offers no gain in performance for single processor nodes it can provide nearly linear speed - up on multi - processors with a memory overhead equal to that used to store the coarse grid density . 
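the swap-to-end deletion described above amounts to a small in-place compaction. the python sketch below is an illustration (the (capacity, 6) array layout and variable names are assumptions; the real code operates on fortran arrays and folds this into the passing routine): each flagged particle is swapped with the current tail of the live list, so deletion reduces to decrementing the live-particle count.

    import numpy as np

    def compact_particles(xv, np_local, delete_list):
        # move flagged particles beyond the end of the live list and return the new live count
        for p in sorted(delete_list, reverse=True):   # largest index first keeps the swaps consistent
            np_local -= 1
            xv[[p, np_local]] = xv[[np_local, p]]     # row swap; the deleted particle is now outside the list
        return np_local

    xv = np.arange(8 * 6, dtype=float).reshape(8, 6)  # 8 slots of (x, y, z, vx, vy, vz)
    n_live = compact_particles(xv, np_local=8, delete_list=[2, 5])
    print(n_live, xv[:n_live, 0])   # 6 live particles; slots 2 and 5 now hold former tail particles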
the data to be transformed is first decomposed on each node by the pmfast process into a number of slabs equal to the number of processors that the node contains. this data is then written to a file system, at which point the fftw processes read in the data, perform the transform and write it back. the pmfast process then reads the decomposed slab and resumes operation. by using temporary file systems in ram we avoid the latency cost of having to communicate this information to disk. we use a time step constraint set by the grid spacing and the maximal gravitational acceleration. the coarse grid has four times the grid spacing, and a smoother gravitational field, so the time step is typically limited by the fine grid. we can exploit this and compute the coarse grid forces less frequently than the fine grid. second order accuracy in time can be maintained using strang-type operator splitting. the code currently supports a variable time-step scheme in which multiple fine grid updates are performed for every coarse grid step. this is currently done in an n:1 ratio, where n is an odd integer representing the number of fine steps calculated per coarse step sweep. in order to maintain second order accuracy, all of the fine steps within the sweep are calculated with the same time interval, and the coarse step is done halfway through the sweep, covering the accumulated time interval of the full sweep. for example, if n = 5, we perform two fine steps, one combined fine-coarse step, followed by two more fine steps. the code computes and displays the maximal acceleration on the fine and coarse grids at each time-step. we have already described a range of design choices which minimize memory and network requirements. to further optimize the code in the presence of memory hierarchies (cache), we use the linked lists in each coarse mesh cell to compute densities and update velocities. on the fine grid, the forces are computed by taking the gradient of the potential on a small sub-grid. we then loop over all particles on the fine sub-grid, which leaves the forces in cache. for cosmological applications, we use periodic boundary conditions. an advantage of the two level mesh is that isolated boundary conditions are also easily applied. the standard procedure of using a kernel of twice the size of the computational domain usually results in an eightfold computational cost penalty. in the two level grid, we only need to double the coarse grid; even a doubled coarse grid is only 1/8th the size of the fine grid, and still a small cost for the whole computation. this is currently not implemented in the code.
cosmological initial conditions for use in simulations with pmfast can be obtained from the website for the code, or one may employ another generator such as grafic2, a gaussian random field generator which can be obtained from http://arcturus.mit.edu/grafic/ . our goal is to control errors well enough to enable precise cosmological simulations, achieving 1% accuracy on the non-linear dark matter power spectrum down to scales below one mpc. this is about an order of magnitude smaller than the non-linear scale. errors arise from a range of approximations: the grid forces deviate at the grid scale, the coarse-fine overlap scale, and on the box scale; particle discreteness leads to poisson noise; and the finite time step leads to time truncation error. the fractional error for randomly placed particle pairs, in both the radial and tangential directions, is displayed in figure [sph_force_comp_err] using the spherically matched kernel and in figure [lsq_force_comp_err] using the kernel matched by the least squares method. in order to gauge the cosmological accuracy of the code we have included in figure [pow_spectrum] a comparison between a mesh simulation computed using pmfast and the power spectrum generated by the halofit algorithm. generation of the spectrum from the simulation data requires more memory than is currently available in any single node. we thus plotted a spliced curve composed of three spectra calculated from the same particle distribution. inspection of the power in different wavebands was achieved by first scaling the data set to a base mesh, followed by scalings to finer grids which were then folded into 64 and 4096 cubes respectively and superimposed onto the base mesh. figure [pow_spectrum] caption: power spectrum of the mesh simulation using 6.4 billion particles (solid line) and the spectrum computed using the halofit algorithm at a redshift of 0 (dotted line). figure [fig:slice] shows the distribution of particles within a kpc thick slice. a region is shown in higher resolution in figure [fig:slice_inset]. the code also generates on-the-fly two-dimensional density projections, which are used for weak gravitational lensing analysis. the projection of the density field to the mid-plane is shown in figure [fig:rho_proj]. figure [fig:slice] caption: slice through the billion-particle cosmological simulation at a redshift of 0; the inset is shown in figure [fig:slice_inset]. figure [fig:rho_proj] caption: density projection at a redshift of 0.033 calculated from the particle simulation; the box width is 200 mpc. our production platform is an ia-64 cluster consisting of 8 nodes, each of which contains four 733 mhz itanium-1 processors and 64 gb ram. the cluster has point-to-point gigabit ethernet connections between each node. the largest run we have performed to date used 6.4 billion particles. the total fine mesh grid length depends on the number of nodes used in the simulation, the width of each fine mesh cube (dashed boundary in figure [decomp]) and the buffer length. with moderate clustering and a maximum particle imbalance of 12% from the mean density, each time-step sweep at a 5:1 fine to coarse ratio takes approximately the per-sweep time quoted in table [timing] to complete, and this time does not vary significantly for minor load imbalances. the time estimate is obtained by taking the time taken for 5 fine steps plus one coarse step, and dividing by 5.
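the sweep structure just described (n fine steps with one coarse kick in the middle, here with n = 5) can be summarized in a few lines. the sketch below is schematic only - the argument names, the toy force split and the exact ordering of kicks and drifts are assumptions, not the code's actual update sequence:

    def sweep(x, v, a_fine, a_coarse, dt, n_fine=5):
        # one fine/coarse sweep of n_fine (odd) fine steps of length dt
        mid = n_fine // 2
        for step in range(n_fine):
            v = v + a_fine(x) * dt                    # fine-mesh kick every fine step
            if step == mid:
                v = v + a_coarse(x) * (n_fine * dt)   # one coarse kick, centred in the sweep
            x = x + v * dt                            # drift
        return x, v

    # toy usage: a harmonic force split arbitrarily into "fine" and "coarse" halves
    x, v = 1.0, 0.0
    for _ in range(200):
        x, v = sweep(x, v, a_fine=lambda q: -0.5 * q, a_coarse=lambda q: -0.5 * q, dt=0.01)
    print(x, v)   # remains a bounded oscillation of the combined force -x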
the fine grid ffts account for less than 20% of the computation time. the code has also been timed on the local cita beowulf cluster, composed of dual 2.4 ghz xeon processor nodes with 1 gb ram and gigabit ethernet, using smaller grid sizes. table [timing] includes timing data for the simulation on the two platforms using a 5:1 fine to coarse mesh time-step ratio. we performed a weak scaling test, where the size of each fine sub-grid is 128 grid cells. this results in an effective 80 usable fine grid cells after overlap. in this regime, the overlap makes density assignment and fine grid ffts a factor of 4 inefficient. on the ia-64 production platform this overlap only accounts for 25% of the volume, using a fine mesh of 512 cells per tile. due to the overlap between fine grids, it is not easy to time a pure strong scaling test while keeping the total grid size fixed. our fine grid was restricted to be a power of two. the code also exhibits good performance under weak scaling on the ia-32 platform, becoming memory limited at 12 nodes for the largest total fine mesh attempted. the weak scaling curve is displayed in figure [wscale].

    table [timing]: per-component timings on the two platforms.
                          ia-32 (beowulf)                ia-64
    nodes                 4         8         12         8
    particles / node      -         -         -          -
    position update       0.1       0.3       0.7        99.6
    particle passing      0.8       3.6       7.9        262.1
    link list             0.2       0.9       2.1        60.1
    fine mesh             3.6       14.7      34.8       1,513.8
    coarse mesh           2.7       3.2       3.6        166.2
    timestep              7.3       22.7      49.1       2,101.8
    particles / sec       -         -         -          -

the current code works on a one-dimensional slab decomposition, which limits the degree of parallelism that can be achieved before surface area effects begin to dominate the computing cost. the local cita beowulf cluster has 256 nodes, which makes a 1-d decomposition across the whole cluster impractical. of course a 3-d decomposition can be implemented along the same lines, and this is in progress. the current code works on two levels; this dictates the number of overlap cells needed between the coarse and fine grid forces. in principle, one could use a larger number of grids and reduce the overlap range by a factor of two. similarly, one can trade off the global communications bandwidth against the local buffer size. in a multi-level implementation, only the top level would be done globally, and this could be on an even coarser grid than in our current implementation. if one passed density fields instead of particles, the buffer regions would also be hierarchical. the communication costs are then dominated by the overlap between the finest and second finest grids, which could be reduced to 8 fine grid cells. the buffers for the coarser levels are still 8 grid cells on each coarsened level, but these are a factor of 4 cheaper, and asymptotically only add up to a 1/3 overhead. the total coarse grid must be at least a factor of eight finer than the width of the logical computer lattice if one does not want buffers to span more than the nearest neighbors. for the proposed universe simulator with 10000 nodes, this would be 21 processors on a side, corresponding to a coarse grid whose global communication is completely negligible.
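the buffer overhead quoted above is easy to verify. the sketch below assumes a buffer of 24 fine cells on each face of a tile (inferred from the 128-cell tiles retaining 80 usable cells; this inference and the python form are the only assumptions) and reproduces both the factor-of-4 penalty of the weak-scaling tiles and the roughly 25% overlap volume of the 512-cell production tiles.

    def overlap_stats(tile_cells, buffer_cells=24):
        usable = tile_cells - 2 * buffer_cells            # cells whose volume actually contributes
        fraction_usable = (usable / tile_cells) ** 3
        return usable, 1.0 - fraction_usable, 1.0 / fraction_usable

    print(overlap_stats(128))   # (80, ~0.76, ~4.1): the factor-of-4 inefficiency of the 128-cell tiles
    print(overlap_stats(512))   # (464, ~0.26, ~1.3): the ~25% overlap volume on the production tiles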
with the current speedup and efficiency ,the code is memory limited on most existing machines .while we have already reduced the memory overhead close to the theoretical minimum , one could gain many orders of magnitude in capacity by implementing an out - of - core design in which simulation data is cached to disk .in such a scheme , a multi - level grid as described in the previous section would be needed .we have presented a new freely available parallel particle - mesh n - body code that takes a significant step towards achieving optimality in cpu , communication and memory performance .the only memory required is six floating point and one integer per particle .a two level force decomposition allows for the use of a short range force which minimizes communication .it also eliminates the need to store a global fine grid density field .cpu performance is optimized by the use of vendor optimized fft libraries , which allows one to deploy very fine grids .the code is available for download at : hockney , r. w. & eastwood , j. w. , 1988 , computer simulation using particles ( philadelphia : iop publishing ) couchman , h. m. p. , 1991 ,apj , 368 , l23 xu , g. , 1995 , apjs , 98 , 355 dubinski , j. & kim , j. & park , c. , & humble , r. , 2004 , newa , 9 , 111 ferrel , r. c. & bertschinger , e. , 1995 , arxiv : astro - ph/9503042 frigo , m. & johnson , s. g. , 1998 , proc .1998 ieee intl .acoustics speech and signal processing , 3 , 1381 trac , h. & pen , u. , 2004 , aas , 203 bertschinger , e. , 2001 , apjs , 137 , 1 smith , r. e. & peacock , j. a. & jenkins , a. & white , s. d. m. & frenk , c. s. & pearce , f. r. & thomas , p. a. & efstathiou , g. & couchman , h. m. p. , 2003 ,mnras , 341 , 1311 dubinski , j. & humble , r. & pen , u. & loken , c. & martin , p. , 2003, arxiv : astro - ph/0305109
|
we present a new parallel pm n - body code named pmfast that is freely available to the public . pmfast is based on a two - level mesh gravity solver where the gravitational forces are separated into long and short range components . the decomposition scheme minimizes communication costs and allows tolerance for slow networks . the code approaches optimality in several dimensions . the force computations are local and exploit highly optimized vendor fft libraries . it features minimal memory overhead , with the particle positions and velocities being the main cost . the code features support for distributed and shared memory parallelization through the use of mpi and openmp respectively . the current release version uses two grid levels on a slab decomposition , with periodic boundary conditions for cosmological applications . open boundary conditions could be added with little computational overhead . we present timing information and results from a recent cosmological production run of the code using a mesh with particles . pmfast is cost - effective , memory - efficient , and is publicly available . methods : numerical , cosmology : theory , large - scale structure of universe 02.60.-cb , 95.75.pq , 98.80-k
|
water - limited landscapes can generally be described as mosaics of vegetation and bare - soil patches of various forms .increasing empirical evidence supports the view that this type of vegetation patchiness is a self - organization phenomenon that would have occurred even in perfectly homogeneous physical environments .much insight into the mechanisms that drive self - organized vegetation patchiness has been achieved using mathematical models of water - limited landscapes .these models first demonstrate that uniform vegetation states can go through spatial instabilities to periodic vegetation patterns upon increasing environmental - stress parameters .they further highlight two main feedbacks that are capable of producing such instabilities .the first is a positive feedback between biomass and water that develops as a result of an infiltration contrast between bare and vegetated areas ( infiltration feedback ) .the second is a positive feedback between above - ground and below - ground biomass , related to the root - to - shoot ratio , a characteristic trait of any plant species ( root - augmentation feedback ) .model studies of vegetation pattern formation along a rainfall gradient have revealed five basic vegetation states : uniform vegetation , gap patterns , stripe ( labyrinth ) patterns , spot patterns and uniform bare - soil .another significant result is the existence of precipitation ranges where alternative stable vegetation states coexist .these are generally bistability ranges of any consecutive pair of basic states : bare - soil and spots , spots and stripes , stripes and gaps and gaps and uniform vegetation . within any bistability range , spatial mixtures of the two alternative stable states can form long transient patterns that culminate in one of the two alternative states , or stable asymptotic hybrid patterns .the mathematical theory of hybrid patterns is far from being complete .much progress , however , has been made for the simpler case of bistability of uniform and periodic - pattern states , using simple pattern formation models such as the swift - hohenberg equation .according to this theory a bistability range of uniform and patterned states may contain a subrange ( or an overlapping range ) of stable localized patterns , coexisting with the two alternative stable states .for bistability of bare - soil and vegetation spot patterns these localized patterns would correspond to isolated spot - pattern domains in an otherwise bare - soil , or conversely , to isolated bare - soil domains in an otherwise periodic spot pattern .the appearance of these mixed - pattern or hybrid states can be understood intuitively by focusing on the dynamics of the transition zones that separate the two alternative stable states .these zones , are fronts that can be stationary or propagating . in the case of bistability of two uniform states ,isolated fronts always propagate , except for a singular control - parameter value , the so called maxwell point , at which the direction of propagation changes .bistability of uniform and pattern states , on the other hand , allow for an additional behavior ; isolated fronts can be stationary or pinned in a _ range _ of the control parameter .such a range can give rise to many hybrid states , because the fronts that constitute the boundaries of the alternative - state domains are stationary . 
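the pinned-front behaviour described here is easy to reproduce with a prototype model of the swift-hohenberg type. the sketch below is an illustration only: the cubic-quintic variant, the parameter values, the domain size and the initial localized patch are assumptions chosen to sit near a regime where localized states are commonly reported, and are not the values used for the figures referred to in the text.

    import numpy as np

    def swift_hohenberg_1d(eps=-0.3, b=2.0, L=32 * np.pi, n=512, dt=0.05, nsteps=20000):
        # u_t = eps*u - (1 + d_xx)^2 u + b*u**3 - u**5, semi-implicit fourier time stepping
        x = np.linspace(0.0, L, n, endpoint=False)
        k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)
        lin = eps - (1.0 - k**2) ** 2                 # linear operator, treated implicitly
        u = np.where(np.abs(x - L / 2) < 8 * np.pi, np.cos(x), 0.0)   # a localized patterned patch
        for _ in range(nsteps):
            nonlin = b * u**3 - u**5
            u = np.fft.irfft((np.fft.rfft(u) + dt * np.fft.rfft(nonlin)) / (1.0 - dt * lin), n)
        return x, u

    x, u = swift_hohenberg_1d()
    print(u.min(), u.max())   # the patch persists, invades or retreats depending on where
                              # eps sits relative to the pinning (hybrid-state) range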
in a diagram that shows the various states as functions of the control parameter , the hybrid states often appear as solution branches that `` snake '' down from the periodic - pattern branch towards the uniform ( zero ) state as fig . [fig : snaking ] illustrates . the control - parameter range where these solutions existsis often called the snaking range and the appearance of such solutions is described as homoclinic snaking . in the following we will refer to this range as the `` hybrid - state range '' to allow for multistability of hybrid states that is not associated with homoclinic snaking .bistability of alternative stable states has been studied extensively in the context of ecosystem regime shifts , i.e. sudden transitions to a contrasting state in response to gradual changes in environmental conditions .such shifts have been observed in lakes , coral reefs , oceans , forests and arid lands .global shifts from one stable state to another , however , may not necessarily be abrupt .ecosystems are continuously subjected to local disturbances whose effects are spatially limited .examples of such disturbances in the context of water - limited vegetation include clear cutting , grazing , infestation and limited fires .these disturbances can induce fast _ local _ transitions to the alternative stable state , but , according to pattern formation theory , the subsequent dynamics may proceed slowly by the expansion and coalescence of the domains of the alternative stable state through front propagation and front collisions .such a succession of processes eventually leads to a global regime shift , but in a gradual manner .how slow can gradual shifts be ? when the two alternative stable states are spatially uniform the pace of a gradual shift depends on the value of the control parameter relative to the maxwell point ; the larger the distance from that point the faster the gradual shift .this result often holds for bistability of uniform and patterned states too , except for one important difference - the value of the control parameter should be outside the hybrid - state range ( but still within the bistability range ) .the difference between abrupt and gradual shifts can be dramatic , as fig .[ fig : abrupt_vs_gradual ] illustrates .for systems whose spatial extent is much larger than the size of a spot , gradual shifts can occur on time scales that are orders of magnitude longer than those of abrupt shifts . within the hybrid - state range global regime shiftsare not expected to occur in steady environments .the system rather shows spatial plasticity ; any spatial disturbance pattern shifts the system to the closest hybrid pattern , which is a stable stationary state and therefore involves no further dynamics .it is worth noting that transitions from the periodic pattern state to hybrid patterns , within the hybrid - state range , can also occur as a result of global uniform environmental changes , such as a precipitation drop or a uniform disturbance , provided the initial pattern is not perfectly periodic , e.g. hexagonal spot pattern containing penta - hepta defects .bistability of uniform and patterned states is most relevant to desertification , a regime shift involving a transition from a productive vegetation - pattern state to an unproductive uniform bare - soil state . 
to what extent are the general results of pattern formation theory displayed in figs. [fig:snaking] and [fig:abrupt_vs_gradual] applicable to the specific context of desertification? we address this question by studying specific models of vegetation pattern formation of various degrees of complexity. the manuscript is organized as follows. in section [sec:vegmodels] we briefly review the models of water-limited vegetation considered here. in section [sec:results] we present numerical results for these models, distinguishing between models for which we found indications of hybrid states (homoclinic snaking) and models for which we have not found such indications. these results are discussed and summarized in section [sec:summary]. we chose to study several representative models of increasing complexity. all models are deterministic and specifically constructed to describe vegetation patchiness in water-limited flat terrains (unlike the variant of the swift-hohenberg equation used to produce figs. [fig:snaking] and [fig:abrupt_vs_gradual]). the degree of complexity is reflected by the number of dynamical variables and by the number of pattern-forming feedbacks the model captures. the models consist of partial differential equations (pdes) for a continuous biomass variable and possibly for additional water variables, depending on the model. all models capture an instability of a uniform vegetation state to a periodic-pattern state and a bistability range of periodic patterns and bare soil. the simplest model we consider is a single-variable model for the vegetation biomass density, introduced by lefever and lejeune. we chose to study a simplified version of this model, written in terms of non-dimensional variables and parameters. it contains three parameters: the mortality to growth ratio, the degree of facilitative, relative to competitive, local interactions experienced by the plants, and the ratio between the spatial range of facilitative interactions and the range of competitive interactions. the spatial derivative terms represent short range facilitation and long range competition, a well known pattern formation mechanism. the agents responsible for this mechanism in actual dryland landscapes are nonlocal feedbacks involving water transport towards growing vegetation patches. explicit modeling of these feedbacks requires the addition of water variables. although the model does not include a precipitation parameter, water stress can be accounted for by increasing the mortality parameter. in what follows we will refer to this model as the ll model. next in degree of complexity is a modified version of a model introduced by klausmeier, hereafter the k model. in addition to a biomass density variable, this model contains a water variable, which we regard as representing soil-water content, and its equations are expressed in terms of non-dimensional quantities.
the biomass growth rate increases with the biomass density, reflecting a positive local facilitation feedback. natural mortality acts to reduce the biomass, and local seed dispersal or clonal growth, represented by a diffusion term, acts to distribute the biomass to adjacent areas. the water dynamics are affected by precipitation, by evaporation and drainage, by a biomass-dependent water-uptake rate, and by soil-water diffusion. the pattern-forming feedback in this model is induced by the combined effect of a higher local water-uptake rate in denser vegetation patches and fast water diffusion towards these patches, which inhibits the growth in the patch surroundings. this mechanism may be applicable to sandy soils, for which the soil-water diffusion coefficient is relatively large. this third type of pattern-forming feedback (besides the infiltration and root-augmentation feedbacks) has not been stressed in earlier studies. the original klausmeier model does not include a water diffusion term, but rather an advection term to describe runoff on a slope. while accounting for banded vegetation on a slope, the original model does not produce stationary vegetation patterns in flat terrains. to capture the latter we added the soil-water diffusion term. since we focus on plane terrains we do not need an advection term and therefore omitted it. the third model we consider, the r model, distinguishes between below-ground and above-ground water dynamics by introducing two water variables, one representing soil water and one representing surface water. this three-variable model was introduced by rietkerk et al. and consists of three coupled non-dimensional equations. the biomass growth rate in this model depends on the soil-water variable only (with no biomass dependence as in the k model); the dependence is linear at small soil-water contents and approaches a constant value at high contents, representing full plant turgor. biomass growth is also affected by mortality and by seed dispersal or clonal growth. soil-water content is increased by the infiltration of surface water. the biomass dependence of the infiltration rate captures the infiltration contrast that exists between bare soil (low infiltration rate) and vegetated soil (high infiltration rate). the other terms affecting the dynamics of the soil water represent loss of water due to evaporation and drainage, water uptake by the plants, and moisture diffusion within the soil. the dynamics of the surface water are affected by precipitation, by water infiltration into the soil, and by overland flow modeled as a diffusion process. the r model captures an important pattern-forming feedback - the infiltration feedback. when the infiltration contrast is high, patches with growing vegetation act as sinks for runoff water. this accelerates the vegetation growth, sharpens the infiltration contrast and increases even further the soil water content in the patch areas. the water flow towards vegetation patches inhibits the growth in the patch surroundings, thereby promoting vegetation pattern formation. the infiltration feedback allows vegetation pattern formation at lower, more realistic, values of the soil-water diffusion constant in comparison to the k model.
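the simplest member of this model family to integrate directly is the two-variable k-type model described earlier. the sketch below follows the verbal description only (biomass growth proportional to w*b^2, linear mortality and biomass diffusion; precipitation, linear losses, uptake proportional to w*b^2 and fast water diffusion); the written form, all parameter values, the domain size and the initial condition are assumptions of this illustration and not the paper's actual non-dimensionalization.

    import numpy as np

    def run_k_type(p=1.0, m=0.45, dw=100.0, L=100.0, n=256, dt=5e-4, nsteps=200_000, seed=0):
        # explicit 1-d integration of  b_t = w*b**2 - m*b + b_xx,  w_t = p - w - w*b**2 + dw*w_xx
        dx = L / n
        rng = np.random.default_rng(seed)
        b = 1.6 + 0.05 * rng.standard_normal(n)     # start near a uniform vegetated state
        w = 0.28 * np.ones(n)
        lap = lambda f: (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2   # periodic laplacian
        for _ in range(nsteps):
            growth = w * b**2
            b = b + dt * (growth - m * b + lap(b))
            w = w + dt * (p - w - growth + dw * lap(w))
        return b, w

    b, w = run_k_type()
    print(b.min(), b.max())   # inspect whether the biomass stayed uniform, formed patterns or decayed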
the fourth model to be studied, the g model, was introduced by gilad et al. and contains the same three dynamical variables as the r model (biomass, soil water and surface water), but with the biomass variable now interpreted as the _above-ground_ biomass. this is because the g model explicitly considers the root system and the relation between the root-zone size and the above-ground biomass. this additional element allows the introduction of another important pattern-forming feedback besides the infiltration feedback: the root-augmentation feedback. the model equations are again non-dimensional. like in the k model, the biomass growth rate depends on both the biomass and the soil water, but in a non-local way that accounts for the contribution of soil-water availability at one point to biomass growth at another point, through a biomass-dependent root system that extends between the two points. similarly, the water-uptake rate by the plant roots depends on the two variables in a nonlocal manner, to account for the uptake at one point by a plant located at another point whose roots extend to the first. the root-augmentation feedback is captured by letting the width of the root kernel, which represents the lateral root-zone size, increase linearly with the above-ground biomass. as a plant grows, its root zone extends to new soil regions. as a result the amount of water available to the plant increases and the plant can grow even further. while accelerating the local plant growth, this process also depletes the soil-water content in the plant surroundings, thereby inhibiting the growth there and promoting vegetation pattern formation. the proportionality parameter in this relation controls the strength of the root-augmentation feedback; it is a measure of the root-to-shoot ratio, a characteristic plant trait. note that the soil-water dependence of the biomass growth term and of the water uptake term is linear; nonlinear forms, including that used in the r model, have been studied elsewhere. like in the r model, the infiltration feedback appears through the biomass-dependent form of the infiltration rate. other differences with respect to the r model involve the introduction of (i) a logistic growth form, which represents genetic growth limitations at high biomass densities (e.g.
stem strength), (ii) a biomass-dependent evaporation rate in the soil-water equation (the second term on its right side), which accounts for reduced evaporation by canopy shading and introduces a local positive water-biomass feedback, and (iii) a nonlinear overland flow term in the surface-water equation, motivated by shallow water theory, rather than a diffusion term as in the r model. the fifth model is a simplified version of the g model, in which the root kernel is assumed to vary sharply in comparison to the biomass and soil-water distributions, and therefore can be approximated by a dirac delta function. this approximation is suitable for plant species that grow deep roots with small lateral dimensions. the simplified model, denoted sg, includes the same pattern-forming infiltration feedback as the original model (the infiltration rate is defined the same way it was defined in the g model), but the root-augmentation feedback is modified; water transport towards growing vegetation patches is no longer a result of uptake by the laterally spread roots, but rather a result of soil-water diffusion. the ecological context we consider is water-limited ecosystems in flat terrains exhibiting bistability of a periodic vegetation pattern and bare soil. we will mostly be concerned with initial states consisting of periodic patterns that are locally disturbed to form bare-soil domains. the numerical studies described below are based on numerical continuation methods, used to identify spatially periodic solutions, and on pde solvers, used to identify stable branches of localized patterns and to follow the dynamics of bare-soil domains. as we will shortly argue, these dynamics crucially depend on the additional stable pattern states, periodic or localized, that the system supports. there are several properties that all models appear to share: (i) the coexistence of a family of stable periodic solutions, describing vegetation patterns of different wavelengths, with a stable uniform solution that describes the bare-soil state; (ii) bare-soil domains do not expand into patterned domains; (iii) the existence of a stable localized solution describing a single vegetation spot in an otherwise bare-soil state. an additional property that is most significant for regime shifts is not shared by all models - multiplicity of stable hybrid states. we use this property to divide the models into two groups: models that do not show multiplicity of stable hybrid states and models that do show such a multiplicity of states. the two groups display different forms of regime shifts, as described below. the models that belong to the first group, with no multiplicity of stable hybrid states, are the k model (section [sec:klausmeier]), the r model (section [sec:rietkerk]) and the sg model (section [sec:simpmeron]). these models have wide bands of periodic solutions with stable branches that coexist with the stable branch of the bare-soil solution. figure [fig:multikriet] shows bifurcation diagrams for the r and sg models in 1d, computed by a numerical continuation method. the bifurcation parameter was chosen to be the precipitation rate. the diagrams show overlapping periodic solutions whose wavelengths increase as the precipitation rate decreases. the last periodic solution to exist corresponds to a single hump. we have not been able to identify (by numerical continuation) solution branches that describe hybrid patterns, either groups of humps in an otherwise bare-soil state or holes in an otherwise periodic pattern.
to further testwhether such solutions can exist in these models or , if they exist , whether they are stable , we solved the models equations numerically using initial conditions that describe fronts separating the patterned and the bare - soil states .convergence to front solutions that are stationary over a range of values would indicate the possible existence of hybrid solutions .such front pinning , however , has not been observed ; in all simulations the patterned state propagated into the bare - soil state .we conclude that stable hybrid solutions , apart from a single hump solution , do not exist in these models , or if they do , their existence range is extremely small . in order to study regime shifts in the k , r and sg models we simulated the model equations within the bistability range of periodic patterns and bare soil , starting with periodic patterns that contain bare - soil domains .since the patterned state was always found to propagate into the bare - soil state , such initial bare - soil domains contract and disappear .this behavior rules out the occurrence of a gradual regime shift to the bare - soil state ( similar to that shown in panels e - h of fig .[ fig : abrupt_vs_gradual ] ) . the final pattern , however , can differ from the initial one in its wavelength as the 1d simulations of the r model displayed in fig .[ fig : localdisturbanceriet ] show .the system can respond by mere readjustment of the spacings between individual humps without a change in their number , which leads to an increase in the pattern s wavelength ( left panel ) , or , at higher precipitation , by hump splitting , which results in a decrease of the pattern s wavelength ( right panel ) .similar responses to local disturbances were found in the k and sg models .figure [ fig : localdisturbancegilad ] displays results of 2d simulations of the sg model showing that the two response forms , spacing readjustments and spot splitting , can occur at the same precipitation by changing the size of the initial bare - soil domain . reducing the precipitation rate to values below the bistability range of periodic patterns and bare soil leads to an abrupt global transition to the bare - soil state as fig .[ fig : simp_gilad_decay ] shows .numerical solutions of the ll and g models ( sections [ sec : lejeune ] and [ sec : meron ] ) using pde solvers point towards the existence of stable hybrid states in addition to periodic - pattern states .[ fig : lejeune_snaking ] shows a bifurcation diagram for the ll model , using the mortality rate as the bifurcation parameter .the upper solution branch corresponds to a periodic - pattern state , while the lowest branch corresponds to the bare - soil state .the red branches in between correspond to stable hybrid states describing localized patterns , a few examples of which are shown in the right panels .solutions of this kind in 1d and 2d have been found earlier .figure [ fig : gilad_snaking ] shows a partial bifurcation diagram for the g model in 2d .the upper line corresponds to a spot - pattern state in the absence of local disturbances .] 
, while the lower lines correspond to hybrid patterns with decreasing number of spots as the right panels show .note the difference between the hybrid solution branches in the two models ; while in the ll model they all terminate at the same control - parameter value , which coincides with the fold - bifurcation point of the periodic pattern solution , in the g model the hybrid solution branches are slanted - solutions with smaller numbers of spots terminate at lower values .the multitude of stable hybrid patterns , i.e. patterns consisting of groups of spots in an otherwise bare soil , groups of holes in otherwise periodic patterns and various combinations thereof , suggests a form of spatial plasticity .that is , any pattern of local disturbances shifts the system to the closest hybrid pattern with no further dynamics .this behavior rules out the occurrence of a gradual regime shift as a result of initial local disturbances , but unlike the k , r and sg models the system does not recover from the disturbances .this suggests the possible occurrence of a gradual regime shift in a continuously disturbed system .while the two models share spatial plasticity in response to local disturbances , they differ in the response to gradual parameter changes ( or ) . in the ll modelall localized pattern solutions terminate at the fold bifurcation point ( see fig . [fig : lejeune_snaking ] ) . above that pointthe only stable state is bare soil and , therefore , any hybrid state must collapse to this state . note the difference between the bifurcation diagram in fig .[ fig : lejeune_snaking ] and the diagram obtained with the swift - hohenberg equation in fig .[ fig : snaking ] . in the latterthere is a subrange ( ) outside the hybrid - state range which is still within the bistability range , where disturbed patterns go through gradual shifts .no such subrange has been found in the ll model .contrary to the ll model , the slanted structure of localized pattern solutions in the g model , allows for a gradual response .in fact , the hybrid state ( b ) in fig . [fig : gilad_snaking ] was obtained from the periodic state ( a ) by an incremental decrease of .likewise , the hybrid states ( c ) and ( d ) were obtained from the states ( b ) and ( c ) by further incremental decreases of .the degree of slanting increases as the root - to - shoot parameter is increased .all models considered in this study predict the same basic vegetation states and stability properties along a rainfall gradient , including a bistability range of bare soil and periodic spot patterns .we may therefore expect these models to depict similar scenarios for desertification shifts , i.e. transitions from productive spot patterns to the unproductive bare - soil state .pattern - formation theory , represented here by results obtained with the swift - hohenberg equation , suggests various possible forms for such scenarios ; abrupt , gradual or incipient , induced by environmental changes , by disturbances or both .underlying these forms are several nonlinear behaviors . the first and simplest is a global transition from the spot - pattern state to the bare - soil state , induced by a slow change of a control parameter past a fold bifurcation , or by a disturbance that shifts the system as a whole to the attraction basin of the bare - soil state .such processes induce global abrupt shifts to the bare - soil state as fig . 
[fig : abrupt_vs_gradual](a - d ) illustrates .local disturbances , on the other hand , can lead to partial shifts that result in spatially - limited domains of the bare - soil state in an otherwise periodic - pattern state . the subsequent course of events depends on the dynamics of the fronts that bound these domains . when the fronts propagate , a slow process of expansion and coalescence of bare - soil domains can eventually culminate in a global gradual shift , as fig .[ fig : abrupt_vs_gradual](e - h ) illustrates .when the fronts are pinned , the domains remain fixed in size , after some small adjustments , in which case the shift is incomplete or incipient - the system converges to one of the many hybrid states it supports . to our surprisethe models we studied do not capture all possible scenarios pattern - formation theory allows .moreover , scenarios that are captured by some models are not captured by others .our studies first suggest that in all five vegetation models ( k , r , sg , ll , g ) the bare - soil state never grows at the expense of the periodic - pattern state ( unlike the behavior shown in fig . [fig : abrupt_vs_gradual](e - h ) ) through the entire bistability range ; bare - soil domains either stay fixed in size or contract and disappear . furthermore , the k , r , and sg models do not show hybrid states at all , while the models that do show hybrid states , ll and g , differ in the existence ranges of these states . in the ll modelthe branches of all hybrid states terminate at the same threshold which coincides with that of the periodic pattern state , while in the g model the termination points are aligned on a slanted line .the results for the k , r and sg models suggest that shifts to the bare - soil state can only occur outside the bistability range of vegetation patterns and bare soil , and are therefore abrupt . within the bistability range ,bare - soil domains induced by local disturbances contract and disappear , thus restoring the vegetation - pattern state , although a wavelength change is likely to occur .both the ll and g models predict the possible occurrence of incipient regime shifts within the bistability range of periodic vegetation patterns and bare soil .these shifts can be induced by local - disturbance regimes and culminate in one of the stable hybrid states when the disturbance regimes are over .complete shifts to the bare - soil state , due to increased stress , are abrupt in the ll model but can be gradual in the g model because of the slanted structure of the hybrid solution branches ; incremental precipitation decrease in the g model can result in step - like transitions to hybrid states of lower bioproductivity as fig .[ fig : gilad_snaking ] shows .these results raise several open questions . the first is related to the finding that bare - soil domains do not expand into patterned domains in the entire bistability range .this behavior can be attributed to the positive pattern - forming infiltration and root - augmentaiton feedbacks .both give advantage to plants at the rim of a patterned domain as compared with inner plants ; the rim plants receive more runoff from the surrounding bare soil and experience weaker competition for soil water .these factors act against the retreat of vegetated domains .processes that may favor such a retreat include soil erosion and roots exposure in sandy soils under conditions of high wind power , or insect outbreak . 
whether bare-soil expansion can be explained by water-biomass interactions alone, or additional processes must be considered, is still an open question that calls for both empirical and further model studies. from the perspective of pattern-formation theory, the finding that the bare-soil state never expands into vegetation-pattern states questions the utility of the maxwell-point concept far from the instability of uniform vegetation to periodic patterns, and calls for further mathematical analysis. another open question is what elements in the ll and g models, and correspondingly what ecological and physical processes, are responsible for the multitude of stable hybrid states. the results for the ll model clearly show that reducing local facilitation, by decreasing the corresponding facilitation parameter, narrows down the hybrid-state range and can eliminate the hybrid states altogether. however, it also narrows down the bistability range of periodic patterns and bare soil, and therefore does not resolve processes that favor the formation of localized patterns alone. the results for the g model and its simplified version sg hint towards the possible role of the nonlocal water uptake by laterally extended root systems in inducing hybrid states. this nonlocal competition mechanism is absent in the sg model and may possibly be responsible for the absence of hybrid states in this model. further studies are needed, first to substantiate the existence of stable hybrid states, particularly in the g model, and second to clarify the roles of local and nonlocal facilitation and competition processes in inducing them. finally, the models we have studied are all deterministic. real ecosystems, however, are generally subjected to stochastic fluctuations in time and space, which may affect the bifurcation structure of spatial states. additive temporal noise, for example, can induce the propagation of pinned fronts, and thereby affect the hybrid-state range. the effect of noise on abrupt, gradual and incipient regime shifts is yet another open problem that calls for further studies. studying these questions is significant for identifying the nature of desertification shifts, i.e. whether they are abrupt, gradual or incipient, in various environments and for different plant species, and for assessing the applicability of early warning signals for imminent desertification. we wish to thank arjen doelman and moshe shachak for helpful discussions and arik yochelis for helping with the numerical continuation analysis. the research leading to these results has received funding from the european union seventh framework programme (fp7/2007-2013) under grant number [293825]. caption of fig. [fig:snaking]: bifurcation diagram calculated using the swift-hohenberg equation, a minimal model for bistability of uniform and patterned states; the indicated parameter interval is the snaking or hybrid-state range.
caption of fig. [fig:abrupt_vs_gradual]: panels (a-d) show an abrupt global transition to the zero state. panels (e-h) show a gradual transition from the same initial state to the zero state, within the bistability range but outside the hybrid-state range, i.e. in the corresponding subrange of fig. [fig:snaking]. the gradual transition occurs by the local expansion and coalescence of the disturbed domains on a time scale much longer than that of the abrupt transition (the latter is so fast that no noticeable domain expansion occurs during the whole transition). both shifts are global in the sense that they culminate in a zero state encompassing the whole system (panel (h) is still a transient). the transitions were obtained by solving the swift-hohenberg equation numerically. caption of fig. [fig:multikriet]: bifurcation diagrams for the r and sg models in 1d; the horizontal axis represents the precipitation rate. solid (dashed) lines denote stable (unstable) states. the leftmost line in both panels corresponds to a single hump (spot). the large overlap ranges of the periodic-pattern solutions allow the system to respond to local disturbances or precipitation changes by changing the pattern's wavelength (see fig. [fig:localdisturbanceriet]). hybrid states resulting from front pinning were not observed in these models. the diagram for the sg model shows period-doubling bifurcations which were not found in the r model (e.g. the point where the green line wl=30 emanates from the red line wl=15). the instability of a solution that goes through period doubling is not captured (as the solid line indicates) because of the small system considered in the numerical stability analysis. caption of fig. [fig:localdisturbanceriet]: response to the removal of a single hump at a lower precipitation value (left panel) and at a higher one (right panel). at the lower precipitation value the removal of a hump leads to a pattern with a longer wavelength (the number of humps after the disturbance remains the same and the distance between them is adjusted to fill the whole space). at the higher precipitation the removal of a hump leads to a pattern with a smaller wavelength (the two humps adjacent to the disturbed location split and the number of humps in the final pattern is larger than in the initial pattern); after the splitting the distance between the humps is adjusted to fill the whole space with evenly spaced humps. parameters are as in fig. [fig:multikriet]. caption of fig. [fig:lejeune_snaking]: bifurcation diagram for the ll model; the horizontal axis represents the mortality rate. the top, blue line represents a periodic-pattern state, while the bottom, black line represents the bare-soil state. the red lines in between correspond to localized hybrid patterns with odd and even numbers of humps, as the examples in the panels on the right side show. note that all hybrid-state branches (red lines) terminate at the same parameter value as the periodic-pattern branch. this feature has repeatedly been found for other sets of parameter values and implies an abrupt shift to the bare-soil state upon increasing the mortality rate.
|
drylands are pattern-forming systems showing self-organized vegetation patchiness, multiplicity of stable states and fronts separating domains of alternative stable states. pattern dynamics, induced by droughts or disturbances, can result in desertification shifts from patterned vegetation to bare soil. pattern-formation theory suggests various scenarios for such dynamics: an abrupt global shift involving a fast collapse to bare soil, a gradual global shift involving the expansion and coalescence of bare-soil domains, and an incipient shift to a hybrid state consisting of stationary bare-soil domains in an otherwise periodic pattern. using models of dryland vegetation we address the question of which of these scenarios can be realized. we found that the models can be split into two groups: models that exhibit multiplicity of periodic-pattern and bare-soil states, and models that exhibit, in addition, multiplicity of hybrid states. furthermore, in none of the models could we identify parameter regimes in which bare-soil domains expand into vegetated domains. the significance of these findings is that while models belonging to the first group can only exhibit abrupt shifts, models belonging to the second group can also exhibit gradual and incipient shifts. a discussion of open problems concludes the paper. *keywords*: models of vegetation pattern formation, multiplicity of stable states, localized patterns, fronts, homoclinic snaking, desertification.
|
recently , a new strategy for the modelling of stress propagation in static cohesionless granular media was developed ( bouchaud _ et al ._ 1995 ; wittmer _ et al ._ 1996 , ) .the medium is viewed as an assembly of rigid particles held up by friction .the static indeterminacy of frictional forces within the assembly is circumvented by the assumption of certain local _ constitutive relations _( c.r.s ) among components of the stress tensor .these are assumed to encode the network of contacts in the granular packing geometry ; they therefore depend explicitly on the way in which the medium was made its _ construction history_. the task is then to postulate and/or justify physically suitable c.r.s among stresses , of which only one ( the _ primary _ c.r . ) is required for systems with two dimensional symmetry , such as a wedge of sand ; for a three dimensional symmetric assembly ( the conical sandpile ) a secondary c.r .is also needed . among the primary constitutive relations of wittmer _are a certain class ( called the ` oriented stress linearity ' or osl models ) which have simplifying features .indeed , in two - dimensional geometries these combine with the stress continuity equation to give a wave equation for stress propagation , in which the horizontal and vertical directions play the role of spatial and temporal coordinates respectively ( bouchaud _ et al .a distinguishing feature of the osl models is that the _ characteristic rays for stress propagation _ ( analagous to light or sound rays in ordinary wave propagation ) are then fixed by the construction history : they do not change direction under subsequent reversible loading .( irreversible loadings , which can in these models be infinitesimal , are discussed in section [ sec : fragile ] below . ) as discussed by bouchaud _( 1998 ) , the characteristics of the differential equation can be viewed as representing , in the continuum , the mean behaviour of ` force chains ' or ` stress paths ' in the material ( dantu 1967 ; liu _ et al ._ 1995 ; thornton & sun 1994 ) . of the osl models , a particularly appealing member , with special symmetry properties ,is called the ` fixed principal axes ' ( fpa ) model .this has the additional property that the characteristics everywhere coincide in orientation with the principal axes of the stress tensor .the fpa model therefore supposes that these principal axes have an orientation fixed at the time of burial . this is arguably the simplest possible choice for a history - dependent c.r . among stresses . 
for the case of a sandpile in which grains are deposited by surface avalanches , which we presume to apply for a conical pile constructed from a point source ( though see section [ sec : experiment][subsec : plasticone ] below ), the orientation of the major axis at burial is constant , and known from the fact that the free surface in such a pile must be a yield surface .the resulting constitutive equation among stresses , for the sandpile geometry , then has a singularity at the centre of a cone or wedge ; this is physically admissible since the centreline separates material which has avalanched to the left from material which has avalanched to the right .this singularity leads to an ` arching ' effect , as previously invoked to explain the stress - dip by edwards & oakeshott ( 1987 ) and others ( trollope 1968 ; trollope & burman 1980 ) .the osl models were developed to explain experimental data on the stress distribution beneath a conical sandpile , built by surface avalanches of sand , poured from a point source onto a rough , rigid support ( smid & novosad 1981 ; jokati & moriyama 1979 ; brockbank _ et al .such data shows unambiguously the presence of a minimum ( ` the pressure dip ' ) in the vertical normal stress below the apex of the pile . with a plausible choice of secondary c.r .( of which several were tried , with only minor differences resulting ) , the fpa case , in particular , was found to give a fairly good quantitative account of the data of smid & novosad ( 1981 ) , and of brockbank _ et al . _( 1997 ) ; see fig .[ fig : dip ] .this is remarkable , in view of the radical simplicity of the assumptions made .we accept , of course , that such models may be valid only a limited regime in some larger parameter space .for example , since strain variables are not introduced , these models can not of themselves examine the crossover to conventional elastic or elastoplastic behaviour that must presumably arise when the applied stresses are significant on the scale of the elastic modulus of the grains themselves . some further remarks on this crossover , in the context of anisotropic elastoplasticity ,are made in section [ sec : strain][subsec : anisotropic ] . in this paperwe discuss the physical content of our general modelling approach ( of which the fpa model is one example ) , based on local stress propagation rules that depend on construction history , as encoded in constitutive relations among stresses . in particularwe contrast the approach with more conventional ideas especially the ideas of elastoplasticity . 
for simplicity, our mathematical discussion is mainly limited to two dimensions (although our models were developed to describe three-dimensional piles) and to the simplest, isotropic forms of elastoplastic theory. the discussion aims to sharpen some conceptual issues. these concern not the details of particular models, but the general question of what _sort_ of description we should aspire to: what sort of information do we need as modelling input, and what can be predicted as output? an equally important (and closely related) question is, what are the control variables in an experiment that must be specified to ensure reproducible behaviour, and what are the observables that can then be measured to depend on these? for experiments on sandpiles (briefly reviewed in section [sec:experiment]) we believe these to be open physics questions, and to challenge some widely held assumptions of the applicability of traditional elastoplastic modelling strategies to cohesionless poured grains. the proposal that granular assemblies under gravity cannot properly be described by the ideas of conventional elastoplasticity has been opprobriously dismissed in some quarters: we stand accused of ignoring all that is `long and widely known' among geotechnical engineers (savage 1997). however, we are not the first to put forward such a subversive proposal. indeed, workers such as trollope (1968) and harr (1977) long ago developed ideas of force transfer rules among discrete particles, not unrelated to our own approaches, which yield continuum equations quite unlike those of elastoplasticity. more recently, dynamical _hypoplastic_ continuum models have been developed (kolymbas 1991, kolymbas and wu 1993) which, as explained by gudehus (1997), describe an `anelastic behaviour without [the] elastic range, flow conditions and flow rules of elastoplasticity'. our own models, though not explicitly dynamic, are similarly anelastic, as we discuss in section [sec:fragile]. they should perhaps be classified as hypoplastic models, although their relation to extremely _anisotropic_ elastoplastic models is examined in section [sec:strain][subsec:anisotropic] below. we start by reviewing (in their simplest forms) some well-known modelling approaches based on rigid-plastic and elastoplastic ideas. this is followed by a brief summary of the mathematical content of the fpa model and its relatives. the equations of stress continuity express the fact that, in static equilibrium, the forces acting on a small element of material must balance. for a conical pile of sand we have, in three dimensions, \partial_r\sigma_{rr} + \partial_z\sigma_{rz} = (\sigma_{\theta\theta} - \sigma_{rr})/r [1], \partial_r\sigma_{rz} + \partial_z\sigma_{zz} = g - \sigma_{rz}/r [2], \partial_\theta\sigma_{ij} = 0 [3], where \partial_r \equiv \partial/\partial r, etc. here r, \theta and z are cylindrical polar coordinates, with z the downward vertical. we take z as a symmetry axis, so that \sigma_{r\theta} = \sigma_{z\theta} = 0; g is the force of gravity per unit volume; \sigma_{ij} is the usual stress tensor, which is symmetric in ij. the equations for two dimensions are obtained by setting 1/r = 0 in ([1],[2]) and suppressing ([3]). these describe a wedge of constant cross section and infinite extent in the third dimension. they also describe a layer of grains in a thin, upright hele-shaw cell, but only if the wall friction is negligible. the coulomb inequality states that, at any point in a cohesionless granular medium, the shear force acting across any plane must be smaller in magnitude than \tan\phi times the compressive normal force. here \phi is the angle of friction, a material parameter which, in simple models, is equal to the angle of repose.
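as a concrete illustration of the coulomb inequality just stated, the short sketch below checks a two-dimensional stress state against it in two equivalent ways: by brute force over all plane orientations, and via the mohr-circle criterion that the circle radius not exceed \sin\phi times its centre. the stress values are arbitrary test numbers, not taken from any of the models or data discussed here.

import numpy as np

# check the cohesionless coulomb condition |tau| <= tan(phi) * sigma_n for all
# plane orientations, for a 2d stress state (compression taken positive).
phi = np.radians(30.0)
sxx, szz, sxz = 0.65, 1.00, 0.30          # arbitrary illustrative values

theta = np.linspace(0.0, np.pi, 2001)     # orientation of the plane normal
c, s = np.cos(theta), np.sin(theta)
sigma_n = sxx * c**2 + szz * s**2 + 2.0 * sxz * s * c   # normal stress on each plane
tau     = (szz - sxx) * s * c + sxz * (c**2 - s**2)     # shear stress on each plane
brute_ok = np.all(np.abs(tau) <= np.tan(phi) * sigma_n + 1e-12)

centre = 0.5 * (sxx + szz)                # mohr-circle centre and radius
radius = np.hypot(0.5 * (sxx - szz), sxz)
mohr_ok = radius <= centre * np.sin(phi)

print("max |tau|/sigma_n over planes:", np.max(np.abs(tau) / sigma_n))
print("tan(phi):", np.tan(phi))
print("coulomb satisfied (brute force, mohr-circle):", brute_ok, mohr_ok)

the equivalence used here (radius not exceeding centre times \sin\phi) is just the statement that the mohr circle lies inside the coulomb cone for a cohesionless material.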
we accept this here, while noting that (i) in principle \phi depends on the texture (or fabric) of the medium and hence on its construction history; (ii) for a highly anisotropic packing, the existence of an orientation-independent \phi is questionable (see section [sec:strain][subsec:anisotropic]); (iii) the identification of \phi with the repose angle ignores some complications such as the bagnold hysteresis effect. the model that wittmer _et al._ refer to as `incipient failure everywhere' (ife) is more commonly called the rigid-plastic model. it postulates that the coulomb condition is everywhere obeyed _with equality_ (nedderman 1992). that is, through every point in the material there passes some plane across which the shear force is exactly \tan\phi times the normal force. by assuming this, the ife model allows closure (modulo a sign ambiguity discussed below) of the equations for the stress without invocation of an elastic strain field. the ife model has therefore as its `constitutive relation' (wittmer _et al._): \sigma_{rr} = \sigma_{zz}\,[1 + \sin^2\phi \pm 2\sin\phi\,(1 - \sigma_{rz}^2\cot^2\phi/\sigma_{zz}^2)^{1/2}]/\cos^2\phi [ife], whereas the coulomb inequality requires only that \sigma_{rr} lies between the two values (\pm) on the right. the postulate that a coulombic slip plane passes through each and every material point is not usually viewed as being accurate in itself; the rigid-plastic model is more often proposed as a way of generating certain `limit-state' solutions to an underlying elastoplastic model. in the simplest geometries these solutions correspond to taking the + or - sign in ([ife]). it is a simple exercise to show that for a sandpile at its repose angle, only one solution of the resulting equations exists in which the sign choice is everywhere the same. this requires the negative root (conventionally referred to as an `active' solution) and it shows a hump, not a dip, in the vertical normal stress beneath the apex. savage (1997), however, draws attention to a `passive' solution, having a pronounced dip beneath the apex. this solution actually contains a pair of matching planes between an inner region where the positive root of ([ife]) is taken, and an outer region where the negative is chosen. in principle there is more than one such `passive' solution. for example one can seek an ife solution in which all stress components are continuous across the matching plane. this requires a discontinuity in the gradients of the stresses at the centreline (see fig. [fig:bounds]). the latter does not contradict eq. ([ife]), although it might be thought undesirable on other grounds (for example if the ife equation is thought to bound the behaviour of a simple elastoplastic body, for which the resulting displacement fields might not be admissible). an alternative, which avoids this, is to instead have a discontinuity of the normal stress parallel to the matching plane itself. this gives a second passive solution (savage 1997, 1998). these solutions do not exhaust the repertoire of ife solutions for the sandpile: there is no physical principle that limits the number of matching surfaces. by adding extra ones, a very wide variety of results can be achieved. the emphasis placed on the rigid-plastic approach, at least in some parts of the literature, seems to rest on a misplaced belief that the limit-state solutions can be `generally regarded as bounds between which other states can exist, _i.e._, when the material is behaving in an elastic or elastoplastic manner' (savage 1997).
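the two roots just referred to are easy to exhibit numerically: for given \sigma_{zz}, \sigma_{rz} and \phi one can solve the quadratic form of the mohr-coulomb equality for \sigma_{rr} and confirm that both roots exactly saturate the coulomb condition. the sketch below does this for arbitrary illustrative values; it is a consistency check on the [ife] relation as written here, not a reproduction of any published calculation.

import numpy as np

# two ife (rigid-plastic) roots for sigma_rr at given sigma_zz, sigma_rz, phi:
# solve (sigma_rr - sigma_zz)^2 + 4 sigma_rz^2 = sin^2(phi) (sigma_rr + sigma_zz)^2
# as a quadratic in sigma_rr, then check both roots saturate the coulomb condition.
phi = np.radians(30.0)
szz, srz = 1.0, 0.2                 # illustrative; real roots need |srz| <= szz * tan(phi)
s2 = np.sin(phi)**2

a = 1.0 - s2
b = -2.0 * szz * (1.0 + s2)
c = (1.0 - s2) * szz**2 + 4.0 * srz**2
roots = np.roots([a, b, c])         # the larger ('passive') and smaller ('active') sigma_rr

for srr in sorted(roots.real):
    centre = 0.5 * (srr + szz)
    radius = np.hypot(0.5 * (srr - szz), srz)
    print("sigma_rr = %.4f   radius / (centre * sin(phi)) = %.6f"
          % (srr, radius / (centre * np.sin(phi))))

these two roots are the active and passive limit states referred to above; whether they actually bound intermediate elastoplastic states is a separate question, taken up next.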
a simple counterexample is shown in fig .[ fig : bounds ] .this shows the active and two passive solutions ( as defined above ) for a two - dimensional pile ( wedge ) , along with an elementary elastoplastic solution as presented recently by cantelaube & goddard ( 1997 ) and earlier by samsioe ( 1955 ) .the latter is piecewise linear with no singularity in the displacement field on the central axis ; it happens to coincide mathematically with the solution of a simple hyperbolic model ( bouchaud _ et al ._ 1995 ) for the same geometry ( there is no stress dip in this particular model ) . clearly the vertical normal stress does not lie everywhere between that of the active and passive ife solutions , which are therefore not bounds . in two dimensions at least , it has been argued that the pressure dip can be explained within a simple conventional elastoplastic modelling approach .this is certainly possible if the base is allowed to sag slightly ( savage 1997 ) . here, however , we are concerned with piles built on a rough , rigid support . even in this case, it has been argued that results similar to those of the fpa model can be obtained ( cantelaube & goddard 1997 ) .the simplest elastoplastic models assume a material in which a perfectly elastic behaviour at small strains is conjoined onto perfect plasticity ( the coulomb condition with equality ) at larger ones .in such an approach to the sandpile , an inner elastic region is matched , effectively by hand , onto an outer plastic one . in the inner elastic regionthe stresses obey the navier equations , which follow from those of hookean elasticity by elimination of the strain variables .the corresponding strain field is not discussed , but tacitly treated as infinitesimal , since the high modulus limit is taken .howevever , for fpa - like solutions , which show a cusp in the vertical stress on the centreline , the displacement shows singular features which are not easily reconciled with a purely elastoplastic interpretation .the fact that the plastic zone is introduced ad - hoc also has drawbacks for example it is hard to explain the continued presence of such a zone if the angle of the pile is reduced to slightly below the friction angle ( to allow for the bagnold hysteresis effect , say ) . in osl approaches ,an outer zone is not assumed but predicted , and remains present in this case , although the material in this zone is no longer at incipient failure .the existence of fpa - like solutions to simple elastoplastic models in three dimensions , on a non - sagging support , remains very much in doubt .but in any case , our objections to the elastoplastic approach to modelling sandpiles lie at a more fundamental level .specifically it appears that , to make unambiguous predictions for the stresses in a sandpile , these models require boundary information that has no obvious physical meaning or interpretation .we return to this physics problem in section [ sec : strain][subsec : indet ] . in the fpa model ( wittmer _ ) one hypothesizes that , in each material element , the orientation of the stress ellipsoid became ` locked ' into the material texture at the time when it last came to rest , and does not change in subsequent loadings ( unless forced to : see section [ sec : fragile ] ) .this is a bold , simplifying assumption , and it may indeed be far too simple , but it exemplifies the idea of having a local rule for stress propagation that depends explicitly on construction history . 
for the sandpile geometry, where the material comes to rest on a surface slip plane, this constitutive hypothesis leads to the following relation among stresses: \sigma_{rr} = \sigma_{zz} - 2\tan\phi\,\sigma_{rz} [5], where \phi is the angle of repose. eq. ([5]) is algebraically specific to the case of a pile created from a point source by a series of avalanches along the free surface. a consequence of eq. ([5]), for a pile at repose, is that the major principal axis everywhere bisects the angle between the vertical and the free surface. it should be noted that in cartesian coordinates, the fpa model reads: \sigma_{xx} = \sigma_{zz} - 2\tan\phi\,\mathrm{sgn}(x)\,\sigma_{xz} [4], where x is horizontal. from eq. ([4]), the fpa constitutive relation is seen to be discontinuous on the central axis of the pile: the local texture of the packing has a singularity on the central axis which is reflected in the stress propagation rules of the model. the paradoxical requirement, on the centreline, that the principal axes are fixed simultaneously in two different directions has a simple resolution: the stress tensor turns out to be isotropic there. the fpa model is one of the larger class of osl models in which the primary constitutive relation (in the sandpile geometry) is, in cartesians, \sigma_{xx} = \eta\,\sigma_{zz} + \mu\,\mathrm{sgn}(x)\,\sigma_{xz} [6], with \eta and \mu constants. as explained by wittmer _et al._, these models (in two dimensions) yield hyperbolic equations that have _fixed characteristic rays_ for stress propagation. (unless \mu = 0, these are asymmetrically disposed about the vertical axis, and invert discontinuously at the centreline.) the constitutive property that osl models describe is that these characteristic rays, or force chains, have orientations that are `locked in' on burial of an element. the boundary condition, that the free surface of a pile at its angle of repose is a slip plane, yields one equation linking \eta and \mu to \phi; thus the osl scheme represents a one-parameter family of models. note that, as soon as \eta is not exactly unity (its fpa value), the orientation of the principal axes rotates smoothly as one passes through the centreline of the pile. the assumption of fixed principal axes, though appealing, is thus rather delicate, and arguably much less important than the idea of fixed characteristics, since these represent the average geometry of force chains in the medium. the experimental data (fig. [fig:dip]) supports models in the osl family with \eta close, but perhaps not exactly equal, to unity. note that, unless the osl parameters are chosen so that \mu = 0, a constitutive singularity on the central axis remains. the case \mu = 0 corresponds to one studied earlier by bouchaud _et al._ (1995); this `bcc' model is the only member of the osl family to have characteristics symmetric about the vertical. (their angles \psi to the vertical obey \tan^2\psi = \eta.) this latter model could be called a `local janssen model' in that it assumes _local_ proportionality of horizontal and vertical compressive stresses, an assumption which, when applied _globally_ to average stresses in a silo, was first made by janssen (1895).
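to make the `depth plays the role of time' structure of these closures concrete, the sketch below integrates the two-dimensional stress-balance equations down a wedge for the symmetric bcc (local janssen) closure \sigma_{xx} = \eta\,\sigma_{zz}, using a lax-friedrichs march in depth. the treatment of the free surface (stresses simply zeroed outside the wedge) and the choice of \eta are crude illustrative shortcuts; in particular \eta is taken as a free input here, whereas in the text it would be fixed by requiring the free surface to be a yield surface.

import numpy as np

# march the 2d stress-balance equations down a wedge for the symmetric 'bcc'
# closure sigma_xx = eta * sigma_zz, with depth z playing the role of time
# (lax-friedrichs scheme). crude illustrative treatment of the free surface:
# stresses are zeroed outside the wedge after each step.
eta   = 0.5                      # closure parameter; characteristic angles obey tan(psi)^2 = eta
alpha = np.radians(30.0)         # surface slope (illustrative repose angle)
c     = 1.0 / np.tan(alpha)      # half-width of the wedge at unit depth
H, N  = 1.0, 801
x     = np.linspace(-1.2 * c * H, 1.2 * c * H, N)
dx    = x[1] - x[0]
dz    = 0.4 * dx / max(np.sqrt(eta), 1.0)   # cfl-limited step in depth
g     = 1.0                      # weight per unit volume

szz = np.zeros(N)                # vertical normal stress
sxz = np.zeros(N)                # shear stress

z = 0.0
while z < H:
    inside = np.abs(x) <= c * z
    # lax-friedrichs update of:  d_z szz = g - d_x sxz ,   d_z sxz = -eta * d_x szz
    szz_new = 0.5 * (np.roll(szz, -1) + np.roll(szz, 1)) \
        - 0.5 * dz / dx * (np.roll(sxz, -1) - np.roll(sxz, 1)) + dz * g * inside
    sxz_new = 0.5 * (np.roll(sxz, -1) + np.roll(sxz, 1)) \
        - 0.5 * dz / dx * eta * (np.roll(szz, -1) - np.roll(szz, 1))
    z += dz
    inside = np.abs(x) <= c * z
    szz = np.where(inside, szz_new, 0.0)
    sxz = np.where(inside, sxz_new, 0.0)

base = np.abs(x) <= c * H
step = max(1, int(base.sum()) // 12)
print("basal vertical stress sampled across the base:")
print(np.round(szz[base][::step], 3))

for this symmetric closure no central dip should appear in the basal vertical stress, consistent with the remark above that the simple hyperbolic (bcc-type) solution for the wedge shows no dip; an asymmetric osl closure would additionally require the characteristics to invert at the centreline, which this sketch does not attempt.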
the local-rule models just discussed do not account for the presence of `noise' or randomness in the local texture. such effects have been studied by claudin _et al._ (1998), and, if the noise level is not too large, lead at large length scales to effective wavelike equations with additional gradient terms giving a diffusive spreading of the characteristic rays. the limit where the diffusive term dominates corresponds to a _parabolic_ differential equation (harr 1977), similar to those arising in scalar force models (liu _et al._ 1995) which have, in effect, a single downward characteristic (so the main interest lies in the diffusive spreading). it is possible that under extreme noise levels this picture changes again, although this conclusion is based on assumptions about the noise itself that may not be valid (claudin _et al._). the discussions that follow therefore apply to local-rule models with moderate, but perhaps not extreme, noise. note finally that the fact that two continuum models, based on different constitutive hypotheses, can give identical results for the stresses in some specified geometry, obviously does not mean that the models are equivalent. (equivalence requires, at least, that the green function of the two models is also the same.) thus, for example, models such as fpa are not equivalent to trollope's model of `clastic discontinua' (trollope 1968, trollope and burman 1980). in appendix [sec:trollope] we outline the relationship between our work and the marginal packing models studied by ball & edwards (to be published), huntley (1993), hong (1993), bagster (1978) and liffmann _et al._ (1992), as well as the work of trollope (1956, 1968). in the fpa model and its relatives, strain variables are not considered. a partial justification for this was given by wittmer _et al._, namely that the experimental data obey a form of radial stress-field (rsf) scaling: the stress patterns observed at the base are the same shape regardless of the overall height of the pile. formally one has, for the stresses at the base, \sigma_{ij} = g h\, s_{ij}(r/ch) [rsf], where h is the pile height, s_{ij} a reduced stress, and ch the radius of the base (c = \cot\alpha, with \alpha the angle between the free surface and the horizontal, so that for a pile at repose \alpha = \phi). this form of rsf scaling, which involves only the forces at the base (evesque 1997), might be called the `weak form' and is distinct from the `strong form' in which eq. ([rsf]) is obeyed also with an arbitrary depth measured from the apex replacing h (the overall height of the pile). this scaling implies that there is no characteristic length-scale. since elastic deformation introduces such a length-scale (the length-scale over which an elastic pile would sag under gravity), the observation of rsf scaling to experimental accuracy suggests that elastic effects _need not be considered explicitly_. we accept however (correcting wittmer _et al._) that this does not of itself rule out elastic or elastoplastic behaviour which, at least in the limit of large modulus, can also yield equations for the stress from which the bulk strain fields cancel.
note that it is tempting , but entirely wrong , to assume that a similar cancellation occurs at the boundaries of the material ; we return to this below ( section [ sec : boundary ] ) .the cancellation of bulk strain fields in elastoplastic models is convenient since there appears to be no clear definition of strain or displacement for piles constructed by pouring sand grainwise from a point source . to define a physical displacement or strain field ,one requires a reference state . in ( say ) a triaxial strain test ( see e.g. wood 1990 ) an initial state is made by some reproducible procedure , and strains measured from there .the elastic part is identifiable in principle , by removing the applied stresses ( maintaining an isotropic pressure ) and seeing how much the sample recovers its shape .in contrast , a pile constructed by pouring grains onto its apex can not convincingly be described in terms of the plastic and/or elastic deformation from some initial reference state of the same continuous medium : the corresponding experiments are unrealizable . even were the load ( which consists purely of gravity ) to be removed , the resulting unstrained body would comprise grains floating freely in space with no definite positions .it is unsatisfactory to define a strain or displacement field with respect to such a body .the problem occurs whenever the solidity of the body itself _ only _ arises because of the load applied .a similar situation occurs , for example , in colloidal suspensions that flow freely at small shear stresses but ( by jamming ) can support larger ones indefinitely ( cates _ et al ._ , to be published ) .although one can not uniquely define the strain in a granular assembly under gravity , it may of course be possible to define _ incremental _ strains in terms of the displacement of grains when a small load is added .however , the range of stress increments involved might in practice be negligible ( kolymbas 1991 ; gudehus 1997 ) .models that assume local constitutive equations among stresses ( including all osl models , and also the ife or rigid - plastic model ) provide hyperbolic differential equations for the stress field . accordingly , if one specifies a zero - force boundary condition at the free ( upper ) surface of a wedge , then any perturbation arising from a small extra body force ( a ` source term ' in the equations ) propagates along two characteristics passing through this point .( in the osl models these characteristics are , moreover , straight lines . )therefore the force at the base can be found simply by summing contributions from all the body forces as propagated along two characteristic rays onto the support ; the sandpile problem is , within the modelling approach by bouchaud _( 1995 ) and wittmer _ et al . _( ) , mathematically well - posed .note that in principle , one could have propagation also along the ` backward ' characteristics ( see fig .[ fig : pathfig](a ) ) .this is forbidden since these cut the free surface ; any such propagation can only arise in the presence of a nonzero surface force , in violation of the boundary conditions .therefore the fact that the propagation occurs only along downward characteristics is not related to the fact that gravity acts downward ; it arises because we know already the forces acting at the free surface ( they are zero ) .suppose we had instead an inverse problem : a pile on a bed with some unspecified overload at the top surface , for which the forces acting at the base had been measured . 
in this case, the information from the known forces could be propagated along the _ upward _ characteristics to find the unknown overload .more generally , in osl models of the sandpile , each characteristic ray will cut the surface of a ( convex ) patch of material at two points . within these models ,the sum of the forces tangential to the ray at the two ends must be balanced by the tangential component of the body force integrated along the ray ( see fig . [fig : pathfig](b ) ) .we discuss this physics ( that of force chains ) in section [ sec : fragile][subsec : stresspaths ] . in three dimensions ,the mathematical structure of these models is somewhat altered ( bouchaud _ et al ._ 1995 ) , but the conclusions are basically unaffected .note however that for different geometries , such as sand in a bin , the problem is not well - posed even with hyperbolic equations , unless something is known about the interaction between the medium and the sidewalls .ideally one would like an approach in which sidewalls and base were treated on an equal basis ; this is the subject of ongoing research .note also that the essential character of the boundary value problem is not altered when appropriate forms of randomness are introduced . foralthough the response to a point force is now spread about the two characteristics , even in the parabolic limit ( where the underlying straight rays are effectively coincident and only spreading remains ) the sandpile boundary value problem remains well posed .the well - posedness of the sandpile does not extend to models involving the elliptic equations for an elastic body .for such a material , the stresses throughout the body can be solved only if , at all points on the boundary , either the force distribution or a displacement field is specified ( landau & lifshitz 1986 ) . accordingly , once the zero - stress boundary condition is applied at the free surface , nothing can in principle be calculated unless either the forces or the displacements at the base are already known ( and the former amounts to specifying in advance the solution of the problem ) . from an elastoplastic perspective , it is clearly absurd to try to calculate the forces on the support , which are the experimental data , without some further information about what is happening at the bottom boundary .we have called this the problem of ` elastic indeterminacy ' ( bouchaud _ et al . _ 1998 )although perhaps ` elastic ill - posedness ' would be a better term .the problem does not arise from any uncertainty about what to do mathematically : one should specify a displacement field at the base .difficulties nonetheless arise if , as we argued above , no physical significance can be attributed to this displacement field for cohesionless poured sand . to give a specific example , consider the static equilibrium of an elastic cone or wedge of finite modulus resting on a completely rough , rigid surface ( which one could visualize as a set of pins ; fig .[ fig : indeterminacy ] ) . starting from any initial configuration ,another can be generated by pulling and pushing parts of the body horizontally across the base ( _ i.e. _ , changing the displacements there ) ; if this is rough , the new state will still be pinned and will achieve a new static equilibrium .this will generate a stress distribution , across the supporting surface and within the pile , that differs from the original one . 
if a large enough modulus is now taken ( at fixed forces ), this procedure allows one to generate arbitrary differences in the stress distribution while generating neither appreciable distortions in the shape of the cone , nor any forces at its free surface .analogous remarks apply to any simple elastoplastic theory of sandpiles , in which an elastic zone , in contact with part of the base , is attached at matching surfaces to a plastic zone .in contrast , experimental reports ( reviewed in section [ sec : experiment ] ) indicate that for sandpiles on a rough rigid support , the forces on the base can be measured reproducibly .they also suggest that these forces , although subject to statistical fluctuations on the scale of several grains , do not vary too much from one pile to another , at least among piles constructed in the same way ( e.g. , by avalanches from a point source ) , from the same material .this argues strongly against the idea that such forces in fact depend on a basal displacement field , which is determined either by the whim of the experimentalist , or by some as - yet unspecified physical mechanism acting at the base of the pile .note that basal sag is _ not _ a candidate for the missing mechanism , since it does not resolve the elastic indeterminacy in these models ; the latter arises primarily from the _ roughness _ , rather than the rigidity , of the support .note also , however , that elastic indeterminacy can be alleviated in practice if the elastoplastic model is sufficiently anisotropic ; we return to this point in section [ sec : strain][subsec : anisotropic ] .evesque ( private communication ) , unlike many authors , does confront the issue of elastic indeterminacy and seemingly concludes that the experimental results _ are and must be indeterminate _ ; he argues that the external forces acting on the base of a pile can indeed be varied at will by the experimentalist , without causing irreversible rearrangements of the grains ( see also evesque & boufellouh 1997 ) . towhat extent this viewpoint is based on experiment , and to what extent on an implicit presumption in favour of elastoplastic theory , is to us unclear .let us boldly suppose , then , that the experimental data is meaningful and reproducible , at least as far as the global , ` coarse - grained ' features of the observations are concerned .( noise effects at the level of individual grains may in contrast be exquisitely sensitive to temperature and other poorly - controlled parameters ; claudin & bouchaud 1997 . 
)adherents to traditional elastoplastic models then have three choices .the first is to consider the possibility that , after all , the problem of cohesionless poured sand may be better described by quite different governing equations from those of simple elastoplasticity .this possibility , which represents our own view , has certainly been suggested before .for example , hypoplastic models in which there is negligible elastic range ( gudehus 1997 ; kolymbas 1991 , kolymbas and wu 1993 ) do not suffer from elastic indeterminacy .the second choice is to postulate various additional constraints , so as to eliminate some of the infinite variety of solutions that elastoplastic models allow ( unless basal displacements are specified ) .for example , it is tempting to impose ( in its strong form ) rsf scaling : for a wedge , as shown by samsioe ( 1955 ) and cantelaube & goddard ( 1997 ) this reduces the admissible solutions to a piecewise linear form .such a postulate may seem quite harmless : after all , we have emphasized already that the observations do themselves show ( weak ) rsf scaling .however , according to these models , the central part of the pile can correctly be viewed as an elastic continuum ; hence from any solution for the stresses it _ should be _ physically possible to generate another by an infinitesimal pushing and pulling of the elastic material along the rough base .accordingly one has no reason to expect even the weak rsf scaling observed experimentally . setting this aside, one could impose weak rsf scaling by assuming a basal displacement field of the same overall shape for piles of all sizes .however , as pointed out by evesque ( 1997 ) , even this imposition does not require the _ strong _ form of rsf scaling assumed by cantelaube & goddard ( 1997 ) . in summary , simple elastoplastic models of sandpiles_ require _ that the experimental results for the force at the base depend on how the material was previously manipulated .any attempt to predict the forces without specifying these manipulations is misguided . a third reaction, therefore , is to start modelling explicitly the physical processes going on at the base of the pile .as mentioned previously , one is required to specify a displacement field at the base of the elastic zone ; more accurately , it is the product of the displacement field and the elastic modulus that matters . this need not vanish in the large modulus limit ( section [ sec : strain][subsec : indet ] ) ; one possible choice , nonetheless , is to set the displacement field to zero at a finite modulus ( which might then be taken to infinity ) .the simplest interpretation of this choice is by appeal to a model in which the ` sandpile ' is constructed as follows ( fig . [fig : spaceship](a ) ) : an elastoplastic wedge , floating freely in space , is brought to rest in contact with a rough surface , in a state of zero strain . once in contact, gravity is switched on with no further adjustments in the contact region allowed .this might be referred to as the ` spaceship model ' ( or perhaps the ` floating model ' ) of a sandpile .this illustrates two facts : ( a ) in considering explicitly the displacement field at the bottom surface , elastoplastic modellers are obliged to make definite assumptions about the previous history of the material ; ( b ) these assumptions do not usually have much in common with the actual construction history of a sandpile made by pouring . 
a possible alternative to the spaceship model, in which unstressed laminae of elastoplastic material are successively added to an existing pile (fig. [fig:spaceship](b)), is discussed in appendix [sec:laminated]. we shall now show that hyperbolic behaviour can be recovered from an elastoplastic description by taking a strongly anisotropic limit (cates _et al._, to be published). for simplicity we restrict attention to the fpa model. the fpa model describes, by definition, a material in which the shear stress must vanish across a pair of orthogonal planes fixed in the medium, namely those normal to the (fixed) principal axes of the stress tensor. according to the coulomb inequality, which the model also adopts, the shear stress must also be less than \tan\phi times the normal stress across planes oriented in all other directions. clearly this combination of requirements can be viewed as a limiting case of an elastoplastic model with an anisotropic yield condition, in which the friction coefficient \tan\phi(\theta) depends on \theta, the angle between the plane normal and the vertical (say). the limiting choice corresponding to the fpa model for a sandpile is \phi(\theta) = 0 for \theta = \theta_0, the fixed orientation of the major principal axis (this corresponds to planes whose normal lies parallel to the major principal axis), and \phi(\theta) = \phi otherwise. by a similar argument, all other osl models can also be cast in terms of an anisotropic yield condition, in which the allowed band of shear-to-normal stress ratios collapses onto a single finite value for two particular values of \theta. (this fixes a _ratio_ of shear and normal stresses across certain special planes.) at this purely phenomenological level there is no difficulty in connecting hyperbolic models smoothly onto _highly anisotropic_ elastoplastic descriptions. specifically, consider a medium having an orientation-dependent friction angle that does not actually vanish, but is instead very small (\epsilon, say) in a narrow range of angles (say also of order \epsilon) around \theta_0, and approaches \phi elsewhere. (one interesting way to achieve the required yield anisotropy is to have a strong anisotropy in the _elastic_ response, and then impose a _uniform_ yield condition on the strains, rather than the stresses.) such a material will have, in principle, mixed elliptic/hyperbolic equations of the usual elastoplastic type. the resulting elastic and plastic regions must nonetheless arrange themselves so as to obey the fpa model to within terms that vanish as \epsilon \to 0. (a small numerical illustration of such an orientation-dependent yield condition is sketched below.)
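the following minimal sketch evaluates one such orientation-dependent coulomb condition for a single stress state: the friction coefficient equals \tan\phi except in a narrow window of width of order \epsilon around the fpa direction \theta_0 = \pi/4 - \phi/2, where it is reduced to \epsilon. the smooth window shape and all numerical values are illustrative choices, not taken from the text.

import numpy as np

# orientation-dependent coulomb condition: friction coefficient ~ tan(phi) for
# most plane orientations, collapsing to a small value eps in a narrow window
# around theta0 (the fpa direction). illustrative construction only.
phi    = np.radians(30.0)
eps    = 0.02
theta0 = np.pi / 4 - phi / 2      # fpa orientation of the major principal axis, from the vertical

def mu_eff(theta):
    # effective friction coefficient as a function of plane-normal orientation
    window = np.exp(-((theta - theta0) / eps) ** 2)   # narrow dip of width ~eps
    return np.tan(phi) * (1.0 - window) + eps * window

# an fpa-compatible stress state on the x > 0 side: sigma_xx = sigma_zz - 2 tan(phi) sigma_xz
szz, sxz = 1.0, 0.25
sxx = szz - 2.0 * np.tan(phi) * sxz

theta = np.linspace(0.0, np.pi, 4001)            # angle of the plane normal from the z axis
n = np.stack([np.sin(theta), np.cos(theta)])     # plane normals in the (x, z) plane
t = np.stack([np.cos(theta), -np.sin(theta)])    # in-plane tangents
sig  = np.array([[sxx, sxz], [sxz, szz]])
trac = sig @ n                                   # traction on each plane
sigma_n = np.einsum('ij,ij->j', n, trac)
tau     = np.einsum('ij,ij->j', t, trac)

viol = np.abs(tau) - mu_eff(theta) * sigma_n
print("worst violation of the anisotropic condition (negative = satisfied):", viol.max())
print("shear/normal ratio at theta0:", abs(np.interp(theta0, theta, tau / sigma_n)))

as the window width is reduced, the admissible stress states are squeezed onto those obeying the fpa relation across the special planes, which is the sense in which the hyperbolic closure is recovered.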
if is small but finite ,then for this elastoplastic model the results will depend on the basal boundary condition , but only through these higher order corrections to the leading ( fpa ) result .we show in section [ sec : fragile ] below that the case of small but finite is exactly what one would expect if a small amount of particle deformability were introduced to a fragile skeleton of rigid particles obeying the fpa constitutive relation .although somewhat contrived ( from an elastoplastic standpoint ) , the above choice of anisotropic yield condition establishes an important point of principle , and may point toward some important new physics .although elastoplastic models do suffer from elastic indeterminacy ( they require a basal displacement field to be specified ) , the extent of the influence of the boundary condition on the solution depends on the model chosen .strong enough ( fabric - dependent ) anisotropy , in an elastoplastic description , might so constrain the solution that , although it suffers elastic indeterminacy in principle , it does so only harmlessly in practice . under such conditionsit is _ primarily the fabric _ and only minimally the boundary conditions which actually determine the stresses in the body . for models such asthat given above there is a well - defined limit where the indeterminacy is entirely lifted , hyperbolic equations are recovered , and it is quite proper to talk of local stress propagation ` rules ' which are determined , independently of boundary conditions , by the fabric ( hence construction history ) of the material. our modelling framework , based precisely on these assumptions , will be valid for sandpiles if , as we contend , their physics lie close to this limit of ` fabric dominance ' ( see section [ sec : fragile ] below ) .this contention is consistent with , though it does not require , belief in the existence of an underlying elastoplastic continuum description .before discussing in more detail the physical interpretation of our models , we give a brief account of the experimental data . in doing this , it is important to draw a distinction between ( axially symmetric ) cones , and ( translationally symmetric ) wedges of sand .the latter is a quasi - two dimensional geometry .the main question is , to what extent the pressure - dip can be trusted as a reproducible experimental phenomenon for a sandpile constructed by pouring onto a rough rigid support . in particular , savage ( 1997 ) has drawn attention to the possible role of small deflections in the base ( ` basal sag ' ) in causing the dip to arise .the earliest data we know of , on conical sandpiles , is that of hummel & finnan ( 1920 ) who observed a pronounced stress dip .however , their pressure cells were apparently subject to extreme hysteresis , and these results can not be relied upon .otherwise the only data prior to smid & novosad ( 1981 ) _ for cones _ is that of jokati & moriyami ( 1979 ) .although a stress dip is repeatedly observed by these authors , their results ( on rather small piles ) do not show consistent rsf scaling .the well - known data of smid & novosad ( 1981 ) shows a clear stress minimum at the centre of the pile .even this dataset is not completely satisfactory : the observation of the dip is based on the data from a single ( but calibrated ) pressure cell beneath the apex .however , the data for different pile heights shows clear ( weak ) rsf scaling , and is quantitatively fit by the fpa model with either of the secondary closures shown in fig . 
[fig : dip ] .savage ( 1997 ) points out that ` it is not possible from the information given to estimate the deflections [ at the base ] that might result from the weight of the pile ' .smid and novosad , however , describe their platform as ` rigid ' . ) , as shown in fig .[ fig : dip ] ; when normalizing stresses by the mean density of the pile , he apparently prefers to use a separate measurement of the bulk density ( in a different geometry ) , rather than the density deduced by integrating the vertical normal stresses to give the weight of the pile . ]recently , brockbank _ et al . _( 1997 ) have performed a number of careful measurements on relatively small piles of sand ( as well as flour , glass beads , etc . ) .the pressure transducers comprise an assembly of steel ball - bearings lying atop a thin blanket of transparent rubber on a rigid glass plate ; material is poured from a point source onto this assembly .the deflection of the ball - bearings is estimated as 10 m . by calibrating and optically monitoring their imprints on the rubber film, the vertical stresses can be measured .perhaps the most interesting feature of this method is that , although the basal deflection is certainly not zero , it is of a character quite unlike basal sag .indeed , the supporting ball - bearings are deflected downward ( indenting the rubber film ) in a manner that depends on the _ local _ compressive stress , as opposed to the cumulative ( _ i.e. _ , nonlocal ) effect of sagging .the latter is bound to be maximal under the apex of the pile , whereas the indentation is maximal under the zone of maximum vertical compressive stress , wherever that may be . if the stress pattern is controlled by slight deformations of the base , there would be no reason to expect a similar stress pattern to arise for an indentable base , as for a sagging one .but in fact , a very similar stress pattern is seen ( fig .[ fig : dip ] ) .the data shown here involve averaging over several piles , since the setup measures stresses over quite small areas of the base ( the ball bearings are 2.5 mm diameter ) and these stresses fluctuate locally , as is well - known ( liu _ et al ._ 1995 ; claudin & bouchaud 1997 ) .although still subject to relatively large statistical scatter , the data show an unambiguous dip of very similar magnitude to that reported by smid and novosad ; moreover the dip is spread over several , rather than a single , transducer(s ) .it is , of course , important to distinguish conceptually the noisiness of this data ( arising from fluctuations at the granular level ) from any intrinsic irreproducibility of the results . 
if the results are reproducible , then for large enough piles one might expect the averaging over several piles to be obviated by binning the data over many transducers .this is , in effect , what smid and novosad do ( since their transducers are much larger ) .more careful experimental investigations of this point would , nonetheless , be welcome .we conclude from this recent study , which substantially confirms the earlier work of smid & novosad ( 1981 ) , that the attribution of the stress dip to basal sag is not justified for the case of conical piles of sand .brockbank _ et al . _ ( 1997 ) also saw a stress dip for small , but not large , glass beads .this difference suggests that to observe the dip requires a large enough pile compared to the grain size , perhaps to allow an anisotropic mesoscale texture to become properly established .no dip was seen for lead shot ( deformable ) or flour ( cohesive ) .the experiments on _ wedges _ appear very different .the papers of hummel & finnan ( 1920 ) , and lee & herington ( 1971 ) include datasets for which the construction history is described as being effectively from a line source .these results , as well as others cited by savage ( 1997 ) , offer support for his conclusions ( made earlier by burman & trollope 1980 ) that the construction history of the wedge does not much matter , and that there is only a very small or negligible dip for wedges supported by a fully rigid base . these studies also suggest that a dip appears almost immediately if the base under the wedge is allowed to sag .these results , if confirmed by careful repetition of the experiments , would certainly cast doubt on fpa - type models as applied to wedges . such historic experiments , measuring the stress distribution for wedges made supposedly from a line source , need careful repetition .this is because , even from a point source ( conical pile ) or line source ( wedge ) , at least two different types of construction history are possible .the first is when , as assumed in fpa - type models , the grains avalanche in a thin layer down the free surface .the second , which , like the first , has clearly been observed in three - dimensional work on silo filling ( munch - andersen & nielsen 1990 , 1993 ) , is called ` plastic cone ' behaviour .it entails the impacting grains forcing their way downwards at the apex into the body of the pile , which then spreads sideways .a parcel of grains arriving at the apex ends up finally as a thin horizontal layer .( a transition between this and surface avalanche flow may be controlled by varying the height from which grains are dropped , among other factors . )a third possibility is that of ` deep yield ' ( see evesque & boufellouh 1997 ) : a buildup of material near the apex followed by a deep avalanche in which a thick slab of material slumps outwards ( evesque 1991 ) .these different construction histories , even among piles created from a point or line source , would lead one to expect quite different stress patterns .for example , the plastic cone construction should lead to a texture with local symmetry about the vertical , as assumed by bouchaud _ et al . _ .this model , which we also expect to describe a conical pile built by sieving sand uniformly onto a supporting disc ( wittmer _ et al .
_ ) does not give a pressure dip .although in point - source experiments on cones the surface avalanche mechanism is usually seen ( evesque 1991 ; evesque al .1993 ) we do not know whether the same applies for wedges ; the classical literature is ambiguous ( hummel & finnan 1920 ; lee & herington 1971 ) . for these reasonssuch experiments must be repeated , with proper monitoring of the construction history , before conclusions can be drawn .there are , in fact , good reasons why the surface avalanche scenario , on which models such as fpa depend , may be very hard to observe in the wedge geometry .recall that for the wedge geometry at repose , all osl models predict an outer sector of the wedge , of substantial thickness , in which the coulomb inequality is saturated .clearly , if avalanches take place on top of a thick slab of material already at incipient failure , it may be impossible to avoid rearrangements deeper within the pile , leading either to ` deep yield ' or ` plastic wedge ' behaviour .to this extent the application of fpa - type models to a wedge geometry is not necessarily self - consistent .the same does not apply in the conical geometry , where the solution of these models predicts only an infinitesimal plastic layer at the surface of the cone ( wittmer _ et al ._ 1996 ) . accordingly it would be very interesting to compare experimentally wedges and cones of the same material to see whether the character of the avalanches is fundamentally different , as fpa - like models might lead one to expect .further experiments involving comparison of histories are suggested by wittmer _( ) .although there are , so far , few data showing a clear dependence of measured stresses on construction history in freestanding cones or wedges , the effect is well - established in experiments on silos .specifically , for flat - bottomed silos filled by surface avalanches from a point source , the vertical normal force at the centre of the base is less than at the edge ( munch - andersen & nielsen 1990 ) .this effect , which is readily explained within an fpa - type modelling approach ( wittmer _ et al . _ ) , is not reported in silos filled by sieving , nor when a plastic cone behaviour is seen at the apex ( munch - andersen & nielsen 1993 ) .as we have emphasized , the continuum mechanics represented by our hyperbolic models is not that of conventional elastoplasticity . in what followswe develop an outline interpretation of this continuum mechanics as that appropriate to a material in which stresses propagate primarily along force chains .simulations of frictional spheres offer some support for the force - chain picture , at least as a reasonable approximation : most of the deviatoric stress is found to arise from _ strong , normal _ forces between particles participating in force chains ; tangential forces ( friction ) and the weaker contacts transverse to the chains contribute mainly to the isotropic pressure ( thornton & sun 1994 ; thornton 1997 ; c. thornton , this volume ) .in addition to this , the content of our models is to assume that the skeleton of force chains is _ fragile _ , in a specific sense defined below .informally speaking , the hyperbolic problem posed by osl models is determined once half of the boundary forces are specified .more precisely ( fig .[ fig : pathfig](b ) ) one is required to specify the surface force tangential to each characteristic ray , at one end and _ one end only_. 
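to make the hyperbolic structure explicit , here is a minimal worked example of our own ( it uses the simplest osl - type closure with a constant coefficient , purely as an illustration ; the fpa closure is a different member of the same family ) . in two dimensions , with z measured downwards , static force balance and the closure read
\[
\partial_x\sigma_{xx}+\partial_z\sigma_{xz}=0 , \qquad
\partial_x\sigma_{xz}+\partial_z\sigma_{zz}=\rho g , \qquad
\sigma_{xx}=\eta\,\sigma_{zz}\quad ( \eta>0\ \text{constant} ) ,
\]
and eliminating \sigma_{xz} gives
\[
\partial_z^{2}\sigma_{zz}-\eta\,\partial_x^{2}\sigma_{zz}=0 ,
\]
a wave equation in which depth plays the role of time and whose characteristics are the rays x\pm\sqrt{\eta}\,z=\mathrm{const} ; data prescribed at the free surface are carried along these two rays down to the base .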
the corresponding force acting at the other end is obliged to balance the sum of the specified force , any body forces acting tangentially along the ray . if it does not do so , then within our modelling approach , the material ceases to be in static equilibrium .this is no different from the corresponding statement for a fluid or liquid crystal ; if boundary conditions are applied that violate the conditions for static equilibrium , some sort of motion results . unlike a fluid , however , for a granular medium we expect such motion to be in the form of a finite rearrangement rather than a steady flow .such a rearrangement will change the microtexture of the material , and thereby _ alter the constitutive relation among stresses_. we expect it to do so in such a way that the new network of force chains ( new constitutive relation ) is able to support the newly imposed forces . although simplified , we believe that this picture correctly captures some of the essential physics of force chains .such chains are load - bearing structures within the contact network and , in the simplest approximation of straight chains of uniform orientation these must have the property described above : any difference in the forces on two ends of a path must be balanced by a body force .note that if one makes a linear chain of more than two rigid particles with point contacts , then to avoid torques , this can indeed support only tangential forces , regardless of the local friction coefficient between the grains themselves ; see figure [ fig : stresspath](a ) .force chains should , we believe , be identified ( on the average ) with the characteristic rays of our hyperbolic equations .mean orientation _ of the force chains is then reflected in a constitutive equation such as fpa or osl .our modelling approach thus assumes that the mean orientation of force chains , in each element the material , is fixed at burial .( this does not necessarily require that the individual chains are themselves fixed . )we think it reasonable to assume that the force chains will not change their average orientations so long as they are able to support subsequent applied loads .but if a load is applied which they can not support ( one in which the tangential force difference and body force along a path do not match ) irreversible rearrangement is inevitable ( evesque , private communication ) .this causes some part of the pile to adopt a new microtexture and thereby a new constitutive relation . in other words ,_ incompatible _ loadings of this kind must be seen as part of the construction history of the pile .there is a close connection between these ideas and recent work on the ` marginal mechanics ' of periodic arrays of identical grains .( this is considered further in appendix [ sec : trollope ] . )the marginal situation is where the ( mean ) coordination number of the grains is the minimum required for mechanical integrity ; in two dimensions this is three for frictional and four for frictionless spheres .( larger coordination numbers are needed for aspherical grains . 
) indeed , each osl model rigorously describes the continuum mechanics of a certain ordered array of this kind ( see appendix [ sec : trollope ] ) .marginal packings are exceptional in an obvious sense : most packings of grains one can think of do not have this property , and the forces acting on each grain can not be found without further information .however , we can interpret this correspondence between continuum and discrete equations , not at the level of the packing of individual grains ( for which the marginal coordination state would be hard to explain ) but at the level of a granular skeleton made of force chains . the osl models ( in two dimensions ) can then be viewed as postulating a simplified , marginally stable geometry of the skeleton , in which a regular lattice of force chains ( bearing tangential forces only ) meet at four - fold coordinated junctions .( for the fpa model , though not in general , this lattice is rectangular .see figure [ fig : stresspath](b ) . )such a skeleton leads to hyperbolic equations ( or perhaps parabolic ones if enough disorder is added ) ; its mechanics are determinate in the absence of a displacement field specified at the base . in the present context, fragility arises from the requirement of tangential force balance along force chains .if this is violated at the boundary ( within the models as so far defined , even infinitesimally ) then internal rearrangement must occur , causing new force chains to form , so as to support the load .it seems reasonable to assume that when rearrangements are forced upon the system , it responds in an ` overdamped manner ' , that is , the motion ceases as soon as the load is once again supported .if so , one expects the new state to again be marginally stable .this suggests a scenario in which the skeleton evolves dynamically from one fragile state to another . by such a mechanism , marginally stable packings ,although exceptional in the obvious sense that most packings one can think of are not marginal , may nonetheless be generic in unconsolidated dry granular matter .
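the marginal coordination numbers quoted above follow from a standard constraint - counting argument , which we sketch here for completeness ( this counting is our addition , not part of the original discussion ) . a packing of n rigid disks with mean coordination number z has nz/2 contacts . for frictionless disks each contact carries one unknown ( the normal force ) while each disk supplies two force - balance equations ( torque balance being automatic for central forces ) , so the marginal , statically determinate situation requires
\[
\tfrac12\,nz = 2n \quad\Longrightarrow\quad z = 4 ;
\]
for frictional disks each contact carries two unknowns ( normal and tangential ) and each disk supplies three equations ( two of force , one of torque ) , giving
\[
2\cdot\tfrac12\,nz = 3n \quad\Longrightarrow\quad z = 3 .
\]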
thornton ( 1997 ) reports that , in simulations of frictional spheres , force chains do rearrange strongly under slight reorientations of the applied load .consider finally a regular lattice of force chains , for simplicity rectangular ( the fpa case ) which is fragile if the chains can support only tangential loads .this is the case so long as such paths consist of linear chains of rigid particles , meeting at frictional point contacts : as mentioned above , the forces on all particles within each chain must then be colinear , to avoid torques .this imposes the ( fpa ) requirement that there are no shear forces across a pair of orthogonal planes normal to the force chains themselves ( see section [ sec : strain][subsec : anisotropic ] ) .suppose now a small degree of particle deformability is allowed ( cates _ et al ._ , to be published ) .this relaxes _ slightly _ the collinearity requirement , but only because the point contacts are now flattened .the ratio of the maximum transverse load to the normal one will therefore vanish as some power of the mean deformation .this yield criterion applies only across two special planes ; failure across others is governed by some smooth yield requirement ( such as the ordinary coulomb condition : the ratio of the principal stresses lies between given limits ) .the granular skeleton just described , which was fragile in the limit of rigid grains , is now governed by a strongly anisotropic elastoplastic yield criterion of precisely the kind described in section [ sec : strain][subsec : anisotropic ] .the skeleton can support loads that do violate the tangential balance condition , but only through terms that vanish as . to escape the hyperbolic regime of ` fabric dominance ' , must be significant , which in turn requires significant particle deformation under the influence of the mean stresses applied . this indicates how a non - fragile packing of frictional , deformable rough particles , displaying broadly conventional elastoplastic features when the deformability is significant , can approach a fragile limit when the limit of a large modulus is taken at fixed loading .( it does not , of course , imply that _ all _ packings become fragile in this limit . )conversely it shows how a packing that is basically fragile ( in its response to gravity ) could nonetheless support very small incremental deformations , such as sound waves , by an elastic mechanism .the question of whether sandpiles are better described as fragile , or as ordinarily elastoplastic , remains open experimentally . 
to some extent it may depend on the question being asked .however , we have argued , on various grounds , that in calculating the stresses in a pile under gravity a fragile description may lie closer to the true physics .from the perspective of geotechnical engineering , the problem of calculating stresses in the humble sandpile may appear to be of only marginal importance .the physicist 's view is different : the sandpile is important , because it is one of the simplest problems in granular mechanics imaginable .it therefore provides a test - bed for existing models and , if these show shortcomings , may suggest ideas for improved physical theories of granular media .there are , in physics , certain types of problem for which the fundamental principles or equations are clear , and the difficulty lies in working out their consequences .an example is the use of the navier stokes equation in studies of ( say ) turbulence .the form of the navier stokes equation can be deduced by considering only the symmetries and conservation laws of an isotropic fluid .accordingly , its status is not , as sometimes assumed , that of an approximation based on constitutive hypotheses that happen to be very accurate for certain materials .rather , it describes a limiting behaviour , which all members of a large class of materials ( viscoelastic fluids included ) approach with indefinite accuracy in the limit of long length- and time - scales .( we are aware of no theory of elastoplasticity having remotely similar status . )there are other types of problem in which the fundamentals are not clear . for such problems ,the governing equations must first be established , before they can be solved .we remain convinced that the static modelling of _ poured assemblies of cohesionless grains under gravity _ is of this second type .this view is not particularly new , either among physicists ( edwards & oakeshott 1989 ) , or among engineers ( gudehus 1985 , 1997 ) . from this perspective, we can see no reason why the starting points of simple rigid - plastic or elastoplastic continuum mechanics should offer significant insights into the sandpile problem .simple elastoplastic approaches , in particular , give only one unambiguous physical prediction : that a sandpile supported by a rough base should have _ no definite behaviour_. experimentalists , who believe themselves to be measuring a definite result , are likely to be baffled by such predictions .for if , as these models require , the forces acting at the base of a pile can be varied at will without causing its static equilibrium to be lost ( by making small elastic displacements at the base ) , then all the published ` measurements ' of such forces must be dismissed as artefact .an alternative view is that these represent rather haphazard investigations of some unspecified physical mechanism that does somehow determine a displacement field at the base of the pile .( as mentioned previously , basal sag is certainly not an adequate candidate . )the challenge of whether , for cohesionless poured sand , such a displacement field can sensibly be defined , remains open .given the present state of the data , a conventional elastoplastic interpretation of the experimental results for sandpiles may remain tenable ; more experiments are urgently required .
in the mean time , a desire to keep using tried - and - tested modelling strategies until these are demonstrably proven ineffective is quite understandable .we find it harder to accept the suggestion ( savage 1997 ) that anyone who questions the complete generality of traditional elastoplastic thinking is somehow uneducated .our own position is not that elastoplasticity itself is dead , but we do believe that macroscopic stress propagation in sandpiles is determined much more by the internal fabric of the material ( therefore the construction history ) and much less by boundary conditions , than _ simple _ elastoplastic models suggest .reasons for this , based on the idea of a fragile skeleton of force chains , have been discussed above . by considering a particular form of yield condition ,we have shown how a fragile model can be matched smoothly onto a relatively conventional , but strongly anisotropic , elastoplastic theory .thus it is possible in principle to have a model which , although strictly governed by the mixed hyperbolic / elliptic equations of elastoplasticity , leads to solutions that obey purely hyperbolic equations everywhere , to within ( elastically indeterminate ) corrections that are small in a certain limit .in such a system the results will depend less and less on boundary conditions , and more and more on fabric , as that limit is approached .moreover , for certain well - defined fragile packings of frictional grains , the limit is the rigid particle one , in which the elastic modulus of the grains is taken to infinity at fixed loading . in summary , we have discussed a new class of models for stress propagation in granular matter .these models assume local propagation rules for stresses which depend on the construction history of the material and which lead to hyperbolic differential equations for the stresses .as such , their physical basis is substantially different from that of conventional elastoplastic theory ( although they may have much more in common with ` hypoplastic ' models ) .our approach describes a regime of ` fragile ' behaviour , in which stresses are supported by a granular skeleton of force chains that must undergo finite internal rearrangement under certain types of infinitesimal load .obviously , such models of granular matter might be incomplete in various ways .specifically we have discussed a possible crossover to elastic behaviour at very small incremental loads , and to conventional elastoplasticity at very high mean stresses ( when significant particle deformations arise ) .however , we believe that our approach , by capturing at least some of the physics of force chains , may offer important insights that lie beyond the scope of conventional elastoplastic or rigid - plastic modelling strategies . the equivalence between our fragile models and limiting forms of extremely _ anisotropic _ elastoplasticity , has been pointed out .we are grateful to s. edwards , p. evesque , j. goddard , g. gudehus , j. jenkins , d. kolymbas , d. levine , s. nagel , s. savage , c. thornton , and t. witten for discussions .this research was funded in part by epsrc ( uk ) grants gr / k56223 and gr / k76733 .as an alternative to the ` spaceship model ' , one might envisage ( fig .[ fig : spaceship](b ) ) the creation of a pile by incremental addition of thin layers of elastoplastic material to its upper surface ( in imitation of an avalanche ) . 
it might then be argued that this thin layer , being under negligible stress , must be characterized by a zero displacement field ( savage 1998 ) . on a rough support , one would then expect the displacement at the base to remain zero as further additions to the pile are made , giving a zero displacement boundary condition at the base of what has , by now , presumably become a simple elastoplastic body .this reasoning is flawed : the same argument entails that , at any stage of the pile 's construction , the _last _ layer added is in a state of zero displacement , not just where it meets the base , but along its entire length .if so , then not only the base but also the free surface of the pile is subject to a zero displacement boundary condition . for a simple elastoplastic cone or wedge ,this is incompatible with the zero stress boundary condition already acting at the free surface .( such a body , in effect suspended under gravity from a fixed upper surface , will exert forces across that surface , as well as across the supporting base ) .the paradox is resolved by noticing that this ` laminated elastoplastic ' model in fact involves the addition of thin , stress - free elastoplastic layers to an already deformed body. the result will not be a simple elastoplastic continuum , but a body in which internal stresses and displacements are present even when all body forces are removed ( like a reinforced concrete pillar , or a tennis racket made of laminated wood ) fig.[fig : spaceship](b ) . such a body can , if carefully designed with a specific loading in mind , satisfy simultaneously a zero stress and zero displacement ( more properly , constant displacement ) boundary condition at any particular surface . these rather intriguing properties may well be worth investigating further , but they are still a long way from a realistic description of the construction history of a sandpile . in any case it is misleading to suggest ( savage 1998 ) that such considerations can justify the adoption of a zero displacement basal boundary condition within an ordinary ( _ i.e. _ , not pre - strained ) , isotropic elastoplastic continuum model .note first that a very large class of discrete models leads directly to osl models in the continuum limit .a simple example is defined in fig . [fig : trollope](a ) . as shown by bouchaud _ et al ._ ( 1995 ) , this model gives a wave equation with two characteristic rays symmetrically arranged about the vertical axis .if the symmetry in the stress propagation rules is broken , an asymmetric osl model arises instead ( fig . [fig : trollope](b ) ) .secondly , when the continuum limit of such force - transfer models is taken , one has ( in two dimensions ) _ only two characteristic rays _ even if the force transfer rules involve more than two neighbours in the layer below .an example ( claudin _ et al ._ 1998 ) is shown in fig .[ fig : trollope](c ) . broadly speaking, one recovers an osl model , in the continuum limit , whenever the forces passed from a grain to its downward ( or sideways ) neighbours obey a deterministic linear decomposition of the ` incident force ' , defined as the vector sum of the forces acting from grains in the layer above , plus the body force on the given grain .trollope 's model , whose force transfer rules are as shown in fig .[ fig : trollope](d - f ) , is not a member of this class .( indeed it has three characteristic rays in the continuum limit , rather than two .)
this is because _ the vector sum of the incident forces on a grain is not taken _ before applying a rule to determine the outgoing forces from that grain ; the latter depend _ separately _ on each of the incident forces . as a description of hard frictional grains, we consider this unphysical . for , if the grain in fig .[ fig : trollope](d ) is subjected to two equal small extra forces from its two neighbours in the layer above ( whose vector sum is vertical ) the net effect on the outgoing forces should be equivalent to a small increase in its weight . within trollope s model ,this is not the case . since its propagation rules are linear , any attempt to rectify this feature ( by taking the vector sum of the forces before propagating these on to the next layer ) will give an osl model instead .bouchaud , j. p. , claudin , p. , cates , m. e. & wittmer , j. p. 1998 , models of stress propagation in granular media . in _ proc .physics of dry granular media , cargese , france , october 1997 . _herrmann ) , nato advanced study institute , kluwer ( in press ) .cantelaube , f. , & goddard , j. d. 1997 elastoplastic arching in 2d heaps . in _ proc .3rd int . conf . on powders and grains , durhamnc , usa 18 - 23 may 1997 _ ( eds .r. p. behringer & j. t. jenkins ) , pp .231 - 234 .rotterdam : balkema .evesque , p. & boufellouh , s. 1997 stress distribution in an inclined pile : soil mechanics calculation using finite element technique . in _ proc .3rd int . conf . on powders and grains ,durham nc , usa 18 - 23 may 1997 _ ( eds .behringer & j. t. jenkins ) , pp .295 - 298 .rotterdam : balkema .evesque , p. on `` stress propagation and arching in static sandpiles '' by j. p. wittmer _ et al . _ : about the scaling hypothesis of the stress field in a conic sandpile. _ j. physique _ * i 7 * , 1305 - 1307 .gudehus , g. 1997 .attractors , percolation thresholds and phase limits of granular soils . in _ proc .3rd int . conf . on powders and grains ,durham nc , usa 18 - 23 may 1997 _ ( eds .r. p. behringer & j. t. jenkins ) , pp .169 - 183 .rotterdam : balkema .savage , s. b. 1997 problems in the statics and dynamics of granular materials .3rd int . conf .on powders and grains , durham nc 18 - 23 may 1997 _ , ( eds .r. p. behringer & j. t. jenkins ) , pp .185 - 194 .rotterdam : balkema .savage , s. b. 1998 modelling and granular material boundary value problems . in _ proc .physics of dry granular media , cargese , france , october 1997 . _herrmann ) , nato advanced study institute , kluwer , in press .trollope , d. h. 1968 the mechanics of discontinua or clastic mechanics in rock problems . in _ rock mechanics in engineering practice _( eds . k. g. stagg & o. c. zienkiewicz ) , ch .275 - 320 .new york : wiley .wittmer , j. p. , claudin , p. , cates , m. e. & bouchaud , j .- a new approach for stress propagation in sandpiles and silos . in _ friction , arching , contact dynamics _ ( eds .wolf & p. grassberger ) , p. 153 - 167 .singapore : world scientific .
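before leaving the appendix , a concrete ( and deliberately oversimplified ) numerical sketch may help ; it is our own illustration of a force - transfer rule of the ` deterministic linear decomposition ' type discussed above , with two fixed downward lattice directions at forty - five degrees , and it is not claimed to reproduce the exact rules of fig . [ fig : trollope ] . each grain decomposes the vector sum of its incident forces along the two directions and passes each component , unchanged , to the corresponding lower neighbour ; a point load applied at the apex then reaches the base along exactly two rays , and the base forces are computed row by row with no basal boundary data entering at all .

import math

def propagate_point_load(nrows):
    # sites (row, col) on a triangular lattice ; row r has r + 1 sites .
    # incoming[r][c] is the vector force (fx, fz) incident on that grain ,
    # with z pointing downwards .
    d_right = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))   # down-right ray
    d_left = (-1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))   # down-left ray
    incoming = [[(0.0, 0.0) for _ in range(r + 1)] for r in range(nrows)]
    incoming[0][0] = (0.0, 1.0)      # unit vertical point load at the apex
    for r in range(nrows - 1):
        for c, (fx, fz) in enumerate(incoming[r]):
            # deterministic linear decomposition of the incident force
            a = fx * d_right[0] + fz * d_right[1]    # component along d_right
            b = fx * d_left[0] + fz * d_left[1]      # component along d_left
            tx, tz = incoming[r + 1][c + 1]          # down-right neighbour
            incoming[r + 1][c + 1] = (tx + a * d_right[0], tz + a * d_right[1])
            tx, tz = incoming[r + 1][c]              # down-left neighbour
            incoming[r + 1][c] = (tx + b * d_left[0], tz + b * d_left[1])
    # vertical forces transmitted by the lowest row to the support
    return [round(fz, 6) for _, fz in incoming[-1]]

print(propagate_point_load(6))
# prints , up to floating point noise , [0.5, 0.0, 0.0, 0.0, 0.0, 0.5] :
# the load arrives along two characteristic rays , and nothing about the
# base was needed to compute it .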
|
the pressure distribution beneath a conical sandpile , created by pouring sand from a point source onto a rough rigid support , shows a pronounced minimum below the apex ( ` the dip ' ) . recent work of the authors has attempted to explain this phenomenon by invoking local rules for stress propagation that depend on the local geometry , and hence on the construction history , of the medium . we discuss the fundamental difference between such approaches , which lead to hyperbolic differential equations , and elastoplastic models , for which the equations are elliptic within any elastic zones present . in the hyperbolic case , the stress distribution at the base of a wedge or cone ( of given construction history ) , on a rough rigid support , is uniquely determined by the body forces and the boundary condition at the free ( upper ) surface . in simple elastoplastic treatments one must in addition specify , at the base of the pile , a displacement field ( or some equivalent data ) . this displacement field appears to be either ill - defined , or defined relative to a reference state whose physical existence is in doubt . insofar as their predictions depend on physical factors unknown and outside experimental control , such elastoplastic models predict that the observations should be intrinsically irreproducible . this view is not easily reconciled with the existing experimental data on conical sandpiles , which we briefly review . our hyperbolic models are based instead on a physical picture of the material , in which ( a ) the load is supported by a skeletal network of force chains ( ` stress paths ' ) whose geometry depends on construction history ; ( b ) this network is ` fragile ' or marginally stable , in a sense that we define . although perhaps oversimplified , these assumptions may lie closer to the true physics of poured cohesionless grains than do those of conventional elastoplasticity . we point out that our hyperbolic models can nonetheless be reconciled with elastoplastic ideas by taking the limit of an extremely anisotropic yield condition .
|
one of the main problems of quantum mechanics deals with the possibility of measuring together the position and momentum distributions and of a quantum system prepared in a state .the basic structures of quantum mechanics dictate that there is no ( joint ) measurement which would directly give both the position and momentum distributions and that , for instance , any determination of the position distribution necessarily disturbs the system such that the initial momentum distribution gets drastically changed . in recent years two important steps have been taken in solving this problem .first of all , the original ideas of heisenberg have finally been brought to a successful end with the seminal paper of werner which gives operationally feasible necessary and sufficient conditions for a measurement to serve as an approximate joint measurement of the position and momentum distributions , including also the inaccuracy - disturbance aspect of the problem . the second breakthrough in studying this question comes from a reconstruction of the state from a single informationally complete measurement , notably realized optically by an eight - port homodyne detection ( for a rigorous quantum mechanical treatment , see ) . in conjunction with an explicit state reconstruction formula ( known at least for the husimi - distribution ), this allows one to immediately determine the distributions of any given observables .if one is only interested in determining the position and momentum distributions and , it is obviously unnecessary to reconstruct the entire state ; one should be able to do this with less information . here we will use the statistical method of moments to achieve a scheme for position and momentum tomography , i.e. the reconstruction of the position and momentum distributions from the measured statistics .the price for using moments is , of course , that they do not exist for all states , and even when they do , they typically do not determine the distribution uniquely .hence , we restrict here to the states for which the position and momentum distributions are exponentially bounded .we note that this is an operational condition and can , in principle , be tested for a given moment sequence .we consider three different , though related , measurement schemes based on the von neumann model , sect. [ vnmodel ] , and the balanced homodyne detection technique , sect .[ homodyne ] .the first model is a sequential measurement of a standard position measurement of the von neumann type followed by any momentum measurement , sect .[ sequential ] .the second ( sect . [ akb ] ) builds on the arthurs - kelly model as developed further by busch , whereas the third ( sect . [ homodyne ] ) model uses the quantum optical realizations of position and momentum as the corresponding quadrature observables of a ( single mode ) signal field implemented by balanced homodyne detection .
in sect .[ simultaneous ] we apply the method of moments to determine both the position and momentum distributions and from the actually measured statistics .finally , we compare our method with the state reconstruction method , sect .there we also comment briefly on the possibility of inverting convolutions .we begin , however , by quoting the basic no - go results on the position - momentum joint / sequential measurements .there are many formulations of the basic fact that position and momentum of a quantum object can not be measured jointly , or , equivalently , that , say , any position measurement ` destroys ' all the information on the momentum prior to the measurement . in this section we recall one of the most striking formulations of this fact . to do that we first fix some notation .let be a complex separable hilbert space and the set of bounded operators on .let be a nonempty set and a -algebra of subsets of .the set function is a _semispectral measure _ , or normalized positive operator measure , pom , for short , if the set function is a probability measure for each , the set of unit vectors of .we denote this probability measure by .a semispectral measure is a spectral measure if it is projection valued , that is , all the operators , are projections . if is the hilbert space of a quantum system , then the observables of the system are represented by semispectral measures and the numbers , , , are the measurement outcome probabilities for in a vector state .an observable is called sharp if it is represented by a spectral measure .otherwise , we call it unsharp . here we consider only the cases where the measurement outcomes are real numbers , that is , is the real borel space , or , pairs of real numbers , in which case is .the position and momentum distributions and are just the probability measures and defined by and together with a density matrix ( mixed state ) .an observable has two marginal observables and defined by the conditions and for all .any measurement of constitutes a joint measurement of and . on the other hand ,any two observables and admit a joint measurement ( or equivalently a sequential joint measurement ) if there is an observable ( on the product value space ) such that and .the following result is crucial : [ apu3 ] let be a semispectral measure , such that one of the marginals is a spectral measure .then , for any , , that is , the marginals commute with each other , and , that is , is of the product form .assume that is an observable with , say , the first marginal observable being the position of the object .then and commute with each other , and due to the maximality of the position observable any is a function of .therefore , can not represent ( any nontrivial version of ) the momentum observable .similarly , if one of the marginal observables is the momentum observable , then the two marginal observables are pairwise commutative , and the effects of the other marginal observable are functions of the momentum observable .it is a basic result of the quantum theory of measurement that each observable ( sharp or unsharp ) admits a realization in terms of a measurement scheme , that is , each observable has a measurement dilation .
in particular, this is true for the position and momentum observables and .however , due to the continuity of these observables they do not admit any repeatable measurements .in fact , the known realistic models for position and momentum measurements serve only as their approximative measurements which constitute and -measurements only in some appropriate limits . herewe consider two such models , the standard von neumann model and the optical version of a , resp . , -measurement in terms of a balanced homodyne detection . before entering these modelswe briefly recall the notion of intrinsic noise of an observable and the corresponding characterization of noiseless measurements . for an observable moment operator is the ( weakly defined ) symmetric operator =\int_{\mathbb{r}}x^k\,de ] . in particular , the number \psi\rangle=\int_{\mathbb{r}}x^k\,dp^e_\psi ] , and it is known to be positive , that is , for all )\cap d(e[1]^2) ] of is selfadjoint , then is sharp exactly when is noiseless , that is , . in by a projection ] for all , and thus also , though the first moment ] of an observable alone is never sufficient to determine the actual observable . in statistical terms , the first moment information ( expectation ) \psi\rangle ] can all be computed , and they turn out to be polynomials of degree of , that is , )=d(q^k) ] , one has , suggesting , again , that , for a fixed , if is large , then the noise is small , or , for a fixed , if is small , then , again , would be small .but , again , the precise meaning of the limit in either of the cases or waits to be qualified .consider first the limit , so that , the operator measures are actually , with the moment operators ] for all ( and for all ) , and \psi\rangle = \langle\psi|q^k\psi\rangle\ ] ] for all and . due to the exponential boundedness of the hermite functions , the moments , , of the probability measure determine it uniquely .since is a dense subspace , the probability measures , , determine , by polarization , the spectral measure of . to conclude that on the basis of the statistical data ( [ datalimit ] ) , the observable would converge to , one needs to know that also is determined by its moment operators ] , .hence , by polarization , is determined by the numbers \psi\rangle ] denotes the restriction of the moment operator ] , which includes the set because of the above operator relations .hence , .but is symmetric and selfadjoint , so that .this would again suggests that in the limit , the intrinsic noise goes to zero and thus the measured observable would approach the quadrature observable . like in the previous case , sect .[ vnmodel ] , this limit requires further considerations .actually , the restrictions of all the moment operator ] . since is dense , these probability measures define again the whole operator measure .let now be a sequence of positive numbers converging to infinity .for this choice , let , where the phase is also fixed , and let be the corresponding balanced homodyne detection observable . by the above results it now follows that the spectral measure is the only moment limit of the sequence of observables . moreover , for any unit vector , for all whose boundary is of lebesgue measure zero . in this senseone can say that the high amplitude limit of the balanced homodyne detection scheme serves as an experimental implementation of a quadrature observable . 
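since all the measured marginal observables considered in this letter turn out to be of the smeared form \mu*\mathsf q ( see below ) , it may help to record , as a worked illustration of our own , what the convolution structure does to the moment operators . writing m_k=\int y^k\,d\mu(y) for the moments of the smearing measure , one has
\[
(\mu*\mathsf q)[k]=\sum_{i=0}^{k}\binom{k}{i}\,m_i\,q^{\,k-i},\qquad
(\mu*\mathsf q)[1]=q+m_1 ,\qquad (\mu*\mathsf q)[2]=q^2+2m_1q+m_2 ,
\]
so that the intrinsic noise is the constant operator
\[
n(\mu*\mathsf q)=(\mu*\mathsf q)[2]-(\mu*\mathsf q)[1]^2=\big(m_2-m_1^2\big)\,\mathbb 1=\mathrm{var}(\mu)\,\mathbb 1 ,
\]
which vanishes exactly when the smearing measure becomes sharply concentrated ; this is the content of the limits discussed above .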
again, one may solve the statistical moments from ( [ hdmoments ] ) for all .however , in this case they are not directly expressible in terms of actually measured moments \psi\rangle ] tends to infinity as increases . for example , the limit of the second moment is never finite since is always non - zero .that is , the limits of the moments of the probability measure are not moments of any determinate probability measure , and hence they do not determine any observable .another way to look at the limits of the marginal observables is to choose a sequence of initial probe states , such that approaches the delta distribution as increases .for example , choose the gaussian states in which case the explicit forms of the moment operators ] can easily be computed : & = & \sum^{k}_{i=0,\ i\textrm { even}}\binom{k}{i } \frac{\lambda^{-i}}{\sqrt{n^{i}\pi } } { \gamma}\left(\frac{i+1}{2 } \right ) q^{k - i } , \label{marginaalit21}\\ g^{\lambda , n}_{2}[k ] & = & \sum^{k}_{i=0,\ i\textrm { even}}\binom{k}{i } \lambda^{i}\sqrt{\frac{n^{i}}{\pi } } { \gamma}\left(\frac{i+1}{2 } \right ) p^{k - i},\label{marginaalit22}\end{aligned}\ ] ] where denotes the gamma function .taking the limit one gets a result similar to the one considered before ( ) .as expected , the limit procedures can not give both the and -distributions , but as it is obvious from ( [ marginaalit11]-[marginaalit12 ] ) and ( [ marginaalit21]-[marginaalit22 ] ) the method of moments can again be used .we return to that in sect .[ simultaneous ] .again , the convolution structure allows one to easily compute the distances between the marginals and the sharp position and momentum observables .one finds that showing , that the product of the distances does not depend on .since the distances are fourier - related , their product has a positive lower bound , that is , .for example , in the case of the gaussian initial states one has for all .the arthurs - kelly model as developed further by busch ( see also ) is based on the von neumann model of an approximate measurement .it consists of standard position and momentum measurements performed simultaneously on the object system .consider a measuring apparatus consisting of two probe systems , with associated hilbert spaces and .let be the initial state of the apparatus .the apparatus is coupled to the object system , originally in the state , by means of the coupling which changes the initial state of the object - apparatus system into .the final state has the position representation notice , that the coupling ( [ coupling ] ) is a slightly simplified version of the one used by arthurs and kelly .however , it does not change any of our conclusions .the measured covariant phase space observable is determined from the condition for all , and the marginal observables and turn out to be where and are the probability distributions related to the original single measurements , i.e. and , and we have used the scaled functions and . 
if we choose the initial state of the apparatus to be such that , the moment operators can be computed : & = & \sum_{n=0}^{k}\sum_{i=0}^{n } \binom{k}{n } \binom{n}{i } \lambda^{-(n - i)}(-\mu)^{i}\langle \phi_{1}\vert q^{n - i}_{1 } \phi_{1}\rangle \langle\phi_{2}\vert q^{i}_{2}\phi_{2}\rangle q^{k - n},\label{marginaalit31}\\ g_{2}[k ] & = & \sum_{n=0}^{k}\sum_{i=0}^{n } \binom{k}{n } \binom{n}{i } \mu^{-(n - i)}(-\lambda)^{i}\langle \phi_{2}\vert p^{n - i}_{2 } \phi_{2}\rangle \langle\phi_{1}\vert p^{i}_{1}\phi_{1}\rangle p^{k - n}.\label{marginaalit32}\end{aligned}\ ] ] it is clear from equations ( [ akmarginaali1 ] -[akmarginaali2 ] ) , that the - and -distributions can not be simultaneously obtained as limits of the marginals , since the distributions and can not both be arbitrarily sharply concentrated .however , equations ( [ marginaalit31]-[marginaalit32 ] ) show that the method of moments can be used .the eight - port homodyne detector consists of the setup shown in figure [ detector ] .the detector involves four modes and the associated hilbert spaces will be denoted by , , and .mode 1 corresponds to the signal field , the input state for mode 2 serves as a parameter which determines the observable to be measured , and mode 4 is the reference beam in a coherent state .the input for mode 3 is left empty , corresponding to the vacuum state .we fix a photon number basis for each , so that the annihilation operators , as well as the quadratures , , and the photon number operators are defined for each mode .the photon detectors are considered to be ideal , so that each detector measures the sharp photon number .the phase shifter is represented by the unitary operator , where is the shift .there are four 50 - 50-beam splitters , , , , each of which is defined by its acting in the coordinate representation : in the picture , the dashed line in each beam splitter indicates the input port of the `` primary mode '' , i.e. the mode associated with the first component of the tensor product in the description of equation ( [ split ] ) .the beam splitters are indexed so that the first index indicates the primary mode .let be the coherent input state for mode 4 .we detect the scaled number differences and , where , so that the joint detection statistics are described by the unique spectral measure extending the set function where the operator acts on the entire four - mode field .let and be the input states for mode 1 and 2 , respectively .then the state of the four - mode field after the combination of the beam splitters and the phase shiter is we regard , and as fixed parameters , while is the initial state of the object system , i.e. the signal field .the detection statistics then define an observable on the signal field via = { { \rm tr } } [ w_{\rho,\sigma , z,\xi } \mathsf d_1(x)\otimes\mathsf d_2(y)].\ ] ] this is the signal observable measured by the detector .let denote the covariant phase space observable generated by a positive trace one operator , that is , for all , where , , are the weyl operators associated with the position and momentum operators and .let denote the conjugation map , i.e. in the coordinate representation , and let be any sequence of positive numbers tending to infinity .it was shown in that the measured observable approaches with increasing the phase space observable generated by , that is , in the weak operator topology , for any such that the boudary has zero lebesque measure . 
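a rough heuristic ( ours , and dependent on sign and phase conventions for the beam splitters ) makes this high amplitude limit plausible . in each balanced branch the difference of the two photon numbers is of the form d^{\dagger}b+b^{\dagger}d , where d is the field entering that branch and b carries the strong reference beam ; replacing b by its coherent amplitude z=|z|e^{i\theta} gives
\[
\frac{n_{+}-n_{-}}{\sqrt2\,|z|}\;\approx\;\frac{d\,e^{-i\theta}+d^{\dagger}e^{i\theta}}{\sqrt2},
\]
that is , the scaled photocount difference approaches a quadrature of the field in that branch as |z|\to\infty . since the signal has first been mixed with the auxiliary mode , the two branches provide two commuting quadrature - like readings , and their joint statistics build up the smeared phase space observable whose limit was just quoted .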
in general , it is difficult to determine the domains of the moment operators of the covariant phase space observable . however , if the generating operator is such that and are hilbert - schmidt operators for all , then according to ( * ? ? ?* theorem 4 ) we have & = & \sum_{n=0}^{k } \binom{k}{n } ( -1)^{n } { { \rm tr } } [ \sigma q^{n}_{2 } ] q^{k - n}_{1},\label{marginaalit41}\\ g^{c\sigma c^{-1}}_{2}[k ] & = & \sum_{n=0}^{k } \binom{k}{n } ( -1)^{n } { { \rm tr } } [ \sigma p^{n}_{2 } ] p^{k - n}_{1}.\label{marginaalit42}\end{aligned}\ ] ]in the three different measurement models considered above , the actually measured observable is a covariant phase space observable for an appropriate generating operator .hence , the marginal observables and are convolutions of the sharp position and momentum observables with the fourier related probability densities and defined by , respectively . indeed , if is the spectral decomposition of , then and .due to this structure , the moment operators of the marginal observables and can be written in simple forms as polynomials of either or .that is , for any , \psi\rangle & = & \sum^{k}_{i=0 } s^{q}_{ki } \langle\psi\vert q^{k - i}\psi\rangle,\\ \langle \psi \vert g_{2}^t[k ] \psi\rangle & = & \sum^{k}_{i=0 } s^{p}_{ki } \langle\psi \vert p^{k - i}\psi\rangle,\end{aligned}\ ] ] where the coefficents and depend on the model in question and in each case . from these ,the recursion formulae for the moments of the position and momentum distributions and , with , of the object to be measured can be computed : \psi\rangle - \sum^{k}_{i=1 } s^{q}_{ki } \langle\psi\vert q^{k - i}\psi\rangle,\label{rekonstruktio1}\\ \langle \psi \vert p^{k } \psi\rangle & = & \langle \psi \vert g_{2}^t[k]\psi\rangle - \sum^{k}_{i=1 } s^{p}_{ki } \langle\psi \vert p^{k - i}\psi\rangle.\label{rekonstruktio2}\end{aligned}\ ] ] if is chosen to be , for example , a linear combination of hermite functions , the distributions and are exponentially bounded and as such , are uniquely determined by their respective moment sequences and . in this senseone is able to measure simultaneously the position and momentum observables and in such a vector state in any of the three single measurement schemes collecting the relevant marginal information .furthermore , since the linear combinations of hermite functions are dense in , their associated distributions and suffice to determine the whole position and momentum observables and as spectral measures .we have shown with three different measurement models that the statistical method of moments allows one to determine with a single measurement scheme both the position and momentum distributions and from the actually measured statistics for a large class of initial states . in each casethe actually measured observable is a covariant phase space observable whose generating operator depends on the used measurement scheme . such an observable is known to be informationally complete if the operator satisfies the condition \neq 0 ] , then , of course , one knows the distribution of any observable , in particular , the position and momentum distributions and .however , the reconstruction of the state from such a statistics is typically a highly difficult task , see e.g. . in the special case of the generating operator being the gaussian ( vacuum ) state , the distribution ] , with being the husimi q - function of the state . 
using the polar coordinates , the matrix elements of with respect to the number basis are where it is to be emphasized that the reconstruction of the state requires , however , full statistics of the observable .the marginal information , which is used in the method of moments , is clearly not enough to reconstruct the state even in the case where the position and momentum distributions are exponentially bounded . to illustrate this fact ,let us consider the functions , with , .the fourier transform of is and the position and momentum distributions are which are clearly exponentially bounded . for , we see that and are different states , but and .the marginal probabilities are for all , with , so the marginal distributions are equal .it follows that the state can not be uniquely determined from the marginal information only . since the marginal observables and are of the convolution form with densities , the position and momentum distributions can also be obtained if one is able to invert the convolutionindeed , for any initial state the marginal distributions and have the densities and , where , , with , and , .the unknown distributions and can be solved from the measured distributions and by using either the fourier inversion or the differential inversion method . like the method of moments , these methods have their own specific restrictions .in fact , by the fourier theory , one has , for instance , , so that , provided that is pointwise nonzero . if is an -function , then the function coincides with the distribution ( almost everywhere ) .obviously , this puts strong restrictions on the actually measured distribution as well as on the ` detector ' density .the method of differential inversion is known to be applicable whenever the detector densities and have finite moments . in the special case of so that and are the gaussian , one has provided that the right hand sides exist , which is a further condition on the initial state . to conclude , the statistical method of moments provides an operationally feasible method to measure with a single measurement scheme both the position and momentum distributions and for a large class of initial states , the relevant condition being the exponential boundedness of the involved distributions .this method requires neither the state reconstruction nor inverting convolutions .if is a projection in the range of , then commutes with any effect , ( see , for instance , ( * ? ? ?* th . 1.3.1 , p. 91 ) ) .therefore , the marginals and are mutually commutative , i.e. for all , and the map is a positive operator bimeasure , and extends uniquely to a semispectral measure , with for all ( see , e.g. , theorem 1.10 , p. 24, of ) .let . since and commute and one of them is a projection , we have , the greateslower bound of and , ( * ? ? ?* corollary 2.3 ) .since also is a lower bound for and , we obtain .it follows that for any , where is the algebra of all finite unions of mutually disjoint sets of the form , .denote .now is a monotone class .[ if is an increasing sequence of sets of , then for any , we have because e.g. is a positive measure .this shows that .similarly , we verify the corresponding statement involving decreasing sequences , and thereby conclude that is a monotone class . ]since , and is an algebra which generates the -algebra , it follows from the monotone class theorem that for all .let , and let be any unit vector .since and are probability measures , we get implying that . 
since was arbitrary , this implies .the proof is complete .s. t. ali , e. prugovečki , classical and quantum statistical mechanics in a common liouville space , _ physica _ * 89a * ( 1977 ) 501 - 521 .e. arthurs , j. kelly , on the simultaneous measurements of a pair of conjugate observables , _ bell system tech .j. _ * 44 * ( 1965 ) 725 . c. berg , j.p.r .christensen , p. ressel , _ harmonic analysis on semigroups _ , springer , berlin , 1984 .p. busch , _ unbestimmtheitsrelation und simultane messungen in der quantentheorie _ , ph.d .thesis , university of cologne , 1982 .english translation : indeterminacy relations and simultaneous measurements in quantum theory , _ int . j. theor . phys . _ * 24 * 63 - 92 ( 1985 ) .p. busch , m. grabowski , p. j. lahti , _ operational quantum physics _ , springer , berlin , 1995 .p. busch , j. kiukas , p. lahti , measuring position and momentum together , _ physics letters _ * a 372 * ( 2008 ) 4379 - 4380 .g. m. d'ariano , c. macchiavello , m. g. a. paris , detection of the density matrix through optical homodyne tomography without filtered back projection , _ phys .a _ * 50 * ( 1994 ) 4298 .dubin , j. kiukas , j.-p. pellonpää , private communication 2008 .g. freud , _ orthogonal polynomials _ , akadémiai kiadó , budapest , 1971 . w. heisenberg , über den anschaulichen inhalt der quantentheoretischen kinematik und mechanik , _ z. phys . _* 43 * ( 1927 ) 172 - 198 .i. i. hirschman , d. v. widder , _ the convolution transform _ , princeton university press , princeton , 1955 .r. g. hohlfeld , j. i. f. king , t. w. drueding , g. v. sandri , solution of convolution integral equations by the method of differential inversion , _ siam j. appl . math ._ , * 53 * ( 1993 ) 154 - 167 .a. s. holevo , covariant measurements and uncertainty relations , _ rep . math . phys . _ * 16 * ( 1979 ) 385 - 400 .j. kiukas , p. lahti , on the moment limit of quantum observables , with an application to the balanced homodyne detection , _ j. mod .optics _ * 55 * ( 2008 ) 1175 - 1198 .j. kiukas , p. lahti , a note on the measurement of phase space observables with an eight - port homodyne detector , _ optics _ ( 2007 ) .j. kiukas , p. lahti , k. ylinen , phase space quantization and the operator moment problem , _ j. math . phys . _ * 47 * ( 2006 ) 072104/18 .j. kiukas , p. lahti , k. ylinen , semispectral measures as convolutions and their moment operators , _ j. math . phys . _ * 49 * ( 2008 ) 112103/6 .j. kiukas , r. werner , private communication 2008 .p. lahti , j.-p. pellonpää , k. ylinen , two questions on quantum probability , phys .lett . a * 339 * ( 2005 ) 18 - 22 .u. leonhardt , h. paul , phase measurement and q function , _ phys .rev . a _ * 47 * ( 1993 ) r2460-r2463 .u. leonhardt , _ measuring the quantum state of light _ , cambridge university press , cambridge , 1997 .g. ludwig , _ foundations of quantum mechanics i _ , springer - verlag , berlin , 1983 .a. łuczak , _ instruments on von neumann algebras _ , institute of mathematics , łódź university , poland , 1986 .v. i. man'ko , g. marmo , a. simoni , f. ventriglia , a possible experimental check of the uncertainty relations by means of homodyne measuring photon quadrature , arxiv:0811.4115v1 .t. moreland , s. gudder , infima of hilbert space effects , _ linear algebra and its applications _ * 286 * 1 - 17 ( 1999 ) .m. ozawa , quantum measuring processes of continuous observables , _ j. math . phys . _ * 25 * ( 1984 ) 79 - 87 .m. paris , j. řeháček ( eds ) , _ quantum state estimation _ , lect . notes phys . * 649 * , springer - verlag , berlin , 2004 .m. g.
raymer , uncertainty principle for joint measurement of noncommuting variables , _ am . j. phys . _ * 62 * ( 1994 ) 986 - 993 .p. törmä , s. stenholm , i. jex , measurement and preparation using two probe modes , _ phys . rev . a _ * 52 * ( 1995 ) 4812 - 4822 .j. von neumann , _ mathematische grundlagen der quantenmechanik _ , springer - verlag , berlin , 1932 .r. werner , quantum harmonic analysis on phase space , _ j. math .phys . _ * 25 * ( 1984 ) 1404 - 1411 .r. werner , dilations of symmetric operators shifted by a unitary group , _ j. funct . anal . _ * 92 * ( 1990 ) 166 - 176 .r. werner , the uncertainty relation for joint measurement of position and momentum , _ quant . inf . comput . _ * 4 * ( 2004 ) 546 - 562 .
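as a small numerical companion to the recursion formulae ( [ rekonstruktio1 ] ) - ( [ rekonstruktio2 ] ) , the following sketch ( ours ; the function names and test data are invented , and the coefficients are taken in the plain convolution form s_{ki}=\binom{k}{i}m_i with m_i the known moments of the smearing density ) shows how the signal moments are recovered from the measured marginal moments .

from math import comb

def reconstruct_moments(measured, noise_moments):
    # measured[k] : k-th moment of the measured marginal distribution
    # noise_moments[i] : i-th moment of the known smearing density (m_0 = 1)
    # returns the reconstructed moments <q^k> of the signal distribution ,
    # assuming measured[k] = sum_i binom(k, i) * m_i * <q^(k - i)> .
    q = [1.0]
    for k in range(1, len(measured)):
        correction = sum(comb(k, i) * noise_moments[i] * q[k - i]
                         for i in range(1, k + 1))
        q.append(measured[k] - correction)
    return q

# self-check with a gaussian smearing of variance s2 and a 'signal'
# concentrated at x0 , so that the true signal moments are x0 ** k .
s2 = 0.25
m = [1.0, 0.0, s2, 0.0, 3.0 * s2 ** 2]
x0 = 1.3
g = [sum(comb(k, i) * m[i] * x0 ** (k - i) for i in range(k + 1))
     for k in range(5)]
print(reconstruct_moments(g, m))   # ~ [1.0, 1.3, 1.69, 2.197, 2.8561]

in an experiment the list of measured moments would of course be estimated from the recorded statistics , and the exponential boundedness assumption is what guarantees that the recovered moment sequence determines the distribution .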
|
we illustrate the use of the statistical method of moments for determining the position and momentum distributions of a quantum object from the statistics of a single measurement . the method is used for three different , though related , models : the sequential measurement model , the arthurs - kelly model and the eight - port homodyne detection model . in each case , the method of moments gives the position and momentum distributions for a large class of initial states , the relevant condition being the exponential boundedness of the distributions . pacs numbers : 03.65.-w , 03.67.-a
|
in 1971 manfred eigen published a seminal paper on the evolution of error - prone self - replicating macromolecules .his theory was expanded significantly later on , primarily in works of eigen , schuster and co - workers .one of the principal findings was the existence of the _ error threshold _ , i.e. , the critical mutation rate such that the equilibrium population of macromolecules ( the _ quasispecies _ in the terminology of eigen et al . ) can not provide conditions for evolution if the fidelity of copying falls below this critical level .this critical mutation rate depends on the length of macromolecules and hence puts limits on the amount of information that can be carried by a given macromolecule . to improve fidelityone needs longer sequences ( e.g. , a more efficient replicase ) , to have longer sequences one needs better fidelity , hence the chicken egg problem .an easy and obvious solution to this problem is that the early primordial genomes must have consisted of independently replicating entities , which , generally speaking , would compete with each other ( see , e.g. , and references therein ) . if we consider a simple mathematical description of independent competing replicators then the usual differential equations for the growth take the following form : where is the concentration of the -th type of macromolecules , is the rate of replicating , is the degree of auto - catalysis , and is the term which is necessary to keep the total concentration constant , this term depends only on and not on the index , in the present case ; easy to see that this is equivalent to the condition . in the case if we have system with non - linear growth rates , which model different coupling strength of the various components , for the discussion of such growth rates see , e.g. , .hereinbelow we consider mainly ( or even , ) but remark that gives the exponential growth , gives the standard hyperbolic growth ( autocatalysis ) , and for the parabolic growth occurs .it is straightforward to show that for only one replicator present at , the competition winds up in the _ competitive exclusion _ of all but one types , i.e. , the genome composed of independently replicating entities is not vital .helps to replicate another one , , macromolecule promotes the replication of closing the loop ; denote reaction rates.,scaledwidth=40.0% ] to resolve this situation eigen and schuster suggested a concept of the _ hypercycle _ , a group of self - replicating macromolecules that catalyze each other in a cyclic manner : the first type helps the second one , the second type helps the third , etc , and the last type helps the first one closing the loop ( see fig . [fig : intro:1 ] ) .an analogue to system can be written in the form where index coincides with , .for we obtain the standard hypercycle model .it is known that is _ permanent _ , i.e. , all the concentrations are separated from zero , and hence different replicators coexist in this model .more exactly , for short hypercycles , , the internal equilibrium is globally stable , for longer hypercycles , , a globally stable limit cycle appears .the problem with the hypercycle model is its vulnerability to the invasion of parasites .we remark that models and are systems of ordinary differential equation ( odes ) , i.e. , they are mean - field models . 
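as a quick numerical illustration of these local models , the sketch below integrates a hypercycle of length five in the standard replicator form dx_i/dt = x_i ( k_i x_{i-1} - f(x) ) , with f(x) = sum_j k_j x_j x_{j-1} chosen so that the total concentration is conserved ; this corresponds to the case p = 1 , and the rate constants , the length n = 5 and the random initial point on the simplex are arbitrary choices made only for illustration .

```python
import numpy as np
from scipy.integrate import solve_ivp

def hypercycle_rhs(t, x, k):
    # cyclic catalysis: type i is catalysed by type i-1 (indices taken mod n)
    x_prev = np.roll(x, 1)
    growth = k * x * x_prev          # k_i x_i x_{i-1}
    f = growth.sum()                 # flux term keeping sum(x) = 1 on the simplex
    return growth - x * f

n = 5
k = np.array([1.0, 0.8, 1.2, 0.9, 1.1])      # illustrative rate constants
x0 = np.random.dirichlet(np.ones(n))         # random start on the simplex
sol = solve_ivp(hypercycle_rhs, (0.0, 300.0), x0, args=(k,), max_step=0.1)

print("final state:", np.round(sol.y[:, -1], 3), " total:", sol.y[:, -1].sum())
```

for n = 5 the trajectories settle onto the limit cycle mentioned above , while replacing the cyclic coupling x_prev by x itself ( pure autocatalysis with p = 1 ) reproduces the competitive exclusion of all but one type .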
as a solution to the parasite invasion problem it was suggested that a heterogeneous population structure can strengthen the persistence of the system . one of the suggested solutions was spatially explicit models ; see also for reviews of the pertinent work . two major approaches to spatially explicit models are reaction - diffusion equations and cellular automata models , and both were considered in the cited works . what was lacking , however , was an analytical treatment of the resulting systems , because in both cases the researchers resorted to extensive numerical simulations . the only notable exception to our knowledge is , where some of the models with explicit space are analyzed analytically . an interest in cluster - like solutions of reaction - diffusion systems resulted in the analysis of the spatially explicit hypercycle in infinite space . note that models and are a special case of the general replicator equation , for which several approaches are known to incorporate an explicit spatial structure , although there is no universally accepted way of incorporating dispersal effects . the solution to the problem with equal diffusion rates is straightforward : in this case , following the ecological approach , we can simply add the laplace operator to the right - hand sides of or . this was used , e.g. , in the classical paper by fisher to model the effect of the spatial structure on the invasion properties of an advantageous gene ; this approach was later generalized by hadeler . however , for the primordial world , it would be too stringent an assumption to have all the diffusion coefficients equal . to overcome this difficulty , vickers et al . introduced a special form of population regulation to allow for different diffusion rates , now in the subject area of evolutionary game dynamics . in these works a nonlinear term is used that provides _ local _ regulation of the populations in question , although no particular biological mechanism is known that lets individuals adapt their per capita birth and death rates to local circumstances . in our view , it is more natural to assume _ global _ regulation of the populations , hence following the lines of thought that led to the models and . mathematically this means that we assume that the total populations satisfy the following condition where is now a spatial variable . this approach was first used in . what is important here is that this approach allows us to obtain some analytical insight into the systems . in this text our goal is to present an analytical treatment of the models of prebiotic macromolecules with self- and hypercyclic catalysis with an explicit spatial structure and global population regulation in the form of reaction - diffusion equations . let be a bounded domain , , , with a piecewise - smooth boundary . the spatially explicit analogue of is given by the following reaction - diffusion system here , , is the laplace operator in cartesian coordinates . the initial conditions are , ( although we note that in each particular case we shall specify admissible values of ) , and the form of will be determined later . a slight modification of gives the hypercyclic system where . in both problems and the functions are assumed to be nonnegative , since they represent _ relative _ concentrations of different macromolecules . it is natural to assume that we consider closed systems ( see also ) , i.e.
, we have the boundary conditions where is the normal vector to the boundary .it is assumed that the global regulation of the total concentration of macromolecules occurs in the system such that for any time moment .this condition is an analogous condition for the total concentration of replicators in the finite - dimensional case . from the boundary condition and the integral invariant the expressions for the functions and follow : and finally we have a mixed problem for a system of semilinear parabolic equations with the integral invariant and functionals and .suppose that for any fixed each function is differentiable with respect to variable , and belongs to the space as the function of for any fixed .here is the space of functions with the norm ^{\frac{1}{p+1}}+\left[\int_{\omega}\sum_{k=1}^m\left|\frac{\partial u}{\partial x_k}\right|^{2}\,dx\right]^{\frac{1}{{2}}}.\ ] ] note that if then , where is the sobolev space of square - integrable functions for which their first partial derivatives are also square - integrable . without loss of generalitywe shall assume further that volume of the domain is equal 1 : .our main goal is to analyze existence and stability of the steady state solutions to and .the steady state solutions are given by the solutions to the following elliptic problems : and with the boundary conditions on ; .the integral invariant now reads the values of and are constant : and if it is assumed that then the equilibrium points of and coincide with the steady state solutions to and .these solutions are spatially homogeneous .the converse is also true : the spatially homogeneous equilibria of systems and are fixed points of the dynamical systems and respectively .the coordinates of these spatially homogeneous solutions are straightforward to write down .let and consider the sum , where the index of summation is determined later .all spatially homogeneous solutions to are given by ending with the vertices ( unity at the -th place ) of the simplex ; for each steady state in obtained by summing through all non - zero elements in the vector .the spatially homogeneous stationary solution to is given by be a spatially homogeneous solution to system . in the usual way we assume that the cauchy data are perturbed here .inasmuch as we have then from it follows that consider the following eigenvalue problem the system of eigenfunctions of this problem , forms a complete system in the sobolev space such that where is the kronecker symbol .the corresponding eigenvalues satisfy the condition hence for we assume that can be represented as where are constant. denote the set of functions such that , where .[ th3:1 ] for all spatially homogeneous stationary solutions to are unstable with respect to any perturbation from the set if the solutions , are stable when here is the first non - zero eigenvalue of the problem .let be a vector - function belonging to for any fixed . using andwe can seek the solution to in the following form : substituting into and retaining in the usual way only linear terms with respect to we obtain the following equations : consider first the case .direct calculations show that . multiplying equations one after another by the functions and integrating with respect to weobtain the following system of ordinary differential equations : for one has therefore , as , which implies that is unstable . using the same approach it is straightforward to show that are also unstable .now we deal with . 
first note that from it follows that for we have and therefore , for , when .taking into account we obtain that . if and holds then , if holds then , which proves the theorem .if then spatially homogeneous stationary solution to system is unstable with respect to any perturbations from the set when as before we will look for a solution to in the form . after substituting into , multiplying by and integrating , we obtain the following system of ordinary differential equations for : applying the routh hurwitz criterion we obtain that the solutions to go to if holds , which implies instability of .[ [ remark-3.1 . ] ] remark 3.1 .+ + + + + + + + + + + inverse inequality to provides stability of only in the cases .actually , for we have that takes the form all eigenvalues can be easily evaluated because the corresponding matrix is circular : where is the -th root of the equation .the eigenvector does not satisfy , therefore we exclude it from the consideration .when all eigenvalues have negative real part , in the case also will be stable .for there is at least one eigenvalue with positive real part , which proves the claim that is unstable when .here we will prove that when the space is one dimensional , ,\,\delta=\partial_x ] .hence we rewrite and in the form each equation of system can be put in the following form : system is a hamiltonian system for any , in which is considered as a `` time '' variable , with the hamiltonian the phase orbits of can be found from the standard formula where correspond to the candidates for spatially non - homogeneous stationary solutions to , scaledwidth=70.0% ] from the form of the phase orbits ( see fig . [ f1 ] ) it immediately follows that there exist orbits that satisfy the condition these orbits represent closed curves surrounding the center point in fig .different diffusion coefficients correspond to the motion along the phase orbits with different velocities . to prove the theorem we need to show that there exist two values and such that =1 , for , and corresponding solutions to satisfy the first condition in . the solutions to system can be found in the explicit parametric form : ^\frac{1}{p}\tau,\quad \tau\geq 0,\\ x_i & = \sqrt{\frac{d_i}{\bar{f}_1}}\int_{\tau_0}^\tau \frac{dt}{\sqrt{p_i(t)}}+c_i^2,\quad p_i(t)=c_i^1+t^2-t^{p+2},\\ u_i(x_i ) & = -\left[\frac{p+2}{2a_i}\bar{f}_1\right]^\frac{1}{p}\tau,\quad \tau\leq 0,\\ x_i & = \sqrt{\frac{d_i}{\bar{f}_1}}\int_{\tau_0}^\tau \frac{dt}{\sqrt{q_i(t)}}+c_i^2,\quad q_i(t)=c_i^1+t^2+t^{p+2}. 
\end{split}\ ] ] to proceed we need the following lemma ( the proof is given in the appendix )[ l4:1 ] the equation has two real positive roots for all values ^{\frac{2}{p}+1},\,0\right).\ ] ] moreover , ^{\frac{2}{p}}\right),\quad \tau_i^2\in\left(\left[\frac{2}{2+p}\right]^{\frac{2}{p}},\,1\right).\ ] ] an analogous lemma holds for .now we return to the parametric representation .consider the first derivative of the functions : ^\frac{1}{p}\sqrt{\frac{d_i}{\bar{f}_1}}\sqrt{p_i(t)}.\ ] ] this expression vanishes at the points and .using for and , we obtain letting we obtain that .we will use the following notation : remark that these integrals will exist because the roots of are simple when satisfy .the formula establishes the connection between the values of the constant and values of the diffusion coefficient , which determines the velocity of motion of phase points .the latter implies that guaranteers that the motion from the initial point to the final point occurs during the unit time .now we are going to prove that the solution satisfies the first condition in : ^\frac{1}{p}\sum_{i=1}^n\left[\frac{\bar{f}_1}{a_i}\right]^{\frac{1}{p}}\sqrt{\frac{d_i}{\bar{f}_1}}\int_{\tau_i^1}^{\tau_i^2 } \frac{tdt}{\sqrt{p_i(t)}}=1.\ ] ] from it follows that ^\frac{1}{p}\sum_{i=1}^n\left[\frac{\bar{f}_1}{a_i}\right]^{\frac{1}{p}}\sqrt{\frac{d_i}{\bar{f}_1}}\int_{\tau_i^1}^{\tau_i^2 } \frac{(p+2)t^{p+1}/2-tdt}{\sqrt{p_i(t)}}=0.\ ] ] indeed , we have using to find and substituting this expression into we obtain ^{\frac{1}{p}}\sum_{i=1}^n\left[\frac{d_i}{a_i}\right]^\frac{1}{p}\left(i_i^1\right)^{\frac{2}{p}-1}\left(i_i^2\right)=1,\ ] ] where to conclude the proof we need the following lemma , the proof of which is given in the appendix [ l4:2 ] if then the following inequality holds : ^{\frac{1}{p}}i_i^2>\left[\frac{\pi^2}{p}\right]^\frac{1}{p}.\ ] ] applying the result of lemma [ l4:2 ] to we obtain that if holds then there exists a spatially non - uniform stationary solution to .[ [ remark-4.1 . ] ] remark 4.1 .+ + + + + + + + + + + from the symmetry of the system , the time needed to get from the point to the point is the same as the time needed to get from to , and the speed of movement is inversely proportional to .therefore , reducing these values twice we guaranteer that spatially non - uniform stationary solution exists , which corresponds to the full cycle in the phase plane ; reducing 4 times we obtain the solution which corresponds to the movement of the phase point along the cycle two times , and so on .hence system has non - uniform stationary solutions that correspond to movement along the cycles in fig .[ f1 ] arbitrary number of times ( see fig . [ f2 ] ) . , and the boundary conditions . a solution that corresponds to the movement along half the cycle in fig .[ f1 ] ; full cycle ; two full cycles ; four full cycles . changing and hence the velocity of the movement along the phase curveswe can always obtain solutions with arbitrary number of full cycles , scaledwidth=80.0% ] [ [ remark-4.2 . ] ] remark 4.2 .+ + + + + + + + + + + we introduce the following parameter theorem [ th4:1 ] can be restated as follows : if then there exists a spatially non - uniform stationary solution to . on the other hand , if we obtain from theorem [ th3:1 ] that are stable .therefore we can consider as a bifurcation parameter . 
as this parameter decreases spatially uniform stationary solutionsbecome unstable , and spatially non - uniform solutions appear in the system according to the standard turing bifurcation scenario .now we consider the case of the spatially explicit hypercycle .[ th4:2 ] suppose that holds .if the parameters of problem can be represented by one - parameter perturbation where is a small parameter , then there exist spatially non - uniform stationary solutions to system .system can be rewritten in the following form : if then we have that and which is a particular case of the autocatalytic system .according to theorem [ th4:1 ] system possesses spatially non - uniform stationary solutions . using the presentationit can be shown that the right hand side of is of the order of , i.e. , can be rewritten in the form where are bounded functions .this implies that system is a perturbation of the hamiltonian system .according to the general theory stable and unstable manifolds of the perturbed orbits will be close to the corresponding manifolds of the unperturbed system .therefore for for each non - uniform stationary solution of there exists spatially non - uniform stationary solution to .[ [ remark-4.3 . ] ] remark 4.3 .+ + + + + + + + + + + if we assume that the inverse to inequality holds , then it can be rewritten in the form ^\frac{1}{p}>\frac{p^\frac{1}{p}}{\pi^\frac{2}{p}}.\ ] ] indeed , we can rewrite inverse to in the form ^\frac{1}{pn}>\frac{p^\frac{1}{p}}{\beta\pi^\frac{2}{p}}.\ ] ] using the properties of arithmetic and geometric means we obtain ^\frac{1}{n}.\ ] ] from the previous it follows that ^\frac{1}{pn}\left[\prod_{i=1}^n\frac{1}{(a_i)^\frac{1}{p}}\right]^\frac{1}{n}=n\left[\prod_{i=1}^n\left[\frac{d_i}{a_i}\right]^\frac{1}{p}\right]^\frac{1}{n}>\frac{p^\frac{1}{p}}{\pi^\frac{2}{p}}.\ ] ] once again using the inequality between arithmetic and geometric means we obtain ^\frac{1}{p}\right]^\frac{1}{n}\leq \sum_{i=1}^n\left[\frac{d_i}{a_i}\right]^\frac{1}{p},\ ] ] which proves the desired result . in words , we showed in this remark that if the inverse to holds , then the inverse to is true , which means that if the spatially homogeneous solution to hypercycle system is stable there are no spatially non - homogeneous solutions .[ [ example-4.1 . ] ] example 4.1 .+ + + + + + + + + + + + it is possible to obtain an explicit solution to in the special case when , .first , we rewrite in the form the expression denotes the standard scalar product in , is the transformation .matrix is circular and has eigenvalues .consider the orthogonal transformation that reduces to its canonical form : summing all equations in the hypercyclic system we have where is the diffusion vector , , .let , .it follows that since , the last equation takes the form suppose that .then we have that and the function , satisfies the differential equation whose explicit solution can be found using .[ [ remark-4.4 . ] ] remark 4.4 .+ + + + + + + + + + + as in the case of system parameter can be considered as a bifurcation parameter for .consider the local system of autocatalytic reaction in the form for the following we need we shall say that the initial conditions for system and system are concerted if let us assume that the initial conditions for systems and are concerted . on integrating system with respect to and using the equality we obtain where since we have and , consequently , [ l5:1 ] let the initial conditions for systems and be concerted .then where . first , we prove the left inequality in . 
using and hlder s inequality we have ^p\sum_{i=1}^na_iw_i^{p+1}=\beta^pf_1^{loc}(t).\ ] ] to prove the right inequality in we assume that there exists that .since the functions and are continuous , there exists neighborhood from which follows .then from it follows that due to the fact that the initial conditions of and are concerted , then from the comparison theorem we obtain where are the solutions to . from the other hand we should have ; we obtain a contradiction .[ th5:1 ] let .then for almost all initial conditions there exists an index , _ _ ( _ _ which depends on _ _ ) _ _ such that for all in the space , and when .we have , and hence .the eigenfunctions of the problem form a complete system in .let us represent let be the solutions to and let the initial conditions for systems and be concerted .we will look for a solution to in the form inserting into we obtain integrating the last equation with respect to and noting give using the fact that are the solutions to we obtain it is known that solutions to have a property of multistability .it means that all the vertexes of the simplex are stable , and the choice of initial conditions determines to which vertex the system evolves . in other words , for almost all initial conditions the system ends up in , for which all the coordinates excluding are zero ( when for all , and ) .hence , from the theorem follows .[ [ remark-5.1 . ] ] remark 5.1 .+ + + + + + + + + + + theorem answers a natural question which spatially non - uniform stationary solution of survives in the evolutionary process . to answer itwe need to consider two systems and with concerted initial conditions .as was mentioned system possesses the property of multistability ; each vertex of the simplex has its own basin of attraction .if we denote these basins as , then the number of the basin , to which the initial conditions of belong , determines which spatially non - uniform solution will dominate the evolution . note that for the dominant solution another point here is that the explicit space structure in the system with global regulation does not provide the conditions for surviving more than one type of prebiotic replicators , in for all .almost all spatially non - uniform stationary solutions of the problem are unstable .consider with the initial conditions where are spatially non - uniform stationary solutions to , and . from theorem [ th5:1 ]it follows that there exists a positive integer ( which depends on the initial conditions ) such that in space for .therefore only one set of stationary solutions can be stable .[ [ remark-5.2 . ] ] remark 5.2 .+ + + + + + + + + + + it is possible to obtain sufficient conditions for stability of the non - uniform stationary solution .unfortunately , applying this condition requires additional serious analysis .indeed , we can look for a solution to in the form putting these solutions into and retaining only linear terms we obtain -a_jz_j(x , t)\langle u_j^{p+1}(x),1\rangle+d_i\delta z_j(x , t),\\ \partial_t z_i(x , t ) & = -a_iz_i(x , t)\langle u_i^{p+1}(x),1\rangle+d_i\delta z_i(x , t),\quad i\neq j , \end{split}\ ] ] with the initial conditions . 
here denotes the usual scalar product in .this implies that all for when .on the other hand we have \\ & { } -a_j\langle z_j(x , t),z_j(x ,t)\rangle \langle u_j^{p+1 } , 1\rangle+d_i\langle \delta z_j(x , t ) , z_j(x , t)\rangle .\end{split}\ ] ] substituting the following into and using the fact that we obtain that all the terms in except for the terms in the square brackets are negative .the terms in the square brackets have the following form from which we obtain a sufficient condition for stability of the solution in the form the last formula should be checked only for small because .the result of lemma [ l5:1 ] can be extended to the case of hypercycle reaction .[ l5:1n ] let the initial conditions of system and system be concerted .then we have and therefore since the initial conditions of and are concerted , then , as in the case of theorem we can represent as the sum , where are given by , and note that for any . from the last inequality it follows that since then , using we obtain . using the last lemma we can extend the results of _ permanence _ of hypercycle system with to the spatially explicit case remind that permanence means that solutions to system with the initial conditions do not vanish , i.e. , let and let the initial conditions of systems and be concerted , and then the solutions to system do not vanish in space .let a solution to vanish for some , i.e. , using the reasoning along the lines of theorem [ th5:1 ] , we obtain using lemma [ l5:1n ] we hence have the last and the cauchy inequalities yield from the fact it follows that either or tend to zero , which contradicts to the permanence of the hypercycle system .this completes the proof .similar to remark 5.2 we can obtain sufficient conditions for stability of the spatially nonhomogeneous stationary solutions for the hypercycle system .however , the utility of such conditions is questionable because we hardly can expect that we will be able to check these conditions analytically .it is possible to study the stability of spatially nonhomogeneous solutions in somewhat weaker sense .we shall say that spatially non - uniform stationary solution to system or is stable in the sense of the mean integral value if for any there exists such that for the initial conditions it follows that for any and , where , as before , are the solutions of or , it is clear that the stability in the mean integral sense is weaker than the stability in the usual sense ( lyapunov stability ) . for example , consider functions ] .since , the function reaches its maximum at the point , and this implies that {\prod_{i=1}^n\bar{w}_i}}\geq n^2\ ] ] which means that . invoking the arguments of the topological equivalence of and completes the proof .in this paper we studied the existence and stability of stationary solutions to autocatalytic and hypercyclic systems and with nonlinear growth rates and explicit spatial structure .it is well known that the mean field models ( e.g. , models described by ode systems ) are often show different behavior from the models where the spatial structure is taken into consideration ( more on this ) . 
in particular , it is widely acknowledged that the evolution and survival of altruistic traits can be mediated by spatial heterogeneity .macromolecules that catalyze the production of other macromolecules are obviously altruists , and in this note we tried to answer the question whether the particular form of spatial regulation ( namely , global regulation ) can promote the coexistence of different types of macromolecules in the prebiotic world ( within a hydrothermally formed system of continuous iron - sulfide compartments ) .the analysis presented in is significantly extended to the cases of nonlinear growth rates , arbitrary fitness and diffusion coefficients .the major conclusion is as follows : the mathematical models with spatial structure and global regulation show in general very similar qualitative features to those of local models .two basic properties , namely the competitive exclusion for autocatalytic systems and the permanence for the hypercyclic systems , are shown to hold for spatially explicit systems .numerical calculations illustrate these conclusions in figs .[ f3 ] and [ f4 ] ( the details on the numerical scheme used in the calculations are given in ) . .the initial conditions are .note that the orientation of the axis is different for and .only one type , , survives .the asymptotic state is a spatially non - uniform stationary solution .the details of the numerical computations are given in ] .the initial conditions are .the asymptotic state is spatially non - uniform stationary solutions .the details of the numerical computations are given in ] more precisely , for sufficiently large diffusion coefficients the spatially uniform stationary solutions to and have the same character as in the local models and .for such diffusion coefficients the asymptotic behavior of the local and distributed models coincides . if , on the other hand , the inequality holds and the nonlinear growth rates satisfy the condition then new , spatially non - uniform solutions appear ; for small diffusion coefficients these spatially heterogeneous solutions can correspond to the multiple cycles on the phase plane of the corresponding hamiltonian system ( fig .[ f2 ] ) . in the case of autocatalytic systemthese solution can be stable only if all but one asymptotic state are zero .in the case of the hypercyclic system we prove that these spatially heterogeneous solutions can be stable in the sense of the mean integral value .the examples of the asymptotic states for a hypercyclic systems found numerically are shown in fig .these non - uniform stationary solutions can be considered as the means of the hypercycle system to withstand the parasite invasion ( the analysis of models with parasites and with is the subject of the ongoing work ) . .note that case corresponds to the simulation shown in fig .consider the function .this function has two roots and , and attains its maximum at ^{\frac{2}{p}} ] .function can be obtained from by shifting the latter .therefore , when holds , has two positive roots that are situated in the interval . 
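returning for a moment to the numerical calculations mentioned in the conclusions ( figs . [ f3 ] and [ f4 ] ) , the sketch below gives a rough indication of how such a computation can be organised : it advances a distributed hypercycle of the assumed form du_i/dt = k_i u_i u_{i-1} - u_i f(t) + d_i u_i_xx , with the global regulation functional f(t) given by the integral of sum_i k_i u_i u_{i-1} over the domain and zero - flux boundaries , by explicit euler steps . the rates , diffusion coefficients , grid and time step are illustrative assumptions , and this is not the scheme actually used for the figures , which is only referenced in the text .

```python
import numpy as np

n_species, n_x = 5, 200
length, dt, n_steps = 1.0, 2.0e-4, 200_000
dx = length / n_x
k = np.array([1.0, 0.9, 1.1, 0.8, 1.2])      # illustrative rate constants
d = np.full(n_species, 1.0e-4)               # illustrative diffusion coefficients

rng = np.random.default_rng(0)
u = rng.uniform(0.5, 1.5, size=(n_species, n_x))
u /= u.sum() * dx                            # normalise the total integral to one

def laplacian(v, dx):
    # second difference with reflecting (zero-flux) ends
    out = np.empty_like(v)
    out[:, 1:-1] = (v[:, 2:] - 2.0 * v[:, 1:-1] + v[:, :-2]) / dx**2
    out[:, 0] = 2.0 * (v[:, 1] - v[:, 0]) / dx**2
    out[:, -1] = 2.0 * (v[:, -2] - v[:, -1]) / dx**2
    return out

for _ in range(n_steps):
    growth = k[:, None] * u * np.roll(u, 1, axis=0)   # k_i u_i u_{i-1}
    f_global = growth.sum() * dx                      # global regulation term
    u += dt * (growth - u * f_global + d[:, None] * laplacian(u, dx))
    u = np.clip(u, 0.0, None)

print("total mass:", u.sum() * dx)                    # should stay close to one
```

the total mass printed at the end checks that the global constraint is preserved by the scheme up to discretisation error .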
to simplify notations we drop indexes where it is possible .we need to prove that for and where ^\frac{1}{p},\quad p(\tau_1,c)=0,\quad p(\tau_2,c)=0,\quad p'_{\tau}(\tau_0,c)=0,\ ] ] we have that ^\frac{1}{p}\ ] ] for .function does not exceed its hermite interpolation polynomial , which is build using the values at .this follows from non - negativity of the reminder term of interpolation and the fact that when .therefore , we have where making the change of the variable in the integral we obtain where since the graph of any convex function lays above any tangent line , then we have for any . using the last inequality we can estimate as using the last estimate and returning to we obtain that {g(t_1)g(t_2)},\ ] ] where with the help of the taylor formula the denominator in can be presented in the following form : where belongs to the interval .if we denote , we obtain denominator of this fraction has its fist term positive and its second and third terms negative .indeed , we have , and , using the taylor formula around for both parts of this equality , we obtain where and . then since for any . which implies that from which follows that the second term is negative .using this fact we obtain which completes the proof .the authors are grateful to dr .semenov for the help with the proof of lemma [ l4:2 ] .the research of asn is supported by the department of health and human services intramural program ( nih , national library of medicine ) .r. ferriere and r. e. michod . .in u. dieckmann , r. law , and j. a. j. metz , editors , _ the geometry of ecological interactions : simplifying spatial complexity _ , pages 318339 . cambridge university press , 2000 .
|
analytical analysis of spatially extended autocatalytic and hypercyclic systems is presented . it is shown that spatially explicit systems in the form of reaction - diffusion equations with global regulation possess the same major qualitative features as the corresponding local models . in particular , using the introduced notion of the stability in the mean integral sense we prove the competitive exclusion principle for the autocatalytic system and the permanence for the hypercycle system . existence and stability of stationary solutions are studied . for some parameter values it is proved that stable spatially non - uniform solutions appear . [ [ keywords ] ] keywords : + + + + + + + + + autocatalytic system , hypercycle , reaction - diffusion , non - uniform stationary solutions , stability
|
in this paper , we consider data with the following mixture structure . let , , be independent and identically distributed ( i.i.d . ) copies of . for every , comes from one of the subpopulations with probability density functions ( pdfs ) . denote by the probability that is from the subpopulation and let . clearly and . in summary , the pdf of conditioning on is given by practically , is known , observable , or can be reliably estimated from other sources . that is , conditioning on , follows a mixture model with known mixing proportions . our main interest in this paper is to estimate nonparametrically . recently , data of the mixture structure in ( [ mix - model ] ) have been more and more frequently identified in the literature and in practice . acar and sun ( 2013 ) provided one example of such data . in genetic association studies of single nucleotide polymorphisms ( snps ) , the corresponding genotypes of snps are usually not deterministic ; in the resultant data , they are typically delivered as genotype probabilities from various genotype calling or imputation algorithms ( see for example li et al . 2009 and carvalho et al . 2010 ) . ma and wang ( 2012 ) summarized two types of genetic epidemiology studies in which mixture data of this kind are collected . these studies are kin - cohort studies ( wang et al . 2008 ) and quantitative trait locus studies ( lander and botstein 1989 ; wu et al . 2007 ) ; see also wang et al . ( 2012 ) and the references therein . section [ section - read - data ] also gives an example of such data in the malaria study . more examples and the corresponding statistical research can be found in ma et al . ( 2011 ) , qin et al . ( 2014 ) , and the references therein . for data with the mixture structure in ( [ mix - model ] ) , statistical methods for estimating the component cumulative distribution functions ( cdfs ) have been established in the literature . a comprehensive overview of these developments is as follows . ma and wang ( 2012 ) pointed out that the classic maximum empirical likelihood estimators of these component cdfs are either highly inefficient or inconsistent . they proposed a class of weighted least squares estimators . wang et al . ( 2012 ) and ma and wang ( 2014 ) proposed consistent and efficient nonparametric estimators based on estimating equations for the component cdfs when the data are censored . qin et al . ( 2014 ) considered another class of estimators for the component cdfs by maximizing the binomial likelihood . their method can be applied to data with either a censored or a non - censored structure . we observe that all these works focused on the estimation of cdfs and assumed to be a discrete random vector . the estimation of the pdfs is less addressed in the literature . as far as we are aware , to date ma et al . ( 2011 ) is the only existing reference that considered component density estimation under the setup of model ( [ mix - model ] ) . they proposed a family of kernel - based weighted least squares estimators for the component pdfs under the assumption that is continuous . however , to the best of our knowledge , there are two limitations in their approach : ( 1 ) the resultant estimates do not inherit the nonnegativity property of a regular density function ; as is well known , such a property is often important in many downstream density - based studies .
in that paper , though the authors discussed an em - like algorithm to achieve nonnegative component density estimates , the corresponding theoretical properties as well as the numerical performance of these estimates were not studied . ( 2 ) when dealing with some practical problems , this method does not make full use of the data and therefore the resultant density estimation may not be as efficient . we refer to the end of section [ section - sim ] for an example and further discussion . in this paper , we consider maximum smoothed likelihood ( eggermont and lariccia 2001 , chapter 4 ) estimators for , namely , which maximize a smoothed likelihood function and inherit all the important properties of pdfs . our method can handle data with s continuous or discrete . we also propose a majorization - minimization algorithm that computes these density estimates numerically . this algorithm incorporates ideas similar to those of levine et al . ( 2011 ) and the em - like algorithm ( hall et al . ) . we show that , for finite samples , starting from any initial value , this algorithm not only increases the smoothed likelihood function but also leads to estimates that maximize the smoothed likelihood function . another main contribution of this paper is to establish the asymptotic consistency and the corresponding convergence rate for our density estimates . because of the properties ( see section [ section-3 ] ) of the non - linear operator defined in section [ section-2 ] and the complicated form of the smoothed log - likelihood function , the development of the asymptotic theories for nonparametric density estimates under the framework of mixture models is technically challenging and still lacking in the literature . we solve this problem by employing advanced theories in empirical processes ( see van der vaart and wellner 1996 , kosorok 2008 , and the references therein ) . we expect that the technical tools established in this paper may benefit future studies of the asymptotic theories of nonparametric density estimates for mixture models of other kinds ; see for example levine et al . ( 2011 ) . the rest of the paper is organized as follows . section [ section-2 ] presents our proposed density estimates based on the smoothed likelihood principle . section [ section - mm - alg ] suggests a majorization - minimization algorithm to numerically compute these density estimates , and establishes the finite - sample convergence properties of this algorithm . section [ section-3 ] studies the asymptotic behaviors of our density estimators . section [ section - band - sel ] proposes a bandwidth selection procedure that is easily embedded into the majorization - minimization algorithm . section [ section - sim ] conducts simulation studies , which show that the proposed method is more efficient than existing methods in terms of integrated squared error . section [ section - read - data ] applies our method to a real data example . the technical details are relegated to the appendix . with the observed data from model ( [ mix - model ] ) , we propose a maximum smoothed likelihood method for estimating . we consider the set of functions furthermore , we assume that the s have a common support .
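as a concrete illustration of the data structure in ( [ mix - model ] ) - each observation carries its own known vector of mixing proportions - the following sketch generates such a sample . the two normal components , the dirichlet - distributed proportions and the sample size are assumptions made only for illustration and are not the designs used in section [ section - sim ] .

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 2                               # sample size and number of components

component_means = np.array([0.0, 3.0])       # hypothetical component pdfs: N(0,1), N(3,1)

# known, observation-specific mixing proportions (e.g. genotype probabilities)
x = rng.dirichlet(np.ones(m), size=n)        # each row sums to one

# latent component label drawn with probability x_ij, then the observed response
labels = np.array([rng.choice(m, p=row) for row in x])
y = rng.normal(component_means[labels], 1.0)
```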
given model ( [ mix - model ] ) and observations , the conditional log - likelihood can be written as however , as is well known , this log - likelihood function is unbounded in ; see page 25 in silverman ( 1986 ) and page 111 in eggermont and lariccia ( 2001 ) .therefore , the corresponding maximum likelihood estimates do not exist .this unboundedness problem can be solved by incorporating the smoothed likelihood approach ( eggermont and lariccia , 1995 , groeneboom et al .2010 , yu et al .2014 , and the references therein ) .specifically , we define the smoothed log - likelihood of to be where is the nonlinear smoothing operator for a density function , represented by here , is a kernel function supported on ] contains at least one observation , such that the corresponding .in this section , we propose an algorithm that numerically calculates with given bandwidths and study the finite - sample convergence property of this algorithm .the proposed algorithm , called the majorization - minimization algorithm , is in spirit similar to the majorization - minimization algorithm in levine et al .( 2011 ) . to facilitate our theoretical development ,we define the majorization - minimization updating operator on as follows . for any ,let where we first show that is capable of increasing the smoothed log - likelihood function in every step of updating .[ theorem-1 ] for every , we have .theorem [ theorem-1 ] immediately leads to our proposed majorization - minimization algorithm as follows .given initial values , for , we iteratively update from to as clearly , theorem [ theorem-1 ] above ensures that for every , we have . furthermore , since for any , belongs to the class of functions : therefore , for .next , we study the finite - sample convergence property of this majorization - minimization algorithm ; we observe that the technical development for such a convergence property is nontrivial .we first present a sufficient and necessary condition under which is a solution of the optimization problem ( [ def - f - hat ] ) .[ theorem-1-added-1 ] assume for every , .consider , then if and only if almost surely under the lebesgue measure .the following corollary is resulted from an immediately application of theorem [ theorem-1-added-1 ] ; the straightforward proof is omitted .[ corollary-1 ] assume for every , .let be a solution of the optimization problem ( [ def - f - hat ] ) , then almost surely under the lebesgue measure .corollary [ corollary-1 ] benefits our subsequent technical development of the asymptotic theories for in section [ section-3 ] .it indicates that the solution of ( [ def - f - hat ] ) is equivalent to the solution of as long as the stated condition for every is satisfied .this condition is quite reasonable since if for some then the subpopulation does not appear in the data and we can delete the corresponding from the mixture model ( [ mix - model ] ) . 
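before turning to the asymptotic analysis , it may help to see an update of this type written out . the sketch below follows the smoothed - likelihood em / mm construction of levine et al . ( 2011 ) , with the known proportions playing the role of the mixing weights and everything evaluated on a regular grid ; here x is the n - by - m array of known proportions , y the observations , grid the evaluation grid and h the vector of bandwidths . it is only a plausible rendering of the operator defined above - the exact normalisation in ( [ def - mathg ] ) may differ - and the helper names ( quartic_kernel , nonlinear_smooth , mm_step ) are our own choices .

```python
import numpy as np

def quartic_kernel(t):
    # biweight (quartic) kernel supported on [-1, 1]
    return np.where(np.abs(t) <= 1.0, 15.0 / 16.0 * (1.0 - t**2) ** 2, 0.0)

def nonlinear_smooth(log_f, grid, h):
    # N f(y) = exp( int K_h(y - u) log f(u) du ), evaluated on the grid
    du = grid[1] - grid[0]
    kmat = quartic_kernel((grid[:, None] - grid[None, :]) / h) / h
    return np.exp(kmat @ log_f * du)

def mm_step(f, y, x, grid, h):
    """one mm-type update of the component densities f (grid_size x m array)."""
    n, m = x.shape
    du = grid[1] - grid[0]
    # evaluate the smoothed component densities at the observations
    nf = np.column_stack([
        np.interp(y, grid, nonlinear_smooth(np.log(np.maximum(f[:, j], 1e-300)), grid, h[j]))
        for j in range(m)
    ])
    w = x * nf
    w /= w.sum(axis=1, keepdims=True)            # weights w_ij of the minorising surrogate
    f_new = np.empty_like(f)
    for j in range(m):
        kyj = quartic_kernel((grid[:, None] - y[None, :]) / h[j]) / h[j]
        g = kyj @ w[:, j]                        # weighted kernel estimate on the grid
        f_new[:, j] = g / (g.sum() * du)         # renormalise so each column is a density
    return f_new
```

starting from any strictly positive f on the grid , repeatedly applying mm_step until the change in the smoothed log - likelihood falls below a tolerance mirrors the iteration described above .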
therefore , developing the asymptotic theories for from ( [ def - f - hat ] ) is equivalent to developing those from ( [ def - f - hat-1 ] ) .based on theorem [ theorem-1-added-1 ] , we show the convergence of the updating sequence to its global maximum , which implies the convergence of the proposed majorization - minimization algorithm .[ theorem-1-added-2 ] assume .then , we have where is a solution of the optimization problem ( [ def - f - hat ] ) .based on theorem [ theorem-1-added-2 ] , if we do nt impose further conditions on the data , is not necessarily strictly concave .therefore , we can only show that the updating sequence converges to the maximum of .note that this does not guarantee the convergence of to , i.e. , the maximizer of , because such a maximizer may not be uniquely defined .instead , referring to the proof of this theorem , we have shown that there exists at least a subsequence of converging to a maximizer of . furthermore ,if we impose some technical condition such that is strictly concave , is then uniquely defined by ( [ def - f - hat ] ) .immediately , we can show for every .we refer to the discussion at the end of section 2 for a sufficient condition so that is strictly concave in .we end this section with the following remark about the proposed majorization - minimization algorithm above .( 2011 ) discussed an em - like algorithm in their discussion section to obtain nonnegative component density estimates .in particular , they suggested defining and using a similar way as ( [ def - mathg ] ) to update the resultant density estimates in their paper .yet , the corresponding theoretical properties as well as the numerical performance of these estimates are left unknown . as commented by levine et al .( 2011 ) , algorithms of this kind do not minimize / maximize any particular objective function ; this may impose difficulty in the subsequent technical development .we refer to levine et al .( 2011 ) for more discussion of such a method .in this section , we investigate the asymptotic behaviors of given in ( [ def - f - hat ] ) .first , we consider the consistency of under the hellinger distance , where the hellinger distance between two non - negative functions and is defined to be ^{1/2}.\end{aligned}\ ] ] where are functions defined on , is a measure on .[ consistency - theorem-1 ] assume conditions 13 . then for any , we have where is the marginal density of , is the conditional density of given , and , , denote the true values of .next we establish the asymptotic convergence rate for , under the -distance .the proof of this theorem heavily replies on the results given in theorem [ consistency - theorem-1 ] .[ theorem-3 ] assume conditions 14 in appendix b. for every and , we have last , we establish the convergence of . we observe that the results by theorems [ theorem-1-added-1 ] and [ theorem-3 ] play key roles in the proof .[ theorem-4 ] assume conditions 14 in appendix b. for any , we have for presentational continuity , we have organized the technical conditions and long proofs of theorems [ consistency - theorem-1][theorem-4 ] in appendix b. 
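for completeness , the hellinger distance appearing in the consistency result is easy to evaluate for estimates tabulated on a regular grid ; the following is a minimal sketch , assuming the common normalisation with the factor 1/2 ( the constant used in the definition above may differ ) .

```python
import numpy as np

def hellinger(f, g, dx):
    # d_H(f, g) = ( 0.5 * int (sqrt(f) - sqrt(g))^2 )^{1/2} on a regular grid of spacing dx
    return np.sqrt(0.5 * np.sum((np.sqrt(f) - np.sqrt(g)) ** 2) * dx)
```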
as observed in appendix b , the theoretical developments for these theorems are technically challenging .the main obstacles are due to the following undesirable properties of with being an arbitrary pdf .firstly , is neither a density nor necessarily sufficiently close to the corresponding .therefore , the well developed empirical process theories and technics for m - estimators in density estimation ( see for example section 3.4.1 in van der vaart and wellner 1996 ) is not directly applicable .secondly , introduces significant bias on the boundary of the support of .for example , if is supported on ] .that is when ] is the support for the kernel function .these two properties of significantly challenge our technical development .so far , we can only show the asymptotic behaviours of , , and as those given in theorems [ consistency - theorem-1 ] , [ theorem-3 ] and [ theorem-4 ] . the convergence rate given in theorems[ theorem-3 ] and [ theorem-4 ] may not be the optimal .there is some room to improve .however , because of these two properties of " , we conjecture that is the best rate achievable by under the assumption that s are supported on a compact support .the intuition is as follows .consider the extreme case that even though s are estimated ideally well , say , one can show that the best rate for is still bounded by .the maximum smoothed likelihood estimates depend on the choice of the bandwidths .we suggest an algorithm that embeds the selection of the bandwidth into the updating steps of the majorization - minimization algorithm suggested in section [ section - mm - alg ] .let be the positive integer closest to , which serves as an estimate of the average number of observations from the population .given initial values and initial bandwidths , for .we update and as follows . 1 . step 1 . for every and , let 2 .sort : .let . treating the observations in as from a single population, we apply available bandwidth selection method for classical kernel density estimate to choose .denote by the resultant bandwidth .let the philosophy of the above bandwidth selection step ( i.e. step 2 ) is as follows .in fact , collects the observations most likely coming from the population based on the preceding iteration .therefore , we use these observations to select the bandwidth for the corresponding density estimates in the current iteration .when implementing this algorithm in our numerical studies , we use the quartic kernel , which was also used by ma et al .( 2011 ) . the initial density is randomly chosen from , i.e. , the corresponding weights are randomly generated from the uniform distribution over [ 0,1 ] . in the bandwidth selection step ( i.e. step 2 ) , once is obtained , we use r function dpik ( ) to update the bandwidths , . dpik ( ) in the r package kernsmooth is implemented by wand and matt ( publicly available at http://cran.r-project.org/package=kernsmooth ) .this package is based on the kernel methods in wand and jones ( 1996 ) .furthermore , the initial bandwidths are set as for every , where is the output of dpik ( ) based on all the observations .we iterate steps 13 until the change of the smoothed likelihood is smaller than a tolerance value in each iteration . in our numerical studies, we observe that this algorithm converges fast .for example , consider the real data example in section [ section - read - data ]. 
setting the random seed set as 123456 " , the bandwidths do not change up to 6th decimal point in two iterations ; the change of is less than in another 59 iterations .we have also experimented with other random seeds .the results are very similar .in addition , the resultant estimates for are independent of the choice of .we use the following simulation examples to examine the numerical performance of our density estimates . we consider three studies " .studies i and ii adopt the same setup as those in ma et al .( 2011 ) so that we can compare the results by our method with those in that paper .study iii mimics the real data example given in section [ section - read - data ] . in the first study ( ), we generate data using two populations , i.e. , .both populations have a standard normal distribution , so that , where denotes the pdf of the standard normal distribution .we generate with . for every , we set with , where are generated independently from the uniform distribution over ] .next , we verify that has the following properties .* is a concave function in ] , leading to ( p1 ) .we proceed to show ( p2 ) .first , to verify that is continuously differentiable in and the existence of , it suffices to verify that for every and , du ] for some and } k(x ) > 0 ] and are twice continuously differentiable in with bounded second order derivatives .furthermore , .* : let be the support for .there exists vectors in satisfy ( i ) and ( ii ) below . * * the vectors are linearly independent . * * there exist balls , , are disjoint , and for every .+ condition 1 requires that the bandwidths have the same order .condition 2 requires that the kernel function is symmetric and is sufficiently smooth .condition 3 requires the component pdfs are sufficiently smooth and is positive on the support of .condition 4 is a identifiability condition , which is satisfied when is a continuous random vector , or a discrete random vector with at least supports .the proof of theorems [ consistency - theorem-1][theorem-4 ] heavily relies on the well developed results for the m - estimation in empirical process .we use van der vaart and wellner ( 1996 ) ( vm ) as the main reference and adapt the commonly used notation in this book . in this section, we introduce some necessary notation and review two important results .we first review some notation necessary for introducing the result for the m - estimation .let ( ) denote smaller ( greater ) than , up to a universal constant . throughout , we will use to denote a sufficiently large universal constant . for a set of functions of ,we define here means the expectation is taken under .this convention will be used throughout the proof .the hellinger distance between two non - negative functions and is defined to be ^{1/2}.\end{aligned}\ ] ] with the above preparation , we present an important lemma , which is an application of theorem 3.4.1 of van der vaart and wellner ( 1996 ) to our current setup .it serves the basis for our subsequent proof .[ lemma0.vm ] suppose the notation , , and are defined above , is the true conditional density of given , and is the marginal density of .if the following three conditions are satisfied : 1 . for every and , 2 . for every and , for functions such that is decreasing on for some ; 3 . , where and satisfies for every ; an difficult step in the application of the above lemma is to verify condition c2 .an useful technique is to establish a connection between and the bracketing integral of the class . 
for the convenience of presentation in next subsections , we introduce some necessary notation and review an important lemma .we first introduce the concept of bracketing numbers , which will be used to define the bracketing integral .consider a set of functions and the norm defined on the set .for any , the bracketing number }(\epsilon,\mathm,\|\cdot\|) ] is defined in ( [ bracket.integral ] ) .lemma [ entropy ] below gives the upper bound for }(\delta , \gamma\mathp_{n } , d) ] , where is an arbitrarily small constant .note that for any , when . in the following proof, we focus on the function class defined on . with condition 2 , we first check that for any arbitrary , we have for some universal constant . here ] . then for every , there exist a set of -brackets : i=1,\ldots , n_j\} ] , this indicates for every , }(\epsilon , \gamma\mathp_{n } , d ) \lesssim \log n_{[]}(\sqrt{m}\epsilon , \gamma\mathp_{n } , d ) \le \sum_{j=1}^m\log n_j \lesssim \sum_{j=1}^m \frac{\log h_j}{\epsilon^{1/a}h_j^{1 + 0.5/a}}.\end{aligned}\ ] ] this proves ( [ max - cond-3 ] ) .let , where for , \\ c_{h_j , j } f_{0,j}(x),&x\in[c_1 , c_2]\\ c_{h_j , j } f_{0,j}(c_1),&x\in[c_1-lh_j , c_1]\\ 0,&\mbox{otherwise } \end{array } \right.,\ ] ] where is a constant such that . in the proof ,we need the approximation of . note that by condition c3 , we have that for \mathcal{n} \alpha \alpha \alpha \alpha \alpha \alpha ] ; * for an arbitrary , }(\epsilon , \mathf_{c , j } , l_2(p_0 ) ) \lesssim \frac{\log h_j}{\epsilon^{1/a } h_j^{1 + 1/a}} ] .applying theorem 2.7.11 in vm , ( p1 ) immediately follows . we proceed to show ( p2 ) . using an exactly the same strategy as the proof of ( [ max - cond-3 - 2 ] ) in lemma [ entropy ], we can verify }(\epsilon , \mathp_{n , j } , l_2(p_0))\lesssim \frac{\log h_j}{\epsilon^{1/a } h_j^{1 + 1/a}}.\end{aligned}\ ] ] for notational convenience , we write }(\epsilon , \mathp_{n , j } , l_2(p_0)) ] that covers .we consider : \begin{array}{l}g_l(y,{\mbox{\boldmath } } ) = \frac{\alpha_j u_{i_j , j}}{p_{u } } ; \\ g_u(y,{\mbox{\boldmath } } ) = \frac{\alpha_j v_{i_j , j}}{p_{l } } ; \\p_{u } = \widetilde p_{u}i\{\widetilde p_{u } > c\ } + c i\{\widetilde p_{u } \le c\ } ; \ \p_{u } = \sum_{l=1}^m \alpha_{l } v_{i_l ,l};\\ p_{l } = \widetilde p_{l}i\{\widetilde p_{l } > c\ } + c i\{\widetilde p_{l } \le c\ } ; \ \\widetilde p_{l } = \sum_{l=1}^m \alpha_{l } u_{i_l , l } ; \\ \mbox{for every } i_l = 1,\ldots , n_j ; \quad \mbox{and } \quad l=1,\ldots , m \end{array } \right\},\end{aligned}\ ] ] which contains number of brackets .we verify that covers .in fact , for every since for every , covers , there exist , where for every , such that we need to calculate the sizes of the brackets in under . to this end, we consider an arbitrary \in \widetilde \mathb_j ] is a -bracket in under .this together with the facts that covers and contains number of brackets completes our proof for ( p2 ) in this lemma .last , we show ( p3 ) .let .it is straightforward to check that }\left(\epsilon , \mathf_{c , j , 0 } , l_2(p_0)\right ) \lesssim\frac{\log h_j}{\epsilon^{1/a } h_j^{1 + 1/a}}. \label{entropy - mathf - c - j-0}\end{aligned}\ ] ] on the other hand , let be an arbitrary function in and .let ] , }(\epsilon , \mathp_{n,0 } , l_2(p_0)) ] , }(\epsilon , \mathp_{n,0 } , l_2(p_0)) ] , }(\epsilon , \mathp_{n,0 } , l_2(p_0)) \alpha \alpha ] . 
in fact \nonumber \\ & = & \int_{\rr } \int_{{\mbox{\boldmath }}\in s_\gamma}i\{\widetilde p_0(y , { \mbox{\boldmath } } ) - \widehat p(y , { \mbox{\boldmath } } ) > 0\}\left\{\widetilde p_0(y , { \mbox{\boldmath } } ) - \widehat p(y , { \mbox{\boldmath }})\right\ } \gamma({\mbox{\boldmath } } ) \widetilde p_0(y , { \mbox{\boldmath } } ) d{\mbox{\boldmath }}dy \nonumber \\ & \le & \int_{\rr } \int_{{\mbox{\boldmath }}\in s_\gamma } |\widetilde p_0(y , { \mbox{\boldmath } } ) -\widehat p(y , { \mbox{\boldmath }})| \gamma({\mbox{\boldmath } } ) \widetilde p_0(y , { \mbox{\boldmath } } ) d{\mbox{\boldmath }}dy \nonumber\\ & \lesssim & d(\gamma \widehat p ,\gamma \widetilde p_0 ) .\label{i-1 - 1-proof-6}\end{aligned}\ ] ] now , we combine ( [ i-1 - 1-proof-1 ] ) , ( [ i-1 - 1-proof-2 ] ) , ( [ i-1 - 1-proof-5 ] ) , and ( [ i-1 - 1-proof-6 ] ) to conclude which together with ( [ i-1 - 3-proofed ] ) and ( [ hat - f - j - proof ] ) conclude with similar but easier procedures as above , we can verify we now prove theorem [ theorem-4 ] .recall the definition of and in ( [ hat - f - j - proof ] ) and their asymptotic properties we have presented in ( [ result - i-1 ] ) and ( [ result - i-2 ] ) .we have which together with theorem [ consistency - theorem-1 ] and the following easily checked result ( [ thm-4 - 12 ] ) based on conditions 2 and 3 completes our proof of this theorem by setting sufficiently large . groeneboom , p. , jongbloed , g. , and witte , b. i. ( 2010 ) .maximum smoothed likelihood estimation and smoothed maximum likelihood estimation in the current status model . _ the annals of statistics _, 38 , 352387 .kitua , a. y. , smith , t. , alonso , p. l. , masanja , h. , urassa , h. , menendez , c. , kimario , j. , and tanner , m. ( 1996 ) .plasmodium falciparum malaria in the first year of life in an area of intense and perennial transmission ._ tropical medicine and international health _ , * 1 * , 475 - 484 . qin , j. , garcia , t. p. , ma , y. , tang , m. , marder , k. , and wang , y. ( 2014 ) . combining isotonic regression and em algorithm to predict genetic risk under monotonicity constraint ._ annals of applied statistics , in press ._ wang y. , garcia , t. p. , and ma y. ( 2012 ) .nonparametric estimation for censored mixture data with application to the cooperative huntington s observational research trial ._ journal of the american statistical association _, 107 , 13241338 .vounatsou , p. , smith , t. , and smith , a. f. m. ( 1998 ) .bayesian analysis of two - component mixture distributions applied to estimating malaria attributable fractions . _ applied statistics _, * 47 * , 575 - 587 .wang , y. , clark , l.n . ,louis , e.d ., mejia - santana , h. , harris , j. , cote , l.j . ,waters , c. , andrews , d. , ford , b. , frucht , s. , fahn , s. , ottman , r. , rabinowitz , d. and marder , k. ( 2008 ) .risk of parkinson s disease in carriers of parkin mutations : estimation using the kin - cohort method . _archives of neurology _ , 65 , 467474 .
|
in this paper , we propose a maximum smoothed likelihood method to estimate the component density functions of mixture models , in which the mixing proportions are known and may differ among observations . the proposed estimates maximize a smoothed log likelihood function and inherit all the important properties of probability density functions . a majorization - minimization algorithm is suggested to compute the proposed estimates numerically . in theory , we show that starting from any initial value , this algorithm increases the smoothed likelihood function and further leads to estimates that maximize the smoothed likelihood function . this indicates the convergence of the algorithm . furthermore , we theoretically establish the asymptotic convergence rate of our proposed estimators . an adaptive procedure is suggested to choose the bandwidths in our estimation procedure . simulation studies show that the proposed method is more efficient than the existing method in terms of integrated squared errors . a real data example is further analyzed . * key words and phrases : * em - like algorithm ; empirical process ; m - estimators ; majorization - minimization algorithm ; mixture data ; smoothed likelihood function . * running title : * msl component density estimation in mixture models .
|
in this article , we shall study the numerical approximation to the following problem : given a fixed and a bounded set , find such that , for , with flow and competitive lotka - volterra functions given by where , for , the coefficients are non - negative constants , is a real constant , , and are non - negative functions .problem ( [ eq : pde])-([def : reaction ] ) is a generalization of the cross - diffusion model introduced by busenberg and travis and gurtin and pipkin to take into account the effect of over - crowding repulsion on the population dynamics , see for the modelling details . under the main condition for some positive constant ,it was proven in the existence of weak solutions for rather general conditions on the data problem .notice that ( [ h : def_pos ] ) implies the following ellipticity condition on the matrix : this condition allows us to , through a procedure of approximation , justify the use of as a test function in the weak formulation of ( [ eq : pde])-([eq : id ] ) .then we get , for the entropy functional the identity from ( [ h : def_pos ] ) and other minor assumptions one then obtains the entropy inequality providing the key estimate of and which allows to prove the existence of weak solutions .however , it was also proven in that condition ( [ h : def_pos ] ) is just a sufficient condition , and that solutions may exist for the case of semi - definite positive matrix . a particular important case captured by problem ( [ eq : pde])-([def : reaction ] )is the _ contact inhibition problem _ , arising in tumor modeling , see for instance chaplain et al . . in this case ,matrix is semi - definite positive , and the initial data , describing the spatial distribution of normal and tumor tissue , satisfy .this free boundary problem was mathematically analyzed by bertsch et al .for one and several spatial dimensions by using regular lagrangian flow techniques . in , a different approach based on viscosity perturbationswas used to prove the existence of solutions . in ,the lagrangian techniques of were generalized showing , in particular , the non - uniqueness in the construction of solutions by this method . in ,a conforming finite element method was used both for proving the existence of solutions of the viscosity approximations ( or for the case of satisfying ( [ h : def_pos ] ) ) , and for the numerical simulation of solutions . since in the case of positive semi - definite matrix solutionsmay develope discontinuities in finite time , the fem approximations exhibit instabilities in the neighborhood of these points . in this articlewe use a deterministic particle method to give an alternative for the numerical simulation of solutions . in our context ,deterministic particle methods were introduced , for the scalar linear diffusion equation , by degond and mustieles in .lions and mas - gallic , in , gave a rigorous justification of the method with a generalization to nonlinear diffusion .gambino et al . studied a particle approximation to a cross - diffusion problem closely related to ours .let us finally remark that cross - diffusion parabolic systems have been used to model a variety of phenomena since the seminal work of shigesada , kawaski and teramoto .these models ranges from ecology , to semiconductor theory or granular materials , among others . 
Global existence and regularity results have been provided for the evolution problem and for the steady state. Other interesting properties, such as pattern formation, have been studied in . Finally, the numerical discretization has received much attention, and several schemes have been proposed.

Consider a system of particles described by their masses and their trajectories $\mathbf{x}^i\colon[0,T]\to\Omega\subset\mathbb{R}^m$, satisfying the ODE system ([systemodes]) with initial data $\mathbf{x}^i(0)=\mathbf{x}^i_0$, where, for any and , we introduce the following generalized counterpart of ([def:ueps]): . Notice that, so defined, the function may lead to numerical instabilities when cancels; thus, to avoid divisions by zero, we introduce a parameter small enough and approximate by .

In order to approximate system ([systemodes]), we apply a time discretization based on an _implicit midpoint formula_. More precisely, we fix a constant time step and, given a particle approximation with the associated positions and weights, we proceed as follows:

1. Approximate the positions of the associated particles at time , using an implicit Euler rule:
2. Approximate the positions of the associated particles at time , using an explicit Euler rule:
3. Approximate , taking advantage of the approximate positions of the associated particles:

Notice that the first step above requires solving the nonlinear algebraic equation ([nonlinear-algeq]). We compute an approximation to this solution by applying a fixed-point algorithm (a schematic implementation is sketched below). It consists of the following steps:

1. Initialize .
2. For , given , compute .
3. Check the stopping criterion .

Here, is a tolerance parameter and is a finite set of points. In our experiments, we take , and given by a uniform grid of points of , with .

Notice that the initial condition in ([systemodes]) is given in terms of the initial locations of the particles, whereas that of the original problem, ([eq:id]), prescribes a value of their spatial distribution. In this sense, given an initial condition in , we must study how to initialize and in such a way that . On the one hand, to initialize the positions, we used a uniform grid of , that is, we simply took the set of points . On the other hand, to initialize the weights, one is tempted to impose ([icond_aprox]) exactly on the points of . However, in doing so, negative weights arise, spoiling the convergence of the method. In consequence, we compared two different strategies:

* define ;
* solve the following constrained linear least-squares problem: , where and .

Although the first approach is faster, it introduces too much diffusion, whereas the second provides a more accurate approximation while preserving stability. A similar strategy applies to the particle redistribution required after several time steps. Indeed, let us recall that particle redistribution is typically needed in particle simulations to prevent the particles from concentrating in some parts of the domain: if they do, overly large gaps between particles arise in other parts, producing numerical instabilities. We refer to for a review of this issue and of some alternative strategies that apply. Finally, the boundary conditions ([eq:bc]) are taken into account by means of a specular reflection whenever a particle location is changed.
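The implicit Euler update and the fixed-point solver described above fit in a few lines. The sketch below is schematic: the particle velocity field, which in the scheme above is assembled from the smoothed particle approximation of the densities, is abstracted as a callable `velocity`, and the tolerance and iteration cap are illustrative rather than the values used in the experiments.

```python
import numpy as np

def implicit_euler_step(x, velocity, dt, tol=1e-8, max_iter=100):
    """Advance the particle positions by one implicit Euler step,
        x_new = x + dt * velocity(x_new),
    solving the nonlinear equation by the fixed-point iteration described above.

    x        : (N, m) array of current particle positions
    velocity : callable mapping an (N, m) array of positions to particle velocities
    dt       : time step
    """
    x_new = x.copy()                                  # step 1: initialize the iteration
    for _ in range(max_iter):                         # step 2: fixed-point sweeps
        x_next = x + dt * velocity(x_new)
        if np.max(np.abs(x_next - x_new)) < tol:      # step 3: stopping criterion
            return x_next
        x_new = x_next
    return x_new                                      # last iterate if not converged
```

The non-negative initialization of the weights (the second, more stable strategy above) can be carried out with any non-negative least-squares solver, e.g. `scipy.optimize.nnls`, applied to the linear system that matches the particle approximation to the initial datum on the grid.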
Although this is the most common approach to non-flow boundary conditions, border effects are noticeable in simple discretization schemes.

For comparison purposes, we use the finite element approximation of ([eq:pde])-([eq:id]) introduced in , using, when needed, the regularization of the flows ([def:flow]) given by for . The approximation is obtained by using a semi-implicit Euler scheme in time and a continuous finite element approximation in space; see for the details, where, in particular, the convergence of the fully discrete approximation to the continuous solution is proved. We now sketch the FEM scheme. Let be the time step of the discretization. For , set . Then, for , the problem is to find , in the finite element space of piecewise -elements, such that for , ,
= \big(\alpha_{i} u^n_{\epsilon i} - \lambda_{\epsilon}(u^n_{\epsilon i}) ( \beta_{i1} \lambda_{\epsilon}(u^{n-1}_{\epsilon 1}) + \beta_{i2} \lambda_{\epsilon}(u^{n-1}_{\epsilon 2}) ), \chi \big)^h ,
for every . Here, stands for a discrete semi-inner product on . The parameter refers to the regularization introduced by some functions and , which converge to the identity as ; see .

Since this is a nonlinear algebraic problem, we use a fixed-point argument to approximate its solution, , at each time slice, from the previous approximation. Let . Then, for , the problem is to find such that, for and for all ,
= \big(\alpha_{i} u^{n,k}_{\epsilon i} - \lambda_{\epsilon}(u^{n,k-1}_{\epsilon i}) ( \beta_{i1} \lambda_{\epsilon}(u^{n-1}_{\epsilon 1}) + \beta_{i2} \lambda_{\epsilon}(u^{n-1}_{\epsilon 2}) ), \chi \big)^h .
We use the stopping criterion , for empirically chosen values of , and set .

_Experiment 1._ We consider a particular situation of the contact-inhibition problem in which an explicit solution may be computed in terms of a suitable combination of the Barenblatt explicit solution of the porous medium equation, the Heaviside function and the trajectory of the contact-inhibition point (a short evaluation routine for the Barenblatt profile is sketched after the references below). To be precise, we construct a solution to the problem with . Here, is the Heaviside function and is the Barenblatt solution of the porous medium equation corresponding to the initial datum . For simplicity, we consider problem - for such that , with , so that for all . The point is the initial contact-inhibition point, for which we assume , i.e. it belongs to the interior of the support of , implying that the initial mass of both populations is positive. Observe that satisfies the porous medium equation, implying regularity properties for this sum, among others the differentiability in the interior of its support. It can be shown that the functions , with , are a weak solution of problem - . We use the FEM scheme and the particle method to produce approximate solutions to problem ([eq:s1])-([def:us]) for and a resolution of (nodes or particles). In this experiment, the FEM scheme behaves well without the addition of the regularizing term in ([reg:flow]), i.e.
we take . The initial data is given by ([def:us]) with . We run the experiments until the final time is reached. We use a small time resolution in order to capture the discontinuity of the exact solution. As suggested in , the restriction must be imposed in order to ensure stability. We chose and . This high time resolution implies that the fixed-point iterations used to solve the nonlinearities are scarcely used. Particle spatial redistribution is not needed in this experiment either.

Although both algorithms produce similar results, i.e. a good approximation outside a small neighborhood of the discontinuity, they behave in different ways. On one hand, the particle method needs fewer particles to cover the discontinuity, see Figs. [exp1_1.fig] and [exp1_2.fig]. On the other hand, the particle method creates oscillating instabilities in a large region of the positive part of the solution, an effect which is not observed in the case of the FEM. In any case, the global errors are similar. In particular, the mean relative square error is of order .

_Experiment 2._ Other instances of the contact-inhibition problem are investigated. In Fig. [exp2_1.fig], we show approximate transient solutions obtained by the particle method (continuous line) and the finite element method (dotted line) for two problems given in the form with and . For the first problem (left panel of Fig. [exp2_1.fig]) we choose , and for the second problem (right panel of Fig. [exp2_1.fig]) we choose , implying that the matrix is positive semi-definite in both cases. Differently from Experiment 1, the sum does not satisfy more than a continuity regularity property for these problems. Indeed, a jump of the derivative may be observed at the contact-inhibition point. This effect is explained by the differences between the flows on the left and on the right of the contact-inhibition point. As can be seen in the figures, the particle and finite element methods provide a similar approximation; only at the contact-inhibition point may some differences be observed. The relative error is, as in Experiment 1, of order . The discretization parameters are and , with . The regularized flow ([reg:flow]) is used in the FEM scheme with . The experiments are run until the final time (left panel) and (right panel).

The contact-inhibition problem, i.e. equations ([eq:pde])-([def:reaction]) completed with initial data with disjoint supports, is an interesting problem from the mathematical point of view, due mainly to the possibility of its solutions developing discontinuities in finite time. Although there has been some recent progress in the analytical understanding of the problem, its numerical analysis is still open. In this paper we have presented, at some length, a particle method to produce approximations to the solutions of the problem. We have compared these solutions to exact solutions and to approximate solutions built through the finite element method. In general terms, the particle method is more demanding in computing time, and somewhat unstable with respect to the resolution parameters when compared to the FEM. However, if the resolution is high enough, it can better capture the discontinuities arising in the solution. Although we performed our experiments in a one-dimensional setting, particle methods are especially useful in higher dimensions, due to the ease of their implementation and parallelization. In future work we shall investigate these extensions.

G. Galiano, J. Velasco, Competing through altering the environment: a cross-diffusion population model coupled to transport-Darcy flow equations, Nonlinear Anal. Real World Appl. 12(5) (2011) 2826-2838.
J. A. Sherratt, Wavefront propagation in a competition equation with a new motility term modelling contact inhibition between cell populations, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. (2000) 2365-2386.
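For reference, the exact solution used in Experiment 1 above is assembled from the Barenblatt profile of the porous medium equation, combined with the Heaviside function and the trajectory of the contact-inhibition point as described in the text. The routine below evaluates the standard one-dimensional Barenblatt (ZKB) solution of $u_t=(u^m)_{xx}$; the particular constants and the initial mass used in the experiment are not reproduced here.

```python
import numpy as np

def barenblatt_1d(x, t, m=2.0, C=1.0):
    """Barenblatt (ZKB) self-similar solution of u_t = (u^m)_xx in one dimension:
        u(x, t) = t**(-a) * max(C - k * x**2 * t**(-2*a), 0) ** (1/(m-1)),
    with a = 1/(m+1) and k = a*(m-1)/(2*m)."""
    a = 1.0 / (m + 1.0)
    k = a * (m - 1.0) / (2.0 * m)
    profile = C - k * np.asarray(x, dtype=float) ** 2 * t ** (-2.0 * a)
    return t ** (-a) * np.maximum(profile, 0.0) ** (1.0 / (m - 1.0))
```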
|
we use a deterministic particle method to produce numerical approximations to the solutions of an evolution cross - diffusion problem for two populations . according to the values of the diffusion parameters related to the intra and inter - population repulsion intensities , the system may be classified in terms of an associated matrix . when the matrix is definite positive , the problem is well posed and the finite element approximation produces convergent approximations to the exact solution . a particularly important case arises when the matrix is only positive semi - definite and the initial data are segregated : the contact inhibition problem . in this case , the solutions may be discontinuous and hence the ( conforming ) finite element approximation may exhibit instabilities in the neighborhood of the discontinuity . in this article we deduce the particle method approximation to the general cross - diffusion problem and apply it to the contact inhibition problem . we then provide some numerical experiments comparing the results produced by the finite element and the particle method discretizations . * _ keywords : _ cross - diffusion system , contact inhibition problem , deterministic particle method , finite element method , numerical simulations . * _ ams : _ 35k55 , 35d30 , 92d25 .
|
the _ nearest neighbor _ problem ( ) is a fundamental geometric problem which has major applications in many areas such as databases , computer vision , pattern recognition , information retrieval , and many others . given a set of points in a -dimensional space , the goal is to build a data - structure , such that given a query point , the algorithm can report the closest point in to the query .a particularly interesting and well - studied instance is when the points live in a -dimensional real vector space . efficientexact and approximate algorithms are known for this problem .( in the _ approximate nearest neighbor _ ( ) problem , the algorithm is allowed to report a point whose distance to the query is larger by at most a factor of , than the real distance to the nearest point . )see , the surveys , and , and references therein ( of course , this list is by no means exhaustive ) .one of the state of the art algorithms for , based on locality sensitive hashing , finds the -with query time , and preprocessing / space in , where , where hides a constant that is polynomial in . for the norm , this improves to . despite the ubiquity of nearest neighbor methods ,the vast majority of current algorithms suffer from significant limitations when applied to data sets with corrupt , noisy , irrelevant or incomplete data .this is unfortunate since in the real world , rarely one can acquire data without some noise embedded in it .this could be because the data is based on real world measurements , which are inherently noisy , or the data describe complicated entities and properties that might be irrelevant for the task at hand . in this paper, we address this issue by formulating and solving a variant of the nearest neighbor problem that allows for some data coordinates to be arbitrarily corrupt . given a parameter ,the * _ -robust nearest neighbor _ * for a query point , is a point whose distance to the query point is minimized ignoring `` the optimal '' set of -coordinates ( the term ` robust ' is used as an analogy to _ robust pca _that is , the coordinates are chosen so that deleting these coordinates , from both and minimizes the distance between them . in other words ,the problem is to solve the problem in a different space ( which is definitely not a metric ) , where the distance between any two points is computed ignoring the worst coordinates . to the best of our knowledge ,this is the first paper considering this formulation of the robust problem .this problem has natural applications in various fields such as computer vision , information retrieval , etc . in these applications, the value of some of the coordinates ( either in the dataset points or the query point ) might be either corrupted , unknown , or simply irrelevant . in computer vision ,examples include image de - noising where some percent of the pixels are corrupted , or image retrieval under partial occlusion ( e.g. see ) , where some part of the query or the dataset image has been occluded. in these applications there exists a perfect match for the query after we ignore some dimensions . also , in medical data and recommender systems , due to incomplete data , not all the features ( coordinates ) are known for all the people / recommendations ( points ) , and moreover , the set of known values differ for each point .hence , the goal is to find the perfect match for the query ignoring some of those features . 
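To make the definition concrete, the following brute-force reference implementation computes the k-robust distance (the l_p distance after discarding the k coordinates with the largest disagreement) and the corresponding nearest neighbor by a linear scan. It only pins down the semantics of the problem; the goal of this paper is, of course, to answer such queries in sublinear time.

```python
import numpy as np

def k_robust_distance(q, x, k, p=2.0):
    """l_p distance between q and x after dropping the k coordinates whose
    disagreement |q_i - x_i| is largest (the 'optimal' set of k coordinates)."""
    diff = np.sort(np.abs(np.asarray(q, float) - np.asarray(x, float)))
    kept = diff[: max(diff.size - k, 0)]          # discard the k largest entries
    return float(np.sum(kept ** p) ** (1.0 / p))

def k_robust_nn_linear_scan(q, P, k, p=2.0):
    """Exact k-robust nearest neighbor of q among the rows of P (O(n d log d) time)."""
    dists = np.array([k_robust_distance(q, x, k, p) for x in P])
    best = int(np.argmin(dists))
    return best, float(dists[best])
```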
for the binary hypercube , under the hamming distance, the -robust nearest neighbor problem is equivalent to the _ near neighbor _ problem .the near neighbor problem is the decision version of the search , where a radius is given to the data structure in advance , and the goal is to report a point that is within distance of the query point .indeed , there exists a point within distance of the query point if and only if coordinates can be ignored such that the distance between and is zero .[ [ budgeted - version . ] ] budgeted version . + + + + + + + + + + + + + + + + + we also consider the weighted generalization of the problem where the amount of uncertainty varies for each feature . in this model ,each coordinate is assigned a weight in advance , which tries to capture the certainty level about the value of the coordinate ( indicates that the value of the coordinate is correct and indicates that it can not be trusted ) .the goal is to ignore a set of coordinates of total weight at most , and find a point , such that the distance of the query to the point is minimized ignoring the coordinates in .surprisingly , even computing the distance between two points under this measure is np - complete(it is almost an instance of ) .we present the following new results : reduction from robust to .we present a general reduction from the robust problem to the `` standard '' problem .this results in a bi - criterion constant factor approximation , with sublinear query time , for the -robust nearest neighbor problem . for -normthe result can be stated as follows .if there exists a point whose distance to the query point is at most by ignoring coordinates , the new algorithm would report a point whose distance to the query point is at most , ignoring coordinates .the query algorithm performs queries in -data structures , where is a prespecified parameter . in , we present the above result in the somewhat more general settings of the normthe algorithm reports a point whose distance is within after ignoring coordinates while performing of -queries .we modify the new algorithm to report a point whose distance to the query point is within by ignoring coordinates while performing queries ( specifically , - ) . for the sake of simplicity of exposition , we present this extension only in the norm .see for details . budgeted version . in, we generalize our algorithm for the weighted case of the problem .if there exists a point within distance of the query point by ignoring a set of coordinates of weight at most , then our algorithm would report a point whose distance to the query is at most by ignoring a set of coordinates of weight at most .again , for the sake of simplicity of exposition , we present this extension only in the norm . data sensitive lsh queries .it is a well known phenomenon in proximity search ( e.g. see andoni _et al . _* section 4.5 ) ) , that many data - structures perform dramatically better than their theoretical analysis . not only that , but also they find the real nearest neighbor early in the search process , and then spend quite a bit of time on proving that this point is indeed a good .it is natural then to ask whether one can modify proximity search data - structures to take an advantage of such a behavior .that is , if the query is easy , the data - structure should answer quickly ( and maybe even provide the exact nearest neighbor in such a case ) , but if the instance is hard , then the data - structure works significantly harder to answer the query . 
as an application of our sampling approach, we show a data - sensitive algorithm for for the binary hypercube case under the hamming distance .the new algorithm solves the approximate near neighbor problem , in time , where is the smallest value with where is the distance of the query to the , and is the distance being tested .we also get that such queries works quickly on low dimensional data , see for details .the new algorithms are clean and should be practical .moreover , our results for the -robust hold for a wide range of the parameter , from to .there has been a large body of research focused on adapting widely used methods for high - dimensional data processing to make them applicable to corrupt or irrelevant data .for example , robust pca is an adaptation of the pca algorithm that handles a limited amount of adversarial errors in the input matrix .although similar in spirit , those approaches follow a different technical development than the one in this paper . also , similar approaches to robustness has been used in theoretical works . in the work of indyk on sketching , the distance between two points and , is defined to be the median of .thus , it is required to compute the norm of their distance , but only over the smallest coordinates .finally , several generalizations of the problem have been considered in the literature .two related generalizations are the nearest -flat search and the approximate nearest -flat . in the former , the dataset is a set of -flats ( -dimensional affine subspace ) instead of simple points but the query is still a point ( see for example ) . in the latterhowever , the dataset consists of a set of points but the query is now a -flat ( see for example ) .we note that our problem can not be solved using these variations ( at least naively ) since the set of coordinates that are being ignored in our problem are not specified in advance and varies for each query .this would mean that different subspaces are to be considered for each point . in our settings, both and can be quite large , and the new data - structures have polynomial dependency in both parameters . [ [ data - sensitive- . ] ] data sensitive .+ + + + + + + + + + + + + + + + the fast query time for low dimensional data was demonstrated before for an scheme ( * ? ? ?* appendix a ) ( in our case , this is an easy consequence of our data - structure ) .similarly , optimizing the parameters of the construction to the hardness of the data / queries was done before ( * ? ? ?* section 4.3.1 ) our result however does this on the fly for the query , depending on the query itself , instead of doing this fine tuning of the whole data - structure in advance for all queries . 
by definition of the problem, we can not directly apply johnson - lindenstrauss lemma to reduce the dimensions ( in the norm case ) .intuitively , dimension reduction has the reverse effect of what we want it spreads the mass of a coordinate `` uniformly '' in the projection s coordinates thus contaminating all projected coordinates with noise from the `` bad '' coordinates .the basic idea of our approach is to generate a set of random projections , such that all of these random projections map far points to far points ( from the query point ) , and at least one of them projects a close point to a close point .thus , doing queries in each of these projected point sets , generates a set of candidate points , one of them is the desired robust .our basic approach is based on a simple sampling scheme , similar to the clarkson - shor technique and .the projection matrices we use are _ probing _ matrices .every row contains a single non - zero entry , thus every row copies one original coordinate , and potentially scales it up by some constant .[ [ a - sketch - of - the - technique ] ] a sketch of the technique : + + + + + + + + + + + + + + + + + + + + + + + + + + consider the case where we allow to drop coordinates .for a given query point , it has a robust nearest neighbor , such that there is a set of `` bad '' coordinates , such that the distance between and is minimum if we ignore the coordinates of ( and this is minimum among all such choices ) .we generate a projection matrix by picking the to be present with probability , where is some constant , for . clearly , the probability that such a projection matrix avoids picking the bad coordinates is . in particular ,if we repeat this process times , where is some constant , then the resulting projection avoids picking any bad coordinate with probability . on the other hand , imagine a `` bad '' point , such that one has to remove , say , coordinates before the distance of the point to the query is closer than the robust ( when ignoring only coordinates ) .furthermore , imagine the case where picking any of these coordinates is fatal the value in each one of these bad coordinates is so large , that choosing any of these bad coordinates results in this bad point being mapped to a far away point .then , the probability that the projection fails to select any of these bad coordinates is going to be roughly namely , somewhat informally , with decent probability all bad points get mapped to faraway points , and the near point gets mapped to a nearby point .thus , with probability roughly , doing a regular query on the projected points , would return the desired .as such , repeating this embedding times , and returning the best encountered would return the desired robust with high probability .[ [ the - good - the - bad - and - the - truncated . ] ] the good , the bad , and the truncated .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ultimately , our technique works by probing the coordinates , trying to detect the `` hidden '' mass of the distance of a bad point from the query .the mass of such a distance might be concentrated in few coordinates ( say , a point has coordinates with huge value in them , but all other coordinates are equal to the query point ) such a point is arguably still relatively good , since ignoring slightly more than the threshold coordinates results in a point that is pretty close by . 
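A minimal sketch of the random probing projections just described: every row of the (implicit) projection matrix copies a single coordinate, so a projection is fully determined by the set of sampled indices. Each coordinate is kept independently with probability roughly 1/(ck) for a small constant c, so that the k bad coordinates are all avoided with constant probability, while a point whose residual mass is spread over many coordinates is likely to be probed in at least one of them. The constant c below is illustrative; the exact values are fixed by the analysis.

```python
import numpy as np

def sample_probing_projection(d, k, c=2.0, rng=None):
    """Sample a probing projection for dimension d: return the indices of the
    coordinates that the projection copies (each kept with probability 1/(c*k))."""
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = 1.0 / (c * max(k, 1))
    return np.flatnonzero(rng.random(d) < keep_prob)

# Applying the projection to a point x (a 1-d array) is simply x[indices].
```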
on the other hand , a point where one has to ignore a large number of coordinates ( say ) before it becomes reasonably close to the query point is clearly bad in the sense of robustness . as such, our data - structure would classify points , where one has to ignore slightly more than coordinates to get a small distance , as being close . to capture this intuition ,we want to bound the influence a single coordinate has on the overall distance between the query and a point .to this end , if the robust nearest neighbor distance , to when ignoring coordinates , is , then we consider capping the contribution of every coordinate , in the distance computation , by a certain value , roughly , . under this _ truncation _ , our data - structure returns a point that is away from the query point , where is the distance to the -robust point .thus , our algorithm can be viewed as a bicriterion approximation algorithm - it returns a point where one might have to ignore slightly more coordinates than , but the resulting point is constant approximation to the nearest - neighbor when ignoring points .in particular , a point that is still bad after such an aggressive truncation , is amenable to the above random probing . by carefully analyzing the variance of the resulting projections for such points, we can prove that such points would be rejected by the data - structure on the projected points .[ [ budgeted - version.-1 ] ] budgeted version. + + + + + + + + + + + + + + + + + to solve the budgeted version of the problem we use a similar technique to importance sampling .if the weight of a coordinate is , then in the projection matrix , we sample the with probability and scale it by a factor of .this would ensure that the set of bad " coordinates are not sampled with probability .again we repeat it times to get the desired bounds . [[ data - sensitive-.-1 ] ] data sensitive .+ + + + + + + + + + + + + + + + the idea behind the data - sensitive , is that can be interpreted as an estimator of the local density of the point set . in particular , if the data set is sparse near the query point , not only the data - structure would hit the nearest - neighbor point quickly ( assuming we are working in the right resolution ) , but furthermore , the density estimation would tell us that this event happened . as such , we can do the regular exponential search start with an insensitive scheme ( that is fast ) , and go on to use more sensitive s , till the density estimation part tells us that we are done .of course , if all fails , the last data - structure used is essentially the old scheme .[ def : tail ] for a point , let be a permutation of , such that for a parameter , the * _ -tail _ * of is the point% = % \bigl ( 0 , \ldots , 0 , |\pntc_{\pi(i+1 ) } | , \allowbreak |\pntc_{\pi(i+2 ) } | , \ldots , |\pntc_{\pi(d ) } | \bigr ) . ] which is a contradiction . as such , all the non - zero coordinates of are present in , and we have that [ lemma : basic : exp : var ] let be a point in , and consider a random , see .we have that } } = { { { \tau}}}{\left\| { { { { \bm{x } } } } } \right\|_{{{{\rho}}}}}^{{{{\rho}}}} ] , where . by for , we have } } = { { { t}}}{{{\tau}}}{\left\| { \smash { { { { { { { \bm{x}}}}^{}_{\bbslash k } } } } } } \right\|_{{{{\rho}}}}}^{{{\rho}}}\leq { { { t}}}{{{\tau}}}{{{r}}}^{{{\rho}}} ] , which holds by markov s inequality .[ lemma : lp : heavy : far ] let . 
if is a -heavy point , then let and , and for all , let .by , with probability at least half , we have that in particular , let , and observe that thus , we have that now set and note that .now , by hoeffding s inequality , we have that } } & \leq { \mathop{\mathbf{pr}}{\mleft [ { \bigl .z \leq { { { { u } } } } } \mright ] } } \leq { \mathop{\mathbf{pr}}{\mleft [ { \bigl .{ \left| { z - \mu } \right| } \geq \mu - { { { { u } } } } } \mright ] } } \leq 2 \exp{\mleft ( { - \frac{2(\mu - { { { { u}}}})^2 } { { { { t}}}{\mleft ( { { { { \tau}}}{{{r}}}^{{{{\rho}}}}/ 2}\mright)}^2}}\mright ) } \\ & \leq 2 \exp{\mleft ( { - \frac{8 ( \mu/2)^2 } { { { { t}}}{{{\tau}}}^2 { { { r}}}^{2{{{\rho}}}}}}\mright ) } \leq 2 \exp{\mleft ( { - \frac{8 ( { { { t}}}{{{\tau}}}{{{r}}}^{{{{\rho}}}}/ 8)^{2 } } { { { { t}}}{{{\tau}}}^2 { { { r}}}^{2{{{\rho}}}}}}\mright ) } = 2 \exp{\mleft ( { - \frac { { { { \beta}}}\ln n } { 8 } } \mright ) } \leq \frac{2}{n^{{{{\beta}}}/8}}. \end{aligned}\ ] ] [ lemma : lp : heavy : tail ] let be a parameter .one can build the data - structure described in with the following guarantees . for a query point ,let be its -robust nearest neighbor in under the norm , and let .then , with high probability , the query algorithm returns a point , such that is a -light .the data - structure performs of -queries under -norm .we start with the painful tedium of binding the parameters . for the bad probability , bounded by , to be smaller than , we set . for the good probability of to be larger than , implies , thus requiring .namely , we set . finally , requires let and let . for a query point ,let be its -robust , and let be the set of largest coordinates in .let denote the event of sampling a projection that does not contain any of the coordinates of . by , with probability , the event happens for the data - structure , for any .as such , since the number of such data - structures built is we have that , by chernoff inequality , with high probability , that there are at least such data structures , say .consider such a data - structure .the idea is now to ignore the coordinates of all together , and in particular , for a point , let be the point where the coordinates of are removed ( as defined in ) . since by assumption , by , with probability at least half ,the distance of from is at most .since there are such data - structures , we know that , with high probability , in one of them , say , this holds . by , any point ( of ) , that is -heavy , would be in distance at least in the projection from the projected . since is a -data - structure under the norm , we conclude that no such point can be returned , because the distance from to in this data - structure is smaller than . note that since for the reported point , the point can not be -heavy , and that the coordinates in can contribute at most . we conclude that the point can not be -heavy .thus , the data - structure returns the desired point with high probability .as for the query performance , the data - structure performs queries in -data - structures .this lemma would translate to the following theorem using .[ theo : l : p ] let be a set of points with the underlying distance being the metric , and , , and be parameters .one can build a data - structure for answering the -robust queries on , with the following guarantees : preprocessing time / space is equal to the space / time needed to store data - structures for performing -queries under the metric , for a set of points in dimensions . 
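The reduction behind the theorem above can be summarized as follows: build a number of probed copies of the data set that grows with k and log n, hand each copy to a black-box ANN structure, and at query time return the best of the candidates reported by the copies, re-ranked by the robust distance. In the sketch below, `ann_factory` stands for any standard ANN implementation and is an assumption of the sketch (with `ann_factory=None` it falls back to an exact linear scan in the projected coordinates, for illustration only); the truncation of heavy coordinates and the exact constants of the construction are omitted.

```python
import numpy as np

def _robust_dist(q, x, k, p=2.0):
    diff = np.sort(np.abs(q - x))
    kept = diff[: max(diff.size - k, 0)]               # ignore the k worst coordinates
    return float(np.sum(kept ** p) ** (1.0 / p))

class RobustNNSketch:
    """Reduction from k-robust NN to standard ANN via random probing projections."""

    def __init__(self, P, k, c=2.0, ann_factory=None, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.P, self.k = np.asarray(P, float), k
        n, d = self.P.shape
        n_copies = int(np.ceil(c * k * np.log(max(n, 2)))) + 1   # illustrative count
        self.copies = []
        for _ in range(n_copies):
            idx = np.flatnonzero(rng.random(d) < 1.0 / (c * max(k, 1)))
            ann = ann_factory(self.P[:, idx]) if ann_factory is not None else None
            self.copies.append((idx, ann))

    def query(self, q):
        q = np.asarray(q, float)
        best, best_d = None, np.inf
        for idx, ann in self.copies:
            if ann is not None:
                cand = ann.query(q[idx])               # black-box ANN candidate
            else:                                      # illustration-only fallback
                cand = int(np.argmin(np.sum((self.P[:, idx] - q[idx]) ** 2, axis=1)))
            dist = _robust_dist(q, self.P[cand], self.k)   # re-rank candidates robustly
            if dist < best_d:
                best, best_d = cand, dist
        return best, best_d
```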
the query time is dominated by the time it takes to perform -queries in the data - structures . for a query point , the data - structure returns , with high probability , a point , such that if one ignores coordinates , then the distance between and is at most where is the distance of the nearest neighbor to when ignoring coordinates .( formally , is -light . )setting , the algorithm would report a point using -data - structures , such that if one ignores coordinates , the distance between and is at most .formally , is -light .[ sec : sec : budgeted ] in this section , we consider the budgeted version of the problem for -norm . here, a coordinate has a cost of ignoring it , and we have a budget of , of picking the coordinates to ignore ( note that since we can safely remove all coordinates of cost , we can assume that ) .formally , we have a vector of * _ costs _ * , where the , , is the cost of ignoring this coordinate .intuitively , the cost of a coordinate shows how much we are certain that the value of the coordinate is correct .the set of * _ admissible projections _ * , is given two points , their * _ admissible distance _ * is where we interpret as a projection ( see ) . the problem is to find for a query point and a set of points , both in , the _ robust nearest - neighbor distance _ to ; that is , the point in realizing this distance is the * _ robust nearest - neighbor _ * to , denoted by the unweighted version can be interpreted as solving the problem for the case where all the coordinates have uniform cost .[ def : good : bad ] if is the nearest - neighbor to under the above measure , then the set of _ good _ coordinates is and the set of * _ bad _ * coordinates is in what follows , we modify the algorithm for the unweighted case and analyze its performance for the budgeted case .interestingly , the problem is significantly harder . for two points , computingtheir distance is a special instance of .the problem is np - hard(which is well known ) , as testified by the following lemma .[ lemma:2:points ] given two points , and a cost vector , computing is np - complete , where is the set of admissible projections for ( see ) .this is well known , and we provide the proof for the sake of completeness .consider an instance of with integer numbers .let , and consider the point , and set the cost vector to be .observe that .in particular , there is a point in robust distance at most from the origin , with the total cost of the omitted coordinates being the given instance of has a solution .indeed , consider the set of coordinates realizing .let , and observe that the cost of the omitted coordinates is at most ( by the definition of the admissible set ) .in particular , we have and . 
as such, the minimum possible value of is , and if it is , then , and and realize the desired partition .adapting the standard for subset - sum for this problem , readily gives the following .[ lemma : i - aamkp ] given points , and a cost vector ^d ] , consider generating a sequence of integers , by picking the number , into the sequence , with probability .we interpret this sequence , as in , as a projection , except that we further scale the , by a factor of , for .namely , we project the with probability , and if so , we scale it up by a factor of , for ( naturally , coordinates with would never be picked , and thus would never be scaled ) .let denote this distribution of weighted sequences ( maybe a more natural interpolation is that this is a distribution of projection matrices ) .[ observation : norm:1 ] let ^d ] , and }} ] be the cost of the coordinates , and let be a parameter .one can build a data - structure , such that given a query point , it can report a robust approximate nearest - neighbor under the costs of . formally , if is the robust nearest - neighbor ( see ) when one is allowed to drop coordinates of total cost , and its distance to this point is ( see ) , then the data - structure returns a point , such that is -light ( see ) .the data - structure has the following guarantees : the preprocessing time and space is , where is the preprocessing time and space needed to build a single data - structure for answering ( standard ) -queries in the -norm for points in dimensions .the query time is , where is the query time of answering -queries in the above data - structures .the proof is similar to in the unweighted case .we set , , and . by the same arguments as the unweighted case , and using , , and markov s inequality , with high probability ,there exists a data - structure that does not sample any of the bad coordinates , and that . by and union bound , for all the points such that ( see ) is -heavy, we have .thus by no such point would be retrieved by .note that since for the reported point , we have that is -light , and that the point is -light .using implies an additional blowup in the computed distance , implying the result . under the conditions and notations of , for the query point and its returned point , there exists a set of coordinates of cost at most ( i.e. , such that .that is , we can remove a set of coordinates of cost at most such that the distance of the reported point from the query is at most .let and by is -light for some constant .let and let be the set of coordinates being truncated ( i.e. , all such that ) .clearly , the weight of the coordinates not being truncated is at most .also for the coordinates in the set , we have that . therefore , assuming that , and noting that .[ sec : d : s : l ] given a set of points and a radius parameter , in the approximate near neighbor problem , one has to build a data - structure , such that for any given query point , if there exists a point in that is within distance of , it reports a point from which is within distance of the query point .in what follows we present a data structure based on our sampling technique whose performance depends on the relative distances of the query from all the points in the data - set .[ [ input.-1 ] ] input .+ + + + + + the input is a set of points in the hamming space , a radius parameter , and an approximation parameter . in the spirit of, one can generate the projections in this case directly . 
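Two small sketches for the budgeted variant treated in this section (the Hamming-space construction announced above is spelled out next). The first samples an importance-weighted probing projection: a coordinate of cost c_i is kept with probability proportional to c_i/beta, capped at one, and a kept coordinate is rescaled by the inverse of that probability so that the sampled l_1 mass is unbiased over the coordinates of positive cost; the proportionality constant is illustrative, and the paper fixes the exact constants in its analysis. The second computes the exact budgeted robust l_1 distance between two points by the subset-sum style dynamic program alluded to above, assuming integer costs (pseudo-polynomial time).

```python
import numpy as np

def sample_budgeted_projection(costs, beta, c=2.0, rng=None):
    """Importance-sampled probing projection for per-coordinate costs in [0, 1] and
    ignore-budget beta.  Returns (indices, scales); projecting x is scales * x[indices].
    Cheap-to-ignore coordinates are rarely probed, so a 'bad' set of total cost <= beta
    is avoided with constant probability."""
    rng = np.random.default_rng() if rng is None else rng
    costs = np.asarray(costs, dtype=float)
    p = np.minimum(1.0, costs / (c * beta))
    idx = np.flatnonzero(rng.random(costs.size) < p)
    return idx, 1.0 / p[idx]

def budgeted_robust_l1(q, x, costs, budget):
    """Exact budgeted robust l1 distance between two points, for integer costs:
    a 0/1-knapsack DP chooses the coordinates to drop (total cost <= budget) so as
    to remove as much |q_i - x_i| mass as possible."""
    gains = [abs(a - b) for a, b in zip(q, x)]
    dp = [0.0] * (budget + 1)                 # dp[w] = best removable mass with cost <= w
    for g, cst in zip(gains, costs):
        if cst == 0:
            dp = [v + g for v in dp]          # free coordinates can always be dropped
        elif cst <= budget:
            for w in range(budget, cst - 1, -1):
                dp[w] = max(dp[w], dp[w - cst] + g)
    return sum(gains) - dp[budget]
```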
specifically , for any value of , consider a random projection two points * _ collide _ * under , if , for .since we care only about collisions ( and not distances ) for the projected points , we only care what subset of the coordinates are being copied by this projection .that is , we can interpret this projection as being the projection , which can be sampled directly from , where .as such , computing and storing it takes time .furthermore , for a point , computing takes time , for any projection .[ [ preprocessing.-2 ] ] preprocessing .+ + + + + + + + + + + + + + for , let where is a sufficiently large constant . here, is the _ collision probability function _ of two points at distance under projection , and is the number times one has to repeat an experiment with success probability till it succeeds with high probability . let . for ,compute a set of projections .for each projection , we compute the set and store it in a hash table dedicated to the projection . thus , given a query point ,the set of points _ colliding _ with is the set stored as a linked list , with a single entry in the hash table of .given , one can extract , using the hash table , in time , the list representing .more importantly , in time , one can retrieve the size of this list ; that is , the number . for any ,let denote the constructed data - structure .[ [ query.-1 ] ] query .+ + + + + + given a query point , the algorithm starts with , and computes , the number of points colliding with it in .formally , this is the number if , the algorithm increases , and continues to the next iteration , where is any constant strictly larger than .otherwise , and the algorithm extracts from the hash tables ( for the projections of ) the lists of these points , scans them , and returns the closest point encountered in these lists . the only remaining situation is when the algorithm had reached the last data - structure for without success .the algorithm then extracts the collision lists as before , and it scans the lists , stopping as soon as a point of distance had been encountered . in this case, the scanning has to be somewhat more careful the algorithm breaks the set of projections of into blocks , each containing projections , see .the algorithm computes the total size of the collision lists for each block , separately , and sort the blocks in increasing order by the number of their collisions .the algorithm now scans the collision lists of the blocks in this order , with the same stopping condition as above .there are various modifications one can do to the above algorithm to improve its performance in practice .for example , when the algorithm retrieves the length of a collision list , it can also retrieve some random element in this list , and compute its distance to , and if this distance is smaller than , the algorithm can terminate and return this point as the desired approximate near - neighbor . however , the advantage of the variant presented above , is that there are many scenarios where it would return the _ exact _ nearest - neighbor .see below for details . the expected number of collisions with the query point , for a single , is } } = \sum_{{{{\bm{x}}}}\in { \ensuremath{{{p}}}\xspace } } { { { \mathsf{f}}}_{i}}{\mleft({\bigl .{ \mathrm{d}_h{\mleft({{{{\bm{q } } } } , { { { \bm{x}}}}}\mright)}}}\mright ) } \leq \sum_{{{{\bm{x}}}}\in { \ensuremath{{{p}}}\xspace } } \exp { \mleft ( { \bigl . - { \mathrm{d}_h{\mleft({{{{\bm{q } } } } , { { { \bm{x}}}}}\mright ) } }i /{{{r}}}}\mright)}. 
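A compact sketch of the preprocessing and query just described, for binary points under the Hamming distance. One concrete choice of per-level sampling rate that matches the collision probability used in the analysis is to keep each coordinate independently with probability 1 - exp(-i/r) at level i, so that two points at Hamming distance l collide with probability exp(-i*l/r). The number of tables per level and the collision threshold below are illustrative stand-ins for the repetition count and the constant of the paper, and the block-wise scan of the last level is omitted.

```python
import numpy as np
from collections import defaultdict

def build_levels(P, r, n_levels, tables_per_level, rng=None):
    """P: (n, d) array of 0/1 points.  For each level i = 1..n_levels, build
    tables_per_level hash tables, each bucketing the points by a random subset of
    coordinates sampled with per-coordinate probability 1 - exp(-i/r)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = P.shape
    levels = []
    for i in range(1, n_levels + 1):
        keep = 1.0 - np.exp(-i / r)
        tables = []
        for _ in range(tables_per_level):
            idx = np.flatnonzero(rng.random(d) < keep)
            buckets = defaultdict(list)
            for j in range(n):
                buckets[P[j, idx].tobytes()].append(j)
            tables.append((idx, buckets))
        levels.append(tables)
    return levels

def query(q, P, levels, threshold):
    """Walk the levels from the least to the most selective one; stop at the first
    level whose collision count with q is at most the threshold, then scan the
    colliding points and return the closest.  q must have the same dtype as P."""
    for tables in levels:
        hits = [j for idx, buckets in tables for j in buckets.get(q[idx].tobytes(), [])]
        if len(hits) <= threshold:
            if not hits:
                return None, np.inf
            dists = [int(np.count_nonzero(P[j] != q)) for j in hits]
            b = int(np.argmin(dists))
            return hits[b], dists[b]
    return None, np.inf
```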
{ \label{equation : coll}}\end{aligned}\ ] ] this quantity can be interpreted as a convolution over the point set .observe that as is a monotonically decreasing function of ( for a fixed ) , we have that .the expected number of collisions with , for all the projections of , is } } = { { { \mathsf{s}}}{\mleft({i}\mright ) } } { { { \gamma}}_{i}}. { \label{equation : collisions}}\ ] ] if we were to be naive , and just scan the lists in the , the query time would be . as such , if , then we are `` happy '' since the query time is small . of course , a priori it is not clear whether ( or , more specifically , ) is small .intuitively , the higher the value is , the stronger the data - structure `` pushes '' points away from the query point .if we are lucky , and the nearest neighbor point is close , and the other points are far , then we would need to push very little , to get which is relatively small , and get a fast query time .the standard analysis works according to the worst case scenario , where one ends up in the last layer .[ example : low : dim ] the quantity depends on how the data looks like near the query . for example , assume that locally near the query point , the point set looks like a uniform low dimensional point set . specifically , assume that the number of points in distance from the query is bounded by , where is some small constant and is the distance of the nearest - neighbor to .we then have that by setting , we have therefore , the algorithm would stop in expectation after rounds .namely , if the data near the query point locally behaves like a low dimensional uniform set , then the expected query time is going to be , where the constant depends on the data - dimension .[ lemma : fast ] if there exists a point within distance of the query point , then the algorithm would compute , with high probability , a point which is within distance of the query point .let be the nearest neighbor to the query .for any data - structure , the probability that does not collide with is at most since the algorithm ultimately stops in one of these data - structures , and scans all the points colliding with the query point , this implies that the algorithm , with high probability , returns a point that is in distance .an interesting consequence of is that if the data - structure stops before it arrives to , then it returns the _ exact _ nearest - neighbor since the data - structure accepts approximation only in the last level . stating it somewhat differently , only if the data - structure gets overwhelmed with collisions it returns an approximate answer .[ rem : m : large ] one can duplicate the coordinates times , such that the original distance becomes .in particular , this can be simulated on the data - structure directly without effecting the performance . as such , in the following , it is safe to assume that is a sufficiently large say larger than .[ lemma : n : i ] for any , we have .we have that since for any positive integer , we have . 
as such , since we can assume that , we have that now , we have [ lemma : worst : case ] for a query point , the worst case query time is , with high probability .the worst query time is realized when the data - structure scans the points colliding under the functions of .we partition into two point sets : the close points are , and the far points are .any collision with a point of during the scan terminates the algorithm execution , and is thus a _ good _ collision .collision is when the colliding point belongs to .let be the partition of the projections of into blocks used by the algorithm .for any , we have since and by .such a block , has probability of to not have the nearest - neighbor to ( i.e. , ) in its collision lists .if this event happens , we refer to the block as being _useless_. for a block , let be the total size of the collision lists of for the projections of when ignoring good collisions altogether .we have that } } & \leq { \left| { b_j } \right| } \cdot { \left| { { { \ensuremath{{{p}}}\xspace } _ { > } } } \right| } \cdot { { { \mathsf{f}}}_{{{{\mathcal{n}}}}}}{\mleft({\bigl .( 1+{{{\varepsilon } } } ) { { { r}}}}\mright ) } \leq { \left| { b_j } \right| } n \exp{\mleft ( { - ( 1+{{{\varepsilon } } } ) { { { r}}}{{{\mathcal{n}}}}/{{{r}}}}\mright ) } = \\ & { \left| { b_j } \right|}n \exp{\mleft ( { - ( 1+{{{\varepsilon } } } ) { { { \mathcal{n}}}}}\mright ) } = { \left| { b_j } \right| } = o{\mleft({n^{1/(1+{{{\varepsilon}}})}}\mright ) } , \end{aligned}\ ] ] since .in particular , the is _ heavy _ , if .the probability for a block to be heavy , is , by markov s inequality .in particular , the probability that a block is heavy or useless , is at most . as such , with high probability , there is a light and useful block .since the algorithm scans the blocks by their collision lists size , it follows that with high probability , the algorithm scans only light blocks before it stops the scanning , which is caused by getting to a point that belongs to . as such , the query time of the algorithm is .next , we analyze the data - dependent running time of the algorithm .let , for , where .let be the smallest value such that .then , the expected query time of the algorithm is .the above condition implies that , for any . by , for , we have that thus , by markov s inequality , with probability at least , we have that , and the algorithm would terminate in this iteration . as such , let be an indicator variable that is one of the algorithm reached the .however , for that to happen , the algorithm has to fail in iterations . as such, we have that the of the algorithm , if it happens , takes time , and as such , the overall expected running time is proportional to namely , the expected running time is bounded by using the bound from , and since .[ theo : l : s : h : sensitive ] given a set of points , and parameters and , one can preprocess the point set , in time and space , such that given a query point , one can decide ( approximately ) if , in expected time , where is the hamming distance .formally , the data - structure returns , either : `` '' , and the data - structure returns a witness , such that .this is the result returned if .`` '' , and this is the result returned if . the data - structure is allowed to return either answer if .the query returns the correct answer , with high probability . furthermore ,if the query is `` easy '' , the data - structure would return the _ exact _ nearest neighbor . 
specifically ,if , and there exists , such that , then the data - structure would return the exact nearest - neighbor in expected time .[ rem : l : dim ] if the data is dimensional , in the sense of having bounded growth ( see ) , then the above data - structure solves approximate in time , where the constant hidden in the depends ( exponentially ) on the data dimension .this result is known , see datar _et al . _* appendix a ) . however , our data - structure is more general , as it handles this case with no modification , while the data - structure of datar _et al._is specialized for this case .[ rem : f : tune ] fine tuning the scheme to the hardness of the given data is not a new idea . in particular ,et al . _* section 4.3.1 ) suggest fine tuning the construction parameters for the set of queries , to optimize the overall query time .contrast this with the new data - structure of , which , conceptually , adapts the parameters on the fly during the query process , depending on how hard the query is .ultimately , our data - structure is a prisoner of our underlying technique of sampling coordinates .thus , the main challenge is to come up with a different approach that does not necessarily rely on such an idea . in particular, our current technique does not work well for points that are sparse , and have only few non - zero coordinates .we believe that this problem should provide fertile ground for further research . [ [ acknowledgments . ] ] acknowledgments .+ + + + + + + + + + + + + + + + the authors thank piotr indyk for insightful discussions about the problem and also for the helpful comments on the presentation of this paper .the authors also thank jen gong , stefanie jegelka , and amin sadeghi for useful discussions on the applications of this problem .clmw11 n. ailon and b. chazelle .approximate nearest neighbors and the fast johnson - lindenstrauss transform . in _ proc .38th annu .acm sympos .theory comput ._ ( stoc ) __ , pages 557563 , 2006 .[ ] . a. andoni , m. datar , n. immorlica , p. indyk , and v. s. mirrokni .locality - sensitive hashing using stable distribution . in t.darrell , p. indyk , and g. shakhnarovich , editors , _ nearest - neighbor methods in learning and vision : theory and practice _ , pages 6172 .mit press , 2006 .a. andoni and p. indyk . near - optimal hashing algorithms for approximate nearest neighbor in high dimensions . , 51(1):117122 , 2008 .http://dx.doi.org/10.1145/1327452.1327494 [ ] .a. andoni , p. indyk , r. krauthgamer , and h. l. nguyen .approximate line nearest neighbor in high dimensions . in _ proc .20th acm - siam sympos. discrete algs . _( soda ) _ _ , pages 293301 , 2009 .http://dx.doi.org/10.1137/1.9781611973068.33 [ ] .a. andoni , p. indyk , h. l. nguyen , and i. razenshteyn . beyond locality - sensitive hashing . in _ proc .25th acm - siam sympos. discrete algs . _( soda ) _ _ , pages 10181028 , 2014 .http://dx.doi.org/10.1137/1.9781611973402.76 [ ] .s. arya , d. m. mount , n. s. netanyahu , r. silverman , and a. y. wu .an optimal algorithm for approximate nearest neighbor searching in fixed dimensions ., 45(6):891923 , 1998 .url : http://www.cs.umd.edu/~mount/papers/dist.pdf , http://dx.doi.org/10.1145/293347.293348 [ ] .a. andoni and i. razenshteyn .optimal data - dependent hashing for approximate near neighbors . in _ proc .47th annu .acm sympos .theory comput ._ ( stoc ) __ , pages 793801 , 2015 .f. cismondi , a. s. fialho , s. m. vieira , s. r. reti , j. m. c. sousa , and s. n. 
finkelstein .missing data in medical databases : impute , delete or classify ?, 58(1):6372 , 2013 .http://dx.doi.org/10.1016/j.artmed.2013.01.003 [ ] .e. j. cands , x. li , y. ma , and j. wright .robust principal component analysis ?, 58(3):11 , 2011 .http://dx.doi.org/10.1145/1970392.1970395 [ ] .a. chakrabarti and o. regev .an optimal randomized cell probe lower bound for approximate nearest neighbor searching ., 39(5):19191940 , february 2010 . http://dx.doi.org/10.1137/080729955 [ ]. k. l. clarkson and p. w. shor .applications of random sampling in computational geometry , ii . , 4:387421 , 1989 .http://dx.doi.org/10.1007/bf02187740 [ ] .m. datar , n. immorlica , p. indyk , and v. s. mirrokni .locality - sensitive hashing scheme based on -stable distributions . in _ proc .20th annu .( socg ) _ _ , pages 253262 , 2004 .s. har - peled . a replacement for voronoi diagrams of near linear size .in _ proc .42nd annu .ieee sympos . found .( focs ) _ _ , pages 94103 , 2001 .url : http://sarielhp.org/p/01/avoronoi , http://dx.doi.org/10.1109/sfcs.2001.959884 [ ] .j. hays and a. a. efros .scene completion using millions of photographs ., 26(3):4 , 2007 .http://dx.doi.org/10.1145/1276377.1276382 [ ] .s. har - peled , p. indyk , and r. motwani .approximate nearest neighbors : towards removing the curse of dimensionality ., 8:321350 , 2012 .special issue in honor of rajeev motwani .[ ] . p. indyk and r. motwani .approximate nearest neighbors : towards removing the curse of dimensionality . in _ proc .30th annu .acm sympos .theory comput ._ ( stoc ) __ , pages 604613 , 1998 .[ ] . p. indyk .nearest neighbors in high - dimensional spaces . in j.e. goodman and j. orourke , editors , _ handbook of discrete and computational geometry _ , chapter 39 , pages 877892 .crc press llc , 2nd edition , 2004 .http://dx.doi.org/10.1201/9781420035315.ch39 [ ] .p. indyk .stable distributions , pseudorandom generators , embeddings , and data stream computation ., 53(3):307323 , 2006 .http://dx.doi.org/10.1145/1147954.1147955 [ ] .m. t. islam .approximation algorithms for minimum knapsack problem .master s thesis , dept .math & comp .sci . , 2009 .https://www.uleth.ca/dspace/handle/10133/1304 .r. krauthgamer and j. r. lee .navigating nets : simple algorithms for proximity search . in _ proc .15th acm - siam sympos. discrete algs . _( soda ) _ _ , pages 798807 , philadelphia , pa , usa , 2004 . society for industrial and applied mathematics .j. kleinberg .two algorithms for nearest - neighbor search in high dimensions . in _ proc .29th annu .acm sympos .theory comput ._ ( stoc ) __ , pages 599608 , 1997 .e. kushilevitz , r. ostrovsky , and y. rabani .efficient search for approximate nearest neighbor in high dimensional spaces ., 2(30):457474 , 2000 .url : http://epubs.siam.org/sam-bin/dbq/article/34717 .s. mahabadi .approximate nearest line search in high dimensions . in _ proc .26th acm - siam sympos .discrete algs ._ ( soda ) __ , pages 337354 , 2015 .w. mulzer , h. l. nguyn , p. seiferth , and y. stein .approximate -flat nearest neighbor search . in _ proc .47th annu .acm sympos .theory comput ._ ( stoc ) __ , pages 783792 , 2015 . http://dx.doi.org/10.1145/2746539.2746559 [ ] .r. panigrahy .entropy based nearest neighbor search in high dimensions . in _ proc .17th acm - siam sympos .discrete algs . _( soda ) _ _ , pages 11861195 , 2006 .h. samet . .the morgan kaufmann series in computer graphics and geometric modeling .morgan kaufmann publishers inc . , 2005 .g. shakhnarovich , t. darrell , and p. indyk . 
_ nearest - neighbor methods in learning and vision : theory and practice _ . the mit press , 2006 .

j. a. c. sterne , i. r. white , j. b. carlin , m. spratt , p. royston , m. g. kenward , a. m. wood , and j. r. carpenter . multiple imputation for missing data in epidemiological and clinical research : potential and pitfalls . , 338:b2393 , 2009 . http://dx.doi.org/10.1136/bmj.b2393 .

b. j. wells , k. m. chagin , a. s. nowacki , and m. w. kattan . strategies for handling missing data in electronic health record derived data . , 1(3) , 2013 . http://dx.doi.org/10.13063/2327-9214.1035 .

[ apnd : eps : approx ] the tail of a point in has to be quite heavy for the data - structure to reject it as an . it is thus natural to ask if one can do better , that is , classify a far point as far even if the threshold for being far is much smaller ( i.e. , ultimately a factor of ) . maybe surprisingly , this can be done , but it requires that such a far point be quite dense , and we show how to do so here . for the sake of simplicity of exposition , the result of this section is provided only under the norm . the algorithm is the same as the one presented in , except that for the given parameter we use -data - structures . we will specify it more precisely at the end of this section . also , the total number of data - structures is .

let be a parameter and consider the -truncated point . since is -heavy , we have that . now , we have
\begin{align*}
  \mu = \mathbf{E}\bigl[ \| M v \|_1 \bigr] = \tau \| v \|_1 \geq (1+\varepsilon)\,\tau r
  \qquad\text{and}\qquad
  \sigma^2 = \mathbf{V}\bigl[ \| M v \|_1 \bigr] = \tau(1-\tau)\,\| v \|_2^2 .
\end{align*}
now , by chebyshev's inequality , we have that
\begin{align*}
  & \geq \Pr\bigl[\, \| M v \|_1 \geq (1+\varepsilon/4)\,\tau r \,\bigr]
    \geq \Pr\Bigl[\, \| M v \|_1 \geq \frac{(1+\varepsilon)\,\tau r}{1+\varepsilon/2} \,\Bigr] \\
  & \geq \Pr\Bigl[\, \| M v \|_1 \geq \frac{\mu}{1+\varepsilon/2} \,\Bigr]
    = \Pr\Bigl[\, \| M v \|_1 - \mu \geq \Bigl(\frac{1}{1+\varepsilon/2}-1\Bigr)\mu \,\Bigr] \\
  & = \Pr\Bigl[\, \mu - \| M v \|_1 \leq \frac{\varepsilon/2}{1+\varepsilon/2}\,\mu \,\Bigr]
    = 1 - \Pr\Bigl[\, \mu - \| M v \|_1 \geq \frac{\varepsilon}{2+\varepsilon}\,\mu \,\Bigr] \\
  & \geq 1 - \Pr\Bigl[\, \bigl|\, \| M v \|_1 - \mu \,\bigr| \geq \frac{\varepsilon}{2+\varepsilon}\cdot\frac{\mu}{\sigma}\cdot\sigma \,\Bigr]
    \geq 1 - \Bigl(\frac{2+\varepsilon}{\varepsilon}\Bigr)^{2}\Bigl(\frac{\sigma}{\mu}\Bigr)^{2} \\
  & = 1 - \frac{9}{\varepsilon^{2}}\cdot\frac{\tau(1-\tau)\,\| v \|_2^2}{\tau^{2}\,\| v \|_1^2}
    \geq 1 - \frac{9}{\varepsilon^{2}}\,\frac{(1-\tau)\,\xi}{\tau\,(1+\varepsilon)}
    \geq 1 - \frac{9\,\xi}{\varepsilon^{2}\,\tau} ,
\end{align*}
by . now , by setting , this probability would be at least .

[ lemma : light : good : eps ] consider a point such that . conditioned on the event of , we have that $ \geq 1 - \frac{1}{1+\varepsilon/32} \geq \varepsilon/33 $ , where . for all , let . by , with probability at least , we have that . in particular , let . we have that
\begin{align*}
  \mu = \mathbf{E}\Bigl[\, \sum_{i=1}^{t} z_i \,\Bigr]
  \geq (1+\varepsilon/4)\, t\,\tau r\,(1-\varepsilon/10)
  \geq (1+\varepsilon/8)\, t\,\tau r .
\end{align*}
now , by hoeffding's inequality , we have that
\begin{align*}
  & \leq \Pr\bigl[\, z \leq u \,\bigr]
    \leq \Pr\Bigl[\, \bigl| z - \mu \bigr| \geq \frac{\varepsilon}{16}\, t\,\tau r \,\Bigr]
    \leq 2 \exp\Bigl( - \frac{2\,( t\,\tau r\,\varepsilon/16 )^{2}}{ t\,\bigl( (1+\varepsilon/4)\,\tau r \bigr)^{2} } \Bigr) \\
  & \leq 2 \exp\Bigl( - \frac{ t\,\varepsilon^{2} }{ 16^{2} } \Bigr)
    \leq \frac{2}{ n^{\beta \varepsilon^{2}/256} } .
\end{align*}

[ lemma : heavy : tail : eps ] let be two parameters . for a query point , let be its -fold nearest neighbor in , and let . then , with high probability , the algorithm returns a point , such that is -light , where . the data - structure performs of - queries . as before , we set and . also , by the conditions of we have . also , let . let be the set of largest coordinates in . by similar arguments as in , there exist data - structures , say , such that does not contain any of the coordinates of . since by assumption , and by , with probability at least the distance of from is at most .
since there are such structures , we know that , with high probability , this holds for one of them , say . by , any point ( of ) that is -heavy would be at distance at least from the projected in the projection , and since is a -data - structure under the norm , we conclude that no such point can be returned . note that since for the reported point , the point cannot be -heavy , and the coordinates in can contribute at most , the point cannot be -heavy . thus , the data - structure returns the desired point with high probability .
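to make the coordinate - sampling idea underlying the data - structure concrete , here is a minimal numpy sketch : each of several projections keeps every coordinate independently with some probability tau , and a query is answered by the best candidate found over the projections . the parameter values , the function names and the brute - force scan are illustrative assumptions , not the construction analysed above ; as described in the text , the actual data - structure answers queries on the sampled projections with near - neighbor structures rather than a linear scan .

```python
import numpy as np

rng = np.random.default_rng(0)

def build_projections(points, tau, num_projections):
    """sample coordinate subsets: each projection keeps every coordinate
    independently with probability tau (illustrative stand-in for the
    paper's sampled data-structures)."""
    d = points.shape[1]
    return rng.random((num_projections, d)) < tau

def robust_nn_query(points, masks, q):
    """return the candidate whose l1 distance to q, restricted to some
    sampled coordinate subset, is smallest; a corrupted coordinate is
    effectively ignored whenever the winning mask happens to drop it."""
    best, best_dist = None, np.inf
    for mask in masks:
        if not mask.any():
            continue
        # brute-force scan on the projected coordinates (a real
        # implementation would use an l1 near-neighbor structure here)
        dists = np.abs(points[:, mask] - q[mask]).sum(axis=1)
        i = int(np.argmin(dists))
        if dists[i] < best_dist:
            best, best_dist = i, dists[i]
    return best, best_dist

# tiny demo: 200 points in 50 dimensions; the query equals point 7
# except for 3 wildly corrupted coordinates
pts = rng.normal(size=(200, 50))
q = pts[7].copy()
q[[0, 1, 2]] += 100.0
masks = build_projections(pts, tau=0.5, num_projections=20)
print(robust_nn_query(pts, masks, q))   # point 7 is usually recovered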
|
we introduce a new variant of the nearest neighbor search problem , which allows for some coordinates of the dataset to be arbitrarily corrupted or unknown . formally , given a dataset of points in high - dimensions , and a parameter , the goal is to preprocess the dataset , such that given a query point , one can compute quickly a point , such that the distance of the query to the point is minimized , when ignoring the `` optimal '' coordinates . note , that the coordinates being ignored are a function of both the query point and the point returned . we present a general reduction from this problem to answering queries , which is similar in spirit to ( locality sensitive hashing ) . specifically , we give a sampling technique which achieves a bi - criterion approximation for this problem . if the distance to the nearest neighbor after ignoring coordinates is , the data - structure returns a point that is within a distance of after ignoring coordinates . we also present other applications and further extensions and refinements of the above result . the new data - structures are simple and ( arguably ) elegant , and should be practical specifically , all bounds are polynomial in all relevant parameters ( including the dimension of the space , and the robustness parameter ) .
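for reference , a brute - force implementation of the distance used in the problem statement ( here under the l1 norm , as in the appendix ) : the distance between two points after ignoring the k coordinates on which they disagree most . this linear scan is exactly what the data - structure is designed to avoid , so it is only a baseline for checking outputs ; the data and parameter names are made up for illustration .

```python
import numpy as np

def robust_l1_distance(p, q, k):
    """l1 distance between p and q when the k coordinates with the largest
    disagreement are ignored (the 'optimal' coordinates to drop for this pair)."""
    gaps = np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
    gaps.sort()
    return gaps[:-k].sum() if k > 0 else gaps.sum()

def robust_nearest_neighbor(points, q, k):
    """brute-force baseline: the point minimizing the robust distance."""
    dists = [robust_l1_distance(p, q, k) for p in points]
    return int(np.argmin(dists)), min(dists)

pts = np.random.default_rng(3).normal(size=(100, 20))
q = pts[5].copy()
q[2] = 1e6                                    # one corrupted coordinate in the query
print(robust_nearest_neighbor(pts, q, k=1))   # recovers index 5 with distance 0.0
```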
|
as an answer to this question , i propose certain _ interaction rituals _ .some examples are marching , singing and noise making at street demonstrations before a physical confrontation with the incumbent power and dance and religious ceremonies before a hunt or fight .these rituals can ( in principle ) be performed by strangers with some shared cultural background , who thereby create fledgling ties and a preliminary group boundary .obviously , existing groups with a network have a larger variety of rituals to choose from .although interaction rituals cost time and effort , many can be executed at relatively low costs compared with the contributions that pertain to the public good , which are required anyway , without exogenous resources . therefore many ( newly - forming ) groups can self - organize their cooperation . in some cases ,an interaction ritual can already accomplish a collective good without additional contributions , for example a mass demonstration that convinces a government to change a law before riots begin .although this paper focuses on the difficult cases that require substantial contributions , the low - contribution cases can be largely explained in terms of recruitment into those groups , which has been studied by other researchers . to sharpen interaction ritual theory ,i apply a well - studied synchronization model in a new way . going beyond previous explanations and models that acknowledged the importance of networks ,the synchronization model predicts exactly which patterns of social ties yield cooperation more easily , i.e. at a lower effort and , therefore , more quickly .the network measure that is used is called _ algebraic connectivity _ .it is high when the network appears roundish without large holes in it , whereas it is low for networks with skinny parts , large distances or highly unequal distributions of the number of ties ( relatively high maximum degree ) . by focusing on both network topology and shared intentionality ,it becomes clear why interaction rituals are widely used to encourage cooperation and why some rituals with low connectivity algebraic connectivity is also closely related to social cohesion .because information transmission through a network can be unreliable , cohesion was defined considering a multiplicity of independent connections , which happens to be mathematically implied by algebraic connectivity . in many studies on cooperation ,network topology is ignored or abstracted away from , and in some experiments without communication topology turned out to be unimportant . in those experiments , subjects could respond to other people s actions in previous rounds by either cooperating or defecting with everybody in their neighborhood or group . for topology to have an effect ,however , people must be able to reciprocate the ( in)actions of specific people ; otherwise , free riding is inconsequential for some people whereas others are punished for cooperating .this coarse - grained behavior in the experiments is an artefact of their design , and is clearly inefficient , whereas for fine grained reciprocity with specific individuals , and for the diffusion of reputations , topology is important . 
byfar the best - known theory concerning the onset of cooperation among ( mostly ) strangers explains that a _ critical mass _ of initiators can win over the rest .below the critical mass , contributions typically have little impact on the production of the public good , and the question is how a critical mass can be formed .somehow , there should be zealous , heroic , resourceful , well - organized or like - minded people in the first place .the synchronization model can solve the start - up problem by explaining how ordinary people with heterogeneous ideas and commitments can reach a shared intentionality and form a critical mass .it does so without strong assumptions regarding individual rationality .critical mass theory , in contrast , rests on the assumption that people know the marginal effect of their contributions .this knowledge people may have in straightforward or highly repetitive situations , but under multi - fold uncertainty , people have no more than a hunch regarding their benefits and costs , and are sometimes fatally inaccurate .after all , people s cognitive limitations are an important reason for them to connect to others and to join interaction rituals in the first place . in the next section, collins interaction ritual theory will be interpreted concerning its explanatory power for collective action , complemented with findings by numerous others .subsequently , the synchronization model will be applied , which predicts , with algebraic connectivity , that cooperation occurs in a burst , not gradually or sequentially .this _ tipping point _ is a second result , which is consistent with observations of , for example , collective protests , which are bursty indeed .the model increases the explanatory power of interaction ritual theory also in a third way . whereas collins took a mutual focus of attention and a shared mood as ingredients " ( 2004 : 48 ), the model explains this homogeneity as an outcome from initial heterogeneity , which is consistent with mcneill s empirical findings . after these predictions, simpler situations will be discussed , with uncertainty only concerning the number of contributors , not regarding the goods timing , benefits and costs .then , the onset of cooperation can be explained without rituals and synchronization , and a certain level of consensus will suffice .however , i will argue that algebraic connectivity is also important in these cases , and i will explicate topology s relation with social cohesion .collective action , social cohesion , consensus building and information transmission thus turn out to be interrelated by the algebraic connectivity of the participants network , which deepens our understanding of the relational foundations of social life .finally , dynamic networks will be discussed .to start out , members of a ( newly - forming ) group involved with a ( possibly ill - defined ) public good can perform certain interaction rituals .many rituals involve rhythmic entrainment , which increases perceptions of similarity .entrainment can be synchronous , evenly paced , anti - phase ( in partner dance ) , or sequential ( in a stadium wave ) and can vary strongly in frequency . 
in experiments that compare treatments with asynchronous and synchronous movements , contributions to public goodswere significantly higher in the latter .interaction rituals increase solidarity , which denotes the bonding strength of individuals to an entire group , which is distinguished from social cohesion that describes the pattern of ties among individual group members .rituals may also increase the strengths of ties , which is discussed later .moral justification , religion or sacred values can enhance emotional intensity that further increases solidarity . for usit is important that when the uncertainties are higher and more numerous , stronger group - directed emotions should be aroused for solidarity to increase . a competitive group or an enemy can provide an extra push .rare but emotionally intense rituals , for example , initiation in the french foreign legion , have a stronger effect on solidarity than frequently occurring low arousal rituals such as prayers . for the participants to learn about one another s commitments , and to sense one another s body language and emotions, they should interact in physical co - presence , which is a key feature of interaction rituals .these co - resent interactions make possible for commitments to become ( local ) common knowledge .interactions can be initiated for the first time or can be based on an existing network . either way , interactions are modelled as symmetric ties ( and absent ties ) among the participants who are indexed and , and later generalized to ( possibly asymmetric ) weighted ties . under uncertainty , the participants will have changing emotions and thoughts concerning their collective action(s ) .because our question regards the synchronization of the group , not the complexities of the participants psychologies , focal actor s psychological state is characterized by one variable that varies over time .if the difference between focal actor s state and her social contact s state is stable , their synchronization is higher when the difference is smaller . both fluctuating and large differences indicate asynchronous states . through empathy , a focal actors state may change by the states of her social contacts ( collins 2004 : 54 , 78 ) , and a more different contact state has a larger influence on this change . however , if someone is simultaneously influenced by multiple people , the influence of an individual contact will relatively weaken amidst a larger number of them .therefore , the magnitude of a given contact s influence is divided by the focal actor s degree , just like in models of social influence and network autocorrelation .the effect size of social contacts states is also moderated by focal actor s solidarity : if she identifies strongly with the entire group , her contacts in this group will have a relatively strong influence on her , whereas if her solidarity is low , she is psychologically less affected by the emotions and concerns of these contacts . finally , a focal actor s psychological state changes in response to her own commitment .usually , people will have different commitments to different goods and goals of their group , which possibly but not necessarily correlate with their solidarity . 
for a given public good , the participants commitments are modelled as a symmetric single - peaked distribution that is mean - centred at 0 , for example a gaussian .to analyze the effect of interaction rituals , i use kuramoto s model .this model has already solved numerous problems in physics , biology , engineering , complex networks and computer science , thereby establishing cross - disciplinary parsimony . in kuramoto s original model, is a frequency and is a phase .obviously , people are no oscillators , but we can obtain tractability through these simplifications . reading the model s coupling strength " as solidarity is straightforward . in the model , time indices are dropped , denotes degree , and is used as a shorthand for psychological change , . using the variables that were just discussed ,the model for participants is : in a moment , there will be variation of solidarity across individuals , , but in the simplest model version everybody has the same solidarity .the degree to which all participants are synchronized is indicated by an order parameter , . , where is the average phase .psychological synchronization can be approximated by heart rates in field studies or measured more precisely but more cumbersome by brain scanners in a lab .] analytic solutions were derived for complete graphs , wherein every node is connected to every other node and is very large .when solidarity increases , psychological states remain incoherent ( ) , and solidarity appears to have no effect at all . at a critical threshold ,however , there is a sudden transition toward stable , although not perfect ( ) , synchronization of a large majority , and eq.1 implies that commitments synchronize at the same moment .this two - fold phase transition to synchronization becomes even more explosive " when considering that solidarity varies across individuals and is correlated with commitments . is substituted for in eq.([synchro1 ] ) .] this phase transition has also been found for many sparse graphs with ( much ) smaller . except for very small groups , social networksare sparse , clustered into subgroups and have skewed degree distributions and short network distances .solidarity is limited by the nervous system , and can not reach arbitrarily high values . for a social network to synchronize at a feasible level of solidarity , its connectivity must compensate for the differences among commitments or else the order parameter jitters and synchronization is not achieved .the stability of the order parameter is studied through the algebraic connectivity of the graph , which is denoted , and is the second smallest eigenvalue of the laplacian . and are adjacent or otherwise 0 .if in the laplacian spectrum , , there are ( near ) zero eigenvalues , they indicate the presence of ( almost ) disconnected graph components that wo nt synchronize with each other . ] a mathematical analysis of eq.1 shows that increasing a network s algebraic connectivity yields synchronization at a lower solidarity , with less costly rituals and more quickly .density , average distance and degree distribution do not generally predict this outcome . 
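since the solidarity threshold in eq.1 is easiest to see numerically , here is a minimal python sketch of the degree - normalised kuramoto dynamics and the order parameter , run on the two seven - person topologies compared in fig.1 . the euler time step , the gaussian width of the commitments and the exact wiring of the bow tie ( assumed here to be two 4-cliques sharing a central person ) are illustrative assumptions rather than the paper's simulation settings .

```python
import numpy as np

rng = np.random.default_rng(2)

def wheel(n_rim=6):
    # hub (node 0) tied to everyone on a 6-person ring
    a = np.zeros((n_rim + 1, n_rim + 1))
    for i in range(1, n_rim + 1):
        j = i % n_rim + 1
        a[0, i] = a[i, 0] = 1
        a[i, j] = a[j, i] = 1
    return a

def bow_tie():
    # assumed wiring: two 4-cliques sharing person 0 (same size, density
    # and degree sequence as the wheel, but lower algebraic connectivity)
    a = np.zeros((7, 7))
    for clique in [(0, 1, 2, 3), (0, 4, 5, 6)]:
        for i in clique:
            for j in clique:
                if i != j:
                    a[i, j] = 1
    return a

def algebraic_connectivity(adj):
    # second-smallest eigenvalue of the laplacian L = D - A
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def order_parameter(adj, commitments, solidarity, dt=0.01, steps=5000):
    # euler integration of eq.1:
    #   d(theta_i)/dt = omega_i + (solidarity / d_i) * sum_j a_ij sin(theta_j - theta_i)
    degree = adj.sum(axis=1)
    theta = rng.uniform(-np.pi, np.pi, size=len(degree))
    for _ in range(steps):
        diff = np.sin(theta[None, :] - theta[:, None])   # entry (i, j): sin(theta_j - theta_i)
        theta = theta + dt * (commitments + solidarity * (adj * diff).sum(axis=1) / degree)
    return np.abs(np.exp(1j * theta).mean())             # r: 0 incoherent, near 1 synchronized

omega = rng.normal(0.0, 1.0, size=7)                     # heterogeneous commitments, mean-centred
for name, adj in [("wheel", wheel()), ("bow tie", bow_tie())]:
    lam2 = algebraic_connectivity(adj)
    rs = [order_parameter(adj, omega, k) for k in (0.5, 2.0, 5.0)]
    print(name, f"lambda_2 = {lam2:.1f}", [f"{r:.2f}" for r in rs])
```

with the same commitments and comparable initial conditions , the wheel ( whose algebraic connectivity is twice that of the assumed bow tie ) typically reaches a high order parameter at a lower solidarity , in line with the comparison the text reports for fig.1 .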
[ fig.1 : order parameter as a function of solidarity for the wheel ( triangles ) and the bow tie ( dots ) . ] to illustrate , fig.1 compares a wheel ( ) to a bow tie ( ) topology , which are equal in size ( 7 ) , density ( 0.57 ) , average distance ( 1.43 ) , degree distribution , degree centralization ( 0.6 ) , and coreness ( both are 3-cores ; in a -core , everybody has at least ties with others who in turn have at least ties ) . in large graphs , below the critical threshold , whereas in our small graphs , jitters with time ( not shown ) , but that means that there is no synchronization by definition . then , at a very small increment of solidarity that crosses the threshold , synchronization sets in and reaches a high value right away . for each of 20 draws of initial values ( from a uniform distribution between and ) and commitments ( from a gaussian with ) , the wheel ( triangles ) synchronizes at a lower solidarity than the bow tie ( dots ) . the reason for the large variation of critical thresholds across these 20 draws is that synchronization in small graphs is very sensitive to small differences between initial values . sociologically , this means that individuals and their differences do matter for outcomes at the group level . algebraic connectivity can increase when the strength of ties increases , as a consequence of the interaction ritual , but people have a limited capability to maintain strong ties , at the expense of other ties . this social _ homeostasis _ also constrains the total number of ties that individuals can maintain . the members can connect with strong ties ( ) only if their group is small and for a limited time . in fig.1 , the bow tie s connectivity would double by a two - fold increase of all tie strengths , whereas relaying two ties to create a wheel has the same effect . relaying a given number of ties ( and keeping their strengths as they are ) is clearly more efficient . alternatively , algebraic connectivity can increase by increasing group size , but because of homophily and homeostasis , large groups always cluster into subgroups .
therefore , there is a still unknown maximum connectivity .this implies that the potential for groups to synchronize has a sweet spot somewhere in between small and large group size , and co - depends on the network topology .well - connected subgroups synchronize at .if the network is formed on the basis of homophily of commitments , such that the variation of commitments among subgroup members is relatively small ( indicated by the euclidean norm ) , the consequences for the critical thresholds of the subgroups and the entire group are in opposite directions : homophilous subgroups synchronize at a lower solidarity whereas the entire group synchronizes at a higher solidarity .although the latter challenges overall synchronization , it may happen that remaining ( asynchronous ) participants are an audience who supports the synchronous subgroups , which often occurs when small groups of street protesters confront the police .alternatively , the audience may be won over and join the action if the initiators form a critical mass .these audience members do not need to know the marginal effect of their contributions ; following initiators with whom they identify is sufficient .if an interaction ritual is in full swing and participant s algebraic connectivity is sufficient , the model predicts that their psychological states and commitments , respectively , fuse into one , which is necessary for cooperation under multi - fold uncertainty .the simultaneity of their synchronization will yield a stronger boost of emotional energy , or _ collective effervescence _, than if it were gradual or sequential .if , in contrast , the ritual is ill - performed or connectivity is low , emotional energy is drained rather than heightened .successful interaction rituals have a longer lasting effect on solidarity ( collins 2004 , p.149 ) .however , at some point during or after a ( series of ) collective action(s ) , the participants or their resources will be exhausted .then their interactions weaken and their solidarity decreases . if commitments are correlated with solidarity or with degree , there is _ hysteresis _ : the backward transition from synchrony to asynchrony occurs at a lower solidarity ( or weaker ties ) than the forward transition .the initial differences among commitments are then recovered .after a successful collective action , future actions by the same group will be easier to organize .cooperation under adverse conditions has a cold - start problem for people who have not yet formed a group , and can pose a threshold for existing groups .both cold starts and thresholds can be overcome by interaction rituals .in contrast with collins , who took a mutual focus of attention and a shared mood as a starting point for interaction rituals in general , the model captured cognitions and emotions as psychological states and explained their homogeneity as an outcome from initial heterogeneity a substantial gain in parsimony and insight .the model , and algebraic connectivity in particular , increased precision beyond anything feasible with unaided reason , and the sudden phase transition showed the non - linear character of challenging collective actions . when the uncertainties and expected costs are low , however , an interaction ritual is not necessary . 
in these casesit is often sufficient if people converse with one another regarding the public good , which establishes social ties and exchanges commitments , thoughts and emotions .face to face contact is therefore essential , whereas high solidarity is not , and synchronization can be loosened to a level of consensus that is acceptable to the participants .however , algebraic connectivity helps to achieve rapid consensus . in a graph coloring experiment ,algebraic connectivity was inversely proportional to the time to reach consensus .there , the subjects had to choose the same color as their network - neighbors , in highly clustered networks and in networks with randomly rewired edges which implies higher algebraic connectivity than in the clustered networks .an easily obtained consensus is not always beneficial , though , and it can also be a symptom of groupthink . for the diffusion of reputations under realistic conditions , social networks should be robust against noise misinterpreted , wrongly transmitted or manipulated information and node removal .these requirements motivated a definition of social cohesion as the minimum number , , of independent paths connecting arbitrary pairs of nodes in a network .this number is equivalent to the minimum number of nodes that must be removed to make the network fall apart . only in very small networks ,for example 7 members of a team , everyone can be connected directly to everyone else ( in this example , and ) .it was proven for all incomplete networks ( missing at least one tie ) that .algebraic connectivity thus indicates not only a synchronization potential in a broad sense , including consensus and robustness against small perturbations of psychological states , but also a lower bound for social cohesion and redundancy of information channels .low algebraic connectivity , in contrast , facilitates anti - coordination , e.g. choosing a different color for oneself than one s neighbors in a graph coloring game .this anti - coordination is useful when group members want to differentiate themselves to perform complementary tasks . to study dynamic networks ,the kuramoto model has been adapted as follows . in a simulation ,ties strengthen among people who perceive one another as similar concerning their psychological states , i.e. an assortment of kindred spirits , which occurs under the constraint of homeostasis .a tie that disappears is modeled as a fading tie , .starting the simulation with a random network , of strangers who meet for the first time , and letting their solidarity increase from zero to a low value , subgroups emerge that are internally synchronous but mutually asynchronous . 
if solidarity continues to increase to , those subgroups merge into one synchronized group .when considering that solidarity varies across individuals , which is represented by a gaussian distribution of , it takes longer for synchronous subgroups to emerge .individuals with very low , for example people who take a passive stance in the ritual , stay out of sync with everyone else .as long as these individuals are a minority , the overall pattern is qualitatively the same as with one for all .synchronization interpreted more broadly is widespread in nature and in social phenomena , of which we reviewed several .other cases are energy demand that causes consumption spikes once consumers synchronize , which happens unintentionally despite policy against it , and traders at the stock market who reduce their risk of loosing money when they synchronize their transactions. the most widely known examples are conventions and norms , which also feature an opposition between easy local coordination versus difficult to overcome global anti - coordination , just like synchronization in homophilous subgroups versus overall synchronization .apart from energy consumers , actors across these situations have in common that they reduce uncertainty , at least locally .the main results of this study are that in general , the onset of cooperation is predicted to require less effort and occur more rapidly if participants network has a higher algebraic connectivity . alternatively ,if subgroups with high connectivity or similar members synchronize first , they can form a critical mass that wins over the rest . under multi - fold uncertainty ,a shared intentionality is necessary , which can be achieved by interaction rituals that result in a burst of cooperation .bilek , e. , m. ruf , a. schfer , c. akdeniz , v. d. calhoun , c. schmahl , c. demanuele , h. tost , p. kirsch , and a. meyer - lindenberg ( 2015 ) .information flow between interacting human brains : identification , validation , and relationship to social expertise .112:52075212 .gracia - lzaro , c. , a. ferrer , g. ruiz , a. tarancn , j. a. cuesta , a. snchez , and y. moreno ( 2012 ) .heterogeneous networks do not promote cooperation when humans play a prisoner s dilemma .109:1292212926 .konvalinka , i. , d. xygalatas , j. bulbulia , u. schjdt , e .-jegind , s. wallot , g. van orden , and a. roepstorff ( 2011 ) .synchronized arousal between performers and related spectators in a fire - walking ritual .
|
a significant challenge is to explain how people cooperate for public goods . the problem is more difficult for people who hardly know one another , their public good is unclear at the outset and its timing and costs are uncertain . however , history shows that even under adverse conditions , people can cooperate . as a prelude to cooperation , people can establish ( or reinforce ) social ties and increase their solidarity through interaction rituals . consequently , individuals commitments and psychological states may synchronize , so that they can depend on like - minded people rather than on a rational grasp of their situation , which is not feasible under difficult circumstances . a necessary condition is that the network that is formed ( or used ) during the ritual compensates for participants initial differences . a model shows exactly what network patterns are optimal , and it predicts that at a critical level of solidarity , a heterogeneous majority homogenizes in a sudden phase transition . this synchronization yields a boost of emotional energy for a burst of collective action . * _ key words _ : * public goods | social networks | solidarity | synchronization | interaction rituals paradoxically , people can act collectively and effectively while being tempted to defect and exploit the results of other people s efforts . in contrast to current models and lab experiments of cooperation , many of these situations are characterized by multi - fold uncertainty . examples are defending settlements , mass protests , revolts against dictatorial regimes and hunting large animals by ancestral groups with light armament . in these situations , contributing may be retaliated by the opponent ( or prey ) ; therefore , the costs can be unexpectedly high and the participants may be paralysed by fear . moreover , the public good itself may be unclear or have different meanings for different people , for example democracy , " and may be realized in the future , if at all , which implies that its future value is discounted in the present . however , history shows that even under adverse conditions , people often manage to cooperate . my research question is : how ? in situations where most people are strangers to one another , for example east european protesters against communism in 1989 , they have a cold start problem . these people must determine the public good they actually want and how to achieve it , and they must become committed to it . high average commitments do not predict cooperation , though ; the lowly committed can free ride on the highly committed , who in turn may distrust the lowly committed and abstain from contributing . in experiments , cooperation was higher among people who were equally committed . however , homogeneous commitments are often also insufficient , because people may still may have conflicting ideas concerning what to do and by whom . thus , people must also synchronize their framing of the situation , their definition of the public good , and their plans of action ; in short , they must synchronize their cognitive - emotional states . finally , their synchronization of their commitments and of their psychological states must become common knowledge . then , people have a _ shared intentionality _ , which means that they know that other people feel , think and want the same as they do ; on this basis they can start to cooperate as a single - minded family . to achieve a shared intentionality , people must interact first . 
established groups already have a network of relatively stable ties , but under multi - fold uncertainty , typical solutions to dilemmas of cooperation may not suffice ; then , people need additional means to get over the hump . although their network makes it easier for them , by providing knowledge regarding one another and sustaining norms that strangers lack , established groups have in common with strangers that they must reach a shared intentionality , too . the research question can be narrowed down accordingly : how can collections of strangers and established groups develop a shared intentionality under adverse conditions ?
|
in this section we review all the basic properties of ball lightning as extensively reported in the scientific literature with an account of the main models proposed to explain some of their most peculiar properties .two of the main characteristic features of ball lightning ( in short bl ) are the unpredictability of its behavior ( formation , stability , motion , etc . ) and the variability of its properties ( structure , size , color , temperature , etc . ) .nevertheless , there are many sufficient qualitative similarities in the qualitative accepted properties to imply that ball lightning either is a real unique phenomenon or at least represents a homogeneous class of related physical phenomena . over the past 200 years , more than 2000 observations of ball lightninghave been reported in sufficient details for scientists to take them seriously .they are almost invariably associated with stormy weather .sometimes they just disappear , so they probably be gaseous .but it is not obvious how there can be a surface between two gases that is sufficiently stable to allow the bl to bounce or to squeeze through small holes . a possible explanation is provided by an approximate thermodynamic analysis of the process which must be occurring as ions escape from a hot air plasma into moist , electrically charged air .the process is the hydration of ions at high relative humidity which is basically an electrostatic phenomenon ( turner , 1994 ) . mainly after the books of singer , stakhanov , barry and stenhoff is a clear evidence for the existence of bl as a real physical phenomenon .all books consider individual accounts and statistical analyses of a large number of observations . another survey by smirnov contains an even larger collection of statistical information .now there is a remarkable consensus about the main characteristics derived from over 2000 verified observations .bl are generally observed during thundery weather , though not necessarily during a storm .it appears to be a free floating globe of glowing gas , usually spherical in shape , which can enter buildings or airplanes .the formation of bl is rarely seen , although it has occasionally been observed forming from linear lightning in the sky and growing out of and detaching from electrical discharges on the ground .it has also been observed to fall from the cloud base .rarely also it has been reported bl rolling on or bouncing off , usually wet , surfaces .the credibility of the reports is an important issue when we handle with rare observations .it is apparent , on the basis of thousand of observations that the bl objects have considerable stability .because they are gaseous this is a surprising feature and it has not yet received a satisfactory explanation .there is a general qualitative agreement about most of the main characteristics of a bl , such as size , shape , colour , stability , temperature , liftime and demise .however some of the estimated ranges for the quantitative properties differ for different authors .for example , the estimated energy density is in the range by stakhanov or a wider range but below the value by barry .bichkov et al . 
consider the most energetic bl effects and estimate a very high energy density and this can be explained by a polymer composite model .the energy source for bl are generally divided into two basic groups : one assumes that the ball is continuously powered by some external sources , such as the electric field ( ef ) from clouds or a radio - frequency ( rf ) field transmitted from discharges in the clouds .the other group assumes that no external sources are needed and that the ball is generated with sufficient energy to sustain it for its full lifetime .this internal source of energy are mainly organic materials , unstable molecules and plasma from a lightning stroke from clouds .a small number of the observations has been accurately investigated to determine the reliability of the eyewitness and to evaluate the related reports .none of the attempts has succeeded in obtaining even a photograph of the elusive phenomenon .in fact , only a few photographs have been obtained by chance by observers who also saw the object photographed , ( singer , ) .a number of photographs of alleged ball lightning are of exceptionally poor quality and have insufficient detail for evaluation or to yield useful data ( stenhoff, ) .reviews by barry and stenhoff confirm that `` the evidence provided by still photographs alleged to be of ball lightning is very questionable . still photographs taken by chance will always be a matter of controversy .the likelihood of obtaining probative photographic evidence of ball lightning through chance observation is small .videotapes ( or films ) have the potential to yield more useful data , but there is still the possibility of error . ''indeed , the reported films contain artifacts of different nature ( stenhoff , ) .the sizes , as reported by observations , vary usually in the range 5 - 30 cm of diameter .the average observed diameter is 24 cm , but twice as large diameters are often observed , birbrair .it is important to note that the size of a typical bl depends on too many unknown quantities to allow a realistic prediction , turner .a characteristic almost always noted is that the size of the ball hardly changes during the observed lifetime .there is a wide range of plasma temperatures required by different analyses .one of the highest estimates for a bl was that of dmitriev , who suggested a value of 14000 c. in the model of powell and finkelstein the plasma temperature estimated by the radiation emitted with a radio - frequency excitation was 2000 - 2500 k. the core of a bl are hot enough to melt holes in glass .very little heat appears to be emitted from the external surface . in the view of stakhanov , with an internal source of energy , the bl plasma is of quite low temperature ( 500 - 700 k ) with ions extensively hydrated .this low temperature is based mainly on heat loss calculations and on common evidence for the low surface temperature of some bl as reported by many direct observations .turner considers more convincing all the evidence for a central plasma zone with a temperature of at least 2000 k. color and brightness vary .the observed bl colors cover the region from ( red ) to ( violet ) , ofuruton et al . .the fact that a wide range of colors has been reported seems to reflect the presence of impurities in the plasma and does not appear correlated with the size or other properties .the most common reported color is flame - like , approximatively orange , but occasionally brilliant white or red , blue or less often green ( singer , ). 
observed motion of bl : it frequently moves horizontally at speeds between 0.1 and 10 m / s and a meter or so above the ground . there are also reports of vertical motions or of more irregular types . following stakhanov , we find that in 30% of observations a slow rotation of the ball was reported . most bl last ( or are observed ) for less than 50 s , although some russian surveys reported that a bl can last over 100 s ( stakhanov , ) . during its lifetime it rarely changes significantly in either size or color , but its life can end in two quite different ways : explosively or by simply disappearing . the motion of the ball is sometimes directly down from the clouds , sometimes upward from its appearance near the ground , and sometimes in a straight line at low velocity ( singer , ) . the incompatibility of existing models with all the accepted qualitative properties of bl has led many authors to propose and to prefer quite different models ( e.g. singer , ; stakhanov , ; barry , ; turner , , ; stenhoff , , etc . ) . first of all , it is not widely accepted that ball lightning is a single basic type of phenomenon . an external source of energy can explain the long life , greater than 100 s , of some bl . however , an internal source of energy better explains the frequently observed motion of the ball and its occasional appearance inside enclosed and electrically shielded areas . singer prefers an external supply of energy by rf - induced powering , whereas stakhanov , prefers an energy self - sufficient ball where the energy originates from the plasma of a nearby lightning stroke . turner considers the possibility of both internal and external sources of energy . in fact , the ball could store the electrical ( or electromagnetic ) energy received from a local discharge and use it to replace the external source when it becomes unavailable . in this case , the electric field of a thunderstorm provides most of the required external power , at least during the formation of a bl . ignatovich considers an electromagnetic model , i.e. a thin spherical layer filled with electromagnetic radiation retained because of total internal reflection , while the layer itself is conserved because of electrostriction forces generated by the radiation . this model can explain both the high energy and the long life of the bl . a high - temperature superconducting circular current around the tube of a torus is a possible model proposed by birbrair . in his model the shape of bl is not a common sphere , but he notes that many different forms , including the torus , are also observed .
in this approachwe can explain both the high energies ( 100 kj ) and the small intensity of radiation , for exploding bl .turner , in order to explain the structure and stability of bl considers a central plasma core surrounded by a cooler intermediate zone in which recombination of most or all of the high - energy ions takes place .further out , is a zone in which temperatures are low enough for ions to become hydrated .moreover , near the surface of the ball there is a region in which a thermochemical cooling process can take place .powell and finkelstein model assumes that a bl is powered by the electric field which exists between the earth and cloud base .they suggest that for a bl with typical size and temperature ( 2000 - 2500 k ) the multiplication of electrons by atomic collisions should be sufficient to sustain the plasma at realistic electric fields .muldrew , , considers a mathematical model of bl assuming that a solid , positively charged core exists at its center .the large amount of energy occasionally associated with bl is mainly due to the electrostatic energy of the charge on the core .the upper energy limit is determined by the size and strength of the core and this energy can be orders of magnitude greater than the energy which can be confined by atmospheric pressure alone . a pure electron layer and a plasma layer surround the core .an electromagnetic field is completely trapped by the electron and plasma layers .the electron temperature is sufficiently high that absorption by electron - ion collisions is small , enabling the ball to have a lifetime of seconds or more .gilman , suggests a model that consists of highly excited rydberg atoms with large polarizabilities that bind them together , with cohesion properties that comes from photon exchange forces instead of electron exchange forces . in this modelwe assume that the density of a bl must be comparable with that of air and it is able to explain the deformability property of some observed bl .torchigin , prefers a radically new approach , where a bl is not composed by material particles but it is a pure optical phenomenon where only an intense light and compressed air interact . in this model a bl is a light bubble which shell is a thin film where the refractive index n is increased as compared with the near space .the shell confines an intense light circulatin within it in all possible directions .this nonlinear optical model can better explain most of the irregular motions and shapes of some bl .the model of tsintsadze , is based on a weekly ionized gas in which the electromagnetic radiation can be accumulated through a bose - einstein condensation or density inhomogeneity of plasma .this model can explain the observed stability of bl , its motion and deformability ; further , it can explain the external conditions for instability and its explosive disappearing .the model presented by coleman , , is based on burning atmospheric vortices where combustion is the source of the observed luminosity .this model can explain the complex and irregular motions of bl .an extensive list of current difficulties which cause the current modelling problems is contained in the review by turner . 
for a very detailed list of observational properties and characteristics delineated from the numerous surveys of eyewitness reports and additional physical parameters that have been estimated for bl phenomenon on the basis of statistical analysis performed on the surveys , we refer to the review by davis , .we can conclude this short review on the subject of essential properties of ball lightning by quoting the words of turner , : `` because we do not know how to make ( laboratory ) long - lived ball lightning or to model what seem ( all ) the crucial processes , we are forced to use any published material which is potentially related to any ball lightning property if we wish to make progress more rapidly than in the past '' .on a cloudy and almost raining summer day of the past year ( june 20 , 2010 ) one of the authors ( p.v . ) , during a planned trekking , he was walking along a mountain path near the town pruno ( stazzema , lucca - italy ) located at 470 m above sea level .the meteo conditions was very bad on the morning with variable intensity rainfall , interrupted by moments of `` quite '' .there was no wind and temperature was around 15 - 18 c. at a given point , early in the afternoon , he stops on a little bridge that crosses the deglio s river that , due to heavy rains even the day before , had an exceptional flow rate .he decide to stay there to catch a short movie of the river upstream and , in particular , to frame a precise area of the river from a distance of about 20 m by using the zoom of his digital video - camera .the movie lasts about six seconds and in this short time interval , within the monochrome viewfinder , he feels something unexplained strange , but he does not give some attention . only in the final editing of the video , he understands what was strange . in a totally random way in that shot he took over a small ball of orange light , but more white inside the nucleus , moving with irregular motion , with estimated size of a few centimeters .within those six seconds , the object remained visible for about three seconds , maintaining a constant size and brightness , after which it suddenly vanishes . halving the speed of the movie , in fact, it is noted that instead of vanishing abruptly the ball accelerates upwards in a diagonal line and then vertically , leaving the camera field of view . at the time of the video recording was not raining and the sky was partially covered .the witness did not receive any special smell , no sound or noise except that of the flowing river. was it a bl ?the luminous ball recorded in the video looks like to a bl , but we need a detailed image analysis to see if this object can be a good candidate for a true bl . 
the reader interested to know more on the video - camera recording can refer to the following web link : http://fulmineglobulare.xoom.itin this section we report the processes used to eleborate the video and related images .the original video is in a digital format and thus it is possible to perform a digital photometric rgb analysis of the recorded luminous ball object .we recall here that the rgb model is commonly used for the sensing , representation , and display of images in electronic systems .an rgb - color is a ( red , green , blue ) vector .components are here integers between 0 and 255 .an rgb - system is in close connection with the consolidated tristimulus color - vision theory of young - helmholtz - maxwell for human .it is based on 3 cones with maximal sensitivity at 564 , 534 and 420 nm supplemented with brightness channel of rods .further , we show a basic dynamical analysis of the ball image in the time interval of its visibility , taking into account the real physical scales of objects as recorded in the video . in order to determine the physical scales, we went in the exact observation site to make accurate measurements of the background objects .the distance between the viewpoint and the area of appearance of the ball was about 17.6 meters , while the comparison between the size of the surrounding rocks and the image in the movie was able to estimate the size of the pixel in the object plane of about 2.2 mm .a photometric rgb analysis of the ball image required the selection of different frames extracted from the original video .in particular , we have identified three frames taken from the original movie corresponding to three different positions of the ball with three different conditions of the background .further , we performed the analysis of spatial variation of intensity of the luminous `` sphere '' , measured along a diameter , splitting the rgb channels .note that although the object shape is near spherical one , in the following frames , due to the video format conversion , the ball images appear slightly vertically elongated .figure 1 shows the selected frame in the first position .note the location of the ball near the edge of the border between the background consisting of water and rock . in order to characterize both the ball and the background , we display in figure 2 a portion of the frame around the ball and its rgb analysis . in the figureare reported both the pixel scale and the scale on the object plane , appropriately scaled for a direct comparison .we note firstly that the intensity of the emission of the water is higher ( about a factor of 2 ) than that of the rock .thus , the emission of background where is located the object is decreasing from left to right generating an asymmetric global emission . secondly , the water emission ( on the left ) is well rgb characterized , with a predominant b channel , followed by green and red channels . to support this fact ,we show in figure 3 the emission from a water background in a different location ( in the middle of the fall ) where was present only water . on the other hand ,the rock emission is not well rgb characterized , showing only an intermittent dominant red component . 
using the above background characterization , and the fact that the object emission is characterized by a strong red dominant component, we have identified the spatial extension of our phenomenon , on the basis of the relative rgb behavior , as shown in figure 2 by vertical bars .the left bar was positioned at the inversion between the b ( water ) and r ( object ) components ; whereas the right bar was positioned where the r component is no more dominant .the spatial extension estimated of the ball is : 3.26 cm ( ) .this value is compatible with the accepted size range , as repoted in literature , of a typical dimension for a small bl . inside the region, we can note the presence of a intense central peak with an extension of about 1.5 cm and two secondary peaks corresponding to a ring feature of the image . in the following images we reported only the region around the central peak .figure 4 shows the rgb analysis of frame in figure 1 .the profile of rgb emission curves clearly shows that there is a red dominant component in the light emitted from the ball followed by green and blue ones .the value of r peak is : 218 , whereas the base level is : 135 .the extension of the main peak central core is : 1.5 cm .figures 5,6 show the second selected frame and its rgb analysis .note that here the position of the ball is now located with the background consisting of rock .the profiles of rgb emissions confirm the characteristics of previous analysis .the value of r peak is : 228 , whereas the base level is : 103 .this lower value is due to a lower emission from the background ( rock ) .the extension of the main peak central core is : 1.6 cm .figures 7,8 show the third selected frame and its rgb analysis .note that here the position of the ball is located with a background consisting of water .again the profiles of rgb emissions confirm the main characteristics of previous analyses .the value of r peak is : 212 , whereas the base level is : 165 .this higher value is due to a higher emission from the background ( water ) .the extension of the main peak central core is : 1.5 cm . from the last three figures we note that the maximun red emissions are almost the same , whereas the background emission is very different . 
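the rgb line - profile analysis described above can be sketched in a few lines : load one frame , read the three channels along a horizontal line through the ball , and estimate the extent of the region where the red channel dominates the other two . the file name , the row index and the dominance margin are assumptions for illustration , not the values used for the figures .

```python
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("frame_42.png").convert("RGB"), dtype=float)  # hypothetical frame
row = 240                      # image row passing through the centre of the ball (assumed)
profile = frame[row, :, :]     # shape (width, 3): R, G, B intensities (0..255) along the line
r, g, b = profile[:, 0], profile[:, 1], profile[:, 2]

# crude extent estimate: contiguous run around the red peak where red dominates
# both other channels by some margin (threshold chosen for illustration)
margin = 10.0
red_dominant = (r > g + margin) & (r > b + margin)
peak = int(np.argmax(r))
left = peak
while left > 0 and red_dominant[left - 1]:
    left -= 1
right = peak
while right < len(r) - 1 and red_dominant[right + 1]:
    right += 1

mm_per_px = 2.2                # object-plane scale quoted in the text
print(f"R peak {r[peak]:.0f} at column {peak}, base level ~{np.median(r):.0f} (crude background)")
print(f"red-dominant extent: {right - left + 1} px = {(right - left + 1) * mm_per_px / 10:.1f} cm")
```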
in order to confirm this characteristic in figure 9we show the maximum values of r emissions and the relative background emissions for six different positions of the ball .the different locations are displayed in the inset , and correspond to the complete observed path of the ball covering very different background conditions .we stress here that the r peak values are almost constant whereas the background levels are strongly variable ( about a factor of 3 ) .this supports the fact that the ball is not `` transparent '' in the visible and the emission is constant during the observed time interval .from the above rgb analysis of the selected luminous ball images we can draw the following conclusions : \1 ) the rgb analysis shows that the type of light emission from the ball is not monochromatic ; + 2 ) there is no saturation of the image ; + 3 ) from 1 ) and the absence of blinking of the ball , mainly in the position related to turbulent water , would be excluded , as an explanation of the image , the projection of a laser source at a distance ; + 4 ) the light emission of the object dominates in the red band in all positions ; + 5 ) the detected intensity of light is emitted from the source and it is not a reflection from any kind of an external source and is almost constant ; + 6 ) the peak intensity does not change significantly when changing the emissivity of the background : the object is not `` transparent '' in the visible ; + 7 ) estimated size , motion , stability , type of light emission and the environmental high - humidity conditions , would suggest a probable ball lightning . + to strengthen the conclusion in 3 ) , after a preliminary characterization of the ccd video - camera response to three different types of lasers at wavelengths : nm ( red ) , nm ( green ) and nm ( orange ) , we projected the spot of red laser in the same area ( and weather conditions ) of initial observation of the ball .the related rgb analyses clearly showed that the luminous ball object recorded by the video - camera can not in any way be attributed to a diffused and/or a reflected monochromatic light from a laser .a laser image , unlike the ball image , is monochromatic .( for further details on laser tests see the site : http://fulmineglobulare.xoom.it ) in the following dynamical analysis , we will further support the above conclusions based on rgb emission analysis .we analyzed the motion of the luminous ball recorded in about 2 seconds of its visibility in quasi - stationary conditions , before his sudden demise with great acceleration . 
to determine the dynamics of the ball , we initially split the movie into individual frames ; the frames were then aligned ( stacking ) , isolating the image of the ball by eliminating the background context .the result is shown in figure 10 .as we can see , from a first inspection , the ball shows a high variability in the motion , with quasi - stationary periods alternated with periods of high speed .of course , we dropped the final frames where the ball accelerates and disappears .starting from the individual frames , we extracted the coordinates ( in pixels ) of the `` core '' of the ball and transformed them into the known physical size scale reported above .limiting ourselves to the dynamics in the plane perpendicular to the line of sight ( transverse components ) , the calculation of euclidean distances , given that the interval between two successive frames is 1/25 s , led to the graph of speed shown in figure 11 .we can easily see continued acceleration and deceleration , with a few intervals at a constant speed .the peak shapes show a dynamics of the ball that would need to be resolved at rates well beyond the typical scan rate of the movie : 1/25 s. the largest acceleration occurs at about 1.59 s , where the velocity rose from 37.5 cm / s to 180.3 cm / s in 0.04 seconds ( acceleration : 3.6 g ) , and then reduced to 12.5 cm / s in 0.16 seconds . here no rotation of the ball was detected . after this analysis of the dynamics of the ball we can safely say that the type of motion and the quantitative estimates of speed agree well with the data and current models of ball lightning . to complete the analysis , we also performed a sound analysis : the file of environmental noise sampled at 8khz was subjected to a wavelet analysis to highlight the possibility of detecting , during the time of appearance of the ball , a noise characteristic of the bl .however , the obtained results clearly showed no evidence of specific frequencies different from those of the ambient noise background .following the results shown in the preceding sections we now make some brief remarks on the characteristics of the recorded ball and on some of the bl models proposed in the literature .we have already noted that the size ( about 4 cm ) , the light emission ( almost constant ) , the motion ( quite variable ) and the duration ( ) of the ball are within the parameters allowed and accepted for a bl .the profiles of the rgb curves are in agreement with the electrochemical model of turner , , where a hot core plasma is surrounded by shells at lower temperature where ion recombination processes and hydration take place .the shape of the object is an almost spherical one , and therefore , to explain the observed ball , we can exclude those models that consider more complex geometries .the observed relatively short duration of the ball does not require the assumption of external sources of support or the use of electromagnetic models , such as that by ignatovich , to explain a long lifetime of the bl .the analysis of the ball motion , even when it is seen projected onto a plane perpendicular to the line of observation , highlights both a horizontal component , common to many bl ( 54% ) , and also a significant vertical component ( 19% ) that only a few models try to explain ( stenhoff , ) .the nonlinear optical model , by torchigin et al . , appears quite consistent with our results ( e.g. observed emissivity , dimension , motion ) but it needs further investigations to justify a long lifetime , greater than 3 sec .
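for reference , the transverse speed and acceleration estimates quoted earlier in this section can be reproduced from tracked core coordinates with a few lines of finite differences ; the sketch below assumes a placeholder array of pixel coordinates and a placeholder pixel-to-cm scale , since the actual tracking data are not reproduced in the text .

import numpy as np

def transverse_kinematics(xy_pixels, cm_per_pixel, fps=25.0):
    # xy_pixels : array of shape (n_frames, 2) with the ball-core position in
    # pixels , one row per frame (hypothetical tracking output).
    xy_cm = np.asarray(xy_pixels, dtype=float) * cm_per_pixel
    dt = 1.0 / fps                                  # 1/25 s between frames
    step = np.linalg.norm(np.diff(xy_cm, axis=0), axis=1)
    speed = step / dt                               # cm/s in the image plane
    accel = np.diff(speed) / dt                     # cm/s^2 , finite differences
    return speed, accel

as a sanity check , a jump from 37.5 cm / s to 180.3 cm / s over 0.04 s corresponds to roughly 3570 cm / s^2 , i.e. about 3.6 g , consistent with the value quoted above .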
as reported by stenhoff , a ball lightning is a relatively low energy phenomenon with a maximum energy of up to 3 kj . as reported , a ball visible in daylight is comparable to a 150 w filament lamp , and considering a luminous efficiency of about 20% it emits about 30 w of power in the visible part of the spectrum ( stenhoff , ) . in our case , considering that the volume of the observed bl is about we obtain a maximum energy density of about . in order to estimate the effective power emitted by our ball , we need an energy calibration of the video camera , which we would like to perform in a future work , allowing a direct comparison with the literature data . if we consider the light emitted in the visible part of the spectrum , the blackbody temperature for a yellow - orange ball would , from the wien law , be about 4900 k. red ball lightning would be cooler and blue ball lightning hotter .however , we have to consider that this is a quite unrealistic estimate based on a purely theoretical relation , and it is not clear whether the source of luminous energy is thermal or not ( stenhoff , ) .while in principle the existence of ball lightning is generally accepted , the lack of a conclusive , reliable and accurate theory has been partly responsible for some remaining skepticism about its real existence .in fact , most physicists have given a description in terms of plasma physics , but more detailed considerations , based on observations , have given rise to many unexpected problems . this has led to the suggestion of new concepts and interpretations of this phenomenon , developed in many common areas of physics , and also to the assumption of more or less exotic phenomena such as antimatter , new fundamental particles ( dark matter ) , or little black holes ( rabinowitz , ) . however , the authors believe that this atmospheric phenomenon can be well described within the physics of plasmas , electrochemical processes , nonlinear optics and electromagnetic fields . since at present the only way we can define the properties of ball lightning is through the direct accounts of observers , it is crucial to consider the reliability and accuracy of the reports .this fact introduces a certain degree of subjectivity that depends on the experience and knowledge of the observer . in this paper , the authors aim to contribute to the wide collection of observational reports through the objective analysis of a video recorded in the daytime , which strongly suggests the presence of a small moving ball lightning , and which provides specialists in the field with new material for further detailed studies .the main goal here was to provide some useful information , not subject to personal interpretation , to improve some aspects of the proposed models and to better explain the rare phenomenon of ball lightning .following this viewpoint , future work could usefully provide : 1 ) a more precise estimation of the temperature inside the ball through a deeper analysis of the spectral composition of the emitted light and a comparison with a blackbody emission ; 2 ) a more precise estimation of the power of the emitted light ; 3 ) a deeper investigation of the spatial structure of the ball ( e.g. multiple rings ) ; 4 ) a more advanced dynamic analysis of the ball , e.g. with a 3d motion reconstruction using proper image postprocessing algorithms .
|
in this paper we describe a video - camera recording of a ( probable ) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization . the results strongly support the bl nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible bl event for further analyses . some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper . keywords : ball lightning , atmospheric phenomena , image and signal analysis . pacs code : 52.80.mg , 92.60.pw , 07.05.pj
|
the casimir force , arising due to quantum fluctuations of the electromagnetic field , has been widely studied over the past few decades and verified by many experiments . until recently, most works on the subject had been restricted to simple geometries , such as parallel plates or similar approximations thereof .however , new theoretical methods capable of computing the force in arbitrary geometries have already begun to explore the strong geometry dependence of the force and have demonstrated a number of interesting effects .a substantial motivation for the study of this effect is due to recent progress in the field of nano - technology , especially in the fabrication of micro - electro - mechanical systems ( mems ) , where casimir forces have been observed and may play a significant role in `` stiction '' and other phenomena involving small surface separations .currently , most work on casimir forces is carried out by specialists in the field . in order to help open this field to other scientists and engineers , such as the mems community ,we believe it fruitful to frame the calculation of the force in a fashion that may be more accessible to broader audiences . in ref ., with that goal in mind , we introduced a theoretical framework for computing casimir forces via the standard finite - difference time - domain ( fdtd ) method of classical computational electromagnetism ( for which software is already widely available ) .the purpose of this manuscript is to describe how these computations may be implemented in higher dimensions and to demonstrate the flexibility and strengths of this approach . in particular , we demonstrate calculations of casimir forces in two- and three - dimensional geometries , including three - dimensional geometries without any rotational or translational symmetry .furthermore , we describe a harmonic expansion technique that substantially increases the speed of the computation for many systems , allowing casimir forces to be efficiently computed even on single computers , although parallel fdtd software is also common and greatly expands the range of accessible problems .our manuscript is organized as follows : first , in sec .[ sec : mult - exp ] , we briefly describe the algorithm presented in ref . to compute casimir forces in the time domain .this is followed by an important modification involving a harmonic expansion technique that greatly reduces the computational cost of the method .second , sec .[ sec : num - imp ] presents a number of calculations in two- and three - dimensional geometries .in particular , sec .[ sec:2d - geoms ] presents calculations of the force in the piston - like structure of ref . , andthese are checked against previous results .these calculations demonstrate both the validity of our approach and the desirable properties of the harmonic expansion . in subsequent sections ,we demonstrate computations exploiting various symmetries in three dimensions : translation - invariance , cylindrical symmetry , and periodic boundaries . these symmetries transform the calculation into the solution of a set of two - dimensional problems . 
finally in sec .[ sec : full-3d ] we demonstrate a fully three - dimensional computation involving the stable levitation of a sphere in a high - dielectric fluid above an indented metal surface .we exploit a freely available fdtd code , which handles symmetries and cylindrical coordinates and also is scriptable / programmable in order to automatically run the sequence of fdtd simulations required to determine the casimir force .finally , in the appendix , we present details of the derivations of the harmonic expansion and an optimization of the computation of .in this section we briefly summarize the method of ref . , and introduce an additional step which greatly reduces the computational cost of running simulations in higher dimensions . in ref ., we described a method to calculate casimir forces in the time domain .our approach involves a modification of the well - known stress - tensor method , in which the force on an object can be found by integrating the minkowski stress tensor around a surface surrounding the object ( fig .[ fig : dblocks ] ) , and over all frequencies .our recent approach abandons the frequency domain altogether in favor of a purely time - domain scheme in which the force on an object is computed via a series of independent fdtd calculations in which sources are placed at each point on .the electromagnetic response to these sources is then integrated in time against a predetermined function .the main purpose of this approach is to compute the effect of the entire frequency spectrum in a single simulation for each source , rather than a separate set of calculations for each frequency as in most previous work . , separated by a distance ,are sandwiched between two perfectly conducting plates ( the materials are either perfect metallic or perfect magnetic conductors ) .the separation between the blocks and the cylinder surface is denoted as .,scaledwidth=30.0% ] we exactly transform the problem into a mathematically equivalent system in which an arbitrary dissipation is introduced .this dissipation will cause the electromagnetic response to converge rapidly , greatly reducing the simulation time .in particular , a frequency - independent , spatially uniform conductivity is chosen so that the force will converge very rapidly as a function of simulation time .for all values of , the force will converge to the same value , but the optimal results in the shortest simulation time and will depend on the system under consideration . unless otherwise stated , for the simulations in this paper , we use ( in units of , being a typical length scale in the problem ) . in particular , the casimir force is given by : where is a geometry - independent function discussed further in the appendix , and the are functions of the electromagnetic fields on the surface defined in our previous work . written in terms of the electric field response in direction at to a source current in direction , , the quantity defined as : where , the differential area element and is the unit normal vector to at .a similar definition holds for involving the magnetic field green s function . as described in ref ., computation of the casimir force entails finding both and the field response with a separate time - domain simulation for every point . 
while each individual simulation can be performed very efficiently on modern computers , the surface will , in general , consist of hundreds or thousands of pixels or voxels . this requires a large number of time - domain simulations , this number being highly dependent upon the resolution and shape of , making the computation potentially very costly in practice .we can dramatically reduce the number of required simulations by reformulating the force in terms of a harmonic expansion in , involving the distributed field responses to distributed currents .this is done as follows [ an analogous derivation holds for ] : as is assumed to be a compact surface , we can rewrite as an integral over : where in this integral is a scalar unit of area , and denotes a -function with respect to integrals over the surface .given a set of orthonormal basis functions defined on and complete over , we can make the following expansion of the function , valid for all points : the can be an arbitrary set of functions , assuming that they are complete and orthonormal on . inserting this expansion of the -function into equation ( [ eq : gammaxx ] ) and rearranging terms yields : the term in parentheses can be understood in a physical context : it is the electric - field response at position and time to a current source on the surface of the form .we denote this quantity by : where the subscript indicates that this is a field in response to a current source determined by . is exactly what can be measured in an fdtd simulation using a current for each .this equivalence is illustrated in fig .[ fig : harmonic ] .the left part shows an expansion using point sources , where each dot represents a different simulation .the right part corresponds to using for each side of .either basis forms a complete basis for all functions in . the procedure is now only slightly modified from the one outlined in ref .: after defining a geometry and a surface of integration , one will additionally need to specify a set of harmonic basis functions on .for each harmonic moment , one inserts a current function on and measures the field response .summing over all harmonic moments will yield the total force . in the following section , we take as our harmonic source basis the fourier cosine series for each side of considered separately , which provides a convenient and efficient basis for computation .we then illustrate its application to systems of perfect conductors and dielectrics in two and three dimensions .three - dimensional systems with cylindrical symmetry are treated separately , as the harmonic expansion ( as derived in the appendix ) becomes considerably simpler in this case .in principle , any surface and any harmonic source basis can be used . point sources , as discussed in ref . , are a simple , although highly inefficient , example . however , many common fdtd algorithms ( including the one we employ in this paper ) involve simulation on a discretized grid . for these applications , a rectangular surface with an expansion basis separately defined on each face of is the simplest . in this case , the field integration along each face can be performed to high accuracy and converges rapidly .the fourier cosine series on a discrete grid is essentially a discrete cosine transform ( dct ) , a well known discrete orthogonal basis with rapid convergence properties .this is in contrast to discretizing some basis such as spherical harmonics , which are only approximately orthogonal when discretized on a rectangular grid .
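to make the bookkeeping concrete , the sketch below builds a cosine source profile for one face of the integration surface and accumulates the force moment by moment , one fdtd run per harmonic index . the function run_fdtd_with_source and the sampled kernel g_kernel are hypothetical placeholders for an actual fdtd solver and for the g ( t ) constructed in the appendix , and the square-root normalization of the cosine basis is an assumed convention rather than an expression taken from the paper .

import numpy as np

def cosine_source_profile(n, edge_length, num_points):
    # f_n(x) ~ cos(n*pi*x/L) sampled at num_points equally spaced points along
    # one face of the integration surface; the normalization makes the
    # discrete basis approximately orthonormal (assumed convention).
    x = np.linspace(0.0, edge_length, num_points)
    norm = np.sqrt((1.0 if n == 0 else 2.0) / edge_length)
    return norm * np.cos(n * np.pi * x / edge_length)

def casimir_force_from_harmonics(run_fdtd_with_source, max_moment, g_kernel, dt):
    # run_fdtd_with_source(n) -> 1d array gamma_n(t): the surface-integrated
    # field response to the distributed current f_n (hypothetical hook into an
    # fdtd code).  g_kernel: the kernel sampled on the same time grid , here
    # assumed real (e.g. the relevant imaginary part for real source bases).
    force = 0.0
    for n in range(max_moment + 1):
        gamma_n = run_fdtd_with_source(n)
        force += dt * np.sum(g_kernel * gamma_n)   # time integral for moment n
    return force

in practice the loop can be stopped adaptively once the latest moment's contribution drops below a chosen tolerance , which matches the rapid convergence of the cosine expansion reported below .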
in this sectionwe consider a variant of the piston - like configuration of ref ., shown as the inset to fig .[ fig : dblocks - force ] .this system consists of two cylindrical rods sandwiched between two sidewalls , and is of interest due to the non monotonic dependence of the casimir force between the two blocks as the vertical wall separation is varied .the case of perfect metallic sidewalls has been solved previously ; here we also treat the case of perfect magnetic conductor sidewalls as a simple demonstration of method using magnetic materials . while three - dimensional in nature , the system is translation - invariant in the -direction and involves only perfect metallic or magnetic conductors . as discussed in ref .this situation can actually be treated as the two - dimensional problem depicted in fig .[ fig : harmonic ] using a slightly different form for in eq .( [ eq : time - force ] ) ( given in the appendix ) . the reason we consider the three - dimensional case is that we can directly compare the results for the case of metallic sidewalls to the high - precision scattering calculations of ref . ( which uses a specialized exponentially convergent basis for cylinder / plane geometries ) . for this system ,the surface consists of four faces , each of which is a line segment of some length parametrized by a single variable .we employ a cosine basis for our harmonic expansion on each face of .the basis functions for each side are then : where is the length of the edge , and for all points not on that edge of . these functions , and their equivalence to a computation using -function sources as basis functions , are shown in fig .[ fig : harmonic ] .in the case of our fdtd algorithm , space is discretized on a yee grid , and in most cases will turn out to lie in between two grid points .one can run separate simulations in which each edge of is displaced in the appropriate direction so that all of its sources lie on a grid point .however , we find that it is sufficient to place suitably averaged currents on neighboring grid points , as several available fdtd implementations provide features to accurately interpolate currents from any location onto the grid . , normalized by the pfa force .red / blue / black squares show the te / tm / total force in the presence of metallic sidewalls , as computed by the fdtd method ( squares ) .the solid lines indicate the results from the scattering calculations of , showing excellent agreement .dashed lines indicate the same force components , but in the presence of perfect magnetic - conductor sidewalls ( computed via fdtd ) .note that the total force is nonmonotonic for electric sidewalls and monotonic for magnetic sidewalls.,scaledwidth=48.0% ] the force , as a function of the vertical sidewall separation , and for both te and tm field components , is shown in fig .[ fig : dblocks - force ] and checked against previously known results for the case of perfect metallic sidewalls .we also show the force ( dashed lines ) for the case of perfect magnetic conductor sidewalls . in the case of metallic sidewalls ,the force is nonmonotonic in . 
as explained in ref ., this is due to the competition between the tm force , which dominates for large but is suppressed for small , and the te force , which has the opposite behavior , explained via the method of images for the conducting walls .switching to perfect magnetic conductor sidewalls causes the tm force to be enhanced for small and the te force to be suppressed , because the image currents flip sign for magnetic conductors compared to electric conductors . as shown in fig .[ fig : dblocks - force ] , this results in a monotonic force for this case . snapshots ( blue / white / red = positive / zero / negative ) for the term in the harmonic cosine expansion on the leftmost face of for the double blocks configuration of at selected times ( in units of ).,scaledwidth=48.0% ] the result of the above calculation is a time - dependent field similar to that of fig . [ fig : visualization ] , which when manipulated as prescribed in the previous section , will yield the casimir force . as in ref ., our ability to express the force for a dissipationless system ( perfect - metal blocks in vacuum ) in terms of the response of an artificial dissipative system means that the fields , such as those shown in fig .[ fig : visualization ] , rapidly decay away , and hence only a short simulation is required for each source term .in addition , fig . [fig : harmonic - convergence ] shows the convergence of the harmonic expansion as a function of .asymptotically for large , an power law is clearly discernible .the explanation for this convergence follows readily from the geometry of : the electric field , when viewed as a function along , will have nonzero first derivatives at the corners .however , the cosine series used here always has a vanishing derivative .this implies that its cosine transform components will decay asymptotically as .as is related to the correlation function , their contributions will decay as .one could instead consider a fourier series defined around the whole perimeter of , but the convergence rate will be the same because the derivatives of the fields will be discontinuous around the corners of .a circular surface would have no corners in the continuous case , but on a discretized grid would effectively have many corners and hence poor convergence with resolution . in the cosine basis to the total casimir force for the double blocks configuration ( shown in the inset),scaledwidth=48.0% ] dispersion in fdtd in general requires fitting an actual dispersion to a simple model ( eg .a series of lorentzians or drude peaks ) .assuming this has been done , these models can then be analytically continued onto the complex conductivity contour .as an example of a calculation involving dispersive materials , we consider in this section a geometry recently used to measure the classical optical force between two suspended waveguides , confirming a prediction that the sign of the classical force depends on the relative phase of modes excited in the two waveguides .we now compute the casimir force in the same geometry , which consists of two identical silicon waveguides in empty space . we model silicon as a dielectric with dispersion given by : where rad / sec , and , .this dispersion can be implemented in fdtd by the standard technique of auxiliary differential equations mapped into the complex- plane as explained in ref . . 
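the fitted dispersion itself is a single-resonance model that can be evaluated at real or complex frequency with a few lines of code ; since the explicit coefficients are not reproduced in the text above , the numbers below are purely illustrative placeholders and should be replaced by the actual fit parameters .

import numpy as np

# illustrative single-resonance (lorentzian-type) permittivity; the values of
# EPS_INF , DELTA_EPS and OMEGA0 are placeholders , not the paper's fit.
EPS_INF = 1.0
DELTA_EPS = 10.0
OMEGA0 = 6.6e15        # resonance frequency in rad/s (assumed order of magnitude)

def eps_silicon_model(omega):
    # accepts real or complex omega , so the same expression can also be
    # evaluated on the complex-frequency contour used to introduce dissipation.
    omega = np.asarray(omega, dtype=complex)
    return EPS_INF + DELTA_EPS * OMEGA0**2 / (OMEGA0**2 - omega**2)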
for perfect metals ( assuming plate area equal to the interaction area of the waveguides).,scaledwidth=48.0% ] the system is translation - invariant in the direction .if it consisted only of perfect conductors , we could use the trick of the previous section and compute the force in only one 2d simulation .however , dielectrics hybridize the two polarizations and require an explicit integral , as discussed in ref . .each value of corresponds to a separate two - dimensional simulation with bloch - periodic boundary conditions .the value of the force for each is smooth and rapidly decaying , so in general only a few points are needed . to simulate the infinite open space around the waveguides, it is ideal to have `` absorbing boundaries '' so that waves from sources on do no reflect back from the boundaries .we employ the standard technique of perfectly matched layers ( pml ) , which are a thin layer of artificial absorbing material placed adjacent to the boundary and designed to have nearly zero reflections .the results are shown in red in fig .[ fig : tang ] .we also show ( in blue ) the force obtained using the proximity force approximation ( pfa ) calculations based on the lifshitz formula .for the pfa , we assume two parallel silicon plates , infinite in both directions perpendicular to the force and having the same thickness as the waveguides in the direction parallel to the force , computing the pfa contribution from the surface area of the waveguide .as expected , at distances smaller than the waveguide width , the actual and pfa results are in good agreement , while as the waveguide separation increases , the pfa becomes more inaccurate .for example , by a separation of 300 nm , the pfa result is off by 50 .we also show for comparison the force for the same surface between two perfectly metallic plates , also assuming infinite extent in both transverse directions . in the case of cylindrical symmetry , we can employ a cylindrical surface and a complex exponential basis in the direction . for a geometry with cylindrical symmetry and a separable source with dependence , the resulting fields are also separable with the same dependence , andthe unknowns reduce to a two - dimensional problem for each .this results in a substantial reduction in computational costs compared to a full three - dimensional computation . treating the reduced system as a two - dimensional space with coordinates , the expression for the force ( as derived in the appendix ) is now :\int_s ds_j({\mathbf{x}})\,\gamma_{ij;n}({\mathbf{x}},t ) \label{eq : force - cyl}\ ] ] where the -dependence has been absorbed into the definition of above : , \label{eq : gamma - cyl}\ ] ] and , being a one - dimensional cartesian line element . as derived in the appendix , the jacobian factor obtained from converting to cylindrical coordinates cancels out , so that the one - dimensional ( -independent ) measure is the appropriate one to use in the surface integration .also , the ] alone in eq .( [ eq : force - cyl ] ) . given an dependence in the fields , one can write maxwell s equations in cylindrical coordinates to obtain a two - dimensional equation involving only the fields in the plane. 
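in practice the contributions of successive azimuthal indices decay quickly , so the sum over the index can be truncated adaptively , as in the sketch below . here force_m is a hypothetical wrapper that performs the corresponding two-dimensional run in the plane and returns the full contribution of that index , including any symmetry weighting ; the weighting is deliberately left inside the wrapper because the explicit prefactors are not reproduced in the extracted text .

def force_from_azimuthal_sum(force_m, tol=1e-2, max_m=32):
    # force_m(m): hypothetical per-index contribution from one 2d run;
    # the loop stops once the latest term is negligible relative to the
    # accumulated total.
    total = 0.0
    for m in range(max_m + 1):
        term = force_m(m)
        total += term
        if m > 0 and abs(term) < tol * abs(total):
            break
    return total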
this simplification is incorporated into many fdtd solvers , as in the one we currently employ , with the computational cell being restricted to the plane and appearing as a parameter .when this is the case , the implementation of cylindrical symmetry is almost identical to the two - dimensional situation .the only difference is that now there is an additional index over which the force must be summed . to illustrate the use of this algorithm with cylindrical symmetry, we examine the 3d system shown in the inset of fig .[ fig:3d - blocks - force ] .this configuration is similar to the configuration of cylindrical rods of fig .[ fig : dblocks - force ] , except that instead of translational ( ) invariance we instead impose rotational ( ) invariance . in this case , the two sidewalls are joined to form a cylindrical tube .we examine the force between the two blocks as a function of ( the case has been solved analytically ) . for the cylindrically - symmetric piston configuration shown in the figure .both plates are perfect metals , and the forces for both perfect metallic and perfect magnetic conductor sidewalls are shown .note that in contrast to fig .[ fig : dblocks - force ] , here the force is monotonic in for the metallic case and non monotonic for the magnetic case.,scaledwidth=48.0% ] due to the two - dimensional nature of this problem , computation time is comparable to that of the two - dimensional double block geometry of the previous section .rough results ( at resolution 40 , accurate to within a few percentage points ) can be obtained rapidly on a single computer ( about 5 minutes running on 8 processors ) are shown in fig .[ fig:3d - blocks - force ] for each value of .only indices are needed for the result to have converged to within 1 , after which the error is dominated by the spatial discretization .pml is used along the top and bottom walls of the tube .in contrast to the case of two pistons with translational symmetry , the force for metallic sidewalls is monotonic in .somewhat surprisingly , when the sidewalls are switched to perfect magnetic conductors the force becomes non monotonic again .although the use of perfectly magnetic conductor sidewalls in this example is unphysical , it demonstrates the use of a general - purpose algorithm to examine the material - dependence of the casimir force . if we wished to use dispersive and/or anisotropic materials , no additional code would be requiredperiodic dielectric systems are of interest in many applications .the purpose of this section is to demonstrate computations involving a periodic array of dispersive silicon dielectric waveguides above a silica substrate , shown in fig .[ fig : grating ] .-direction and translation - invariant in the -direction , so the computation involves a set of two - dimensional simulations.,scaledwidth=48.0% ] as discussed in ref . , the casimir force for periodic systems can be computed as in integral over all bloch wavevectors in the directions of periodicity . 
here , as there are two directions , and , that are periodic ( the latter being the limit in which the period goes to zero ) .the force is then given by : where is the force computed from one simulation of the unit cell using bloch - periodic boundary conditions with wavevector .in the present case , the unit cell is of period 1 m in the direction and of zero length in the direction , so the computations are effectively two - dimensional ( although they must be integrated over ) .we use the dispersive model of eq .( [ eq : silicon ] ) for silicon , while for silica we use where and ( rad / sec ) . as a final demonstration, we compute the casimir force for a fully three - dimensional system , without the use of special symmetries .the system used is depicted in fig .[ fig:3d - indented - sphere ] .this setup demonstrates stable levitation with the aid of a fluid medium , which has been explored previously in ref . .with this example , we present a setup similar to that used previously to measure repulsive casimir forces , with the hope that this system may be experimentally feasible .a silica sphere sits atop a perfect metal plane which has a spherical indentation in it .the sphere is immersed in bromobenzene . as the system satisfies ,the sphere feels a repulsive casimir force upwards .this is balanced by the downward force of gravity , which confines the sphere vertically .in addition , the casimir repulsion from the sides of the spherical indentation confine the sphere in the lateral direction .the radius of the sphere is m , and the circular indentation in the metal is formed from a circle of radius m , with a center m above the plane . for computational simplicity ,in this model we neglect dispersion and use the zero - frequency values for the dielectrics , as the basic effect does not depend upon the dispersion ( the precise values for the equilibrium separations will be changed with dispersive materials ) .these are for silica and . an efficient strategy to determine the stable point is to first calculate the force on the glass sphere when its axis is aligned with the symmetry axis of the indentation .this configuration is cylindrically - symmetric and can be efficiently computed as in the previous section .results for a specific configuration , with a sphere radius of nm and an indentation radius of m , are shown in fig .[ fig:3d - z - force ] . )force on the silica sphere ( depicted in the inset ) as the height of the sphere s surface above the indentation surface is varied .the point of vertical equilibrium occurs at nm.,scaledwidth=48.0% ] the force of gravity is balanced against the casimir force at a height of nm . 
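the vertical equilibrium can be located numerically from the tabulated cylindrically symmetric force data by balancing the casimir repulsion against the net weight of the sphere in the fluid ; the sketch below does this with an interpolant and a bracketing root finder . the force table , densities and radius are placeholder inputs , and the root finder assumes the net force changes sign inside the tabulated range .

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def equilibrium_height(z_nm, casimir_force_up, radius_m,
                       rho_sphere, rho_fluid, g=9.81):
    # z_nm , casimir_force_up : tabulated separations (nm) and repulsive
    # casimir forces (N) from the cylindrically symmetric runs (placeholders ,
    # at least four points for the cubic interpolant).
    volume = 4.0 / 3.0 * np.pi * radius_m**3
    net_weight = (rho_sphere - rho_fluid) * volume * g     # buoyancy-corrected
    net_force = interp1d(z_nm, np.asarray(casimir_force_up) - net_weight,
                         kind="cubic")
    return brentq(net_force, z_nm[0], z_nm[-1])            # equilibrium height (nm)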
to determine the strength of lateral confinement, we perform a fully three - dimensional computation in which the center of the sphere is displaced laterally from equilibrium by a distance ( the vertical position is held fixed at the equilibrium value nm ) .the results are shown in fig .[ fig:3d - x - force ] .it is seen that over a fairly wide range ( nm ) the linear term is a good approximation to the force , whereas for larger displacements the casimir force begins to increase more rapidly .of course , at these larger separations the vertical force is no longer zero , due to the curvature of the indentation , and so must be re - computed as well ., when the vertical position is fixed at nm , the height at which gravity balances the casimir force , scaledwidth=49.0% ] .[ fig:3d - x - force ] the fully three - dimensional computations are rather large , and require roughly a hundred cpu hours per force point .however , these casimir calculations parallelize very easily every source term , polarization , and -point can be computed in parallel , and individual fdtd calculations can be parallelized in our existing software so we can compute each force point in under an hour on a supercomputer ( with 1000 + processors ) .in contrast , the 2d and cylindrical calculations require tens of minutes per force point .we believe that this method is usable in situations involving complex three - dimensional materials ( e.g , periodic systems or systems with anisotropic materials ) .we have demonstrated a practical implementation of a general fdtd method for computing casimir forces via a harmonic expansion in source currents .the utility of such a method is that many different systems ( dispersive , anisotropic , periodic boundary conditions ) can all be simulated with the same algorithm . in practice, the harmonic expansion converges rapidly with higher harmonic moments , making the overall computation complexity of the fdtd method for grid points and spatial dimensions .this arises from the number of computations needed for one fdtd time step , while the time increment used will vary inversely with the spatial resolution , leading to time steps per simulation .in addition , there is a constant factor proportional to the number of terms retained in the harmonic expansion , as an independent simulation is required for each term . for comparison , without a harmonic expansion one would have to run a separate simulation for each point on . in that case, there would be points , leading to an overall computational cost of .we do not claim that this is the most efficient technique for computing casimir forces , as there are other works that have also demonstrated very efficient methods capable of handling arbitrary three - dimensional geometries , such as a recently - developed boundary - element method .however , these integral - equation methods and their implementations must be substantially revised when new types of materials or boundary conditions are desired that change the underlying green s function ( e.g. , going from metals to dielectrics , periodic boundary conditions , or isotropic to anisotropic materials ) , whereas very general fdtd codes , requiring no modifications , are available off - the - shelf .we are grateful to s. jamal rahi for sharing his scattering algorithm with us .we are also grateful to peter bermel and ardavan oskooi for helpful discussions .in ref . 
we introduced a geometry - independent function , which resulted from the fourier transform of a certain function of frequency , termed , which is given by : once is known , it can be integrated against the fields in time , allowing one to compute a decaying time - series which will , when integrated over time , yield the correct casimir force . has the behavior that it diverges in the high - frequency limit . for large , has the form : viewing as a function , we could only compute its fourier transform by introducing a cutoff in the frequency integral at the nyquist frequency , since the time signal is only defined up to a finite sampling rate and the integral of a divergent function may appear to be undefined in the limit of no cutoff . applying this procedure to compute yields a time series that has strong oscillations at the nyquist frequency .the amplitude of these oscillations can be quite high , increasing the time needed to obtain convergence and also making any physical interpretation of the time series more difficult .these oscillations are entirely due to the high - frequency behavior of , where .however , and only appear when when they are being integrated against smooth , rapidly decaying field functions or . in this case, can be viewed as a tempered distribution ( such as the -function ) .although diverges for large , this divergence is only a power law , so it is a tempered distribution and its fourier transform is well - defined without any truncation . in particular , the fourier transform of is given by : adding and subtracting the term from , the remaining term decays to zero for large and can be fourier transformed numerically without the use of a high - frequency cutoff , allowing to be computed as the sum of plus the fourier transform of a well - behaved function .this results in a much smoother which will give the same final force as the used in ref . , but will also have a much more well - behaved time dependence . in fig .[ fig : gt - new ] we plot the convergence of the force as a function of time for the same system using the obtained by use of a high - frequency cutoff and for one in which is transformed analytically and the remainder is transformed without a cutoff .the inset plots obtained without using a cutoff ( since the real part is not used in this paper ) for .if a complex harmonic basis is used , one must take care to use the full and not only its imaginary part .determined from a numerical transform as in ref . and from the analytic transform of the high - frequency components .inset : $ ] obtained without a cutoff , in which the high - frequency divergence is integrated analytically .compare with fig . 1 of ref . , scaledwidth=48.0% ]in addition to the treatment of the high - frequency divergence in the previous section , we find it convenient to also fourier transform the low - frequency singularity of analytically . as discussed in ref . , the low - frequency limit of is given by : after removing both the high- and low - frequency divergences of , we perform a numerical fourier transform on the function , which is well - behaved in both the high- and low - frequency limits . in the present text we are only concerned with real sources , in which case all fields are real and only the imaginary part of contributes to the force in equation ( [ eq : time - force ] ) . 
the imaginary part of is then : as discussed in ref ., the stress - tensor frequency integral for a three - dimensional -invariant system involving only vacuum and perfect metallic conductors is identical in value to the integral of the stress tensor for the associated two - dimensional system ( corresponding to taking a crossection ) , with an extra factor of in the frequency integrand . in the time domain, this corresponds to solving the two - dimensional system with a new .the extension of the above derivation to three dimensions and non - cartesian coordinate systems is straightforward , as the only difference is in the representation of the -function .because the case of rotational invariance presents some simplification , we will explicitly present the result for this case below . for cylindrical symmetry, we work in cylindrical coordinates and choose a surface that is also rotationally invariant about the -axis . is then a surface of revolution , consisting of the rotation of a parametrized curve about the axis .the most practical harmonic expansion basis consists of functions of the form . given a dependence , many fdtd solverswill solve a modified set of maxwell s equations involving only the coordinates . in this case , for each the problem is reduced to a two - dimensional problem where both sources and fields are specified only in the -plane .once the fields are determined in the -plane , the force contribution for each is given by : where the values of range over the full three - dimensional system . herewe introduce the cartesian line element along the one - dimensional surface in anticipation of the cancellation of the jacobian factor from the integration over .we have explicitly written only the contribution for , the contribution for being identical in form. for simplicity , assume that consists entirely of constant and constant surfaces ( the more general case follows by an analogous derivation ) . in these cases ,the surface -function is given by : in either case , we see that upon substitution of either form of into eq .( [ eq : cyl - force - m ] ) , we obtain a cancellation with the first factor .now , one picks an appropriate decomposition of into functions ( a choice of const or const merely implies that the will either be functions of , or , respectively ) .we denote either case as , with the and dependence implicit . as noted in the text , is simply the field measured in the fdtd simulation due to a three - dimensional current source of the form . in the case of cylindrical symmetry, this field must have a dependence of the form : this factor of cancels with the remaining .the integral over then produces a factor of that cancels the one introduced by .after removing these factors , the problem is reduced to one of integrating the field responses entirely in the plane .the contribution for each and is then : if one chooses the to be real - valued , the contributions for and are related by complex conjugation .the sum over can then be rewritten as the real part of a sum over only non negative values of .the final result for the force from the electric field terms is then : \sum_{n } \int_s ds_j(r , z)\ , f_n(r , z ) \gamma^e_{ij;n}(t , r , z)\ ] ] where the -dependence has been absorbed into the definition of as follows : \ ] ] we have also explicitly included the dependence on and to emphasize that the integrals are confined to the two - dimensional plane .the force receives an analogous contribution from the magnetic - field terms .
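as a closing numerical illustration of this appendix , the splitting described above amounts to subtracting the known power-law asymptote from the sampled frequency-domain kernel , transforming the smooth remainder with an ordinary quadrature , and adding back the analytically transformed asymptote . in the sketch below , gtilde , asymptote and asymptote_transform are placeholder callables standing in for the expressions whose explicit forms are omitted in the extracted text , and the 1/ ( 2π ) transform convention is an assumption .

import numpy as np

def g_of_t(gtilde, asymptote, asymptote_transform, omega, t):
    # gtilde(omega): sampled frequency-domain kernel (placeholder callable)
    # asymptote(omega): its high-frequency power-law behaviour
    # asymptote_transform(t): the analytic fourier transform of that power law ,
    #   understood in the sense of tempered distributions (user supplied)
    smooth = gtilde(omega) - asymptote(omega)        # decays at large omega
    phases = np.exp(-1j * np.outer(t, omega))        # shape (len(t), len(omega))
    remainder = np.trapz(smooth[None, :] * phases, x=omega, axis=1) / (2.0 * np.pi)
    return remainder + asymptote_transform(t)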
|
our preceding paper , ref . , introduced a method to compute casimir forces in arbitrary geometries and for arbitrary materials that was based on a finite - difference time - domain ( fdtd ) scheme . in this manuscript , we focus on the efficient implementation of our method for geometries of practical interest and extend our previous proof - of - concept algorithm in one dimension to problems in two and three dimensions , introducing a number of new optimizations . we consider casimir piston - like problems with nonmonotonic and monotonic force dependence on sidewall separation , both for previously solved geometries to validate our method and also for new geometries involving magnetic sidewalls and/or cylindrical pistons . we include realistic dielectric materials to calculate the force between suspended silicon waveguides or on a suspended membrane with periodic grooves , also demonstrating the application of pml absorbing boundaries and/or periodic boundaries . in addition we apply this method to a realizable three - dimensional system in which a silica sphere is stably suspended in a fluid above an indented metallic substrate . more generally , the method allows off - the - shelf fdtd software , already supporting a wide variety of materials ( including dielectric , magnetic , and even anisotropic materials ) and boundary conditions , to be exploited for the casimir problem .
|
the comparison and clustering of different time series is an important topic in statistical data analysis and has various applications in fields like economics , marketing , medicine and physics , among many others .examples are the grouping of stocks in several categories for portfolio selection in finance or the identification of similar birth and death rates in population studies .one approach to identify similarities or dissimilarities between two stationary processes is to compare the spectral densities of both time series , which directly leads to the testing problem for equality of spectral densities in multivariate time series data .this problem has found considerable interest in the literature [ see for example or for some early results ] , but in the nonparametric situation nearly all proposed procedures are only justified by simulation studies or heuristic proofs , see , , and among many others .most recently , , , and provided mathematical details for the above testing problem using different -type statistics , but nevertheless in all mentioned articles it is always required that the different time series have the same length , which is typically not the case in practice . considered different metrics for the comparison of time series with unequal sample sizes in a simulation study and provided a theoretical result , which however does not yield a consistent test , as was also pointed out by the authors .+ this paper generalizes the approach of to the case of unequal sample sizes and yields a consistent test for the equality of spectral densities for time series with different lengths . although the limiting distribution will be the same as in , note that our proof is completely different . this is due to the fact that one essential part in the proofs of is that the different processes have the same fourier coefficients , which is not given if the observed time series have different sample sizes . for the sake of brevity we will focus on the case of two ( not necessarily independent ) stationary processes , but the results can be easily extended to the case of an dimensional process .our aim throughout this paper is to estimate the -distance , where and are the spectral densities of the first and the second process respectively . under the null hypothesis this distance equals zero , while it is strictly positive if for , where is a subset of ] .this roughly speaking means that changes in the time series with fewer observations influence the more frequently observed series but not vice versa , which is for example the case if interest rates and stock returns are compared . throughout the paper we also assume that the technical condition is satisfied for an ( ) .note that the assumption of gaussianity is only imposed to simplify technical arguments [ see remark [ rem3 ] ] .furthermore , innovations with variances different to can be included by choosing other coefficients .we define the spectral densities ( ) through . an unbiased ( but not consistent ) estimator for is given by the periodogram , and although the periodogram does not estimate the spectral density consistently , a riemann - sum over the fourier coefficients of an exponentiated periodogram is ( up to a constant ) a consistent estimator for the corresponding integral over the exponentiated spectral density . for example , theorem 2.1 in yields where ( ) are the fourier coefficients of the smaller time series .
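for concreteness , the periodogram at the fourier frequencies can be computed with a standard fft as in the sketch below ; the 1/ ( 2πn ) normalization is one common convention and may differ from the constant in the ( elided ) definition above , so it should be adjusted to match .

import numpy as np

def periodogram(x):
    # periodogram of a (mean-centered) series at the fourier frequencies
    # lambda_j = 2*pi*j/n , j = 1 , ... , floor(n/2).
    x = np.asarray(x, dtype=float)
    n = x.size
    d = np.fft.fft(x - x.mean())
    j = np.arange(1, n // 2 + 1)
    freqs = 2.0 * np.pi * j / n
    return freqs, np.abs(d[j]) ** 2 / (2.0 * np.pi * n)

riemann sums of powers of these periodogram ordinates over the fourier frequencies then estimate , up to known constants , the corresponding integrals of powers of the spectral density , which is the building block of the estimator discussed next .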
if we can show that , we can construct a consistent estimator for through . although looks very much like , note that the convergence in is different , since the coefficients are not necessarily the fourier coefficients of the time series .this implies that the proof of has to be done in a completely different way than the proof of in .we now obtain the following main theorem .[ thm1 ] if , and are hölder continuous of order and for a , then as with although condition imposes some restrictions on the growth rate of and , it is not very restrictive , since in practice there usually occur situations where even holds for a ( if for example daily data are compared with monthly data ) and on the other hand this condition only needs to be satisfied in the limit . from theorem it now follows by a straightforward application of the delta - method that , where , which becomes under .to obtain a consistent estimator for the variance under the null hypothesis we define and analogous to the proof of theorem [ thm1 ] it can be shown that therefore an asymptotic level test for is given by : reject if where denotes the quantile of the standard normal distribution .this test has asymptotic power where is the distribution function of the standard normal distribution .this yields that the test has asymptotic power one for all alternatives with .[ rem1 ] + it is straightforward to construct an estimator which converges to the variance also under the alternative .this enables us to construct asymptotic confidence intervals for .the same statement holds if we consider the normalized measure , which can be estimated by . from theorem [ thm1 ] and a straightforward application of the delta method , it follows that where can be easily calculated . by considering a consistent estimator for ( which can be constructed through the corresponding riemann sums of the periodogram ) , provides an asymptotic level test for the so called _ precise hypothesis _ where [ see ] .this hypothesis is of interest , because spectral densities of time series in real - world applications are usually never exactly equal , and a more realistic question is then to ask whether the processes have approximately the same spectral measure .an asymptotic level test for is obtained by rejecting the null hypothesis whenever .[ rem2 ] + theorem [ thm1 ] can also be employed for a cluster and a discriminant analysis of time series data with different lengths , since it yields an estimator for the distance measure , where , which can take values between 0 and 1 .a value close to 0 indicates some kind of similarity between two processes , whereas a value close to 1 exhibits dissimilarities in the second - order structure . the distance measure can be estimated by where the maximum is necessary because the term might be negative .
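a hedged sketch of the resulting decision rule and of the truncated distance estimate is given below . because the explicit formulas for the estimators are not reproduced in the extracted text , the quantities d_hat , tau_hat_sq and the normalizer are taken as inputs , and the use of the smaller sample size n1 in the standardization is an assumption about the ( elided ) rate .

import numpy as np
from scipy.stats import norm

def reject_equal_spectra(d_hat, tau_hat_sq, n1, alpha=0.05):
    # reject h0 : f1 = f2 when the standardized statistic exceeds the
    # (1 - alpha) quantile of the standard normal distribution.
    return np.sqrt(n1) * d_hat / np.sqrt(tau_hat_sq) > norm.ppf(1.0 - alpha)

def clipped_distance(d_hat, normalizer):
    # estimated distance measure in [0, 1]; clipping at zero mirrors the
    # maximum taken above , since the raw estimate can be negative.
    return max(d_hat / normalizer, 0.0)

a matrix of such pairwise distances can then be passed , in condensed form , to scipy.cluster.hierarchy.linkage to produce a dendrogram of the kind used in the data example below .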
[ remunkor ] +the main ideas of the proof of theorem [ thm1 ] can be furthermore employed to construct tests for various other hypothesis .for example a test for zero correlation can be derived by testing for which can be done by estimating .an estimator for this quantity is easily derived using the above approach and furthermore the calculation of the variance is straightforward , which we omit for the sake of brevity .[ rem3 ] + although we only considered the bivariate case , our method can be easily extended to an dimensional process .moreover , a cumbersome but straightforward examination yields that our test also has asymptotic level , if we skip the assumption of gaussianity since ( under the null hypothesis ) all terms which consist the fourth cumulants of the processes cancel out . a similar phenomenon can be observed for the tests proposed by , , and .in this section we study the size and the power of test in the case of finite samples .all simulations are based on 1000 iterations and we consider all different combinations of with . forthe sake of brevity we only present the results for the case and note that the rejection frequencies do not change at all if we consider correlations different to zero .we furthermore tested our approach using non - linear garch models and obtained a very good performance also in this case .the results are not displayed for the sake of brevity but are available from the authors upon request . to demonstrate the approximation of the nominal level, we consider the five processes }+0.8x_{t-1}1_{[0.5t\le t\le 0.75t]}+z_t 1_{[t\ge0.75 t ] } \quad \text{for } t=1, ... ,t,\end{aligned}\ ] ] where the -model corresponds to a longmemory - process given by with the backshift - operator ( i.e. ) .note that the models and both do not fit into the theoretical framework considered in section 2 , since for the -process we obtain which contradicts and the structural - break model does not even has a stationary solution .nevertheless since these models are of great interest in practice , we investigate the performance of our approach in these cases as well .the results are given in table [ tab1 ] and it can be seen that the test is very robust against different choices of and . furthermore our method also seems to work for the models and although the convergence is slightly slower . to study the power of the test we additionally present the results of a comparison of with for and with ( all other comparison between the processes yield better results than the depicted ones ) .in this section we investigate how the clustering - method described in remark [ rem2 ] performs , if it is applied to real world data .therefore we took three log - returns of stock prices from the financial sector , three log - returns from the health sector and two key interest rates .exemplarily for the finance sector we choosed the stocks of barclays , deutsche bank and goldman sachs and the health sector is represented by glaxosmithkline , novartis and pfizer .the key interest rates were taken from great britain and the eu and all time series data were recorded between march 1st , 2003 and july 29th , 2011 . while the interest rates data were observed monthly , the stock prices were recorded daily or weekly . 
however , even if two stock prices were observed daily they might differ in length , since they are for example traded on different stock exchanges with different trading days .the result of our cluster analysis using is presented in the dendrogram given in figure [ cluster ] .we get three different groups which correspond to the finance sector , the health sector and the key interes rates . * acknowledgements * this work has been supported in part by the collaborative research center `` statistical modeling of nonlinear dynamic processes '' ( sfb 823 , teilprojekt c1 , c4 ) of the german research foundation ( dfg ) .by using the cramer - wold device , we have to show that for all vectors . for the sake of brevity , we restrict ourselve to the case since the more general follows with exactly the same arguments. therefore we show and we do that by using the method of cumulants , which is described in chapter 2.3 . of ( and whosenotations we will make heavy use of ) , i.e. in the following it is proved that which will yield the assertion . which yields that ( without the -term ) can be divided into the sums of three terms which are called , and respectively .for the first term we obtain the conditions ( all others cases are equal to zero ) .this results in with , where was used .it now follows by the hlder continuity condition that equals .if we consider the summand , we obtain the conditions , + which yields if we now employ the identity it follows with that if is chosen there are only finitely many which yields a non - zero summand. therefore we obtain that and with the same arguments it can be shown that . with ( we only have to consider partitions with two elements in each set , because of the gaussianity of the innovations ; in the non - gaussian case we would get an additional term containing the fourth cumulant ) .every chosen partition will imply conditions for the choice of as in the calculation of the expectation .for some partitions there will not be a in the exponent of after inserting the conditions and for other partitions there will still remain one .let us take an example of the latter one and consider the partition which corresponds to .we name the corresponding term of this partition in with and obtain the conditions , , , which yields where the last equality again follows with .now as in the handling of in the calculation of the expectation , implies that . +every other indecomposable partition is treated in exactly the same way and there are only three partitions which corresponding term in does not vanish in the limit .these partitions correspond to one of the following three terms : by using .now the hlder continuity condition implies and since the partitions and yield the same result , we have shown .+ with the same arguments as in the proof of it can be seen that * proof of for the case : * since the proof is done by combining standard cumulants methods with the arguments that are used in the previous proof , we will restrict ourselve to a brief explanation of the main ideas .we obtain which consists only of sets with two elements ( again this suffices because of the gaussianity of the innovations ) , it follows directly that at most of the variables ( , ) are free to choose . by using the same arguments as in the calculation of the variance and the expectationit then follows by the indecomposability of the partition that in fact only of the remaining variables are free to choose .this implies which yields the assertion .
|
this paper deals with the comparison of several stationary processes with unequal sample sizes. we provide a detailed theoretical framework for the testing problem of equality of spectral densities in the bivariate case, after which the generalization of our approach to the general multivariate case and to other statistical applications (like testing for zero correlation or clustering of time series data with different lengths) is straightforward. we prove asymptotic normality of an appropriately standardized version of the test statistic both under the null and the alternative and investigate the finite sample properties of our method in a simulation study. furthermore we apply our approach to cluster financial time series data with different sample lengths. * comparing spectral densities of stationary time series with unequal sample sizes * philipp preuß and thimo hildebrandt + fakultät für mathematik + ruhr - universität bochum , germany + ams subject classification : 62m10 , 62m15 , 62g10 keywords and phrases : spectral density , integrated periodogram , cluster analysis , time series , stationary process , unequal length
|
innovation appears to be an ubiquitary concept , which applies to a variety of contexts , including economy , physics , sociology , ethology , biology , linguistics , and so on .a typical setting able to support innovation includes a component , as a research group , whose specific goal is to produce a breakthrough which in turn is a precondition to find out new technologies , services , and even forms of art . as such , the capability of producing innovation becomes also an indicator of the wellness of a society . on the other hand , in its pure form, innovation is in fact an unexpected outcome , most likely due to random guessing , lateral thinking or serendipity .the most prominent examples of this kind of mechanism can be found in science , where innovation is fundamental for promoting groundbreaking intuitions . in this context innovationis motivated by the goal of dealing with unsolved problems and sometimes carries out , as side - effect , the emergence of new research fields .a relevant and recent example is constituted by the modern and vibrant field of complex networks , that is deeply affecting several scientific sectors just to cite few , social networks , epidemiology , genomics , neuroscience , financial systems , and many others . here, scientometrics aims is to understand the emergence and the evolution of scientific collaborations , and the metrics to measure related results .it is worth pointing out that innovation is not specifically tied to human activities , as by design all biological systems are able to support it . in biology, innovation pertains to it from both structural and behavioral perspectives . in the former case, one may observe ( or infer ) the emergence of new gene sequences in living organisms able to improve their fitness . in the latter case, specific studies concerning animal behavior pointed out that also animals are able to come up with innovative solutions .remarkably , also evolutionary computation ( in particular , genetic algorithms ) , strongly emphasizes the role of innovation versus development ( in the jargon used in that research community : exploration versus exploitation ) while evolving bit strings according to an oversimplification of the general principles that hold for the evolution of biological systems . in the light of these observations , we deem that the emergence of innovation can be viewed as an evolutionary process where several actors are involved ( see ) .in particular , some of them propose new ideas , while others develop these ideas turning them into practical technologies , services , and so on . as a matter of fact, the actual characteristics of an individual typically lay in the middle between innovators and developers .however , to better investigate the coexistence between these characters , we assume that an individual can be either innovator or developer . besides, a similar view has also been proposed in the field of mathematics by dyson , who divided mathematicians in two groups : birds and frogs . according to his picture , the former fly high in the air and survey broad vistas of mathematics out to the far horizons , and becoming aware about the connections between different fields , while the latter from their position are able to appreciate with more detail the flowers that grow nearby , i.e. 
they have a more granular and fine views of mathematical concepts and theories .this work is aimed at investigating the tight relationship that holds between innovators and developers , studying the underlying process using the evolutionary game theory framework ( egt hereinafter ) . to this end a specific game , named innovation game ,has been set up for shading light on the equilibria reached by a population composed of innovators and developers .the first issue that has been tackled was about the underlying context within which interactions between innovators and developers are supposed to occur . in fact , despite the availability of many tools that have been devised in support of collaborative work , apparently the most effective collaborations among humans still occurs on a local basis . as a consequence ,our population splits into small groups with fixed size , though preserving the possibility of rendering pseudo - random groups . in so doing ,we admit the possibility of ensuring mobility among agents , depending on the adopted grouping strategy . in either case, the density of innovators in a group ( , hereinafter ) constitutes a parameter of the system .we then concentrated on how to model the presence of innovators and developers in a group .let us briefly summarize the concerns about innovators and developers .as for innovators , although their presence is mandatory to get new insights , we had to consider the fact that they often represent a risk for they may be not successful at all over long periods of time .to account for both aspects , we introduced an _ award factor _ , aimed at accounting for the benefit for including innovators in a group , together with a penalty , aimed at accounting for the cost of unsuccessful insights ( or no insights at all ) over time .as for developers , their modeling did not require any specific care , as they typically tend to be effective from the very beginning of any activity they are involved in ( e.g. , a research project ) , and tend to establish tight relationships with their neighbors and with the hosting structure as well .then , we had to model the way innovative thinking can propagate over the given population . to better understand the underlying issues ,let us consider the relevant ( and well - known ) case concerning scientific publications . in this case, the whole process starts with publishing the results concerning a novel insight , or an improvement over an existing idea , in a scientific journal .depending on the degree of `` penetrance '' of the published paper , it may be undergo citations and the underlying idea or technology may be further improved by other people .hence , at least in principle , one can evaluate publications in accordance with the amount of citations .it is worth to clarify that receiving a citation ( i.e. a mention and/or further attention ) does not imply that an idea is more important than another .it just means that a community , according to its guidelines and rules , decides to follow and investigate specific ideas rather than others ( let us recall , for instance , that both einstein s general relativity and the higgs s boson required a lot of time before being accepted and recognized as real breakthroughs ) .notwithstanding the peculiarities related to the time required for a novel insight to be accepted and disseminated within the scientific community , in either case the number of novelties appears to be tightly related with the amount of innovators see fig .[ fig : innovators_function ] . 
) in a population with agents .each curve refers to results achieved using a different threshold , which can be used to measure the amount of innovation e.g ., the number of citations in a scientific context or the number of implementations of a new technology . the main role of thresholds ( i.e. ) is to link the emergence of novelties with the need of a community of developers able to put them into practice .results has been averaged over different simulation runs .[ fig : innovators_function],scaledwidth=55.0% ] going back to our attempt to model the innovation game , it should be now clearer the reason why the density of innovators becomes a crucial parameter of the model . in this scenario ,one may wonder whether increasing the mobility among agents increases the density of innovators ( while keeping fixed the amount of available resources ) in a system where each agent can change its behavior according to a gained payoff , i.e. being driven by a rational mindset .this question is also motivated by qualitative observations of the real world .for instance , many researchers like to spend part of their time visiting external labs , and in general workers often change companies also for improving their experience and skills . in addition , also biology suggests that mobility can be helpful for new solutions , as marriages between individuals without any degree of kinship reduce the probability to transmit diseases to their offspring .eventually , as below reported , results confirmed our hypothesis and suggest also that the emergence of innovation depends on the resources assigned to it .summarizing , the proposed innovation game occurs within a population of agents organized in groups of size . here , the payoff of each agent depends on the following factors : the heterogeneity of the formed groups , the number of innovators in these groups , and an award factor .the latter is a numerical parameter that represents the efforts made by a system for promoting innovation .hence , the payoff can be defined as follows : with magnetization , introduced for measuring the group heterogeneity , which in our case reads with number of developers in the group . assigning to innovators a spin equal to , and to developers a spin equal to , the value of falls in the range $ ] .the population evolves according to the following dynamics : at each time step , one agent ( say ) is randomly selected , with one of its neighbors ( say ) .so , according to the group to which they belong , and receive a payoff ( and , respectively ) as defined in eq .[ eq : payoff ] .then , the agent imitates ( see also ) the strategy of the agent with a probability depending on the difference between their payoffs , so that the greater , the greater the probability that imitates . as for the probability that imitates , it is computed by a fermi - like function : \right)^{-1}\ ] ] with and strategies of the two considered agents , noise ( temperature ) , set to ( see ) .[ eq : payoff ] can be divided into two parts : and .the former allows to promote heterogeneous groups , as the magnetization goes to when the amount of innovators approaches that of developers in the same group .the second part represents the additional fee ( i.e. the cost ) due to the presence of innovators . from a statistical physics point of view , the dynamics of the population can be assimilated to that of a spin system , so that at high temperature one expects a disordered ( paramagnetic ) phase , while at low temperatures ( i.e. 
lower than the critical one ) one expects an ordered ( ferromagnetic ) phase .notably , the paramagnetic phase corresponds to a population composed of both kinds of agents , whereas the ferromagnetic one corresponds to the presence of only one species .thus , at low temperatures , if the payoff is composed only of the first ( left ) part and starting with a mixed population ( i.e. at the density of innovators is equal to ) , at equilibrium one observes a disordered phase composed by the same amount of innovators and developers .conversely , due to the presence of the right term in eq .[ eq : payoff ] , the expected equilibrium ( at low temperatures ) corresponds to a population composed of only developers ( i.e. ) . in our view , the _ award factor _ can be mapped to the temperature of a spin system . in doing so , for different values of , we can draw the resulting agent combinations in a group with individuals see fig .[ fig : basic ] . remarkably , the right part of eq .[ eq : payoff ] plays the role of field generator , since for low values of , there is only one possible ordered equilibrium , as observed in a spin system at low temperatures in presence of an external field .apparently , the plot * c * of fig . [ fig : basic ] , reporting a scheme versus , highlights that three different phases indicated as * ( 1 ) * , * ( 2 ) * , and * ( 3 ) * emerge .clearly , here it is worth to clarify that both * ( 1 ) * and * ( 2 ) * correspond to a paramagnetic phase ( as shown on the right side of the schema , illustrating a pictorial representation of the free energy ) .in particular , phase * ( 1 ) * can be achieved only for high values of , when the effect of the field generator becomes too small for affecting the system , i.e. the imitation process .notably , as shown in plots * a * and * b * of fig .[ fig : basic ] , increasing the payoff becomes a symmetric function around the value .the latter represents the case with an equal amount of innovators and developers .then , according to a preliminary overview driven by statistical physics , we aim to characterize the phase transition occurring in our population on varying its temperature , i.e. the _ award factor_. thus , we can link the related result with the scenario represented in fig . [fig : density_innovators ] , so that for a given amount of resources one knows the expected number of innovations ( e.g. , original ideas ) over time .in order to investigate the dynamics of the proposed model and its equilibria , we performed numerical simulations . to this end , we considered two different configurations : a well - mixed population and a population arranged on a square lattice with periodic boundary conditions . considering a number of agents , the side of the square lattice is .in addition , setting , each agent in the lattice belongs to different groups , therefore also for the well - mixed case we consider this scenario .now , it is important to emphasize that comparing the dynamics of the model in the two described configurations has two main motivations .first , we can evaluate if the network - reciprocity effect supports innovation , as it does for the cooperation in dilemma games ( e.g. 
, ) .second , the well - mixed case allows to represent a sort of mobility effect , so that we can evaluate its influence in the dynamics of the population , which can be helpful to shade light on the hypothesis that mobility is able to support innovation .figure [ fig : density_innovators ] shows the density of innovators , at equilibrium ( or after time steps ) , in function of . ,scaledwidth=55.0% ] as expected , for high values of , both configurations show that the density of innovators slowly tends to . instead ,for low values of we find a critical threshold .remarkably , in the well - mixed case , is smaller than in the lattice topology , indicating that with poor resources , innovators survive only when some kind of mobility is introduced in the system . ,i.e. in function of , for both configurations : * a * ) well mixed and * b * ) regular lattice .[ fig : variance_innovators],scaledwidth=100.0% ] then , as reported in fig .[ fig : variance_innovators ] , we studied the variance of , confirming the relevance of the critical values observed in fig .[ fig : density_innovators ] . with the aim to characterize , at least on a quality level , the nature of the transition occurring in our population tuning the value of , we studied the system magnetization .notably , the latter is an order parameter , whose relation with the temperature ( represented by in the proposed model ) , is well known in statistical physics ( although not always simple to quantify ) . in function of .the inset shows a focus of in the small range close to the critical value , i.e. .[ fig : magnetization],scaledwidth=55.0% ] a glance to the system magnetization suggests that the observed phase transition can be classified as of first - order . in particular ,once that increases up to , innovators are able to survive and quickly reach a density similar to that of developers .this work studies the emergence of innovation under the framework of evolutionary game theory . in particular , by means of a model , inspired to the well - known dyson s classification of mathematicians , we analyze the dynamics of a population in terms of innovators and developers . in the proposed model innovators are expected to generate benefits for the society , although they represent also a risk as their work can sometimes be unsuccessful . in order to investigate all related issues , we defined a game where agents form small groups ( see also ) .their payoff depends on the heterogeneity of these groups ( see ) , on the amount of innovators , and on an award factor .notably , heterogeneity supports the emergence of groups composed of both kinds of agents .the amount of innovators is controlled by an additional fee which basically includes in the model the fact that they may be unsuccessful for long periods of time .eventually , the award factor represents the policy of a system in favor of innovation .it is worth recalling that the award factor plays a role similar to that of the synergy factor used in the public goods game for promoting cooperation . after providing a brief overview of the proposed model inspired by statistical physics , mainly based on the structure of the payoff ( eq . 
[ eq : payoff ] ), we performed many numerical simulations .in particular , two main configurations have been investigated : square lattice , with periodic boundary conditions , and well - mixed populations .notably , the latter allows to represent a mobility effect , which is very important in several real contexts .simulations showed that the network reciprocity effect , useful for promoting cooperation in many dilemma games having a nash equilibrium of defection , here reduces the amount of innovators .in addition , we found that the mobility effect plays a beneficial role for supporting innovation , in particular when poor resources ( e.g. , financial ) are reserved for innovation .going forward to real systems , we deem that our results may be a starting point to provide general interpretations of well known phenomena .in fact , the density of innovators could even be interpreted as the fraction of time allowed to people , working in a company or institution , to devise new projects and/or ideas . a notable example in support of this interpretation are some online services provided by google ( e.g. , gmail ) , which have been devised and designed by collaborators that were allowed to spend a fraction of their working time on new and independent projects .finally , we studied the order - disorder phase transition occurring in our model , computing the critical thresholds of the award factor ( i.e. ) .the latter allows to link the model to a simpler scenario , that investigates whether a trade - off between innovators and developers in a society can be found ( as shown in fig .[ fig : innovators_function ] ) .it is worth highlighting that , from a game theory perspective , our model is not a dilemma game , like for instance the prisoner s dilemma . in other words , hereagents do not take decisions choosing between their own benefit and that of their community . before concluding ,let us to spend few words about a possible experimental validation of our model .it is well known that , in the era of big data , the access to real data may allow to verify the correctness of hypotheses and theoretical models .remarkably , while the latter allow to speculate about the nature of a phenomenon , often providing useful insights , direct investigation , based on real data , typically allow to confirm ( or confute ) theories , and often open the way to new developments . in this work, we followed only a theoretical approach , based on qualitative observations of real scenarios .however , notwithstanding the fact that the model proposed in this work has not yet been validated with real - world data , let us briefly describe the kinds of dataset that might be suitable to this extent . in our view , a main experimental scenario would be the world of academia . in principle , every scientific paper that describes an original research represents an innovation .hence , a dataset where research grants , and other forms of support , related to a group , to a department or to an university , could be very helpful to investigate the correlation between funding , publication of scientific articles and innovation .in this context , a more granular dataset , e.g. 
, describing the amount of funding devised for exchange programs ( as visiting professorship and/or studentship ) , would allow to evaluate if the proposed model can fit real - word data on mobility and its capability to promote innovation .considering industry , real data related to the investments of companies on innovation , as well as in supporting employers to attend workshops and conferences , could be useful .obviously , in this case , real exchange programs ( as those that hold for academia ) often can not be implemented for a number of reasons .however , promoting the attendance to events like workshops might be considered , to some extent , as a kind of mobility . here , the task would be to identify relations between the amount of financial resources reserved for mobility ( as above described ) , by a company and the level of innovation of its products or services . as well as the amount of time granted to collaborators for developing independent projects .finally , let us emphasize that the proposed model finds some preliminary confirmations .notably , mapping the mobility effect to the way people get married ( see also ) , i.e. promoting marriages between people that belong to different families , strongly helps to reduce the transmission of hereditary diseases .as for future work , we aim to analyze the proposed model by arranging agents on more complex topologies ( e.g. , scale - free networks ) .numerical simulations have been performed on a square lattice with continuous boundary conditions and in a non - structured population , so that all agents have a degree equal to , i.e. they have four nearest - neighbors . in both configurations ,agents form groups of size , and the population evolves according to the following dynamics : 1 .define a population with agents , with the same amount of innovators and developers ( i.e. = 0.5 ) 2 .randomly select an agent , say , and one of its neighbors , say 3 . according to their respective groups of belonging , and a payoff ( i.e. and , respectively ) 4 . agent imitates the strategy of according to the probability defined in eq .[ eq : prob_transition ] 5 .repeat from , until the population reaches an ordered phase or a number of time steps elapsed the ordered phase mentioned in step indicates those configurations where agents have the same strategy / behavior .the maximum number or time steps has been set to .the authors wish to thank luciano pietronero and sergi valverde for their priceless suggestions .10 baronchelli , a. , felici , m. , loreto , v. , caglioti , e. , steels , l. : sharp transition towards shared vocabularies in multi - agent systems. _ journal of statistical mechanics : theory and experiment _ * 2006 * p06014 ( 2006 ) dyson , f. : birds and frogs ._ notices of the ams _ * 56(2 ) * 212 - 223 ( 2009 ) perc , m. , grigolini , p. : collective behavior and evolutionary games - an introduction ._ chaos , solitons & fractals _ * 56 * 1 - 5 ( 2013 ) gracia - lazaro , c. , et al . : heterogeneous networks do not promote cooperation when humans play a prisoner s dilemma . _ pnas _ * 109 - 32 * 1292212926 ( 2012 ) santos , f.c . ,pacheco , j.m .: scale - free networks provide a unifying framework for the emergence of cooperation . _ physical review letters _ * 95 - 9 * 098104 ( 2005 ) perc , m. , gomez - gardenes , j. , szolnoki , a. , floria , l.m . , and moreno , y. : evolutionary dynamics of group interactions on structured populations : a review ._ j. r. soc .interface _ * 10 - 80 * 20120997 ( 2013 ) antonioni , a. 
, tomassini , m. , buesser , p. : random diffusion and cooperation in continuous two - dimensional space . _journal of theoretical biology _ * 344 * 4048 ( 2014 ) huang , k. : statistical mechanics ._ wiley 2nd ed . _ ( 1987 ) nowak , m.a . : five rules for the evolution of cooperation . *science * * 314 - 5805 * 15601563 ( 2006 ) szolnoki , a. , chen , x. : cooperation driven by success - driven group formation ._ physical review e _ * 94 - 4 * 042311 ( 2016 )
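a compact monte carlo sketch of the dynamics in steps 1-5 above, using the fermi-like imitation rule of eq. [eq:prob_transition], is given below. since eq. [eq:payoff] is only described qualitatively in this excerpt, the payoff function used here (an award term that is largest for heterogeneous groups minus a fee proportional to the number of innovators in the group) is an assumed stand-in with the structure described in the text; the lattice side, noise k, award factor and fee are illustrative values rather than those used in the paper, and for brevity each agent's payoff is evaluated only on the group centred on it, whereas in the paper each agent belongs to several overlapping groups.

```python
import numpy as np

rng = np.random.default_rng(1)
side = 20                    # lattice side, so the population has side*side agents (illustrative)
k_noise = 0.5                # noise in the fermi-like rule (assumed value)
award = 1.0                  # award factor
fee = 0.2                    # cost per innovator in the group (assumed value)
max_steps = 200_000

# +1 = innovator, -1 = developer; initial density of innovators is 0.5
spins = rng.choice([1, -1], size=(side, side))

def group(s, i, j):
    """group of size 5: the agent plus its four nearest neighbours (periodic boundaries)."""
    return np.array([s[i, j], s[(i - 1) % side, j], s[(i + 1) % side, j],
                     s[i, (j - 1) % side], s[i, (j + 1) % side]])

def payoff(members):
    """assumed illustrative payoff: reward heterogeneous groups, charge a fee per innovator."""
    m = members.mean()                                     # group magnetization in [-1, 1]
    return award * (1.0 - m * m) - fee * np.count_nonzero(members == 1)

neigh = [(-1, 0), (1, 0), (0, -1), (0, 1)]
for step in range(max_steps):
    i, j = rng.integers(side), rng.integers(side)
    di, dj = neigh[rng.integers(4)]
    ni, nj = (i + di) % side, (j + dj) % side
    p_x = payoff(group(spins, i, j))
    p_y = payoff(group(spins, ni, nj))
    # fermi-like imitation: x copies y with high probability when y's payoff is larger
    if rng.random() < 1.0 / (1.0 + np.exp((p_x - p_y) / k_noise)):
        spins[i, j] = spins[ni, nj]
    if step % 1000 == 0 and np.unique(spins).size == 1:
        break                                              # ordered phase: a single strategy is left

print("final density of innovators:", np.count_nonzero(spins == 1) / spins.size)
```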
|
innovation is a key ingredient for the evolution of several systems, including social and biological ones. focused investigation and lateral thinking may lead to innovation, as may serendipity and other random processes. some individuals are talented at proposing innovations (say innovators), while others excel at deeply exploring proposed novelties, gaining further insights into a theory, or developing products, services, and so on (say developers). this raises an issue of paramount importance: under which conditions is a system able to maintain innovators? by means of a simple model, this work investigates the evolutionary dynamics that characterize the emergence of innovation. notably, we consider a population of innovators and developers, where agents form small groups whose composition is crucial for their payoff. the latter depends on the heterogeneity of the formed groups, on the number of innovators they include, and on an award factor that represents the policy of the underlying system for promoting innovation. under the hypothesis that a mobility effect may support the emergence of innovation, we compare the equilibria reached by a population of innovators and developers in two different cases: well-mixed and structured. results confirm the beneficial role of mobility and show a phase transition when the award factor reaches a critical threshold.
|
the graphics processing unit (gpu) has been an essential part of personal computers for decades. their role became much more important in the 90s, when the era of 3d graphics in gaming started. one of the hallmarks of this is the violent first-person shooter doom by the id software company, released in 1993. wandering around its halls of slaughter, it was hard to imagine these games leading to any respectable science. however, twenty years after the release of doom, the gaming industry is enormous, and the continuous need for more realistic visualizations has led to a situation where modern gpus have tremendous computational power. in terms of theoretical peak performance, they have far surpassed the central processing units (cpus). the games started to have real 3d models and hardware acceleration in the mid 90s, but an important turning point for the scientific use of gpus was around the first years of this millennium, when widespread programmability of gpus was introduced. combined with the continued increase in computational power as shown in fig. [fig:flops], gpus are nowadays a serious platform for general purpose computing. also, the memory bandwidth in gpus is very impressive. the three main vendors for gpus, intel, nvidia, and ati/amd, are all actively developing computing on gpus. at the moment, none of the technologies listed above dominates the field, but nvidia with its cuda programming environment is perhaps the current market leader. at this point, we have hopefully convinced the reader that gpus feature a powerful architecture also for general computing, but what makes gpus different from the current multi-core cpus? to understand this, we can start with traditional graphics processing, where hardware vendors have tried to maximize the speed at which the pixels on the screen are calculated. these pixels are independent primitives that can be processed in parallel, and the number of pixels on computer displays has increased over the years from the original doom resolution of 320 x 200, corresponding to 64000 pixels, to millions. the most efficient way to process these primitives is to have a very large number of arithmetic logical units (alus) that are able to perform a high number of operations for each video frame. the processing is very data-parallel, and one can view this as performing the same arithmetic operation in parallel for each primitive. furthermore, as the operation is the same for each primitive, there is no need for very sophisticated flow control in the gpu, and more transistors can be used for arithmetic, resulting in enormously efficient hardware for parallel computing that can be classified as ``single instruction, multiple data'' (simd). now, for general computing on the gpu, the primitives are no longer the pixels of the video stream, but can range from matrix elements in linear algebra to physics-related cases where the primitives can be particle coordinates in classical molecular dynamics or quantum field values. traditional graphics processing teaches us that the computation is efficient when the same calculation needs to be performed for each member of a large data set. it is clear that not all problems or algorithms have this structure, but there are luckily many cases where this applies, and the list of successful examples is long.
first of all , when porting a cpu solution of a given problem to the gpu , one might need to change the algorithm to suit the simd approach .secondly , the communication from the host part of the computer to the gpu part is limited by the speed of the pcie bus coupling the gpu and the host . in practice, this means that one needs to perform a serious amount of computing on the gpu between the data transfers before the gpu can actually speed up the overall computation .of course , there are also cases where the computation as a whole is done on gpu , but these cases suffer from the somewhat slower serial processing speed of the gpu .additional challenges in gpu computing include the often substantial programming effort to get a working and optimized code .while writing efficient gpu code has become easier due to libraries and programmer friendly hardware features , it still requires some specialized thinking .for example , the programmer has to be familiar with the different kinds of memory on the gpu to know how and when to use them .further , things like occupancy of the multiprocessors ( essentially , how full the gpu is ) and memory access patterns of the threads are something one has to consider to reach optimal performance . fortunately , each generation of gpus has alleviated the trouble of utilizing their full potential .for example , a badly aligned memory access in the first cuda capable gpus from nvidia could cripple the performance by drastically reducing the memory bandwidth , while in the fermi generation gpus the requirements for memory access coalescing are much more forgiving .particle dynamics simulation , often simply called molecular dynamics ( md ) , refers to the type of simulation where the behaviour of a complex system is calculated by integrating the equation of motion of its components within a given model , and its goal is to observe how some ensemble - averaged properties of the system originate from the detailed configuration of its constituent particles ( fig .[ fig : md1 ] ) . in its classical formulation ,the dynamics of a system of particles is described by their newtonian equations : where is the particle s mass , its position , and is the interaction between the -th and -th particles as provided by the model chosen for the system under study .these second order differential equations are then discretised in the time domain , and integrated step by step until a convergence criterion is satisfied .the principles behind md are so simple and general that since its first appearance in the 70s , it has been applied to a wide range of systems , at very different scales . for example , md is the dominant theoretical tool of investigation in the field of biophysics , where structural changes in proteins and lipid bilayers interacting with drugs can be studied , ultimately providing a better understanding of drug delivery mechanisms . 
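the step-by-step time integration mentioned above is typically done with a symplectic scheme such as velocity verlet. the python sketch below shows the bare integrator; the harmonic force is only a placeholder so that the snippet runs stand-alone, and the time step, masses and particle count are arbitrary choices for the example (a real md code would plug in the inter-particle forces discussed next).

```python
import numpy as np

def velocity_verlet(x, v, force, m, dt, nsteps):
    """integrate m * d2x/dt2 = force(x) with the velocity verlet scheme."""
    a = force(x) / m
    for _ in range(nsteps):
        x = x + v * dt + 0.5 * a * dt * dt        # positions at t + dt
        a_new = force(x) / m                       # forces evaluated at the new positions
        v = v + 0.5 * (a + a_new) * dt             # velocities at t + dt
        a = a_new
    return x, v

def harmonic_force(x, k=1.0):
    """placeholder force (independent harmonic wells) so that the sketch runs stand-alone."""
    return -k * x

rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 3))                  # 16 particles in 3d
v0 = np.zeros_like(x0)
xf, vf = velocity_verlet(x0, v0, harmonic_force, m=1.0, dt=0.01, nsteps=1000)

# a symplectic integrator should conserve the total energy to high accuracy
e0 = 0.5 * (x0 ** 2).sum()
ef = 0.5 * (vf ** 2).sum() + 0.5 * (xf ** 2).sum()
print("relative energy drift:", abs(ef - e0) / e0)
```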
at larger scales , one of the most famous examplesis known as the _ millenium simulation _, where the dynamics of the mass distribution of the universe at the age of 380000 years was simulated up to the present day , giving an estimate of the age of cosmic objects such as galaxies , black holes and quasars , greatly improving our understanding of cosmological models and providing a theoretical comparison to satellite measurements .despite the simplicity and elegance of its formulation , md is not a computationally easy task and often requires special infrastructure .the main issue is usually the evaluation of all the interactions , which is the most time consuming procedure of any md calculation for large systems .moreover , the processes under study might have long characteristic time scales , requiring longer simulation time and larger data storage ; classical dynamics is chaotic , i.e. the outcome is affected by the initial conditions , and since these are in principle unknown and chosen at random , some particular processes of interest might not occur just because of the specific choice , and the simulation should be repeated several times . for these reasons ,it is important to optimise the evaluation of the forces as much as possible .an early attempt to implement md on the gpu was proposed in 2004 and showed promising performance ; at that time , general purpose gpu computing was not yet a well established framework and the n - body problem had to be formulated as a rendering task : a _ shader _ program computed each pair interaction and stored them as the pixel color values ( rbg ) in an texture .then , another shader would simply sum these values row - wise to obtain the total force on each particle and finally integrate their velocities and positions .the method is called all - pairs calculation , and as the name might suggest , it is quite expensive as it requires force evaluations .the proposed implementation was in no way optimal since the measured performance was about a tenth of the nominal value of the device , and it immediately revealed one of the main issues of the architecture that still persists nowadays : gpus can have a processing power exceeding the teraflop , but , at the same time , they are extremely slow at handling the data to process since a memory read can require hundreds of clock cycles .the reason for the bad performance was in fact the large amount of memory read instructions compared to the amount of computation effectively performed on the fetched data , but despite this limitation , the code still outperformed a cpu by a factor of 8 because every interaction was computed concurrently . a wide overview of optimisation strategies to get around the memory latency issues can be found in ref . , while , for the less eager to get their hands dirty , a review of available md software packages is included in ref . . 
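the all-pairs evaluation described above is the textbook data-parallel task: every particle accumulates a contribution from every other particle, with no dependencies between them. the numpy sketch below is a cpu reference of that o(n^2) computation for a softened gravitational interaction; the softening length and units are assumptions of the example, not details of the original shader implementation.

```python
import numpy as np

def all_pairs_forces(pos, mass, g=1.0, eps=1e-2):
    """o(n^2) gravitational forces with plummer softening; returns an (n, 3) array."""
    dr = pos[None, :, :] - pos[:, None, :]            # r_j - r_i for every pair, shape (n, n, 3)
    r2 = (dr ** 2).sum(axis=2) + eps ** 2             # softened squared distances
    inv_r3 = r2 ** (-1.5)
    np.fill_diagonal(inv_r3, 0.0)                     # no self-interaction
    # f_i = g * m_i * sum_j m_j * (r_j - r_i) / |r_ij|^3
    return g * mass[:, None] * (dr * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(0)
n = 256
pos = rng.standard_normal((n, 3))
mass = np.ones(n)
f = all_pairs_forces(pos, mass)
print(f.shape, np.abs(f.sum(axis=0)).max())           # newton's third law: the net force is ~ 0
```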
in the current gpu programming model, the computation is distributed in different threads , grouped together as blocks in a grid fashion , and they are allowed to share data and synchronise throughout the same block ; the hardware also offers one or two levels of cache to enhance data reuse , thus reducing the amount of memory accesses , without harassing the programmer with manual pre - fetching .a more recent implementation of the all - pair calculation exploiting the full power of the gpu can achieve a performance close to the nominal values , comparable to several cpu nodes .the present and more mature gpgpu framework allows for more elaborate kernels to fit in the device , enabling the implementation of computational tricks developed during the early days of md that make it possible to integrate n - body dynamics accurately with much better scaling than .for example , in many cases the inter - particle forces are short range , and it would be unnecessary to evaluate every single interaction since quite many of them would be close to zero and just be neglected .it is good practice to build lists of neighbours for each particle in order to speed up the calculation of forces : this also takes an operation , although the list is usually only recalculated every 100 - 1000 timesteps , depending on the average mobility of the particles .the optimal way to build neighbour lists is to divide the simulation box in voxels and search for a partcle s neighbours only within the adjacent voxels ( fig .[ fig : partition]a ) , as this procedure requires only instructions .performance can be further improved by sorting particles depending on the index of the voxel they belong , making neighbouring particles in space , to a degree , close in memory , thus increasing coalescence and cache hit rate on gpu systems ; such a task can be done with radix count sort in with excellent performance , and it was shown to be the winning strategy . unfortunately , most often the inter - particle interactions are not exclusively short range and can be significant even at larger distances ( electrostatic and gravitational forces ) . therefore , introducing an interaction cut - off leads to the wrong dynamics . for dense systems , such as bulk crystals or liquids ,the electrostatic interaction length largely exceeds the size of the simulation space , and in principle one would have to include the contributions from several periodic images of the system , although their sum is not always convergent .the preferred approach consists of calculating the electrostatic potential generated by the distribution of point charges from poisson s equation : the electrostatic potential can be calculated by discretising the charge distribution on a grid , and solving eq .[ eq : poisson ] with a fast fourier transform ( fft ) , which has complexity ( where is the amount of grid points ) : this approach is called particle - mesh ewald ( pme ) . despite being heavily non - local, much work has been done to improve the fft algorithm and make it cache efficient , so it is possible to achieve a 20-fold speed up over the standard cpu fftw or a 5-fold speedup when compared to a highly optimised mkl implementation . 
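the voxel decomposition of fig. [fig:partition]a and the sorting trick mentioned above can be sketched with a few array operations: assign each particle to a cell, sort the particles by cell index (the same reordering that improves coalescence and cache reuse on a gpu), and then restrict the neighbour search of a given particle to the 27 surrounding cells. the box size, cut-off radius and particle count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
box, rcut, n = 10.0, 1.0, 2000
pos = rng.uniform(0.0, box, size=(n, 3))

ncell = int(box // rcut)                                  # cells at least rcut wide
width = box / ncell
cell_xyz = np.floor(pos / width).astype(int) % ncell
cell_id = (cell_xyz[:, 0] * ncell + cell_xyz[:, 1]) * ncell + cell_xyz[:, 2]

# sort particles by cell index; on a gpu this also improves memory coalescence
order = np.argsort(cell_id, kind="stable")
pos, cell_id = pos[order], cell_id[order]
# first[c]:last[c] is the slice of (sorted) particles living in cell c
first = np.searchsorted(cell_id, np.arange(ncell ** 3), side="left")
last = np.searchsorted(cell_id, np.arange(ncell ** 3), side="right")

def neighbours(i):
    """indices (into the sorted arrays) of particles within rcut of particle i, periodic box."""
    cx, cy, cz = np.floor(pos[i] / width).astype(int) % ncell
    cand = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                c = (((cx + dx) % ncell) * ncell + ((cy + dy) % ncell)) * ncell + (cz + dz) % ncell
                cand.append(np.arange(first[c], last[c]))
    cand = np.concatenate(cand)
    d = pos[cand] - pos[i]
    d -= box * np.round(d / box)                          # minimum image convention
    close = cand[(d ** 2).sum(axis=1) < rcut ** 2]
    return close[close != i]

print(len(neighbours(0)), "neighbours within", rcut)
```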
the more recent multilevel summation method ( msm ) uses nested interpolations of progressive smoothing of the electrostatic potential on lattices with different resolutions , offering a good approximation of the electrostatic problem in just operations .the advantage of this approach is the simplicity of its parallel implementation , since it requires less memory communication among the nodes , which leads to a better scaling than the fft calculation in pme .the gpu implementation of this method gave a 25-fold speedup over the single cpu .equation [ eq : poisson ] can also be translated into a linear algebra problem using finite differences , and solved iteratively on multi - grids in theoretically operations .even though the method initially requires several iterations to converge , the solution does not change much in one md step and can be used as a starting point in the following step , which in turn will take much fewer iterations . on the other hand , for sparse systems such as stars in cosmological simulations , decomposing the computational domain in regular boxes can be quite harmful because most of the voxels will be empty and some computing power and memory is wasted there . the optimal way to deal with sucha situation is to subdivide the space hierarchically with an octree ( fig .[ fig : partition]b ) , where only the subregions containing particles are further divided and stored . finding neighbouring particles can be done via a traversal of the tree in operations .octrees are conventionally implemented on the cpu as dynamical data structures where every node contains reference pointers to its parent and children , and possibly information regarding its position and content .this method is not particularly gpu friendly since the data is scattered in memory as well as in the simulation space . in gpu implementations ,the non - empty nodes are stored as consecutive elements in an array or texture , and they include the indices of the children nodes .they were proved to give a good acceleration in solving the n - body problem .long range interactions are then calculated explicitly for the near neighbours , while the fast multipole method ( fmm ) can be used to evaluate contributions from distant particles .the advantage of representing the system with an octree becomes now more evident : there exists a tree node containing a collection of distant particles , which can be treated as a single multipole leading to an overall complexity .although the mathematics required by fmm is quite intensive to evaluate , the algorithms involved have been developed and extensively optimised for the gpu architecture , achieving excellent parallel performance even on large clusters .in all the examples shown here , the gpu implementation of the method outperformed its cpu counterpart : in many cases the speedup is only 4 - 5 fold when compared to a highly optimised cpu code , which seems , in a way , a discouraging result , because implementing an efficient gpu algorithm is quite a difficult task , requiring knowledge of the target hardware , and the programming model is not as intuitive as for a regular cpu .to a degree , the very same is true for cpu programming , where taking into account cache size , network layout , and details of shared / distributed memory of the target machine when designing a code leads to higher performance . 
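the particle-mesh route described at the end of the previous paragraph, i.e. spreading the charges on a grid and solving eq. [eq:poisson] in reciprocal space, reduces to a handful of fft calls. the sketch below solves the periodic poisson equation, written here in gaussian units as \nabla^2 \phi = -4\pi\rho (an assumption, since the unit convention is not fixed in the text), for a gaussian charge blob; the grid size and box length are illustrative.

```python
import numpy as np

n, box = 64, 10.0
h = box / n
x = np.arange(n) * h
xx, yy, zz = np.meshgrid(x, x, x, indexing="ij")
r2 = (xx - box / 2) ** 2 + (yy - box / 2) ** 2 + (zz - box / 2) ** 2
rho = np.exp(-r2 / 0.5)                                   # a gaussian charge blob

k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)                  # wavevectors along one axis
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2 + kz ** 2

rho_k = np.fft.fftn(rho)
phi_k = np.zeros_like(rho_k)
mask = k2 > 0.0
phi_k[mask] = 4.0 * np.pi * rho_k[mask] / k2[mask]        # k = 0 mode set to zero (neutralising background)
phi = np.real(np.fft.ifftn(phi_k))

# consistency check: the spectral laplacian of phi reproduces -4*pi*(rho - mean) to machine precision
lap = np.real(np.fft.ifftn(-k2 * np.fft.fftn(phi)))
print(np.abs(lap + 4.0 * np.pi * (rho - rho.mean())).max())
```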
these implementation difficulties could be eased by developing better compilers , that check how memory is effectively accessed and provide higher levels of gpu optimisation on older cpu codes automatically , hiding the complexity of the hardware specification from the programmer . in some cases , up to 100 fold speedups were measured , suggesting that the gpu is far superior . these cases might be unrealistic since the nominal peak performance of a gpu is around 5 times bigger than that of a cpu .therefore , it is possible that the benchmark is done against a poorly optimised cpu code , and the speedup is exaggerated . on the other hand ,gpus were also proven to give good scaling in mpi parallel calculations , as shown in refs . and .in particular , the amber code was extensively benchmarked in ref . , and it was shown how just a few gpus ( and even just one ) can outperform the same code running on 1024 cpu cores : the weight of the communication between nodes exceeds the benefit of having additional cpu cores , while the few gpus do not suffer from this latency and can deliver better performance , although the size of the computable system becomes limited by the available gpu memory .it has to be noted how gpu solutions , even offering a modest 4 - 5 fold speedup , do so at a lower hardware and running cost than the equivalent in cpus , and this will surely make them more appealing in the future . from the wide range of examples in computational physics ,it is clear that the gpu architecture is well suited for a defined group of problems , such as certain procedures required in md , while it fails for others .this point is quite similar to the everlasting dispute between raytracing and raster graphics : the former can explicitly calculate photorealistic images in complex scenes , taking its time ( cpu ) , while the latter resorts to every trick in the book to get a visually `` alright '' result as fast as possible ( gpu ). it would be best to use both methods to calculate what they are good for , and this sets a clear view of the future hardware required for scientific computing , where both simple vector - like processors and larger cpu cores could access the same memory resources , avoiding data transfer .density functional theory ( dft ) is a popular method for _ ab - initio _ electronic structure calculations in material physics and quantum chemistry . in the most commonly used dft formulation by kohn and sham , the problem of interacting electrons is mapped to one with non - interacting electrons moving in an effective potential so that the total electron density is the same as in the original many - body case . to be more specific ,the single - particle kohn - sham orbitals are solutions to the equation where the effective hamiltonian in atomic units is .the three last terms in the hamiltonian define the effective potential , consisting of the hartree potential defined by the poisson equation , the external ionic potential , and the exchange - correlation potential that contains all the complicated many - body physics the kohn - sham formulation partially hides . 
in practice ,the part needs to be approximated .the electronic charge density is determined by the kohn - sham orbitals as , where the :s are the orbital occupation numbers .there are several numerical approaches and approximations for solving the kohn - sham equations .they relate usually to the discretization of the equations and the treatment of the core electrons ( pseudo - potential and all electron methods ) .the most common discretization methods in solid state physics are plane waves , localized orbitals , real space grids and finite elements .normally , an iterative procedure called self - consistent field ( scf ) calculation is used to find the solution to the eigenproblem starting from an initial guess for the charge density .porting an existing dft code to gpus generally includes profiling or discovering with some other method the computationally most expensive parts of the scf loop and reimplementing them with gpus .depending on the discretization methods , the known numerical bottlenecks are vector operations , matrix products , fast fourier transforms ( ffts ) and stencil operations .there are gpu versions of many of the standard computational libraries ( like cublas for blas and cufft for fftw ) .however , porting a dft application is not as simple as replacing the calls to standard libraries with gpu equivalents since the resulting intermediate data usually gets reused by non standard and less computationally intensive routines .attaining high performance on a gpu and minimizing the slow transfers between the host and the device requires writing custom kernels and also porting a lot of the non - intensive routines to the gpu .gaussian basis functions are a popular choice in quantum chemistry to investigate electronic structures and their properties .they are used in both dft and hartree - fock calculations .the known computational bottlenecks are the evaluation of the two - electron repulsion integrals ( eris ) and the calculation of the exchange - correlation potential .yasuda was the first to use gpus in the calculation of the exchange - correlation term and in the evaluation of the coulomb potential .the most complete work in this area was done by ufimtsev _et al._. they have used gpus in eris , in complete scf calculations and in energy gradients . compared to the mature gamess quantum chemistry package running on cpus , they were able to achieve speedups of more than 100 using mixed precision arithmetic in hf scf calculations .et al._. have also done an eri implementation on gpus using the uncontracted rys quadrature algorithm .the first complete dft code on gpus for solid state physics was presented by genovese _ et al._.they used double precision arithmetic and a daubechies wavelet based code called bigdft .the basic 3d operations for a wavelet based code are based on convolutions .they achieved speedups of factor 20 for some of these operations on a gpu , and a factor of 6 for the whole hybrid code using nvidia tesla s1070 cards .these results were obtained on a 12-node hybrid machine . 
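the scf cycle mentioned above has the same generic control flow in all of these codes: build an effective potential from the current density, solve the single-particle eigenproblem, form a new density, mix, and repeat until self-consistency. the sketch below runs that loop for a one-dimensional model in which a made-up local density-dependent term stands in for the hartree and exchange-correlation potentials; it illustrates the structure of the iteration, not real dft, and all parameters are arbitrary.

```python
import numpy as np

ngrid, length, nelec = 200, 10.0, 4              # 1d grid, box length, number of (spinless) electrons
h = length / (ngrid - 1)
x = np.linspace(0.0, length, ngrid)

# kinetic energy: -1/2 d^2/dx^2 with a three-point finite-difference stencil
t = (-0.5 / h ** 2) * (np.diag(np.ones(ngrid - 1), 1)
                       + np.diag(np.ones(ngrid - 1), -1)
                       - 2.0 * np.eye(ngrid))
v_ext = 0.5 * 0.2 * (x - length / 2) ** 2        # external harmonic well

density = np.full(ngrid, nelec / length)         # initial guess: uniform density
for it in range(100):
    v_eff = v_ext + 1.0 * density                # made-up local term standing in for hartree + xc
    ham = t + np.diag(v_eff)
    eps, orbitals = np.linalg.eigh(ham)          # kohn-sham-like eigenproblem
    occ = orbitals[:, :nelec]                    # occupy the lowest states
    new_density = (occ ** 2).sum(axis=1) / h     # grid-normalised density, integrates to nelec
    change = np.abs(new_density - density).max()
    density = 0.7 * density + 0.3 * new_density  # linear mixing for stability
    if change < 1e-6:
        break

print(f"scf stopped after {it + 1} iterations; last density change {change:.2e}; "
      f"lowest eigenvalues {eps[:nelec].round(3)}")
```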
for solid state physics ,plane wave basis sets are the most common choice .the computational schemes rely heavily on linear algebra operations and fast fourier transforms .the vienna ab initio simulation package ( vasp ) is a popular code combining plane waves with the projector augmented wave method .the most time consuming part of optimizing the wave functions given the trial wave functions and related routines have been ported to gpus .speedups of a factor between 3 and 8 for the blocked davinson scheme and for the rmm - diis algorithm were achieved in real - world examples with fermi c2070 cards .parallel scalability with 16 gpus was similar to 16 cpus .additionally , hutchinson _et al . _ have done an implementation of exact - exchange calculations on gpus for vasp .quantum espresso is a electronic structure code based on plane wave basis sets and pseudo - potentials ( pp ) . for the gpu version , the most computationally expensive parts of the scf cycle were gradually transferred to run on gpus .ffts were accelerated by cufft , lapack by magma and other routines were replaced by cuda kernels .gemm operations were replaced by the parallel hybrid phigemm library .for single node test systems , running with nvidia tesla c2050 , speedups between 5.5 and 7.8 were achieved and for a 32 node parallel system speedups between 2.5 and 3.5 were observed .wand et al . and jia _et al._. have done an implementation for gpu clusters of a plane wave pseudo - potential code called petot .they were able to achieve speedups of 13 to 22 and parallel scalability up to 256 cpu - gpu computing units .gpaw is a density - functional theory ( dft ) electronic structure program package based on the real space grid based projector augmented wave method .we have used gpus to speed up most of the computationally intensive parts of the code : solving the poisson equation , iterative refinement of the eigenvectors , subspace diagonalization and orthonormalization of the wave functions .overall , we have achieved speedups of up to 15 on large systems and a good parallel scalability with up to 200 gpus using nvidia tesla m2070 cards .octopus is a dft code with an emphasis on the time - dependent density - functional theory ( tddft ) using real space grids and pseudo - potentials .their gpu version uses blocks of kohn - sham orbitals as basic data units .octopus uses gpus to accelerate both time - propagation and ground state calculations .finally , we would like to mention the linear response tamm - dancoff tddft implementation done for the gpu - based terachem code .quantum field theories are currently our best models for fundamental interactions of the natural world ( for a brief introduction to quantum field theories or qfts see for example or and references therein ) .common computational techniques include perturbation theory , which works well in quantum field theories as long as the couplings are small enough to be considered as perturbations to the free theory .therefore , perturbation theory is the primary tool used in pure qed , weak nuclear force and high momentum - transfer qcd phenomena , but it breaks up when the coupling constant of the theory ( the measure of the interaction strength ) becomes large , such as in low - energy qcd . 
formulating the quantum field theory on a space - time lattice provides an opportunity to study the model non - perturbatively and use computer simulations to get results for a wide range of phenomena it enables , for example , one to compute the hadronic spectrum of qcd ( see and references therein ) from first principles and provides solutions for many vital gaps left by the perturbation theory , such as structure functions of composite particles , form - factors and decay - constants . it also enables one to study and test models for new physics , such as technicolor theories and quantum field theories at finite temperature , or . for an introduction to lattice qft ,see for example , or . simulating quantum field theories using gpus is not a completely new idea and early adopters even used opengl ( graphics processing library ) to program the gpus to solve lattice qcd .the early gpgpu programmers needed to set up a program that draws two triangles that fill the output texture of desired size by running a `` shader program '' that does the actual computation for each output pixel . in this program ,the input data could be then accessed by fetching pixels input texture(s ) using the texture units of the gpu . in lattice qft , where one typically needs to fetch the nearest neighbor lattice site values, this actually results in good performance as the texture caches and layouts of the gpus have been optimized for local access patterns for filtering purposes .the idea behind lattice qft is based on the discretization of the path integral solution to expectation values of time - ordered operators in quantum field theories .first , one divides spacetime into discrete boxes , called the lattice , and places the fields onto the lattice sites and onto the links between the sites , as shown in fig .[ fig : lattice ] .then , one can simulate nature by creating a set of multiple field configurations , called an _ ensemble _ , and calculate the values of physical observables by computing ensemble averages over these states .live on lattice sites , whereas the gauge fields live on the links connecting the sites .also depicted are the staples connecting to a single link variable that are needed in the computation of the gauge field forces . ]the set of states is normally produced with the help of a markov chain and in the most widely studied qft , the lattice qcd , the chain is produced by combining a _ molecular dynamics _algorithm together with a _ metropolis _ acceptance test .therefore , the typical computational tasks in lattice qfts are : 1 . refresh generalized momentum variables from a heat bath ( gaussian distribution ) once per _ trajectory_. 2 .compute generalized forces for fields for each step 3 . integrate classical equations of motion for the fields at each step 4 .perform a metropolis acceptance test at the end of the trajectory in order to achieve the correct limiting distribution . 
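the four tasks listed above are easiest to see in code for a theory much lighter than qcd. the sketch below runs hybrid monte carlo trajectories for a real scalar field with a quartic self-interaction on a small 2d lattice (an assumed toy model with arbitrary parameters); the structure (momentum refresh from a heat bath, leapfrog integration of the classical equations of motion, and a metropolis accept/reject at the end of the trajectory) is exactly the sequence listed above.

```python
import numpy as np

rng = np.random.default_rng(0)
side, m2, lam = 16, 0.25, 0.5                    # lattice side, mass^2 and coupling (illustrative)
dt, nsteps, ntraj = 0.1, 10, 200

def action(phi):
    kin = sum(0.5 * (np.roll(phi, -1, a) - phi) ** 2 for a in range(2)).sum()
    return kin + (0.5 * m2 * phi ** 2 + 0.25 * lam * phi ** 4).sum()

def force(phi):
    """-dS/dphi, the generalized force used by the leapfrog integrator."""
    lap = sum(np.roll(phi, -1, a) + np.roll(phi, 1, a) - 2.0 * phi for a in range(2))
    return lap - m2 * phi - lam * phi ** 3

phi = rng.standard_normal((side, side))
accepted = 0
for traj in range(ntraj):
    p = rng.standard_normal(phi.shape)           # 1. refresh momenta from a heat bath
    h_old = 0.5 * (p ** 2).sum() + action(phi)
    new = phi.copy()
    p = p + 0.5 * dt * force(new)                # 2./3. leapfrog: half-step in the momenta ...
    for _ in range(nsteps):
        new = new + dt * p                       # ... full steps in the field ...
        p = p + dt * force(new)
    p = p - 0.5 * dt * force(new)                # ... undo the extra half-step at the end
    h_new = 0.5 * (p ** 2).sum() + action(new)
    if rng.random() < np.exp(min(0.0, h_old - h_new)):   # 4. metropolis accept/reject
        phi, accepted = new, accepted + 1

print("acceptance rate:", accepted / ntraj, " <phi^2> =", (phi ** 2).mean())
```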
in order to reach satisfying statistics, normally thousands of these trajectories need to be generated, and each trajectory is typically composed of 10 to 100 steps. the force calculation normally involves a matrix inversion, where the matrix indices run over the entire lattice, and it is therefore the heaviest part of the computation. the matrix arises in simulations with dynamical fermions (normal propagating matter particles), and the simplest form for the fermion matrix is \[ a_{x,y} = \delta_{x,y} - \kappa d_{x,y} \qquad \textrm{where} \quad d_{x,y} = \sum_{\mu}\left[ (1-\gamma_\mu)\, u_\mu(x)\, \delta_{x+\hat\mu,y} + (1+\gamma_\mu)\, u_\mu^\dagger(x-\hat\mu)\, \delta_{x-\hat\mu,y} \right]. \] here, \kappa is a constant related to the mass(es) of the quark(s), \delta_{x,y} is the _ kronecker delta function _ (unit matrix elements), the sum goes over the spacetime dimensions \mu, the \gamma_\mu are 4-by-4 constant matrices and the u_\mu(x) are the link variable matrices that carry the force (gluons for example) from one lattice site to the neighbouring one. in normal qcd they are 3-by-3 complex matrices. the matrix a in the equation a x = b, where one solves for the vector x with a given b, is an almost diagonal sparse matrix with a _ predefined sparsity pattern_. this fact makes lattice qcd ideal for parallelization, as the amount of work done by each site is constant. the actual algorithm used in the matrix inversion is normally some variant of the conjugate gradient algorithm, and therefore one needs fast code to handle the multiplication of a fermion vector by the fermion matrix. this procedure constitutes the generation of the lattice configurations which form the ensemble. once the set of configurations \{u_i\} has been generated, the expectation value of an observable f[u], where u denotes a single field configuration, can be computed simply as \[ \langle f[u] \rangle \approx \frac{1}{n}\sum_{i=1}^n f[u_i]. \] as lattice qfts are normally easily parallelizable, they fit well into the gpu programming paradigm, which can be characterized as parallel throughput computation. the conjugate gradient methods perform many fermion matrix vector multiplications whose arithmetic intensity (ratio of floating point operations done per byte of memory fetched) is quite low, making memory bandwidth the normal bottleneck within a single processor. parallelization between processors is done by standard mpi domain decomposition techniques. the conventional wisdom that this helps due to a higher local volume to communication surface area ratio is actually flawed, as typically the gpu can handle a larger volume in the same amount of time, hence requiring the mpi implementation to also take care of a larger surface area in the same time as with a cpu.
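as noted above, the solver only ever touches the fermion matrix through its action on a vector, so the central kernel is a fast matrix-times-vector routine. below is a generic matrix-free conjugate gradient sketch, demonstrated on a simple 1d "mass plus laplacian" stencil standing in for the sparse operator; for the actual fermion matrix, which is not hermitian positive definite, one would apply it to the normal equations or use a variant such as bicgstab.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=1000):
    """matrix-free conjugate gradient for a hermitian positive definite operator."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rr = np.vdot(r, r).real
    for _ in range(maxiter):
        ap = matvec(p)
        alpha = rr / np.vdot(p, ap).real
        x += alpha * p
        r -= alpha * ap
        rr_new = np.vdot(r, r).real
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# stand-in operator: a periodic 1d "mass + laplacian" stencil, hermitian positive definite
n, m2 = 256, 0.1
def matvec(v):
    return (2.0 + m2) * v - np.roll(v, 1) - np.roll(v, -1)

rng = np.random.default_rng(0)
b = rng.standard_normal(n)
x = cg(matvec, b)
print("residual norm:", np.linalg.norm(matvec(x) - b))
```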
in our experience ,gpu adoption is still in some sense in its infancy , as the network implementation seems to quickly become the bottleneck in the computation and the mpi implementations of running systems seem to have been tailored to meet the needs of the cpus of the system .another aspect of this is that normally the gpus are coupled with highly powered cpus in order to cater for the situation where the users use the gpus in just a small part of the program and need a lot of sequential performance in order to try to keep the serial part of the program up with the parallel part .the gpu also needs a lot of concurrent threads ( in the order of thousands ) to be filled completely with work and therefore good performance is only achievable with relatively large local lattice sizes .typical implementations assign one gpu thread per site , which makes parallelization easy and gives the compiler quite a lot of room to find instruction level parallelism , but in our experience this can result in a relatively high register pressure : the quantum fields living on the sites have many indices ( normally color and dirac indices ) and are therefore vectors or matrices with up to 12 complex numbers per field per site in the case of quark fields in normal qcd .higher parallelization can be achieved by taking advantage of the vector - like parallelism inside a single lattice site , but this may be challenging to implement in those loops where the threads within a site have to collaborate to produce a result , especially because gpus impose restrictions on the memory layout of the fields ( consecutive threads have to read consecutive memory locations in order to reach optimal performance ) . in a recent paper , the authors solve the _ gauge fixing _ problem by using overrelaxation techniques and they report an increase in performance by using multiple threads per site , although in this case the register pressure problem is even more pronounced and the effects of register spilling to the l1 cache were not studied .the lattice qcd community has a history of taking advantage of computing solutions outside the mainstream : the qcdsp computer was a custom machine that used digital signal processors to solve qcd with an order of one teraflop of performance .qcdoc used a custom imb powerpc - based asic and a multidimensional torus network , which later on evolved into the first version of the blue gene supercomputers .the ape collaboration has a long history of custom solutions for lattice qcd and is building custom network solutions for lattice qcd .for example , qcdpax was a very early parallel architecture used to study lattice qcd without dynamical fermions .currently , there are various groups using gpus to do lattice qft simulations .the first results using gpus were produced as early as 2006 in a study that determined the transition temperature of qcd .standardization efforts for high precision lattice qcd libraries are underway and the quda library scales to hundreds of gpus by using a local schwarz preconditioning technique , effectively eliminating all the gpu - based mpi communications for a significant portion of the calculation .they employ various optimization techniques , such as _ mixed - precision _ solvers , where parts of the inversion process of the fermion matrix is done at lower precision of floating point arithmetic and using reduced representations of the su3 matrices . 
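as an illustration of the mixed - precision idea mentioned above , here is a minimal defect - correction ( iterative refinement ) sketch : the bulk of the arithmetic is done in single precision while residuals and the accumulated solution are kept in double precision . this is only the generic pattern ; the reliable - update scheme used in quda is more sophisticated , and the small dense test matrix here is made up purely for the example :

```python
import numpy as np

def mixed_precision_solve(a, b, n_refine=5):
    # defect correction: cheap low-precision inner solves, high-precision residuals
    a32 = a.astype(np.float32)
    x = np.zeros_like(b)
    for _ in range(n_refine):
        r = b - a @ x                                    # residual in float64
        dx = np.linalg.solve(a32, r.astype(np.float32))  # correction in float32
        x = x + dx.astype(np.float64)
    return x

rng = np.random.default_rng(2)
a = rng.standard_normal((100, 100))
a = a @ a.T + 100.0 * np.eye(100)      # well-conditioned SPD test matrix
b = rng.standard_normal(100)
x = mixed_precision_solve(a, b)
print(np.linalg.norm(a @ x - b))       # residual shrinks toward float64 round-off
```

the attraction on a gpu is that the low - precision part halves both the memory traffic and the storage of the operator , which is exactly where the bandwidth - bound solver spends its time .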
scaling to multiple gpuscan also be improved algorithmically : already a simple ( almost standard ) _ clover improvement _ term in the fermion action leads to better locality and of course improves the action of the model as well , taking the lattice formulation closer to the continuum limit . domain decomposition and taking advantage of restricted additive schwarz ( ras ) preconditioning using gpuswas already studied in 2010 in , where the authors get the best performance on a lattice with vanishing overlap between the preconditioning domains and three complete ras iterations each containing just five iterations to solve the local system of sites .it should be noted though that the hardware they used is already old , so optimal parameters with up - to - date components could slightly differ . very soon after starting to work with gpus on lattice qfts , one notices the effects of amdahl s law which just points out the fact that there is an upper bound for the whole program performance improvement related to optimizing just a portion of the program .it is quite possible that the fermion matrix inversion takes up 90% of the total computing time , but making this portion of the code run 10 times faster reveals something odd : now we are spending half of our time computing forces and doing auxiliary computations and if we optimize this portion of the code as well , we improve our performance by a factor of almost two again therefore optimizing only the matrix inversion gives us a mere fivefold performance improvement instead of the promised order of magnitude improvement .authors of implemented practically the entire hmc trajectory on the gpu to fight amdahl s law and recent work on the qdp++ library implements _ just - in - time _ compilation to create gpu kernels on the fly to accommodate any non - performance critical operation over the entire lattice .work outside of standard lattice qcd using gpus includes the implementation of the neuberger - dirac overlap operator , which provides _ chiral symmetry _ at the expense of a non - local action .another group uses the arnoldi algorithm on a multi - gpu cluster to solve the overlap operator and shows scaling up to 32 gpus .quenched su2 and later quenched su2 , su3 and generic su( ) simulations using gpus are described in and even compact u(1 ) polyakov loops using gpus are studied in .scalar field theory the so - called model using amd gpus is studied in .the twqcd collaboration has also implemented almost the entire hmc trajectory computation with dynamical optimal domain wall fermions , which improve the chiral symmetry of the action . while most of the groups use exclusively nvidia s cuda - implementation , which offers good reliability , flexibility and stability , there are also some groups using the opencl standard . a recent study showed better performance on amd gpus than on nvidia ones using opencl , although it should be noted that the nvidia gpus were consumer variants with reduced double precision throughput and that optimization was done for amd gpus .the authors of have implemented both cuda and opencl versions of their staggered fermions code and they report a slightly higher performance for cuda and for nvidia cards .all in all , lattice qft using gpus is turning from being a promising technology to a very viable alternative to traditional cpu - based computing . 
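to put rough numbers on the amdahl's - law argument made earlier in this section , the arithmetic is simple enough to spell out ( illustrative fractions only ) :

```python
def amdahl(p, s):
    # overall speedup when a fraction p of the original runtime is accelerated by a factor s
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl(0.90, 10.0))                 # ~5.3x: only the 90% spent in the inverter is 10x faster
print(1.0 / (0.90 / 10.0 + 0.10 / 10.0))  # 10.0x: forces and auxiliary code accelerated as well,
                                          # i.e. roughly another factor of two on top
```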
when reaching for the very best strong scaling performance meaning best performance for small lattices single threaded performance does matter if we assume that the rest of the system scales to remove other bottlenecks ( communication , memory bandwith . ) in these cases , it seems that currently the best performance is achievable through high - end supercomputers , such as the ibm blue gene / q , where the microprocessor architecture is actually starting to resemble more a gpu than a traditional cpu : the powerpc a2 chip has 16 in - order cores , each supporting 4 relatively light weight threads and a crossbar on - chip network .a 17th core runs the os functions and an 18th core is a spare to improve yields or take place of a damaged core .this design gives the powerpc a2 chip similar performance to power ratio as an nvidia tesla 2090 gpu , making blue gene / q computers very efficient .one of the main advantages of using gpus ( or gpu - like architectures ) over traditional serial processors is the increased performance per watt and the possibility to perform simulations on commodity hardware .the stochastic techniques based on markov chains and the metropolis algorithm showed great success in the field theory examples above .there are also many - body wave function methods that use the wave function as the central variable and use stochastic techniques for the actual numerical work .these quantum monte carlo ( qmc ) techniques have shown to be very powerful tools for studying electronic structures beyond the mean - field level of for example the density functional theory .a general overview of qmc can be found from .the simplest form of the qmc algorithms is the variational qmc , where a trial wave function with free parameters is constructed and the parameters are optimized , for example , to minimize the total energy .this simple strategy works rather well for various different systems , even for strongly interacting particles in an external magnetic field .there have been some works porting qmc methods to gpus . in the early work by amos g. anderson _ , the overall speedup compared to the cpu was rather modest , from three to six , even if the individual kernels were up to 30 times faster .more recently , kenneth p. esler _ et al ._ have ported the qmcpack simulation code to the nvidia cuda gpu platform .their full application speedups are typically around 10 to 15 compared to a quad - core xeon cpu .this speedup is very promising and demonstrates the great potential gpu computing has for the qmc methods that are perhaps the computational technologies that are the mainstream in future electronic structure calculations .there are also many - body wave function methods that are very close to the quantum chemical methods .one example of these is the full configuration interaction method in chemistry that is termed exact diagonalization ( ed ) in physics .the activities in porting the quantum chemistry approaches to gpu are reviewed in , and we try to remain on the physics side of this unclear borderline .we omit , for example , works on the coupled cluster method on the gpu .furthermore , quantum mechanical transport problems are also not discussed here .lattice models are important for providing a general understanding of many central physical concepts like magnetism .furthermore , realistic materials can be cast to a lattice model .few - site models can be calculated exactly using the ed method .the ed method turns out to be very efficient on the gpu . 
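as a concrete ( toy ) illustration of the variational qmc strategy described above , the following sketch samples $|\psi|^2$ for a one - dimensional harmonic oscillator with a gaussian trial wave function and estimates the variational energy ; it is purely illustrative and unrelated to qmcpack or the other codes cited :

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, alpha):
    # E_L = -(1/2) psi''/psi + x^2/2 for the trial wave function psi = exp(-alpha x^2)
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_samples=100_000, step=1.0):
    x, e_sum = 0.0, 0.0
    for _ in range(n_samples):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # Metropolis test on |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new * x_new - x * x)):
            x = x_new
        e_sum += local_energy(x, alpha)
    return e_sum / n_samples

for alpha in (0.3, 0.5, 0.7):
    print(alpha, vmc_energy(alpha))   # the minimum (exactly 0.5) is reached at alpha = 0.5
```

in production codes the trial wave function has many parameters and many electrons , but the structure — a markov chain sampling $|\psi|^2$ plus an average of the local energy — is the part that maps naturally onto the gpu .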
in the simplest form of ed ,the first step is to construct the many - body basis and the hamiltonian matrix in it . then follows the most time - consuming part , namely the actual diagonalization of the hamiltonian matrix . in many cases ,one is mainly interested in the lowest eigenstate and possibly a few of the lowest excited states . for these, the lanczos algorithm turns out to be very suitable .the basic idea of the lanczos scheme is to map the huge but sparse hamiltonian matrix to a smaller and tridiagonal form in the so - called krylov space that is defined by the spanning vectors obtained from a starting vector by acting with the hamiltonian as .now , as the gpu is very powerful for the matrix - vector product , it is not surprising that high speedups compared to cpus can be found .the gpu has made a definite entry into the world of computational physics .preliminary studies using emerging technologies will always be done , but the true litmus test of a new technology is whether studies emerge where the new technology is actually used to advance science .the increasing frequency of studies that mention gpus is a clear indicator of this . from the point of view of high performance computing in computational physics ,the biggest challenge facing gpus at the moment is scaling : in the strong scaling case , as many levels of parallelism as possible inherent in the problem should be exploited in order to reach the best performance with small local subsystems .the basic variables of the model are almost always vectors of some sort , making them an ideal candidate for simd type parallelism .this is often achieved with cpus with a simple compiler flag , which instructs the compiler to look for opportunities to combine independent instructions into vector operations .furthermore , large and therefore interesting problems from a hpc point of view are typically composed of a large number of similar variables , be it particles , field values , cells or just entries in an array of numbers , which hints at another , higher level of parallelism of the problem that traditionally has been exploited using mpi , but is a prime candidate for a data parallel algorithm .also , algorithmic changes may be necessary to reach the best possible performance : it may very well be that the best algorithm for cpus is no longer the best one for gpus. a classic example could be the question whether to use lookup tables of certain variables or recompute them on - the - fly .typically , on the gpu the flops are cheap making the recomputation an attractive choice whereas the large caches of the cpu may make the lookup table a better option . on the other hand, mpi communication latencies should be minimized and bandwidth increased to accommodate the faster local solve to help with both weak and strong scaling . 
as far as we know, there are very few , if any , groups taking advantage of gpudirect v.2 for nvidia gpus , which allows direct gpu - to - gpu communications ( the upcoming gpudirect support for rdma will allow direct communications across network nodes ) reducing overhead and cpu synchronization needs .even gpudirect v.1 helps , as then one can share the _ pinned memory_ buffers between infiniband and gpu cards , removing the need to do extra local copies of data .the mpi implementations should also be scaled to fit the needs of the gpus connected to the node : currently the network bandwidth between nodes seems to be typically about two orders of magnitude lower than the memory bandwidth from the gpu to the gpu memory , which poses a challenge to strong scaling , limiting gpu applicability to situations with relatively large local problem sizes .another , perhaps an even greater challenge , facing gpus and similar systems is the ecosystem : currently a large portion of the developers and system administrators like to think of gpus and similar solutions as _ accelerators _ an accelerator is a component , which is attached to the main processor and used to speed up certain portions of the code , but as these `` accelerators '' become more and more agile with wider support for standard algorithms , the term becomes more and more irrelevant as a major part of the entire computation can be done on the `` accelerator '' and the original `` brains '' of the machine , the cpu , is mainly left there to take care of administrative functions , such as disk io , common os services and control flow of the program .as single threaded performance has reached a local limit , all types of processors are seeking more performance out of parallelism : more cores are added and vector units are broadened .this trend , fueled by the fact that transistor feature sizes keep on shrinking , hints at some type of convergence in the near future , but exactly what it will look like is anyone s best guess .at least in computational physics , it has been shown already that the scientists are willing to take extra effort in porting their code to take advantage of massively parallel architectures , which should allow them to do the same work with less energy and do more science with the resources allocated to them. the initial programming effort does raise a concern for productivity : how much time and effort is one willing to spend to gain a certain amount of added performance ? 
obviously , the answer depends on the problem itself , but perhaps even more on the assumed direction of the industry a wrong choice may result in wasted effort if the chosen solution simply does not exist in five years time .fortunately , what seems to be clear at the moment , is the overall direction of the industry towards higher parallelism , which means that a large portion of the work needed to parallelize a code for a certain parallel architecture will most probably be applicable to another parallel architecture as well , reducing the risk of parallelization beyond the typical mpi level .the answer to what kind of parallel architectures will prevail the current turmoil in the industry may depend strongly on consumer behavior , since a large part of the development costs of these machines are actually subsidized by the development of the consumer variants of the products .designing a processor only for the hpc market is too expensive and a successful product will need a sister or at least a cousin in the consumer market .this brings us back to doom and other performance - hungry games : it may very well be that the technology developed for the gamers of today , will be the programming platform for the scientists of tomorrow. we would like to thank kari rummukainen , adam foster , risto nieminen , martti puska , and ville havu for all their support .topi siro acknowledges the financial support from the finnish doctoral programme in computational sciences fics .this research has been partly supported by the academy of finland through its centres of excellence program ( project no .251748 ) and by the academy of finland project no . 1134018 .springel , v. , white , s.d.m . , jenkins , a. , frenk , c.s . , yoshida , n. , gao , l. , navarro , j. , thacker , r. , croton , d. , helly , j. , peacock , j.a ., cole , s. , thomas , p. , couchman , h. , evrard , a. , colberg , j. , pearce , f. : simulations of the formation , evolution and clustering of galaxies and quasars .435*(7042 ) ( 2005 ) 629636 kipfer , p. , segal , m. , westermann , r. : uberflow : a gpu - based particle engine . in : proceedings of the acm siggraph / eurographics conference on graphics hardware .hwws 04 , new york , ny , usa , acm ( 2004 ) 115122 anderson , j.a . ,lorenz , c.d ., travesset , a. : general purpose molecular dynamics simulations fully implemented on graphics processing units .journal of computational physics *227*(10 ) ( 2008 ) 5342 5359 moreland , k. , angel , e. : the fft on a gpu .in : proceedings of the acm siggraph / eurographics conference on graphics hardware .hwws 03 , aire - la - ville , switzerland , switzerland , eurographics association ( 2003 ) 112119 gu , l. , li , x. , siegel , j. : an empirically tuned 2d and 3d fft library on cuda gpu . in : proceedings of the 24th acm international conference on supercomputing .ics 10 , new york , ny , usa , acm ( 2010 ) 305314 ahmed , m. , haridy , o. : a comparative benchmarking of the fft on fermi and evergreen gpus . in : performance analysis of systems and software ( ispass ) , 2011 ieee international symposium on . ( 2011 ) 127 128 goodnight , n. , woolley , c. , lewin , g. , luebke , d. , humphreys , g. : a multigrid solver for boundary value problems using programmable graphics hardware . in : acm siggraph 2005 courses .siggraph 05 , new york , ny , usa , acm ( 2005 ) mcadams , a. , sifakis , e. , teran , j. : a parallel multigrid poisson solver for fluids simulation on large grids . 
in : proceedings of the 2010 acm siggraph/ eurographics symposium on computer animation .sca 10 , aire - la - ville , switzerland , switzerland , eurographics association ( 2010 ) 6574 meagher , d. : octree encoding : a new technique for the representation , manipulation and display of arbitrary 3-d objects by computer .rensselaer polytechnic institute .image processing laboratory ( 1980 ) hamada , t. , narumi , t. , yokota , r. , yasuoka , k. , nitadori , k. , taiji , m. : 42 tflops hierarchical n - body simulations on gpus with applications in both astrophysics and turbulence . in : proceedings of the conference on high performance computing networking , storage and analysis .sc 09 , new york , ny , usa , acm ( 2009 ) 62:162:12 takahashi , t. , cecka , c. , fong , w. , darve , e. : optimizing the multipole - to - local operator in the fast multipole method for graphical processing units . international journal for numerical methods in engineering * 89*(1 ) ( 2012 ) 105133 gtz , a.w . , williamson , m.j ., xu , d. , poole , d. , le grand , s. , walker , r.c .: routine microsecond molecular dynamics simulations with amber on gpus .1 . generalized born .journal of chemical theory and computation * 8*(5 ) ( 2012 ) 15421555 payne , m.c . ,teter , m.p . ,allan , d.c . , arias , t.a . ,joannopoulos , j.d .: iterative minimization techniques for _ ab initio _ total - energy calculations : molecular dynamics and conjugate gradients .* 64 * ( oct 1992 ) 10451097 luehr , n. , ufimtsev , i.s . ,martinez , t.j .: dynamic precision for electron repulsion integral evaluation on graphical processing units ( gpus ) .journal of chemical theory and computation * 7*(4 ) ( 2011 ) 949954 ufimtsev , i.s . ,martinez , t.j .: quantum chemistry on graphical processing units .analytical energy gradients , geometry optimization , and first principles molecular dynamics .journal of chemical theory and computation * 5*(10 ) ( 2009 ) 26192628 asadchev , a. , allada , v. , felder , j. , bode , b.m . ,gordon , m.s . ,windus , t.l . : uncontracted rys quadrature implementation of up to g functions on graphical processing units .journal of chemical theory and computation * 6*(3 ) ( 2010 ) 696704 genovese , l. , ospici , m. , deutsch , t. , mhaut , j.f . ,neelov , a. , goedecker , s. : density functional theory calculation on many - cores hybrid central processing unit - graphic processing unit architectures .the journal of chemical physics * 131*(3 ) ( 2009 ) 034103 genovese , l. , neelov , a. , goedecker , s. , deutsch , t. , ghasemi , s.a . ,willand , a. , caliste , d. , zilberberg , o. , rayson , m. , bergman , a. , schneider , r. : daubechies wavelets as a basis set for density functional pseudopotential calculations .the journal of chemical physics * 129*(1 ) ( 2008 ) 014109 hacene , m. , anciaux - sedrakian , a. , rozanska , x. , klahr , d. , guignon , t. , fleurat - lessard , p. : accelerating vasp electronic structure calculations using graphic processing units .journal of computational chemistry ( 2012 ) n / a n / a giannozzi , p. , baroni , s. , bonini , n. , calandra , m. , car , r. , cavazzoni , c. , ceresoli , d. , chiarotti , g.l ., cococcioni , m. , dabo , i. , corso , a.d . , de gironcoli , s. , fabris , s. , fratesi , g. , gebauer , r. , gerstmann , u. , gougoussis , c. , kokalj , a. , lazzeri , m. , martin - samos , l. , marzari , n. , mauri , f. , mazzarello , r. , paolini , s. , pasquarello , a. , paulatto , l. , sbraccia , c. , scandolo , s. , sclauzero , g. , seitsonen , a.p . , smogunov , a. 
, umari , p. , wentzcovitch , r.m . :quantum espresso : a modular and open - source software project for quantum simulations of materials .journal of physics : condensed matter * 21*(39 ) ( 2009 ) 395502 spiga , f. , girotto , i. : : a cpu - gpu library for porting quantum espresso on hybrid systems . in : parallel , distributed and network - based processing ( pdp ) , 201220th euromicro international conference on .2012 ) 368 375 wang , l. , wu , y. , jia , w. , gao , w. , chi , x. , wang , l.w . : large scale plane wave pseudopotential density functional theory calculations on gpu clusters . in : proceedings of 2011 international conference for high performance computing , networking , storage and analysis . sc 11 , new york , ny , usa , acm ( 2011 ) 71:171:10 jia , w. , cao , z. , wang , l. , fu , j. , chi , x. , gao , w. , wang , l.w .: the analysis of a plane wave pseudopotential density functional theory code on a gpu machine .computer physics communications * 184*(1 ) ( 2013 ) 9 18 enkovaara , j. , rostgaard , c. , mortensen , j.j . , chen , j. , duak , m. , ferrighi , l. , gavnholt , j. , glinsvad , c. , haikola , v. , hansen , h.a . , kristoffersen , h.h . ,kuisma , m. , larsen , a.h . ,lehtovaara , l. , ljungberg , m. , lopez - acevedo , o. , moses , p.g . ,ojanen , j. , olsen , t. , petzold , v. , romero , n.a ., stausholm - mller , j. , strange , m. , tritsaris , g.a . ,vanin , m. , walter , m. , hammer , b. , hkkinen , h. , madsen , g.k.h . ,nieminen , r.m . , nrskov , j.k . ,puska , m. , rantala , t.t . ,schitz , j. , thygesen , k.s . , jacobsen , k.w .: electronic structure calculations with gpaw : a real - space implementation of the projector augmented - wave method .journal of physics : condensed matter * 22*(25 ) ( 2010 ) 253202 hakala , s. , havu , v. , enkovaara , j. , nieminen , r.m . : parallel electronic structure calculations using multiple graphics processing units ( gpus ) . in : lecture notes in computer science .para 2012 , springer - verlag ( 2013 ) castro , a. , appel , h. , oliveira , m. , rozzi , c.a . , andrade , x. , lorenzen , f. , marques , m.a.l . , gross , e.k.u . , rubio , a. : : a tool for the application of time - dependent density functional theory .physica status solidi ( b ) * 243*(11 ) ( 2006 ) 24652488 andrade , x. , alberdi - rodriguez , j. , strubbe , d.a . ,oliveira , m.j.t . , nogueira , f. , castro , a. , muguerza , j. , arruabarrena , a. , louie , s.g . , aspuru - guzik , a. , rubio , a. , marques , m.a.l . : time - dependent density - functional theory in massively parallel computer architectures : the octopus project .journal of physics : condensed matter * 24*(23 ) ( 2012 ) 233202 isborn , c.m . ,luehr , n. , ufimtsev , i.s . ,martinez , t.j .: excited - state electronic structure with configuration interaction singles and and tamm - dancoff time - dependent density functional theory on graphical processing units .journal of chemical theory and computation * 7*(6 ) ( 2011 ) 18141823 alexandrou , c. , brinet , m. , carbonell , j. , constantinou , m. , guichon , p. , et al . : . in : proceedings of the xxist international europhysics conference on high energy physics .july 21 - 27 2011 .grenoble , rhones alpes france .volume eps - hep2011 .( 2011 ) 308 , d. , christ , n.h . , cristian , c. , dong , z. , gara , a. , garg , k. , joo , b. , kim , c. , levkova , l. , liao , x. , mawhinney , r.d . ,ohta , s. , wettig , t. : .nuclear physics b proceedings supplements * 94 * ( march 2001 ) 825832 , r. , biagioni , a. , frezza , o. 
, lo cicero , f. , lonardo , a. , paolucci , p.s . , petronzio , r. , rossetti , d. , salamon , a. , salina , g. , simula , f. , tantalo , n. , tosoratto , l. , vicini , p. : . in : proceedings of the xxviii international symposium on lattice field theory .june 14 - 19,2010 .villasimius , sardinia italy .( 2010 ) shirakawa , t. , hoshino , t. , oyanagi , y. , iwasaki , y. , yoshie , t. : qcdpax - an mimd array of vector processors for the numerical simulation of quantum chromodynamics . in : proceedings of the 1989 acm / ieee conference on supercomputing .supercomputing 89 , new york , ny , usa , acm ( 1989 ) 495504 babich , r. , clark , m.a . , jo , b. , shi , g. , brower , r.c . , gottlieb , s. : scaling lattice qcd beyond 100 gpus . in : proceedings of 2011 international conference for high performance computing , networking , storage and analysis .sc 11 , new york , ny , usa , acm ( 2011 ) 70:170:11 , a.w.g . , walker , r.c .: chapter 2 - quantum chemistry on graphics processing units . in wheeler ,r.a . , ed . :annual reports in computational chemistry .volume 6 of annual reports in computational chemistry .elsevier ( 2010 ) 21 35 meredith , j.s . ,alvarez , g. , maier , t.a . ,schulthess , t.c . ,vetter , j.s . :accuracy and performance of graphics processors : a quantum monte carlo application case study .parallel computing * 35*(3 ) ( 2009 ) 151 163
|
the use of graphics processing units for scientific computations is an emerging strategy that can significantly speed up various algorithms . in this review , we discuss advances made in the field of computational physics , focusing on classical molecular dynamics and quantum simulations for electronic structure calculations using the density functional theory , wave function techniques and quantum field theory .
|
quantum key distribution has attracted extensive attentions for its unconditional security compared with conversional cryptography .however , there still exist several technical limitations in practice , such as imperfect single - photon sources , large loss channels and inefficient detectors , which will impair the security . fortunately, many methods have been devised to deal with these imperfect conditions , among which , decoy - state method is thought to be a very useful candidate for substantially improving the performance of qkd .decoy - state method was firstly proposed by hwang , and advanced by wang and lo _ et al . __ assuming a weak coherent source ( wcs ) .subsequently , it was extended to parametric down- conversion sources ( pdcs ) .the main idea of decoy - state method is to randomly change the intensity of each pulse among different values , which allows one to estimate the behavior of vacuum , single - photon and multi - photon states individually . as a result, eve s presence will be detected .recently , more and more interesting ideas have been put forward to improve the performance of qkd , such as the one by adachi __ . _ _ in their proposal , both triggered and nontriggered components of pdcs are used to do some estimations for final secure key , and it needs only one intensity to transmit .however , because the intensity can not be changed during the whole experiment , and dark counts can not be measured directly , then the worst case of their contribution must be considered , which will inevitably limit final key rate and transmission distance . in this paper, we propose a new practical decoy - state scheme with pdcs , in which not only three decoy states with different intensities ( ) , but also all their triggered and nontriggered components are used to estimate the lower bound of fraction of single photon counts ( ) and upper bound of quantum bit - error rate ( qber ) of single - photon ( ) . as a result, a more accurate value of key generation rate , compared with existing methods , can be obtained .in our new scheme , we can essentially use almost the same experimental setup as that in our previous proposal , except that bob s detector need to work no matter what alice s detector is triggered or not . as is well known , the state of two - mode field from pdcs is : where represents an -photon state , and is the intensity ( average photon number ) of one mode .mode t ( trigger ) is detected by alice , and mode s ( signal ) is sent out to bob .we request alice to randomly change the intensity of her pump light among three values , so that the intensity of one mode is randomly changed among ( and ) .we denote as the probability of triggering at alice s detector when an -photon state is emitted , where is the detecting efficiency at alice s side , then the nontriggering probability is we define to be the yield of an -photon state , i.e. , the probability that bob s detector clicks whenever alice sends out state ; we also define be the gain of a -photon state , i.e. 
, the rate of events when alice emits -photon state and bob detects the signal , which can be divided into two groups , triggered by alice , and the rest ; and be the overall rate according to intensity , ( can be ) , it can also be divided into two groups , triggered by alice , and the rest , which can be expressed as : \frac{x^{n}}{\left ( 1+x\right ) ^{n+1}},\\ q_{x}^{(ut ) } & = y_{0}\frac{1-d_{a}}{1+x}+{\displaystyle \sum_{i=1}^{\infty } } y_{n}\left ( 1-\eta_{a}\right ) ^{n}\frac{x^{n}}{\left ( 1+x\right ) ^{n+1}},\end{aligned}\ ] ] where is the dark count rate of alice s detector . in the next step, we will use the triggered events of ( ) and the nontriggered events of ( ) to deduce a tight bound of the fraction of single - photon counts ( ). \frac{\mu^{n}}{\left ( 1+\mu \right ) ^{n+1}},\\ q_{\mu^{\prime}}^{(ut ) } & = y_{0}\frac{1-d_{a}}{1+\mu^{\prime}}+{\displaystyle \sum_{i=1}^{\infty } } y_{n}\left ( 1-\eta_{a}\right ) ^{n}\frac{{\mu^{\prime}}^{n}}{\left ( 1+\mu^{\prime}\right ) ^{n+1}}.\end{aligned}\ ] ] the two equations lead to: \nonumber \\ & + y_{1}\left [ \frac{\eta_{a}}{1-(1-\eta_{a})^{2}}\frac{\mu}{1+\mu}\left ( \frac{\mu^{\prime}}{1+\mu^{\prime}}\right ) ^{2}-\frac{1}{1-\eta_{a}}\frac { \mu^{\prime}}{1+\mu^{\prime}}\left ( \frac{\mu}{1+\mu}\right ) ^{2}\right ] + \nonumber \\ & + \sum_{n=3}^{\infty}y_{n}\left [ \frac{1-\left ( 1-\eta_{a}\right ) ^{n}}{1-(1-\eta_{a})^{2}}\frac{\mu^{n}{\mu^{\prime}}^{2}}{(1+\mu)^{n}(1+\mu^{\prime})^{2}}-\left ( 1-\eta_{a}\right ) ^{n-2}\frac{{\mu^{\prime}}^{n}{\mu}^{2}}{\left ( 1+\mu^{\prime}\right ) ^{n}(1+\mu)^{2}}\right ] .\label{complex1}\ ] ] assuming the condition can be satisfied , i.e. where ( because the values of and be chosen independently , the assumption above can be easily satisfied in experiment , ) then eq . ( 7 ) leads to the following inequality : } { \frac{\eta_{a}}{1-(1-\eta_{a})^{2}}\frac{\mu}{1+\mu}\left ( \frac{\mu^{\prime}}{1+\mu^{\prime}}\right ) ^{2}-\frac{1}{1-\eta_{a}}\frac { \mu^{\prime}}{1+\mu^{\prime}}\left ( \frac{\mu}{1+\mu}\right ) ^{2}}.\ ] ] this gives rise to the gain of single - photon pulse for triggered and nontriggered components as: and may be or here .also , if we have observed the quantum bit - error rate ( qber ) for triggered and nontriggered pulses of intensity , , , we can upper bound the qber value of single - photon pulse as: combing the two bounds , we have: normally , we use the value from for a tight estimation of . given all these, we can use the following formula to calculate the final key - rate of triggered signal pulses : \right \ } , \ ] ] where the factor of comes from the cost of basis mismatch in bennett - brassard 1984 ( bb84 ) protocol ; is a factor for the cost of error correction given existing error correction systems in practice .we assume here . is the binary shannon information function , given by furthermore , if the transmission distance is not so large , the nontriggered component can also be used to generate secret key just as in adachi _et al _ s proposal : \}.\end{aligned}\ ] ] in this case the final key rate is given by : .in an experiment , we need to observe the values of , , , , , and , , , , and then deduce the lower bound of fraction of single - photon counts ( ) and upper bound qber of single - photon pulses ( ) by theoretical results , and then one can distill the secure final key . 
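the key - rate expression referred to above has the standard gllp - style form used throughout the decoy - state literature ; a schematic evaluation is sketched below , with the basis - reconciliation factor q and the error - correction inefficiency f left as parameters and all gains and error rates supplied by the caller , since the exact numerical choices made in this work are not reproduced here :

```python
import math

def h2(x):
    # binary Shannon entropy
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def key_rate(q_mu, e_mu, q1, e1, q=0.5, f=1.22):
    # GLLP-style lower bound on the secure key rate per pulse:
    # the whole signal gain pays the error-correction cost, while only the
    # estimated single-photon fraction (q1, e1) contributes private randomness
    return q * (-q_mu * f * h2(e_mu) + q1 * (1.0 - h2(e1)))

# example with made-up observed values for a triggered signal component
print(key_rate(q_mu=8e-3, e_mu=0.02, q1=4e-3, e1=0.015))
```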
in order to make a faithful estimation , we need a channel model to forecast what values for , , , , , and , , , _ would _ be , if we _ did _ the experiment without eve in principle .suppose is the combined overall transmittance and detection efficiency between alice and bob ; is the transmittance between alice and bob , ; is the transmittance in bob s side , . following these assumptions , , which approximates when , and the observed value for and should be: \left [ 1-\left ( 1-\eta_{a}\right ) ^{n}\right ] \frac{x^{n}}{\left ( 1+x\right ) ^{n+1}},\\ q_{x}^{(ut ) } & = \frac{\left ( 1-d_{a}\right ) d_{b}}{1+x}+{\displaystyle \sum_{i=1}^{\infty } } \left [ d_{b}+\left ( 1-\eta \right ) ^{n}\right ] \left ( 1-\eta_{a}\right ) ^{n}\frac{x^{n}}{\left ( 1+x\right ) ^{n+1}},\end{aligned}\ ] ] where can be or , and is the dark count rate of bob s detectors .we use the following for the error rate of an _n - photon _state : }{d_{b}+1-(1-\eta)^{n}},\ ] ] where , is the probability that the survived photon hits a wrong detector , which is independent of the transmission distance . belowwe shall assume to be a constant .therefore , the observed values should be : ; the lower line is the result of our previous proposal with , ( has the optimal value at each point in each line . ) ] ; the lower line is the result of our previous proposal with , ( has the optimal value at each point in each line . ) ] vs transmission distance .the upper line is the result of our new proposal ( ) , and the lower line is the result of adachi _ et al_. ( b ) the ratio of key rates between our new proposal and adachi _ et al _ s vs transmission distance . ] in practical implementation of qkd , we often use non - degenerated down - conversion to produce photon pairs , with one photon at the wavelength convenient for detection acting as heralding signal , and the other at the telecommunication windows for optimal propagation along the fiber or in open air acting as heralded signal .we can now calculate the final key rate with the assumed values above . for convenience of comparing with the results of adachi _et al_. , we use the same parameters as used in their paper which mainly come from gobby , yuan and shields ( gys ) experiment . at alices side , ; at bob s side , and the channel loss is at each distance we choose the optimal value for , so that we can have the highest key rate , and the final results are shown in fig . 1 , 2 , 3 ( according to eq .( 7 ) , is chosen in our new proposal ) .1 shows the key generation rate against transmission distance compared with our previous results , ( only triggered events are used . )it shows that our new scheme can generate a higher key rate than the old one even using only triggered signal .2 shows the key generation rate against transmission distance compared with our previous results , ( both triggered and nontriggered events are used . 
) from it we can see that our new results can approach the ideal values very closely .moreover , there is no need to use a quite weak decoy state or nontriggered signal .for example , at the distance of 50 km , setting in our new scheme , and in the old one , we can get a ratio of key rate between the two scheme as 3.8 .the reason that we can get a more accurate estimation of key rate ( ) in the new proposal is as follows : we do nt omit those high order items ( in formula ( 6 ) ) in the deduction of , but use them to deduce a relationship between the intensity of decoy state ( ) and signal state ( ) , which inevitably results in a more bound estimation of (in addition , there is an inflexion in curve c ( d ) at the distance about 134 km , because the nontriggered events cease to contribute to the key rate . ) fig .3 ( a ) shows the optimal values of in our new proposal ( setting ) and those in adachi _et al _ s ; fig .3 ( b ) shows the ratio of the key generation rate between our new scheme and adachi _ et al _ s against transmission distance , ( both triggered events and nontriggered events are used . )it shows that our result is always larger than theirs when using almost the same level of data size . from the figures above, we can see that , our new results are better than those of both our previous proposal and adachi _ et al _ s .as is known , to give a more accurate estimation of the key rate , the value of should be chosen to be the smaller the better . in our previous proposal, the key rate could also be very close to the ideal value given a very weak decoy state .however , in a practical experiment , considering statistical errors , can not be too weak .so in our new scheme , we deduce a relation between and , and at each point , both and can be chosen with optimal values , which results in a more accurate estimation . comparing with adachi _et al _ s proposal , the advantages of our proposal are as follows : firstly , dark counts can be measured directly ; secondly , a weaker decoy state is used to get a more accurate estimation of , and a stronger signal of is used to get a higher secure key rate .in summary , we have proposed a new decoy - state scheme in qkd with pdcs , in which we use both three - intensity decoy - states and their triggered and nontriggered components to get a tight bound of the fraction of single - photon counts and single - photon qber , this allows us to accurately deduce the value of key generation rate . finally , the key generation rate vs transmission distance is numerically simulated .the simulations show that our new results are better than those of the existing proposal .furthermore , our proposal only assumes existing experimental technology , which makes the scheme a practical candidate in the implementation of qkd .q. w thanks yoritoshi adachi ( osaka university ) and s sauge for useful discussions , and gao - xin qiu for help in numerical simulation .this work was funded by the european community through the qap ( qubit applications-015848 ) project , and also supported by chinese national fundamental research program and tsinghua bairen program .
|
in this paper , a new decoy - state scheme for quantum key distribution with parametric down - conversion source is proposed . we use both three - intensity decoy states and their triggered and nontriggered components to estimate the fraction of single - photon counts and quantum bit - error rate of single - photon , and then deduce a more accurate value of key generation rate . the final key rate over transmission distance is simulated , which shows that we can obtain a higher key rate than that of the existing methods , including our own earlier work .
|
traditionally , learning classifier systems ( lcs ) use a ternary encoding to generalize over the environmental inputs and to associate appropriate actions .a number of representations have previously been presented beyond this scheme however , including real numbers , fuzzy logic and artificial neural networks .temporally dynamic representation schemes within lcs represent a potentially important approach since temporal behaviour of such kinds is viewed as a significant aspect of artificial life , biological systems , and cognition in general . in this paperwe explore examples of a dynamical system representation within the xcsf learning classifier system termed `` dynamical genetic programming '' ( dgp ) .traditional tree - based genetic programming ( gp ) has been used within lcs both to calculate the action and to represent the condition ( e.g. , ) .dgp uses a graph - based representation , each node of which is constantly updated with asynchronous parallelism , and evolved using an open - ended , self - adaptive scheme . in the discrete case ,each node is a boolean function and therefore the representation is a form of random boolean network ( rbn ) ( e.g. , ) . in the continuous case , each node performs a fuzzy logical function and the representation is a form of fuzzy logic network ( fln ) ( e.g. , ) .we show that xcsf is able to solve a number of well - known immediate and delayed reward tasks using this temporally dynamic knowledge representation scheme with competitive performance with other representations .moreover , we exploit the memory inherent to rbn for the discrete case .a significant benefit of symbolic representations is the expressive power to represent relationships between the sensory inputs .lisp s - expressions comprised from a set of boolean functions ( i.e. , and , or , and not ) have been used to represent symbolic classifier conditions in lcs to solve boolean multiplexer and woods problems , and to extract useful knowledge in a data mining assay .an analysis of the populations has subsequently shown an increasing prevalence of sub - expressions through the course of evolution as the system constructs the required building blocks to find solutions . however , when logical disjunctions are involved , optimality is unattainable because the symbolic conditions highly overlap , resulting in classifiers sharing their fitness with other classifiers and thereby lowering the fitness values .this was later extended to also include arithmetic functions ( i.e. , plus , minus , multiply , divide , and powerof ) and domain specific functions ( i.e. , valueat and addrof ) to solve a number of multiplexer tasks .in addition , lanzi _ et al . _ based classifier conditions on stack - based genetic programming and solved the 6 and 11 bit multiplexer as well as woods1 problems . herethe conditions are linear sequences of tokens , expressed in reverse polish notation , where each token represents either a variable , constant or function .the function set used comprised boolean operators ( i.e. , and , or , not and eor ) and arithmetic operators ( i.e. , + , - , , =) . ahulwalia and bull presented a simple form of lcs which used numerical s - expressions for feature extraction in classification tasks . 
hereeach rule s condition was a binary string indicating whether or not a rule matched for a given feature and the actions were s - expressions which performed a function on the input feature value .more recently , wilson has explored the use of a form of gene expression programming ( gep ) within lcs . herethe expressions are comprised from arithmetic functions and applied to regression tasks .the conditions are represented as expression trees which are evaluated by assigning the environmental inputs to the tree s terminals , evaluating the tree , and then comparing the result with a predetermined threshold . whenever the threshold value is exceeded, the rule becomes eligible for use as the output .landau _ et al . _ used a purely evolution - based form of lcs ( pittsburgh style ) in which the rules are represented as directed graphs where the genotypes are tokens of a stack - based language , whose execution builds the labeled graph .bit - strings are used to represent the language tokens and applied to non - markov problems .the genotype is translated into a sequence of tokens and then interpreted similarly to a program in a stack - based language with instructions to create the graph s nodes , connections and labels .subsequently , the unused conditions and actions in the stack are added to the structure which is then popped from the stack .tokens are used to specify the matching conditions and executable actions as well as instructions to construct the graph , and to manipulate the stack .the bit - strings were later replaced with integer tokens and again applied to non - markov problems .most relevant to the form of gp used herein is the relatively small amount of prior work on graph - based representations .neural programming ( np ) uses a directed graph of connected nodes , each performing an arbitrary function .potentially selectable functions include read , write , and if - then - else , along with standard arithmetic and zero - arity functions .additionally , complex user defined functions may be used .significantly , recursive connections are permitted and each node is executed with synchronous parallelism for some number of cycles before an output node s value is taken .poli ( e.g. , ) presented a similar scheme wherein the graph is placed over a two - dimensional grid and executes its nodes synchronously in parallel .connections are directed upwards and are only permitted between nodes situated on adjacent rows ; however by including identity functions , connections between non - adjacent layers are possible and thus any parallel distributed program may be represented .teller and veloso presented parallel algorithm discovery and orchestration ( pado ) which uses an arbitrary directed graph of nodes and an indexed memory .each node in the graph consists of an action and a branch - decision component , with multiple outgoing branches permitting the various potential flows of control .a stack is used from where each program s inputs are drawn and the results pushed .the potentially selectable actions are similar to np and include arithmetic operators , negation , minimum and maximum , and the ability to read from and write to the indexed memory , along with non - deterministic and deterministic branching instructions .the graphs are executed chronologically for a fixed amount of time with each node selecting the next to take control .the output nodes are then averaged giving additional weighting to the more recent states. 
other examples of graph - based gp typically contain sequentially updating nodes , e.g. , finite state machines ( e.g. , ) , cartesian gp , genetic network programming , linear - graph gp , and graph structured program evolution . schmidt and lipson have recently demonstrated a number of benefits from graph encodings over traditional trees , such as reduced bloat and increased computational efficiency .we have recently introduced the use of the graph - based random boolean networks within lcs . in this paperwe extend that work to the most recent form of lcs , wilson s xcsf , and to the continuous - valued domain with fuzzy logical functions .the most common form of discrete dynamical system is the cellular automaton ( ca ) which consists of an array of cells ( lattice of nodes ) where the cells exist in states from a finite set and update their states with synchronous parallelism in discrete time .traditionally , each cell calculates its next state depending upon its current state and the states of its closest neighbours .that is , cas may be seen as a graph with a ( typically ) restricted topology .packard was the first to use evolutionary computing techniques to design cas such that they exhibit a given emergent global behaviour .following packard , mitchell _ et al ._ have investigated the use of a ga to learn the rules of uniform binary cas . as in packards work , the ga produces the entries in the update table used by each cell , candidate solutions being evaluated with regard to their degree of success for the given task . used traditional gp to evolve the update rules and reported similar results to mitchell _ et al .sipper presented a non - uniform , or heterogeneous , approach to evolving cas .each cell of a one- or two - dimensional ca is also viewed as a ga population member , mating only with its lattice neighbours and receiving an individual fitness .he shows an increase in performance over mitchell __ by exploiting the potential for spatial heterogeneity in the tasks . in this paper , a more general form of dynamical system is exploited .the discrete dynamical systems known as random boolean networks ( rbn ) were originally introduced by kauffman ( see ) to explore aspects of biological genetic regulatory networks . since then they have been used as a tool in a wide range of areas , such as self - organisation ( e.g. , ) and computation ( e.g. , ) and robotics ( e.g. , ) .an rbn typically consists of a network of nodes , each performing a boolean function with inputs from other nodes in the network , all updating synchronously ( see figure [ fig : examplerbn ] ) . as such, rbn may be viewed as a generalization of binary cellular automata ( ca ) and unorganized machines .since they have a finite number of possible states and they are deterministic , the dynamics of rbn eventually fall into a basin of attraction .it is well - established that the value of affects the emergent behaviour of rbn wherein attractors typically contain an increasing number of states with increasing .three phases of behaviour are suggested : ordered when , with attractors consisting of one or a few states ; chaotic when , with a very large number of states per attractor ; and , a critical regime around , where similar states lie on trajectories that tend to neither diverge nor converge and 5 - 15% of nodes change state per attractor cycle ( see for discussions of this critical regime , e.g. 
, with respect to perturbations ) .analytical methods have been presented by which to determine the typical time taken to reach a basin of attraction and the number of states within such basins for a given degree of connectivity ( see ) .closely akin to the work described here , kauffman describes the use of simulated evolution to design rbn which must play a ( mis)matching game wherein mutation is used to change connectivity , the boolean functions , and .he reports the typical emergence of high fitness solutions with =2 to 3 , together with an increase in over the initialised size .sipper and ruppin extended sipper s heterogeneous ca approach to enable heterogeneity in the node connectivity , along with the node function ; they evolved a form of random boolean network .van den broeck and kawai explored the use of a simulated annealing - type approach to design feedforward rbn for the four - bit parity problem and lemke _ et al ._ evolved rbn of fixed and to match an arbitrary attractor .figure [ fig : k - affect - all ] shows the affect of on a 13 node rbn ; results are an average of one hundred runs for each value of .it can be seen that the higher the value of , the greater the number of states the networks will cycle through , as shown by the higher rate of change of node states .further , that after an initial rapid decline in the rate of change , this value stabilises as the states fall into their respective attractors . in the synchronous case ( figure [ fig : k - affect - synch ] ) when , the number of nodes changing state converges to around 20% , and when to just above 35% ; thus we can see that the ordered regime occurs when approximately 20% or less nodes are changing state each cycle , and the chaotic regime occurring for larger rates of change . as noted above , traditional rbn consist of nodes updating synchronously in discrete time steps , but asynchronous versionshave also been presented , after , leading to a classification of the space of possible forms of rbn .asynchronous forms of ca have also been explored ( e.g. , ) wherein it is often suggested that asynchrony is a more realistic underlying assumption for many natural and artificial systems since `` discrete time , synchronously updating networks are certainly not biologically defensible : in development the interactions between regulatory elements do not occur in a lock - step fashion '' .asynchronous logic devices are known to have the potential to consume less power and dissipate less heat , which may be exploitable during efforts towards hardware implementations of such systems .asynchronous logic is also known to have the potential for improved fault tolerance , particularly through delay insensitive schemes ( e.g. , ) .this may also prove beneficial for hardware implementations .harvey and bossomaier showed that asynchronous rbn exhibit either point attractors , as seen in asynchronous cas , or `` loose '' attractors where `` the network passes indefinitely through a subset of its possible states '' ( as opposed to distinct cycles in the synchronous case ) .thus the use of asynchrony represents another feature of rbn with the potential to significantly alter their underlying dynamics thereby offering another mechanism by which to aid the simulated evolutionary design process for a given task . di paulo showed it is possible to evolve asynchronous rbn which exhibit rhythmic behaviour at equilibrium .asynchronous cas have also been evolved ( e.g. 
, ) .figure [ fig : k - affect - asynch ] shows the percentage of nodes changing state on each cycle for various values of on a 13 node asynchronous rbn .it can be seen that , similar to the synchronous case ( see figure [ fig : k - affect - synch ] ) , the higher the value of , the greater the number of states the networks will cycle through in an attractor .these values are significantly lower than in the synchronous case however .for example , when , approximately 20% of nodes change each synchronously updated cycle compared with 5% when updated asynchronously .the difference is to be expected because , in the asynchronous case , `` the lack of synchronicity increases the complexity of the rbn , enhancing the number of possible states and interactions .and this complexity changes the attractor basins , transforming and enlarging them .this reduces the number of attractors and states in attractors '' .as previously mentioned , in the asynchronous case there are no cycle attractors , only point and loose attractors .an lcs rule ( also termed a classifier ) traditionally takes the form of an environment string consisting of the ternary alphabet [ 0,1 , # ] , a binary action string , and subsequent information including the classifier s expected payoff ( reward ) , the error rate ( in units of payoff predicted ) , and the fitness .the # symbol in the environment condition provides a mechanism to generalise the inputs received by matching for both logical 0 and 1 for that bit . for each phase in the learning cycle ,a _ match set _[ m ] is generated from the _ population set _ [ p ] , comprising all of the classifiers whose environment condition matches the current environmental input . in the event that the number of actions present in [ m ] is less than a threshold value , , covering is used to produce a classifier that matches the current environment state along with an action assigned randomly from those not present in [ m ] ; typically is set to the maximum number of possible actions so that there must be at least one classifier representing each action present .subsequently , a system prediction is made for each action in [ m ] , based upon the fitness - weighted average of all of the predictions of the classifiers proposing the action . if there are no classifiers in [ m ] advocating one of the potential system actions , covering is invoked to generate classifiers that both match the current environment state and advocate the relevant action .an action is then selected using the system predictions , typically by alternating exploring ( by either roulette wheel or random selection ) and exploiting ( the best action ) . in multi - step problems a biased selection strategy is often employed wherein exploration is conducted at probability otherwise exploitation occurs .action set _ [ a ] is then built comprising all the classifiers in [ m ] advocating the selected action .next , the action is executed in the environment and feedback is received in the form of a payoff , . in a single - step problem ,[ a ] is updated using the current reward .the ga is then run in [ a ] if the average time since the last ga invocation is greater than the threshold value , .when the ga is run , two parent classifiers are chosen ( typically by roulette wheel selection ) based on fitness .offspring are then produced from the parents , usually by use of recombination and mutation .typically , the offspring then have their payoff , error , and fitness set to the average of their parents. 
if subsumption is enabled and the offspring are subsumed by either parent , it is not included in [ p ] ; instead the parents numerosity is incremented . in a multi - step problem , the previous action set [ a] is updated using a q - learning type algorithm and the ga may be run as described above on [ a] as opposed to [ a ] for single - step problems .the sequence then loops until it is terminated after a predetermined number of problem instances . in xcsfeach classifier also maintains a vector of a series of weights , where there are as many weights as there are inputs from the environment , plus one extra , .that is , each classifier maintains a prediction ( ) which is calculated as a product of the environmental input ( ) and the classifier weight vector ( ) : each of the input weights is initially set to zero , and subsequently adapted to accurately reflect the prediction using a _ modified delta rule _ .the delta rule was modified such that the correction for each step is proportional to the difference between the current and correct prediction , and controlled by a correction rate , .the _ modified delta rule _ for the reinforcement update is thus : where is the correction rate and is the norm of the input vector .the values are used to update the weights of the classifier with : subsequently , the prediction error is updated with : this enables a more accurate , piecewise - linear , approximation of the payoff ( or function ) , as opposed to a piecewise - constant approximation , and can also be applied to binary problems such as the boolean multiplexer and maze environments , resulting in faster convergence to optimality as well as a more compact rule - base .see for further details .to use asynchronous rbn as the rules within xcsf ( see example rule in figure [ fig : ddgp - examplerule ] ) , the following scheme is adopted .each of an initial randomly created rule s nodes has randomly assigned connections , here .there are initially as many nodes as input fields for the given task and its outputs , plus one other , as will be described , i.e. , . the first connection of each input node is set to the corresponding locus of the input message .the other connections are assigned at random within the rbn as usual . in this way ,the current input state is always considered along with the current state of the rbn itself per network update cycle by such nodes .nodes are initialised randomly each time the network is run to determine [ m ] , etc .the population is initially empty and covering is applied to generate rules as in the standard xcsf approach .l l l + + + & truth table : & connections : + node 0 ( m ) : & 10011000100000001110011010101000 & 7 , 4 , 0 , 3 , 1 + node 1 ( out ) : & 10 & 3 + node 2 ( i ) : & 00011111 & _ input1 _ , 2 , 5 + node 3 ( i ) : & 0001 & _ input2 _ , 2 + node 4 ( i ) : & 11101110 & _ input3 _ , 6 , 3 + node 5 ( i ) : & 0110110100001010 & _ input4 _ , 2 , 7 , 6 + node 6 ( i ) : & 0001011101010101 & _ input5 _ , 5 , 2 , 3 + node 7 ( i ) : & 0100 & _ input6 _, 3 + node 8 ( n ) : & 00010111 & 3 , 1 , 5 + matching consists of executing each rule for cycles based on the current input .the value of is chosen to be a value typically within the basin of attraction of the rbn .asynchrony is here implemented as a randomly chosen node being updated on a given cycle , with as many updates per overall network update cycle as there are nodes in the network before an equivalent cycle to one in the synchronous case is said to have occurred .see for alternative schemes . 
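a rough sketch of the matching step just described, executing a rule's network asynchronously for T cycles and reading the match and output nodes through a window of the last n recorded states, is given below; the rule's field names and callable nodes are assumptions made for illustration only.

    import random
    from collections import deque

    def run_network(rule, inputs, T, n):
        # a rule is assumed to expose: nodes (each a callable lookup over the current
        # states and the input message), plus match_node and output_nodes indices
        states = [random.randint(0, 1) for _ in rule.nodes]
        history = [deque(maxlen=n) for _ in rule.nodes]
        for _ in range(T):
            # asynchrony: as many random single-node updates as there are nodes
            # counts as one equivalent update cycle
            for _ in range(len(rule.nodes)):
                i = random.randrange(len(rule.nodes))
                states[i] = rule.nodes[i](states, inputs)
                history[i].append(states[i])
        def windowed(i):
            # most common state over the last n updates of node i
            h = list(history[i]) or [states[i]]
            return 1 if sum(h) * 2 > len(h) else 0
        matches = windowed(rule.match_node) == 1
        return matches, [windowed(i) for i in rule.output_nodes]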
in this study , where well - known maze problems are exploredthere are eight possible actions and accordingly three required output nodes .an extra `` matching '' node is also required to enable rbns to ( potentially ) only match specific sets of inputs . if a given rbn has a logical ` 0 ' on the match node , regardless of its output node s state , the rule does not join [ m ] . this scheme has also been exploited within neural lcs .a ` windowed approach ' is utilised where the output is decided by the most common state over the last steps up to .for example , if the last few states on a node updating prior to cycle is 0101001 and , then the ending node s state would be ` 0 ' and not ` 1 ' . when covering is necessitated , a randomly constructed rbn is created and then executed for cycles to determine the status of the match and output nodes .this procedure is repeated until an rbn is created that matches the environment state .self - adaptive mutation affecting a variable length representation was first explored by fogel _ where a self - adaptive value was used to control the deletion rate of states within finite state machines .furthermore , ghozeil and fogel used self - adaptive mutation to control the rate of addition and deletion of hyperboxes to cluster spatial data .self - adaptive mutation was first applied within lcs by bull_ et al . _ where each rule maintains its own mutation rate . self - adaptive mutation affecting rule size was first used in lcs with a neural representation .this is similar to the approach used in evolution strategies ( es ) where the mutation rate is a locally evolving entity in itself , i.e. , it adapts during the search process .self - adaptive mutation not only reduces the number of hand - tunable parameters of the evolutionary algorithm , it has also been shown to improve performance .following , mutation only is used here .a node s truth table is represented by a binary string and its connectivity by a list of integers in the range ] .this parameter is passed to its offspring .the offspring then applies its mutation rate to itself using a gaussian distribution , i.e. , , before mutating the rest of the rule at the resulting rate .due to the need for a possible different number of nodes within the rules for a given task , the dgp scheme is also of variable length .once the truth table and connections have been mutated , a new randomly connected node is either added or the last added node is removed with the same probability . the latter case only occurs if the network currently consists of more than the initial number of nodes .in addition , each rule maintains its own value which is initially seeded randomly between 1 and 50 .thereafter , offspring potentially increment or decrement by 1 at probability . is evolved in a similar fashion , however it is initially seeded between 0 and , and can not be greater than .thus dgp is temporally dynamic both in the search process and the representation scheme . whenever an offspring classifier is created and no changes occur to its rbn when undergoing mutation , the parent s numerosity is increased and mutation rate set to that of the offspring .the simplest form of short - term memory is a fixed - length buffer containing the most recent inputs ; a common extension is to then apply a kernel function to the buffer to enable non - uniform sampling of the past values , e.g. 
an exponential decay of older inputs .however it is not clear that biological systems make use of such shift registers .registers require some interface with the environment which buffers the input so that it can be presented simultaneously .they impose a rigid limit on the duration of patterns , defining the longest possible pattern and requiring that all input vectors be of the same length .furthermore , such approaches struggle to distinguish relative temporal position from absolute temporal position .whereas many gp systems are expression based , some have also utilised a form of memory or state .for example , linear gp ; indexed memory , e.g. , , , and ; and work on evolving data structures which maintain internal state , e.g. , .in addition , some systems have used ( instead of evolved ) data structures to manipulate the internal state , e.g. , pushgp .recently , poli _ et al . _ explored the use of soft assignment and soft return operations as forms of memory within linear and tree - based gp . for soft assignment, they replaced the traditional ( entirely destructive ) method of variable assignment with one of merging new values with previous ones , instead of overwriting them . to achieve this ,the new value becomes a weighted average of the old register value with the new value to be assigned , i.e. , where is a value in the range [ 0,1 ] specifying the assignment `` hardness '' .for soft return operations , tree function nodes return a weighted average of their first argument with the result of the corresponding calculation , i.e. , where is an input to a function , . herewe explore and extend the hypothesis of inherent content - addressable memory existing within synchronous rbn due to different possible routes to a basin of attraction for the asynchronous case by maintaining the node states across each input - update - output cycle .a significant advantage of this approach is that each rule / network s short - term memory is variable - length and _ adaptive _ , i.e. , the networks can adjust the memory parameters , selecting within the limits of the capacity of the memory , what aspects of the input sequence are available for computing predictions .in addition , as we use open - ended evolution , the maximum size of the short - term memory is also open - ended , increasing as the number of nodes within the network grows . here , nodes are initialised at random for the initial random placing in the maze but thereafter they are not reset for each subsequent matching cycle .consequently , each network processes the environmental input and the final node states then become the starting point for the next processing cycle , whereupon the network receives the new environmental input and places the network on a trajectory toward a ( potentially ) different locally stable limit point . therefore , a network given the same environmental input ( i.e. , the agent s current maze perception ) but with different initial node states ( representing the agent s history through the maze ) may fall into a different basin of attraction ( advocating a different action ) . _thus the rules dynamics are ( potentially ) constantly affected by the inputs as the system executes ._ we now apply ddgp - xcsf to two well - known multi - step non - markov maze environments that require memory to resolve perceptual aliasing : woods101 ( see figure [ fig : woods101 ] ) and woods102 ( see figure [ fig : woods102 ] ) . 
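before turning to the mazes, the memory mechanism just described can be sketched minimally: the only change relative to memoryless matching is that node states persist between inputs instead of being re-randomised, so the same perception can fall into a different basin of attraction depending on the agent's history. the data layout below is an illustrative assumption.

    import random

    def stateful_match(states, nodes, inputs, T):
        # `states` is owned by the caller and carried over between perceptions, so the
        # configuration reached for one input is the starting point for the next;
        # re-randomising it here instead would give the memoryless behaviour
        n = len(nodes)
        for _ in range(T * n):                     # T equivalent asynchronous cycles
            i = random.randrange(n)
            states[i] = nodes[i](states, inputs)   # each node is a callable lookup
        return states

    # at the start of a trial:  states = [random.randint(0, 1) for _ in nodes]
    # on every perception:      states = stateful_match(states, nodes, inputs, T)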
each cell in the maze environments is encoded with two binary bits , where white space is represented as a ` * ' , obstacles as ` o ' , and food as ` f ' .furthermore , actions are encoded in binary as shown in figure [ fig : maze - encoding ] . the task is simply to find the shortest path to the food ( f ) given a random start point .obstacles ( o ) represent cells which can not be occupied . in woods1the optimal number of steps to the food is 1.7 , in maze4 optimal is 3.5 steps , in woods101 it is 2.9 , and in woods102 it is 3.23 .a teletransportation mechanism is employed whereby a trial is reset if the agent has not reached the goal state within 50 discrete movements .+ the woods101 maze ( see figure [ fig : woods101 ] ) is a non - markov environment containing two _ communicating aliasing states _ ,i.e. , two positions which border on the same non - aliasing state and are identically sensed , but require different optimal actions .thus , to solve this maze optimally , a form of memory must be utilised ( with at least two internal states ) .optimal performance has previously been achieved in woods101 through the addition of a memory register mechanism in lcs , by a corporate lcs using rule - linkage , and by a neural lcs using recurrent links .furthermore , in a proof of concept experiment , the cyclical directed graph from neural programming has been shown capable of representing rules with memory to solve woods101 , however it was only found to do so twice in fifty experiments .figure [ fig : ddgp - xcsf - woods101-all ] shows the performance of ddgp - xcsf in the woods101 environment with , , , , , , , , and ( 16 inputs , 3 outputs , 1 match node ) . here ,optimality is observed after approximately 6,000 trials ( figure [ fig : ddgp - xcsf - woods101-perf ] ) .this is similar to the performance of lcs using a 1-bit memory register ( ,000 trials , ) .the number of macro - classifiers in the population converges to around 1800 ( figure [ fig : ddgp - xcsf - woods101-sizemut ] ) .furthermore , the average number of nodes in the networks increases by almost one and the number of connections declines fractionally ( figure [ fig : ddgp - xcsf - woods101-topology ] ) .the mutation rate ( also figure [ fig : ddgp - xcsf - woods101-sizemut ] ) declines rapidly from approx 35% to its lowest point , 1.2% , around the six thousandth trial , which is at the same moment optimal performance is also observed .lastly , figure [ fig : ddgp - xcsf - woods101-topology ] conveys that the first thousand trials sees a rapid increase in the number of cycles , , ( 30.6 to 34.4 ) and a rapid decrease in the value of ( 17 to 14.7 ) .subsequently , continues to increase , ( although at a much slower rate ) along with the average number of nodes in the networks ; remains stable at just fewer than 15 . + the woods102 maze ( see figure [ fig : woods102 ] ) is a non - markov environment containing _ aliasing conglomerates _ ,i.e. 
, adjacent aliasing states .the introduction of aliasing conglomerates increases the complexity of the learning task facing the agent significantly .`` it would appear that three memory - register bits are required to resolve [ the ] perceptual aliasing .however , since the two situations occur in separate parts of the environment , there is the possibility that an optimal policy could evolve in which certain register bits are used in more than one situation , thus requiring fewer bits in all .it is therefore not clear how large a bit - register is strictly necessary '' .however , in practice , register redundancy was found to be important and an 8-bit memory register was required within lcs to solve the maze optimally , with 2 and 4-bit registers achieving only 4 and 3.7 steps respectively ( ibid . ) .figure [ fig : ddgp - xcsf - woods102-all ] shows the performance of ddgp - xcsf in woods102 with the same parameters used in the prior experiment , however , here and .although a population size of 20,000 may seem disproportionate , a population of 2,000 classifiers was required for woods101 , representing a scale up of , which can be compared with the increase required by lcs with a memory register ( 800 to 6,000 , or ) , where the potential number of internal actions required rises from to ( ibid . ) , thus resources are clearly not increasing as quickly as the search space .optimality is observed after approximately 80,000 trials ( figure [ fig : ddgp - xcsf - woods102-perf ] ) , this is slower than lcs with an explicit 8-bit memory register ( ,000 trials , ) .however here the size of the memory did not need to be predetermined as it is inherent within the networks , and the action selection policy remains constant , with constant ga activity , unlike in .the number of macro - classifiers in the population converges to around 17,750 ( figure [ fig : ddgp - xcsf - woods102-sizemut ] ) .furthermore , the average number of nodes in the networks increases fractionally to 20.6 and the number of connections declines on average from 2.95 to 2.82 ( figure [ fig : ddgp - xcsf - woods102-topology ] ) .the mutation rate ( figure [ fig : ddgp - xcsf - woods102-sizemut ] ) declines rapidly over the first 40,000 trials from 32% to 5% and reaches its lowest point , 3.5% , at 100,000 trials .lastly , from figure [ fig : ddgp - xcsf - woods102-topology ] it can be seen that on average increases from 30 to 35 and from 17.5 to 20.5 . +continuous network models of genetic regulatory networks ( grn ) are an extension of boolean networks where nodes still represent genes , and the connections between them regulate the influence on gene expression .differential equations wherein gene interactions are incorporated as logical functions are a typical approach .there is a growing body of work exploring the evolution of different forms of such continuous - valued grn .for example , knabe _ et al ._ devised a model that allows the grouping of inputs to a node and is formally closer to a higher order recurrent neural network .this was later used to model the evolution of cellular differentiation and multicellular morphogenesis .another model is the dynamic recurrent gene network ( drgn ) which consist of a fully connected network of nodes , each with a continuous activation state in the range [ 0,1 ] , updated synchronously . herea distinction is made between structural nodes ( i.e. , nodes that specify the current state but have no regulatory output ) and regulatory nodes ( i.e. 
, nodes that only play a regulatory role ) .a single input node is used to specify the relative position of the cell in the lineage . to simulate the development of an organism , the node activations and the relative position inputare initialised .subsequently , cell division occurs through repeatedly duplicating the network , adjusting the relative positions in each network , and updating the states .the network weights are adapted through the use of an evolutionary algorithm .furthermore , dynamic bayesian networks ( dbn ) combine bayesian networks ( bn ) with features of hidden markov models , incorporating feedback cycles that simulate the temporal evolution of the network .dbn provide a stochastic model where both discrete and continuous states are possible .heuristics are used to learn the connectivity map and create additional hidden nodes .dbn have been shown to generalise many of the grn models including rbn ( see ) .fuzzy cellular automata ( fuzzy - ca ) are an extension of boolean cellular automata ( ca ) and consists of an array of cells ( lattice of nodes ) where the cells exist in real - valued states in the range [ 0,1 ] and ( typically ) update their states with synchronous parallelism in discrete time .traditionally , each cell calculates its next state depending upon its current state and the states of its closest neighbours .that is , fuzzy - ca may be seen as a graph with a ( typically ) restricted topology .since both transition and output functions are replaced by fuzzy relations , fuzzy - ca include deterministic and non - deterministic finite automata as special cases and were initially applied to pattern recognition and automatic control problems .following cattaneo _ , reiter investigated the affect of the fuzzy background on the dynamics of cellular automata with various fuzzy logic sets .they found that the choice of logic used leads to significantly different behaviours .for example , applying the various logical functions to create fuzzy versions of the game of life , it was noted that certain sets of logics generated fuzzy - ca that tended toward homogeneous fuzzy behaviour , whereas others were consistent with chaotic or complex behaviour .fuzzy set theory is a generalization of boolean logic wherein continuous variables can partially belong to sets .a fuzzy set is defined by a membership function , typically within the range [ 0,1 ] , that determines the degree of belonging to a value of that set .fuzzy set theory has been successfully applied to myriad engineering , medical , business , and natural science problems .genetic fuzzy systems ( gfs ) use gas to optimise a fuzzy rule based system composed of `` if - then '' rules , whose antecedents and consequents comprise fuzzy logic statements from fuzzy set theory . the first application of the ga - only , i.e. , pittsburgh , approach to learning a fuzzy rule base was by thrift .valenzuela - rendon provided the first use of the michigan approach for reinforcement learning with an evolving set of fuzzy rules .this was later extended to enable delayed - reward reinforcement learning , including continuous multi - step problems using continuous vector actions .fuzzy logic has been used in accuracy - based lcs for single - step reinforcement learning and for data mining on several uci data sets .in addition , fuzzy logic has been used under a lcs supervised learning scheme for data mining on uci data sets and for epidemiologic classification . 
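as a small illustration of the point that the choice of fuzzy logic matters, one common operator set (the zadeh min / max / complement logic) is sketched below alongside the product logic; both act on membership values in [0,1], and either could stand in for a node's boolean function.

    # zadeh (min/max) operators on membership values in [0, 1]
    def f_and(a, b): return min(a, b)
    def f_or(a, b):  return max(a, b)
    def f_not(a):    return 1.0 - a

    # product (probabilistic) alternatives; a different choice of logic,
    # which can lead to qualitatively different network dynamics
    def p_and(a, b): return a * b
    def p_or(a, b):  return a + b - a * b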
aside from using lcs , alternative rule - like approaches have been applied such as who used a ga to modify a fuzzy relational matrix of a one - input , one - output fuzzy model . by combining fuzzy logic with neural networks , neurons can deal with imprecision . bull and ohara presented a form of fuzzy representation within lcs using radial basis function neural networks ( rbf ) to embody each condition - action rule .that is , a simple class of neural - fuzzy hybrid system .furthermore , su _ et al . _ explored a similar representation based on rbf within lcs .however , here the contribution of each rule is determined by its strength ( which is updated by a fuzzy bucket brigade algorithm ) as well as the extent to which the antecedent matches the environment .furthermore , in contrast to bull and ohara , each condition - action rule corresponds to a hidden node instead of a fully - connected network and rules are added incrementally instead of being evolved through the ga . to date , only the use of rbf has been explored as a neuro - fuzzy hybrid representation within lcs .fuzzy logic networks ( fln ) can be seen as both a generalization of fuzzy - ca and rbn , where the boolean functions from rbn are replaced with fuzzy logical functions from fuzzy set theory .thus , fln generalize rbn through a continuous representation and generalize fuzzy - ca through a less restricted graph topology .kok and wang explored 3-gene regulation networks using fln and found that not only were fln able to represent the varying degrees of gene expression but also that the dynamics of the networks were able to mimic a cell s irreversible changes into an invariant state or progress through a periodic cycle .fln are defined as , given a set of variables ( genes ) , ( i=1 , 2 , ... , n)\ ] ] index represents time ; and the variables are updated by means of dynamic equations , where is a randomly chosen fuzzy logical function .the total number of choices for fuzzy logical functions is decided only by the number of inputs .if a node has inputs , then there are different fuzzy logical functions . in the definition of fln ,each node , has inputs ( see figure [ fig : example - fln ] ) .the membership function is defined as a function ] , where 0 represents no input to be received on that connection .each integer in the list is subjected to mutation on reproduction at the self - adapting rate for that rule .hence , within the representation , evolution can select different fuzzy logic functions for each node within a given network rule , along with its connectivity map .the 2-d continuous gridworld environment is a two dimensional environment wherein the current state is a real valued coordinate ^ 2 $ ] .the agent is initially randomly placed within the grid and attempts to find the shortest path to the goal , located in the upper right corner ; more specifically , in this paper the goal is found when , at which point the agent is given a fixed reward of 1000 , otherwise 0 is given .any action that would take the system outside of the environment moves the system to the nearest boundary .a teletransportation mechanism is employed whereby a trial is reset if the agent has not reached the goal state within 500 movements . 
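to keep the network model concrete before continuing with the test environments, here is a minimal sketch of one asynchronously updated fuzzy logic network cycle, reusing two-input operators like those above; the particular function set and data layout are illustrative assumptions, not the paper's implementation.

    import random

    FUZZY_FNS = [min, max,                      # fuzzy and / or
                 lambda a, b: 1.0 - a,          # complement of the first input
                 lambda a, b: a * b]            # product t-norm

    def fln_cycle(states, conns, fns):
        # one equivalent asynchronous cycle over real-valued node states in [0, 1];
        # fns[i] would be drawn from a set like FUZZY_FNS, and external inputs are
        # assumed to have been written into the corresponding entries of `states`
        # beforehand (two-input functions are used for brevity)
        n = len(states)
        for _ in range(n):
            i = random.randrange(n)
            a, b = states[conns[i][0]], states[conns[i][1]]
            states[i] = fns[i](a, b)
        return states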
as actions ,the agent may choose one of four possible movements ( north , south , east , or west ) each of which is a step size , , of 0.05 .the optimal number of steps is thus 18.6 .the continuous state space , combined with the long sequence of actions required to reach the goal , make the continuous gridworld one of the most challenging multistep problems hitherto considered by lcs .figure [ fig : fdgp - grid - all ] shows the performance of fdgp - xcsf in the continuous gridworld environment using the same parameters used by . however , here , ( 2 inputs , 2 outputs , 1 match node ) . from figure[ fig : fdgp - grid - perf ] it can be seen that an optimal solution is learnt around 30,000 trials , which is slower than xcsf with interval - conditions ( ,000 trials , ) , however is similar in performance to an mlp - based neural - xcsf .the average mutation rate within the networks ( see figure [ fig : fdgp - grid - sizemut ] ) declines rapidly from 40% to 5% after 10,000 trials and then declines at a slower rate until reaching a bottom around 2.5% after 50,000 trials .the number of ( non - unique ) macro - classifiers ( also figure [ fig : fdgp - grid - sizemut ] ) initially grows rapidly , reaching a peak at 10,000 before declining to around 6,900 .furthermore , from figure [ fig : fdgp - grid - top ] it can be seen that the average number of nodes in the fuzzy logic networks increases from 5 to 7.1 and the average number of connections within the networks remains near static around 2 .additionally , the average value of remains static around 10 , while the value of increases slightly , on average , from 26 to 27 .+ the frog problem is a single - step problem with a non - linear continuous - valued payoff function in a continuous one - dimensional space .a frog is given the learning task of jumping to catch a fly that is at a distance , , from the frog , where .the frog receives a sensory input , , before jumping a chosen distance , , and receiving a reward based on its new distance from the fly , as given by : in the continuous - action case , the frog may select any continuous number in the range [ 0,1 ] and thus the optimal achievable performance is 100% .wilson presented a form of xcsf where the action was computed directly as a linear combination of the input state and a vector of action weights , and conducted experimentation on the continuous - action frog problem , selecting the classifier with the highest prediction for exploitation . subsequently extended this by adapting the action weights to the problem through the use of an evolution strategy ( es ) .in addition to the action weights , a vector of standard deviations is maintained for use as the mutation step size by the es . during exploration ,the es is applied to each member of [ a ] to evolve the action weights and standard deviations , where each rule functions as a single parent producing an offspring via mutation ; the offspring is then evaluated on the current environment state and its fitness updated and compared with the parent , if the offspring has a higher fitness it replaces the parent , otherwise it is discarded . moreover ,the exploration action selection policy was modified from purely random to selecting the action with the highest prediction . 
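for concreteness, the frog problem's payoff is sketched below in the form it is usually given in this line of work (reward rising linearly to its maximum for a perfect jump and falling off symmetrically for overshooting); the exact functional form is an assumption here and should be checked against the cited papers.

    def frog_payoff(x, a):
        # x is the sensory input (larger when the fly is closer) and a the chosen
        # jump in [0, 1]; payoff peaks at 1.0 for a perfect jump and decreases
        # linearly for under- or over-shooting
        s = x + a
        return s if s <= 1.0 else 2.0 - s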
after reinforcement updates and running the es ,the ga is invoked using a combination of mixed crossover and mutation .they reported greater than 99% performance after an averaged number of 30,000 trials ( ) , which was superior to the performance reported by .more recently , ramirez - ruiz _ et al . _ applied a fuzzy - lcs with continuous vector actions , where the ga only evolved the action parts of the fuzzy systems , to the continuous - action frog problem , and achieved a lower error than q - learning ( discretized over 100 elements in and ) after 500,000 trials ( ) . to accommodate continuous - actions, the following modifications were made to fdgp - xcsf .firstly , the output nodes are no longer discretized , instead providing a real numbered output in the range [ 0,1 ] . after building [ m ] in the standard way , [ a ]is built by selecting a single classifier from [ m ] and adding matching classifiers whose actions are within a predetermined range of that rule s proposed action ( here the range , or window size , is set to ) .parameters are then updated and the ga executed as usual in [ a ] .exploitation functions by selecting the single ` best ' rule from [ m ] ; the following experiments compare the performance achieved using various criteria to select the best rule from the match set .the parameters used here are the same as used by and , i.e. , , , , , , , , .only one output node is required and thus .figure [ fig : fdgp - frog - all ] illustrates the performance of fdgp - xcsf in the continuous - action frog problem . from figure[ fig : fdgp - frog - asynch ] it can be seen that greater than 99% performance is achieved in fewer than 4,000 trials ( ) , which is faster than previously reported results ( % after 30,000 trials , ) ( % after 10,000 trials , ) , and with minimal changes resulting in none of the drawbacks ; i.e. , exploration is here conducted with roulette wheel on prediction instead of deterministically selecting the highest predicting rule , an approach more suitable for online learning .furthermore , in the action weights update component includes the evaluation of the offspring on the last input / payoff before being discarded if the mutant offspring is not more accurate than the parent ; therefore additional evaluations are performed which are not reflected in the number of trials reported . from figure[ fig : fdgp - frog - asynch - top ] it can be seen that the average number of ( non - unique ) macro - classifiers rapidly increases to approximately 1400 after 3,000 trials , before converging to around 150 ; this is more compact than xcsf with interval conditions ( ) , showing that fdgp - xcsf can provide strong generalisation .in addition , the networks grow , on average , from 3 nodes to 3.5 , and the average connectivity remains static around 1.9 . the average mutation rate declines from 50% to 2% over the first 15,000 trials before converging to around 1.2% and the average value of increases by from 28.5 to 31.5 .this paper has explored examples of a temporally dynamic graph - based representation updated with asynchronous parallelism ( dgp ) .the dgp syntax presented consists of each node receiving an arbitrary number of inputs from an unrestricted topology ( i.e. , recursive connections are permitted ) , and then performing an arbitrary function . the representation is evolved under a self - adaptive and open - ended scheme , allowing the topology to grow to any size to meet the demands of the problem space . 
in the discrete case , dgp is equivalent to a form of random boolean network ( rbn ) .it was shown that the xcsf learning classifier system is able to design ensembles of asynchronous rbn whose emergent behaviour can collectively solve discrete - valued computational tasks under a reinforcement learning scheme . in particular , it was shown possible to evolve and retrieve the content - addressable memory existing as locally stable limit points ( attractors ) within the asynchronously ( randomly ) updated networks when the final node states from the previous match processing cycle become the starting states for the next environmental input .furthermore , it was shown that the parameters controlling system sampling of the networks dynamical behaviour can be made to self - adapt to the temporal complexities of the target environment .the introduced system thus does not need prior knowledge of the dynamics of the solution networks necessary to represent the environment .in particular , the representation scheme was exploited to solve the woods102 non - markov maze ( i.e. , without extra mechanisms ) , a maze which has only previously been solved by lcs using an explicit 8-bit memory register .a significant advantage of the memory inherent within dgp is that each rule / network s short - term memory is variable - length and adaptive , i.e. , the networks can adjust the memory parameters , selecting within the limits of the capacity of the memory , what aspects of the input sequence are available for computing predictions .in addition , as the topology is variable - length , the maximum size of the short - term memory is open - ended , increasing as the number of nodes within the network grows .thus the maximum size of the content - addressable memory does not need to be predetermined .subsequently , the generality of the dgp scheme was further explored by replacing the selectable boolean functions with fuzzy logical functions , permitting the application to continuous - valued domains .specifically , the collective emergent behaviour of ensembles of asynchronous fuzzy logic networks were shown to be exploitable in solving continuous - valued input - output reinforcement learning problems , with similar performance to mlp - based neural - xcsf in the continuous - valued multi - step grid environment and superior performance to those reported previously in the frog problem .angeline , p.j . : an alternative to indexed memory for evolving programs with explicit state representations . in : proceedings of the 2nd annual conference on genetic programming .. 423430 .morgan kaufmann ( 1997 ) balan , g.c . , luke , s. : a demonstration of neural programming applied to non - markovian problems . in : proceedings of the 6th annual conference on genetic and evolutionary computation .gecco 04 , acm ( 2004 ) banzhaf , w. , nordin , p. , keller , r.e . ,francone , f.d .: genetic programming : an introduction : on the automatic evolution of computer programs and its applications .the morgan kaufmann series in artificial intelligence , morgan kaufmann ( 1997 ) bonarini , a. : fuzzy and crisp representations of real - valued input for learning classifier systems . in : learning classifier systems , from foundations to applications .lnai , vol .1813 , pp . 107124 .springer - verlag , berlin ( 2000 ) boyan , j. , moore , a. : generalization in reinforcement learning : safely approximating the value function . in : advances in neural information processing systems369376 . nips 1995 , mit press ( 1995 ) bull , l. 
: on using constructivism in neural classifier systems . in : merelo , j.j ., adamidis , p. , beyer , h.g .parallel problem solving from nature : ppsn vii , lecture notes in computer science , vol .2439 , pp .springer berlin / heidelberg ( 2002 ) bull , l. , hurst , j. , tomlinson , a. : self - adaptive mutation in classifier system controllers . in : meyer , j.a ., berthoz , a. , floreano , d. , roitblat , h. , wilson , s.w .( eds . ) from animals to animats 6 , proceedings of the sixth international conference on simulation of adaptive behavior .460468 . mit press ( 2000 ) bull , l. , hurst , j. : a neural learning classifier system with self - adaptive constructivism . in : evolutionary computation , 2003 .the ieee congress on .vol . 2 , pp .991997 . ieee press ( december 2003 ) bull , l. , ohara , t. : accuracy - based neuro and neuro - fuzzy classifier systems . in : proceedings of the genetic and evolutionary computation conference .gecco 02 , morgan kaufmann publishers inc ., san francisco , ca , usa ( 2002 ) bull , l. , preen , r.j . : on dynamical genetic programming : random boolean networks in learning classifier systems . in : proceedings of the 12th european conference on genetic programming .eurogp 09 , springer - verlag , berlin , heidelberg ( 2009 ) cao , y. , wang , p. , tokuta , a. : gene regulatory network modeling : a data driven approach . in : wang ,p. , ruan , d. , kerre , e. ( eds . ) fuzzy logic , studies in fuzziness and soft computing , vol .springer berlin / heidelberg ( 2007 ) di , j. , lala , p.k . : cellular array - based delay - insensitive asynchronous circuits design and test for nanocomputing systems .journal of electronic testing : theory and applications 23 , 175192 ( june 2007 ) fogel , d.b ., angeline , p.j . , fogel , d.b .: an evolutionary programming approach to self - adaptation on finite state machines . in : proceedings of the fourth annual conference on evolutionary programming .mit press ( 1995 ) fogel , l.j . ,owens , a.j . ,walsh , m.j . : artificial intelligence through a simulation of evolution . in : biophysics and cybernetic systems : proceedings of the 2nd cybernetic sciences symposium .. 131155 .spartan book co. , washington , d.c . , usa ( 1965 ) ghahramani , z. : learning dynamic bayesian networks . in : adaptive processing of sequences and data structures , international summer school on neural networks , `` e.r .caianiello''-tutorial lectures .. 168197 .springer - verlag , london , uk ( 1998 ) ghozeil , a. , fogel , d.b . : discovering patterns in spatial data using evolutionary programming . in : proceedings of the first annual conference on genetic programming .gecco 96 , mit press , cambridge , ma , usa ( 1996 ) hirasawa , k. , okubo , m. , katagiri , h. , hu , j. , murata , j. : comparison between genetic network programming ( gnp ) and genetic programming ( gp ) . in : evolutionary computation , 2001 .proceedings of the ieee congress on .vol . 2 , pp .ieee press ( 2001 ) ioannides , c. , browne , w. : investigating scaling of an abstracted lcs utilising ternary and s - expression alphabets . in : bacardit , j. , bernado - mansilla , e. , butz , m.v . , kovacs , t. , llora , x. , takadama , k. ( eds . ) learning classifier systems , pp .springer - verlag , berlin , heidelberg ( 2008 ) knabe , j.f ., schilstra , m.j . ,nehaniv , c.l . : evolution and morphogenesis of differentiated multicellular organisms : autonomously generated diffusion gradients for positional information . 
in : proceedings of the 7th german workshop on artificial life 2006 .gwal-7 , akademische verlagsgesellschaft aka ( 2006 ) knabe , j.f . ,schilstra , m.j . ,nehaniv , c.l . : evolution and morphogenesis of differentiated multicellular organisms : autonomously generated diffusion gradients for positional information . in : artificial life xi : proceedings of the eleventh international conference on the simulation and synthesis of living systems .321328 . mit press ( 2008 ) kok , t. , wang , p. : a study of 3-gene regulation networks using nk - boolean network model and fuzzy logic networking . in : kahraman , c. ( ed . )fuzzy applications in industrial engineering , studies in fuzziness and soft computing , vol .springer berlin / heidelberg ( 2006 ) lanzi , p.l . : mining interesting knowledge from data with the xcs classifier system . in : proceedings of the genetic and evolutionary computation conference .. 958965 .gecco 01 , morgan kaufmann ( 2001 ) lanzi , p.l . ,loiacono , d. , wilson , s.w . ,goldberg , d.e . : xcs with computed prediction in continuous multistep environments . in : evolutionary computation , 2005 .the 2005 ieee congress on .vol . 3 , pp .ieee press ( september 2005 ) lanzi , p.l . , perrucci , a. : extending the representation of classifier conditions part ii : from messy coding to s - expressions . in : proceedings of the genetic and evolutionary computation conference .gecco 99 , morgan kaufmann ( 1999 ) lanzi , p.l . , rocca , s. , sastry , k. , solari , s. : analysis of population evolution in classifier systems using symbolic representations . in : bacardit , j. , bernad - mansilla , e. ,butz , m.v ., kovacs , t. , llor , x. , takadama , k. ( eds . ) learning classifier systems , lecture notes in computer science , vol .4998 , pp .springer berlin / heidelberg ( 2008 ) lemke , n. , mombach , j.c.m ., bodmann , b.e.j . : a numerical investigation of adaptation in populations of random boolean networks .physica a : statistical mechanics and its applications 301 , 589600 ( 2001 ) loiacono , d. , lanzi , p.l . : computed prediction in binary multistep problems . in : evolutionary computation , 2008 .( ieee world congress on computational intelligence ) .ieee congress on .ieee press ( june 2008 ) miller , j.f . : an empirical study of the efficiency of learning boolean functions using a cartesian genetic programming approach . in : proceedings of the genetic and evolutionary computation conference .. 11351142 .gecco 99 , morgan kaufmann ( 1999 ) mozer , m.c . : neural net architectures for temporal sequence processing . in : weigend , a.s . ,gershenfeld , n.a .time series prediction : forecasting the future and understanding the past , pp .addison - wesley ( 1994 ) orriols - puig , a. , casillas , j. , bernad - mansilla , e. : fuzzy - ucs : preliminary results . in : proceedings of the 2007 gecco conference companion on genetic and evolutionary computation .. 28712874 .gecco 07 , acm , new york , ny , usa ( 2007 ) pearl , j. : bayesian networks : a model of self - activated memory for evidential reasoning, university of california , los angeles ( 1985 ) , http://ftp.cs.ucla.edu/tech-report/198_-reports/850021.pdf perkis , t. : stack - based genetic programming . in : evolutionary computation , 1994 .ieee world congress on computational intelligence ., proceedings of the first ieee conference on .. 148153 .ieee press ( june 1994 ) preen , r.j . , bull , l. : discrete dynamical genetic programming in xcs . 
in : proceedings of the 11th annual conference on genetic and evolutionary computation .. 12991306 .gecco 09 , acm , new york , ny , usa ( 2009 ) pujol , j.c.f . ,poli , r. : efficient evolution of asymmetric recurrent neural networks using a pdgp - inspired two - dimensional representation . in : proceedings of the first european workshop on genetic programming .. 130141 .springer - verlag , london , uk ( 1998 ) quick , t. , nehaniv , c. , dautenhahn , k. , roberts , g. : evolving embedded genetic regulatory network - driven control systems . in : proceedings of the seventh european artificial life conference .. 266277 .springer , heidelberg ( 2003 ) ramirez ruiz , j.a ., valenzuela - rendn , m. , terashima - marn , h. : qfcs : a fuzzy lcs in continuous multi - step environments with continuous vector actions . in : rudolph , g. , jansen , t. , lucas , s.m ., poloni , c. , beume , n. ( eds . ) parallel problem solving from nature : ppsn x. pp .springer - verlag , berlin , heidelberg ( 2008 ) schmidt , m. , lipson , h. : comparison of tree and graph encodings as function of problem complexity . in : proceedings of the 9th annual conference on genetic and evolutionary computation .. 16741679 .gecco 07 , acm , new york , ny , usa ( 2007 ) shirakawa , s. , ogino , s. , nagao , t. : graph structured program evolution . in : proceedings of the 9th annual conference on genetic and evolutionary computation .. 16861693 .gecco 07 , acm , new york , ny , usa ( 2007 ) teller , a. , veloso , m. : neural programming and an internal reinforcement policy . in : koza , j.r .( ed . ) late breaking papers at the genetic programming 1996 conference .. 186192 .stanford university ( 1996 ) teller , a. , veloso , m. : pado : a new learning architecture for object recognition . in : ikeuchi , k. , veloso , m. ( eds . ) symbolic visual learning , pp .oxford university press , inc ., new york , ny , usa ( 1997 ) tran , h.t . , sanza , c. , duthen , y. , nguyen , t.d . : xcsf with computed continuous action . in : proceedings of the 9th annual conference on genetic and evolutionary computation .. 18611869 .gecco 07 , acm , new york , ny , usa ( 2007 ) valenzuela - rendn , m. : the fuzzy classifier system : a classifier system for continuously varying variables . in : proceedings of the fourth international conference on genetic algorithms .. 346353 .morgan kaufmann publishers inc ., san francisco , ca , usa ( 1991 ) wilson , s.w .: classifier systems for continuous payoff environments . in : genetic and evolutionary computation gecco 2004 , lecture notes in computer science , vol . 3103 , pp .springer berlin / heidelberg ( 2004 ) wilson , s.w . : three architectures for continuous action . in : proceedings of the 2003 - 2005 international conference on learning classifier systems .. 239257 .iwlcs03 - 05 , springer - verlag , berlin , heidelberg ( 2007 ) wilson , s.w .: classifier conditions using gene expression programming . in : bacardit , j. , bernado - mansilla , e. , butz , m.v . , kovacs , t. , llora , x. , takadama , k. ( eds . ) learning classifier systems .. 206217 .springer - verlag , berlin , heidelberg ( 2008 ) wuensche , a. : basins of attraction in network dynamics : a conceptual framework for biomolecular networks . in : schlosser , g. , wagner , g.p .modularity in development and evolution , pp .chicago , university press ( 2004 )
|
a number of representation schemes have been presented for use within learning classifier systems , ranging from binary encodings to neural networks . this paper presents results from an investigation into using discrete and fuzzy dynamical system representations within the xcsf learning classifier system . in particular , asynchronous random boolean networks are used to represent the traditional condition - action production system rules in the discrete case and asynchronous fuzzy logic networks in the continuous - valued case . it is shown possible to use self - adaptive , open - ended evolution to design an ensemble of such dynamical systems within xcsf to solve a number of well - known test problems .
|
ontologies and knowledge bases such as wordnet or yago are extremely useful resources for query expansion , coreference resolution , question answering ( siri ) , information retrieval ( google knowledge graph ) , or generally providing inference over structured knowledge to users .much work has focused on extending existing knowledge bases using patterns or classifiers applied to large corpora .we introduce a model that can accurately learn to add additional facts to a database using only that database .this is achieved by representing each entity ( i.e. , each object or individual ) in the database by a vector that can capture facts and their certainty about that entity .each relation is defined by the parameters of a novel neural tensor network which can explicitly relate two entity vectors and is more powerful than a standard neural network layer .furthermore , our model allows us to ask whether even entities that were not in the database are in certain relationships by simply using distributional word vectors .these vectors are learned by a neural network model using unsupervised text corpora such as wikipedia .they capture syntactic and semantic information and allow us to extend the database without any manually designed rules or additional parsing of other textual resources .the model outperforms previously introduced related models such as that of bordes et al .we evaluate on a heldout set of relationships in wordnet .the accuracy for predicting unseen relations is 75.8% .we also evaluate in terms of ranking .for wordnet , there are 38,696 different entities and we use 11 relationship types . on average for each left entity there are 100 correct entities in a specific relationship . for instance , _ dog _ has many hundreds of hyponyms such as _ puppy , barker _ or _dachshund_. in 20.9% of the relationship triplets , the model ranks the correct test entity in the top 100 out of 38,696 possible entities .there is a vast amount of work extending knowledge bases using external corpora , among many others . in contrast , little work has been done in extensions based purely on the knowledge base itself .the work closest to ours is that by bordes et al .we implement their approach and compare to it directly .our model outperforms it by a significant margin in terms of both accuracy and ranking .both models can benefit from initialization with unsupervised word vectors .another related approach is that by sutskever et al . who use tensor factorization and bayesian clustering for learning relational structures . instead of clustering the entities in a nonparametric bayesian framework we rely purely on learned entity vectors .their computation of the truth of a relation can be seen as a special case of our proposed model . 
instead of using mcmc for inference , we use standard backpropagation which is modified for the neural tensor network . lastly , we do not require multiple embeddings for each entity . instead , we consider the subunits ( space separated words ) of entity names . this allows more statistical strength to be shared among entities . many methods that use knowledge bases as features such as could benefit from a method that maps the provided information into vector representations . we learn to modify unsupervised word representations via grounding in world knowledge . this essentially allows us to analyze word embeddings and query them for specific relations . furthermore , the resulting vectors could be used in other tasks such as ner or relation classification in natural language . lastly , ranzato et al . introduced a factored 3-way restricted boltzmann machine which is also parameterized by a tensor . in this section we describe the full neural tensor network . we begin by describing the representation of entities and continue with the model that learns entity relationships . we compare using both randomly initialized word vectors and pre - trained -dimensional word vectors from the unsupervised model of collobert and weston . using free wikipedia text , this model learns word vectors by predicting how likely it is for each word to occur in its context . the model uses both local context in the window around each word and global document context . similar to other local co - occurrence based vector space models , the resulting word vectors capture distributional syntactic and semantic information . for further details and evaluations of these embeddings , see . for cases where the entity name has multiple words , we simply average the word vectors . the neural tensor network ( ntn ) replaces the standard linear layer with a bilinear layer that directly relates the two entity vectors . let e_1 , e_2 \in \mathbb{r}^d be the vector representations of the two entities . we can compute a score of how plausible they are in a certain relationship by the following ntn - based function : g(e_1 , r , e_2) = u_r^t f\left( e_1^t w_r^{[1:k]} e_2 + v_r \left[ \begin{matrix} e_1 \\ e_2 \end{matrix} \right] + b_r \right) , where f is a standard nonlinearity . we define w_r^{[1:k]} \in \mathbb{r}^{d \times d \times k} as a tensor and the bilinear tensor product e_1^t w_r^{[1:k]} e_2 results in a vector h \in \mathbb{r}^{k} , where each entry is computed by one slice of the tensor : h_i = e_1^t w_r^{[i]} e_2 . the remaining parameters for relation r are the standard form of a neural network : v_r \in \mathbb{r}^{k \times 2d} and u_r , b_r \in \mathbb{r}^{k} . the main advantage of this model is that it can directly relate the two inputs instead of only implicitly through the nonlinearity . the bilinear model for truth values in becomes a special case of this model with v_r = 0 , b_r = 0 , k = 1 and f the identity . in order to train the parameters , we minimize the following contrastive max - margin objective : j = \sum_{i=1}^{n} \sum_{c=1}^{c} \max\left( 0 , 1 - g\left( t^{(i)} \right) + g\left( t_c^{(i)} \right) \right) , where n is the number of training triplets , t_c^{(i)} denotes a corrupted version of the correct triplet t^{(i)} , and we score the correct relation triplets higher than a corrupted one in which one of the entities was replaced with a random entity .
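a minimal numpy sketch of the scoring function above, for a single relation's parameters and with f taken to be tanh (an assumption; the text only requires a standard nonlinearity):

    import numpy as np

    def ntn_score(e1, e2, W, V, b, u, f=np.tanh):
        # W: (d, d, k) tensor, V: (k, 2d), b: (k,), u: (k,) for one relation type
        k = W.shape[2]
        bilinear = np.array([e1 @ W[:, :, i] @ e2 for i in range(k)])   # slices e1^T W^[i] e2
        standard = V @ np.concatenate([e1, e2]) + b                     # ordinary neural layer
        return float(u @ f(bilinear + standard))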
for each correct triplet we sample random corrupted entities .the model is trained by taking gradients with respect to the five sets of parameters and using minibatched l - bfgs .in our experiments , we follow the data settings of wordnet in .there are a total of 38,696 different entities and 11 relations .we use 112,581 triplets for training , 2,609 for the development set and 10,544 for final testing .the wordnet relationships we consider are _ has instance , type of , member meronym , member holonym , part of , has part , subordinate instance of , domain region , synset domain region , similar to , domain topic_. we compare our model with two models in bordes et al . , which have the same goal as ours .the model of has the following scoring function : where .the model of also maps each relation type to an embedding and scores the relationships by : where . in the comparisons below, we call these two models the _ similarity model _ and the _ hadamard model _ respectively . while our function scores correct triplets highly , these two models score correct triplets lower .all models are trained in a contrastive max - margin objective functions .our goal is to predict `` correct '' relations in the testing data .we can compute a score for each triplet .we can consider either just a classification accuracy result as to whether the relation holds , or look at a ranking of , for considering relative confidence in particular relations holding .we use a different evaluation set from bordes et al . because it has became apparent to us and them that there were issues of overlap between their training and testing sets which impacted the quality and interpretability of their evaluation . for each triplet , we compute the score for all other entities in the knowledge base .we then sort values by decreasing order and report the rank of the correct entity . for wordnet the total number of entitiesis .some of the questions relating to triplets are of the form `` a is a type of ? '' or`` a has instance ? ''since these have multiple correct answers , we report the percentage of times that is ranked in the top of the list ( recall @ 100 ) .the higher this number , the more often the specific correct test entity has likely been correctly estimated .after cross - validation of the hyperparameters of both models on the development fold , our neural tensor net obtains a ranking recall score of 20.9% while the similarity model achieves 10.6% , and the hadamard model achieves only 7.4% .the best performance of the ntn with random initialization instead of the semantic vectors drops to 16.9% and the similarity model and the hadamard model only achieve 5.7% and 7.1% . in this experiment , we ask the model whether any arbitrary triplet of entities and relations is true or not . with the help of the large vocabulary of semantic word vectors, we can query whether certain wordnet relationships hold or not even for entities that were not originally in wordnet .we use the development fold to find a threshold for each relation such that if , the relation holds , otherwise it is considered false . in order to create negative examples ,we randomly switch entities and relations from correct testing triplets , resulting in a total of triplets .the final accuracy is based on how many of of triplets are classified correctly .the neural tensor network achieves an accuracy of 75.8% with semantically initialized entity vectors and 70.0% with randomly initialized ones . 
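the ranking evaluation described above amounts to the following sketch, where score is the trained scoring function and entities is the full entity list:

    def recall_at_k(score, triplets, entities, k=100):
        # for each test triplet, rank the correct right-hand entity against all
        # candidates and count how often it lands in the top k
        hits = 0
        for e1, rel, e2 in triplets:
            s_true = score(e1, rel, e2)
            rank = 1 + sum(score(e1, rel, cand) > s_true for cand in entities)
            hits += (rank <= k)
        return hits / len(triplets)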
in comparison ,the similarity based model only achieve 66.7% and 51.6% , the hadamard model achieve 71.9% and 68.2% with the same setup .all models improve in performance if entities are represented as an average of their word vectors but we will leave experimentation with this setup to future work .we introduced a new model based on neural tensor networks . unlike previous models for predicting relationships purely using entity representations in knowledge bases ,our model allows direct interaction of entity vectors via a tensor .this architecture allows for much better performance in terms of both ranking correct answers out of tens of thousands of possible ones and predicting unseen relationships between entities .it enables the extension of databases even without external textual resources but can also benefit from unsupervised large corpora even without manually designed extraction rules .
|
knowledge bases provide applications with the benefit of easily accessible , systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations . much work has focused on building or extending them by finding patterns in large unannotated text corpora . in contrast , here we mainly aim to complete a knowledge base by predicting additional true relationships between entities , based on generalizations that can be discerned in the given knowledgebase . we introduce a neural tensor network ( ntn ) model which predicts new relationship entries that can be added to the database . this model can be improved by initializing entity representations with word vectors learned in an unsupervised fashion from text , and when doing this , existing relations can even be queried for entities that were not present in the database . our model generalizes and outperforms existing models for this problem , and can classify unseen relationships in wordnet with an accuracy of 75.8% .
|
the method of controlled lagrangians for stabilization of relative equilibria ( steady state motions ) originated in bloch , leonard , and marsden and was then developed in auckly , bloch , leonard , and marsden , bloch , chang , leonard , and marsden , and hamberg .a similar approach for hamiltonian controlled systems was introduced and further studied in the work of blankenstein , ortega , van der schaft , maschke , spong , and their collaborators ( see , e.g. , and related references ) .the two methods were shown to be equivalent in and a nonholonomic version was developed in , and . in the controlled lagrangian approach ,one considers a mechanical system with an uncontrolled ( free ) lagrangian equal to kinetic energy minus potential energy . to start with, one considers the case in which the lagrangian is invariant with respect to the action of a lie group on the configuration space . to stabilize a relative equilibrium of interest, the kinetic energy is modified to produce a _ controlled lagrangian _ which describes the dynamics of the controlled closed - loop system .the equations corresponding to this controlled lagrangian are the closed - loop equations and the new terms appearing in those equations corresponding to the directly controlled variables correspond to control inputs .the modifications to the lagrangian are chosen so that no new terms appear in the equations corresponding to the variables that are not directly controlled .this process of obtaining controlled euler lagrange equations by modifying the original lagrangian is referred to as _kinetic matching_. one advantage of this approach is that once the form of the control law is derived using the controlled lagrangian , the stability of a relative equilibrium of the closed - loop system can be determined by energy methods , using any available freedom in the choice of the parameters of the controlled lagrangian . to obtain asymptotic stabilization , dissipation - emulating termsare added to the control input .the method is extended in to the class of lagrangian mechanical systems with potential energy that may break symmetry , _ i.e. _ , there is still a symmetry group for the kinetic energy of the system but one may now have a potential energy that need not be -invariant .further , in order to define the controlled lagrangian , a modification to the potential energy is introduced that also breaks symmetry in the group variables . after adding the dissipation - emulating terms to the control input, this procedure allows one to achieve complete state - space asymptotic stabilization of an equilibrium of interest .the main objective of this paper is to develop the method of controlled lagrangians for discrete mechanical systems .the discretization is done in the spirit of discrete variational mechanics , as in . in particular , as the closed loop dynamics of a controlled lagrangian system is itself lagrangian , it is natural to adopt a variational discretization that exhibits good long - time numerical stability .this study is also motivated by the recent development of structure - preserving algorithms for the numerical simulation of discrete controlled systems , such as recent work on discrete optimization , such as in .the matching procedure is carried out explicitly for discrete systems with one shape and one group degree of freedom to avoid technical issues and to concentrate on the new phenomena that emerge in the discrete setting that have not been observed in the continuous - time theory . 
in particular, it leads one to either carefully select the momentum levels or introduce a new term in the controlled lagrangian to perform the discrete kinetic matching .further , when the potential shaping is carried out , it is necessary to introduce non - conservative forcing in the shape equation associated with the controlled lagrangian .it is also shown that once energetically stabilized , the ( relative ) equilibria of interest can be asymptotically stabilized by adding dissipation emulating terms .the separation of controlled dissipation from physical dissipation remains an interesting topic for future research ; even in the continuous theory there are interesting questions remaining , as discussed in .the theoretical analysis is validated by simulating the discrete cart - pendulum system on an incline . when dissipation is added , the inverted pendulum configuration is seen to be asymptotically stabilized , as predicted .the discrete controlled dynamics is used to construct a real - time model predictive controller with piecewise constant control inputs .this serves to illustrate how discrete mechanics can be naturally applied to yield digital controllers for mechanical systems .the paper is organized as follows : in sections [ discrete_mech.sec ] and [ matching.sec ] we review discrete mechanics and the method of controlled lagrangians for stabilization of equilibria of mechanical systems . the discrete version of the potential shaping procedure and related stability analysis are discussed in section [ discrete_matching.sec ] .the theory is illustrated with the discrete cart - pendulum system in section [ disc_cart_pendulum.sec ] .simulations and the construction of the digital controller are presented in sections [ simulations.sec ] and [ digital.sec ] . in a future publicationwe intend to treat discrete systems with nonabelian symmetries as well as systems with nonholonomic constraints .a discrete analogue of lagrangian mechanics can be obtained by considering a discretization of hamilton s principle ; this approach underlies the construction of variational integrators . see marsden and west , andreferences therein , for a more detailed discussion of discrete mechanics . consider a lagrangian mechanical system with configuration manifold and lagrangian .a key notion is that of a _ discrete lagrangian _ , which is a map that approximates the action integral along an exact solution of the euler lagrange equations joining the configurations , ,q ) } \int_0^h l(q,\dot q)\,dt,\ ] ] where ,q) ] with , , and denotes extremum . in the discrete setting ,the action integral of lagrangian mechanics is replaced by an action sum where , , is a finite sequence of points in the configuration space .the equations are obtained by the discrete hamilton principle , which extremizes the discrete action given fixed endpoints and .taking the extremum over gives the _ discrete euler lagrange equations _ for .this implicitly defines the update map , where and replaces the phase space of lagrangian mechanics .since we are concerned with control , we need to consider the effect of external forces on lagrangian systems . in the context of discrete mechanics ,this is addressed by introducing the _ discrete lagrange dalembert principle _( see kane , marsden , ortiz , and west ) , which states that for all variations of that vanish at the endpoints . 
here, denotes the vector of positions , and , where .the discrete one - form on approximates the impulse integral between the points and , just as the discrete lagrangian approximates the action integral .we define the maps by the relations the discrete lagrange dalembert principle may then be rewritten as = 0\end{gathered}\ ] ] for all variations of that vanish at the endpoints .this is equivalent to the _ forced discrete euler lagrange equationsthis paper focuses on systems with one shape and one group degree of freedom .it is further assumed that the configuration space is the direct product of a one - dimensional shape space and a one - dimensional lie group .the configuration variables are written as , with , and .the velocity phase space , , has coordinates .the lagrangian is the kinetic minus potential energy - v ( q),\ ] ] with -invariant kinetic energy .the corresponding controlled euler lagrange dynamics is where is the control input .assume that the potential energy is -invariant , _ , and that the _ relative equilibria _ , are unstable and given by non - degenerate critical points of . to stabilize the relative equilibria , with respect to , kinetic shaping is used .the controlled lagrangian in this case is defined by where .this velocity shift corresponds to a new choice of the horizontal space ( see for details ) .the dynamics is just the euler lagrange dynamics for controlled lagrangian , lagrangian satisfies the simplified matching conditions of when the kinetic energy metric coefficient in is constant .setting defines the control input , makes equations and identical , and results in controlled momentum conservation by dynamics and . setting makes equations and reduced on the controlled momentum level identical .a very interesting feature of systems , and , is that the _ reduced _ dynamics are the same on all momentum levels , which follows from the independence of equations and of the group velocity .we will see in section [ discrete_matching.sec ] that this property does not hold in the discrete setting , and one has to carefully select the momentum levels when performing discrete kinetic shaping .now , consider the case when the kinetic energy is group invariant , but the potential is not .consider the special case when the potential energy is with having a local non - degenerate maximum at , and the goal is to stabilize the _ , .as it becomes necessary to shape the potential energy as well , the controlled lagrangian is defined by the formula = 0em where is an arbitrary negative - definite function , and is the equilibrium of interest .below we assume that , which can be always accomplished by an appropriate choice of local coordinates for each ( relative ) equilibrium .in discretizing the method of controlled lagrangians , we combine formulae , , and . in the rest of this paper, we will adopt the notations this allows us to construct a _ second - order accurate _discrete lagrangian thus , for a system with one shape and one group degree of freedom the discrete lagrangian is given by the formula - h v ( q _ { k+1/2}).\!\!\end{gathered}\ ] ] the discrete dynamics is governed by the equations where is the control input . 
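as a concrete , hedged illustration of how the discrete euler - lagrange equations define the next configuration implicitly from the two previous ones , the following sketch integrates an ordinary planar pendulum ( not the controlled cart - pendulum itself ) with the midpoint , second - order accurate discrete lagrangian described above ; the finite - difference slot derivatives and the root - finding bracket are implementation assumptions .

```python
import numpy as np
from scipy.optimize import brentq

# continuous lagrangian of an ordinary planar pendulum (angle measured from the
# downward vertical), chosen only to keep the sketch short: L = T - V
m, l, g = 1.0, 1.0, 9.81
L = lambda q, qdot: 0.5 * m * l ** 2 * qdot ** 2 + m * g * l * np.cos(q)

def Ld(a, b, h):
    """Midpoint (second-order accurate) discrete Lagrangian over one time step."""
    return h * L(0.5 * (a + b), (b - a) / h)

def D1(a, b, h, eps=1e-6):      # slot derivative w.r.t. the first argument
    return (Ld(a + eps, b, h) - Ld(a - eps, b, h)) / (2 * eps)

def D2(a, b, h, eps=1e-6):      # slot derivative w.r.t. the second argument
    return (Ld(a, b + eps, h) - Ld(a, b - eps, h)) / (2 * eps)

def del_step(q_prev, q_k, h):
    """Solve the discrete Euler-Lagrange equation
       D2 Ld(q_prev, q_k) + D1 Ld(q_k, q_next) = 0   for q_next."""
    residual = lambda q_next: D2(q_prev, q_k, h) + D1(q_k, q_next, h)
    return brentq(residual, q_k - 1.0, q_k + 1.0)   # bracket width is an assumption

h = 0.01
q = [0.3, 0.3]        # crude start: q_1 = q_0, i.e. approximately zero initial velocity
for _ in range(1000):
    q.append(del_step(q[-2], q[-1], h))
```

the crude initialization above stands in for the more careful conversion from a position - velocity initial condition , which is the issue discussed later for the simulations .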
at first, it will be assumed that the potential energy is -invariant , _ , and that relative equilibria of and in the absence of control input are unstable .we will see that one needs to either appropriately select the momentum levels or introduce a new parameter into the controlled lagrangian to complete the matching procedure .motivated by the continuous - time matching procedure ( see section [ matching.sec ] ) , we define the discrete controlled lagrangian by the formula .\!\!\!\end{gathered}\ ] ] where is the continuous - time controlled lagrangian .the dynamics associated with is equation is equivalent to the _ discrete controlled momentum conservation : _ where setting makes equations and identical and allows one to represent the discrete momentum equation as the discrete momentum conservation law [ discrete_matching.thm ] _ the dynamics determined by equations and restricted to the momentum level is equivalent to the dynamics of equations and restricted to the momentum level if and only if the matching conditions hold . _ solve equations and for and substitute the solutions in equations and , respectively .this process is a simple version of discrete reduction .a computation shows that the equations obtained this way are equivalent if and only if = 0 .\end{gathered}\ ] ] since and generically , equations and are equivalent if and only if which is equivalent to note that the momentum levels and _ are not _ the same ._ remark ._ as , formulae and become and respectively .that is , as , one recovers the continuous - time control input and the continuous - time matching condition , .condition becomes redundant after taking the limit , _i.e. , _ the reduced dynamics can be matched on arbitrary momentum levels in the continuous - time case , which agrees with observations made in section [ matching.sec ] .we now discuss an alternative matching procedure .define the discrete controlled lagrangian by the formula .\end{gathered}\ ] ] the discrete dynamics associated with this lagrangian is the discrete controlled momentum is given by formula and equation is equivalent to the discrete momentum conservation .[ alternative_discrete_matching.thm ] _ the dynamics and restricted to the momentum level is equivalent to the dynamics and restricted to the same momentum level if and only if the matching conditions hold ._ similar to the proof of theorem [ discrete_matching.thm ] , solve equation for and substitute the solution in equations and , respectively .a computation shows that the equations obtained this way are equivalent if and only if = 0,\!\!\end{gathered}\ ] ] which implies .note that in this case we add an extra term to the controlled lagrangian which eliminates the need for adjusting the momentum level ._ remark ._ the ratio becomes as .that is , as we let the time step go to , we obtain the continuous - time controlled lagrangian modified by a term which is a derivative of the function with respect to time .it is well - known that adding such a derivative term to a lagrangian does not change the dynamics associated with this lagrangian .the stability properties of the relative equilibria , of equations and are now investigated . 
_the relative equilibria , of equations and , with defined by , are * spectrally stable * if _let , where ( see section [ matching.sec ] ) .the linearization of the reduced dynamics and at is computed to be observe that the value of does not affect the linearized dynamics .the linearized dynamics preserves the quadratic approximation of the discrete energy the equilibrium of is stable if and only if the function is negative - definite at . the latter requirement is equivalent to condition ._ remark ._ the stability condition is identical to the stability condition of the continuous - time cart - pendulum system , and it can be rewritten as the spectrum of the linear map defined by belongs to the unit circle .spectral stability in this situation is not sufficient to conclude nonlinear stability .we now modify the control input by adding the _ kinetic discrete dissipation - emulating term _ in order to achieve the asymptotic stabilization of the upward position of the pendulum . in the above , is a positive constant .the discrete momentum conservation law becomes straightforward calculation shows that the spectrum of the matrix of the linear map defined by the reduced discrete dynamics belongs to the open unit disc .this implies that the equilibrium is asymptotically stable .recall that the the discrete dynamics associated with discrete lagrangian is governed by equations and , where is the control input .the goal of the procedure developed in this section is to stabilize the equilibrium of and . motivated by, we define the second - order accurate discrete controlled lagrangian by the formula where .the dynamics associated with is amended by the term in the discrete shape equation : this term is important for matching systems , and , . _the presence of the terms represents an interesting ( but manageable ) departure from the continuous theory ._ let the following statement is proved by a straightforward calculation : [ discrete_potential_matching.thm ] _ the dynamics , is equivalent to the dynamics , if and only if and are given by _ \\\nonumber & \quad \ , - \frac{h}{2 \rho } \left [ v _ { \varepsilon } ' ( s _ { k+\frac12 } ) + v _ { \varepsilon } ' ( s _ { k-\frac12 } ) \right ] \\ & \quad \ , - \frac{\gamma \delta \phi _ k \tau ( \phi _ { k+1/2 } ) - \gamma \delta \phi _ { k-1 } \tau ( \phi _ { k-1/2})}{h } , \end{aligned}\ ] ] and \\ & \quad \ , + \tau ( \phi _ { k-\frac12 } ) \big [ \gamma \rho j _ { k-1 } + \frac{h}{2 } v ' _ { \varepsilon}(y _ { k-\frac12 } ) \big ] \\ & \quad \ , - \tau ' ( \phi _ { k+\frac12 } )j _ k \delta \phi _ k - \tau ' ( \phi _ { k-\frac12 } ) j _ { k - 1 } \delta \phi _ { k - 1 } \big ) , \end{aligned}\ ] ] where is obtained by substituting and in formula ._ remark ._ equations , define closed - loop dynamics when is given by formula .the terms vanish when as they become proportional to the left - hand side of equation .as in the case of kinetic shaping , the stability analysis is done by means of an analysis of the spectrum of the linearized discrete equations .we assume that the equilibrium to be stabilized is .[ linear_potential_stability.thm ] _ the equilibrium of equations and is * spectrally stable * if _the linearized discrete equations are where is the quadratic approximation of at the equilibrium ( _ i.e. 
_ , , , and in are replaced by , , and , respectively ) ._ note the absence of the term in equation ._ the linearized dynamics preserves the quadratic approximation of the discrete energy defined by where since is negative , the equilibrium of equations and is stable if the quadratic approximation of the discrete controlled energy is negative - definite .the latter requirement is equivalent to conditions .the spectrum of the linearized discrete dynamics in this case belongs to the unit circle ._ remarks ._ spectral stability in this situation is not sufficient to conclude nonlinear stability .the stability conditions are identical to the stability conditions of the corresponding continuous - time system .following , we now modify the control input by adding the _ potential discrete dissipation - emulating term _ in order to achieve the asymptotic stabilization of the equilibrium . in the above, is a constant .the linearized discrete dynamics becomes where is obtained by substituting and in formula .[ asymtotic_stability.thm ] _ the equilibrium of equations and is asymptotically stable if conditions are satisfied and is positive . _multiplying equations and by and , respectively , we obtain where is the quadratic approximation of the discrete energy .recall that is negative - definite ( see the proof of theorem [ linear_potential_stability.thm ] ) .it is possible to show that , in some neighborhood of , the quantity along a solution of equations and unless this solution is the equilibrium .therefore , increases along non - equilibrium solutions of and . since equations and are linear , this is only possible if the spectrum of and is inside the open unit disk , which implies asymptotic stability of the equilibrium of both linear system and and nonlinear system and with potenital discrete dissipation - emulating term added to .a basic example treated in earlier papers in the smooth setting is the _ pendulum on a cart_. let denote the position of the cart on the -axis , denote the angle of the pendulum with the upright vertical , and denote the elevation angle of the incline , as in figure [ cart.figure ] .cart_inclined_plane ( 94,6) ( 94,19,8) ( 79,80) ( 76,55) ( 80,38) ( 13,75) ( 60.5,60) ( 67,6.76) the configuration space for this system is , with the first factor being the pendulum angle and the second factor being the cart position .the symmetry group of the kinetic energy of the pendulum - cart system is that of translation in the variable , so .the length of the pendulum is , the mass of the pendulum is and that of the cart is . for the cart - pendulum system , , , ,are given by the potential energy is , where note that , and that the potential energy becomes -invariant when the plane is horizontal , _ i.e. , _ when .since the lagrangian for the cart - pendulum system is of the form , the discrete control laws and stabilize the upward vertical equilibrium of the _ pendulum_. 
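for orientation , with the pendulum angle measured from the upright vertical and the cart position measured along a horizontal track ( the level case , ) , one standard point - mass form of the kinetic and potential energy is the following ; this is a textbook form given only for illustration and need not coincide with the exact coefficients used above .

```latex
% point-mass cart-pendulum on a horizontal track (psi = 0); phi measured from
% the upright vertical, s = cart position (illustrative form, not the paper's)
\begin{align*}
  T(\phi,\dot\phi,\dot s) &= \tfrac12\, m l^{2}\dot\phi^{2}
                            + m l \cos\phi\,\dot\phi\,\dot s
                            + \tfrac12\,(M+m)\,\dot s^{2},\\
  V(\phi)                 &= m g l \cos\phi ,
\end{align*}
% so the upright state phi = 0 is a maximum of V, i.e. an unstable equilibrium.
```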
as in the continuous - time setting , the _ cart _ is stabilized by the symmetry - breaking controller and is not stabilized by the symmetry - preserving controller . simulations of the discrete cart - pendulum system are shown in the next section . simulating the behavior of the discrete controlled lagrangian system involves viewing equations and as an implicit update map . this presupposes that the initial conditions are given in the form ; however , it is generally preferable to specify the initial conditions as . this is achieved by solving the boundary condition for . once the initial conditions are expressed in the form , the discrete evolution can be obtained using the implicit update map . we first consider the case of kinetic shaping on a level surface ( with ) , when is twice the critical value , and without dissipation . here , , , , and . as shown in figure [ fig : discrete_kinetic_nodiss ] , the dynamics is stabilized , but since there is no dissipation , the oscillations are sustained . the dynamics exhibits both a drift and oscillations , as potential shaping is necessary to stabilize the translational dynamics . when dissipation is added , the dynamics is asymptotically stabilized , as shown in figure [ fig : discrete_kinetic_diss ] . however , even though the oscillations are damped , the dynamics retains a drift motion , as expected . we next consider the case of potential shaping on an inclined surface ( with ) without dissipation , with the other physical parameters as before . here , our goal is to regulate the cart at and the pendulum at . we set . the control gains are chosen to be , , and . it is worth noting that the discrete dynamics remain bounded near the desired equilibrium , and this behavior persists even for significantly longer simulation runs involving time - steps . to more clearly visualize the dynamics , we only include a 4000 time - step segment of this computation in figure [ fig : discrete_potential_nodiss ] . the exceptional stability of the discrete controlled trajectory can presumably be understood in terms of the bounded energy oscillations characteristic of symplectic and variational integrators . [ figure [ fig : discrete_kinetic_nodiss ] : discrete controlled dynamics with kinetic shaping and without dissipation . the discrete controlled system stabilizes the motion about the equilibrium , but the dynamics is not stabilized ; since there is no dissipation , the oscillations are sustained . ] [ figure [ fig : discrete_kinetic_diss ] : discrete controlled dynamics with kinetic shaping and dissipation . the discrete controlled system asymptotically stabilizes the motion about the equilibrium ; since there is no potential shaping , the dynamics is not stabilized , and there is a slow drift in . ] [ figure [ fig : discrete_potential_nodiss ] : discrete controlled dynamics with potential shaping and without dissipation . the discrete controlled system stabilizes the motion about the equilibrium ; since there is no dissipation , the oscillations are sustained . ] when dissipation is added , we obtain an asymptotically stabilizing control law , as illustrated in figure [ diss ] . this is consistent with the stability analysis of section [ disc_cart_pendulum.sec ] . [ figure [ diss ] : discrete controlled dynamics with potential shaping and dissipation . ] we now explore the use of the forced discrete euler lagrange equations as the model in a real - time model predictive controller , with piecewise constant control forces . algorithm 1 below describes the details of the procedure .
* sense * * sense * * solve * * solve * * actuate * for ] * sense * * solve * + * solve * + * actuate * for ] .this allows it to compute a symmetric finite difference approximation to the continuous control force at using the approximation where the overbar indicates that the position variable is being estimated by the numerical model .this control is then applied as a constant control input for the time interval ] while the controller senses the initial states , and computes the appropriate control forces .consequently , a combination of the forced and unforced discrete euler lagrange equations are used to predict the initial evolution of the system .we present the numerical simulation results for the digital controller in both the case of kinetic shaping ( figure [ digital_control_kinetic ] ) and potential shaping ( figure [ digital_control_potential ] ) .we see that in the case of kinetic shaping , the system is asymptotically stabilized in only the variable , and the dynamics exhibits a drift , whereas in the case of potential shaping , the system is asymptotically stabilized in both the and variables .notice that the use of a piecewise constant control introduces dissipation - like effects , which are reduced as the time - step is decreased .[ scale=.51 ] digital_kinetic ( 20.7,27.9 ) -155.3 to zero , but not .,title="fig : " ] ( 31.8,11.3 ) 7 to zero , but not .,title="fig : " ] ( 82.5,20.45 ) 180 to zero , but not .,title="fig : " ] ( 62.25,29.15 ) 102 to zero , but not .,title="fig : " ] [ scale=.51 ] cdc06_digital ( 22.7,29.9 ) -140.3 and to zero.,title="fig : " ] ( 31.8,12.6 ) 28.5 and to zero.,title="fig : " ] ( 70,11.52 ) -9.3 and to zero.,title="fig : " ] ( 87,21.15 ) 100 and to zero.,title="fig : " ]in this paper we have introduced potential shaping techniques for discrete systems and have shown that these lead to an effective numerical implementation for stabilization in the case of the discrete cart - pendulum model .the method in this paper is related to other discrete methods in control that have a long history ; recent papers that use discrete mechanics in the context of optimal control and celestial navigation are , , and .the method of discrete controlled lagrangians for systems with higher - dimensional configuration space and with non - commutative symmetry will be developed in a forthcoming paper .the research of amb was supported by nsf grants dms-0305837 , dms-0604307 , and cms-0408542 .the research of ml was partially supported by nsf grant dms-0504747 and a university of michigan rackham faculty grant .the research of jem was partially supported by afosr contract fa9550 - 05 - 1 - 0343 .the research of dvz was partially supported by nsf grants dms-0306017 and dms-0604108 .chang , d - e . , a.m. bloch , n.e .leonard , j.e .marsden , & c. woolsey , the equivalence of controlled lagrangian and controlled hamiltonian systems , _ control and the calculus of variations ( special issue dedicated to j.l .lions ) _ * 8 * , 2002 , 393422 .ortega , r. , m.w .spong , f. gmez - estern , & g. blankenstein , stabilization of a class of underactuated mechanical systems via interconnection and damping assignment , _ ieee trans .. control _ * 47 * , 2002 , 12181233 .sanyal , a. , j. shen , n.h .mcclamroch , & a.m. bloch , stability and stabilization of relative equilibria of the dumbbell satellite in central gravity , 2006 , _ journal of the american institute of aeronautics and astronautics , _( to appear ) .woolsey , c. , c. k. reddy , a. m. bloch , d. e. chang , n. e. 
leonard and j. e. marsden , controlled lagrangian systems with gyroscopic forcing and dissipation , _european journal of control _ , * 10 * , number 5 , 2004 .
|
controlled lagrangian and matching techniques are developed for the stabilization of relative equilibria and equilibria of discrete mechanical systems with symmetry as well as broken symmetry . interesting new phenomena arise in the controlled lagrangian approach in the discrete context that are not present in the continuous theory . in particular , to make the discrete theory effective , one can make an appropriate selection of momentum levels or , alternatively , introduce a new parameter into the controlled lagrangian to complete the kinetic matching procedure . specifically , new terms in the controlled shape equation that are necessary for potential matching in the discrete setting are introduced . the theory is illustrated with the problem of stabilization of the cart - pendulum system on an incline . the paper also discusses digital and model predictive controlers .
|
to robustly handle liquids , such as pouring a certain amount of water into a bowl , a robot must be able to perceive and reason about liquids in a way that allows for closed - loop control .liquids present many challenges compared to solid objects .for example , liquids can not be interacted with directly by a robot , instead the robot must use a tool or container ; often containers containing some amount of liquid are opaque , obstructing the robot s view of the liquid and forcing it to remember the liquid in the container , rather than re - perceiving it at each timestep ; and finally liquids are frequently transparent , making simply distinguishing them from the background a difficult task .taken together , these challenges make perceiving and manipulating liquids highly non - trivial .recent advances in deep learning have enabled a leap in performance not only on visual recognition tasks , but also in areas ranging from playing atari games to end - to - end policy training in robotics . in this paper , we investigate how deep learning techniques can be used for perceiving liquids during pouring tasks . we develop a method for generating large amounts of labeled pouring data for training and testing using a realistic liquid simulation and rendering engine , which we use to generate a data set with 10,122 pouring sequences , each 15 seconds long , for a total of 2,531 minutes of video or over 4.5 million labeled images . using this dataset, we evaluate multiple deep learning network architectures on the tasks of detecting liquid in an image and tracking the location of liquid even when occluded .our results show that deep networks able to detect and track liquid in a simulated environment with a reasonable degree of robustness .we also have preliminary results that show that these networks perform well in real environments .to the best of our knowledge , no prior work has investigated directly perceiving and reasoning about liquids . existing work relating to liquids either uses coarse simulations that are disconnected to real liquid perception and dynamics or constrained task spaces that bypass the need to perceive or reason directly about liquids . while some of this work has dealt with pouring , none of it has attempted to directly perceive liquids from raw sensory data .in contrast , in this work we directly approach this problem . similarly , investigated ways to detect pools of water from an unmanned ground vehicle navigating rough terrain .they detected water based on simple color features or sky reflections , and did nt reason about the dynamics of the water , instead treating it as a static obstacle . learned to categorize objects based on their interactions with running water , although the robot did not detect or reason about the water itself , rather it used the water as a means to learn about the objects .in contrast to , we use vision to directly detect the liquid itself , and unlike , we treat the liquid as dynamic and reason about it . 
in order to perceive liquids at the pixel level , we make use of fully - convolutional neural networks ( fcn ) .fcns have been successfully applied to the task of image segmentation in the past and are a natural fit for pixel - wise classification .in addition to fcns , we utilize long short - term memory ( lstm ) recurrent cells to reason about the temporal evolution of liquids .lstms are preferable over more standard recurrent networks for long - term memory as they overcome many of the numerical issues during training such as exploding gradients .lstm - based cnns have been successfully applied to many temporal memory tasks by previous work , and in fact lstms have even been combined with fcns by replacing the standard fully - connected layers of their lstms with convolution layers .we use a similar method in this paper .in order to train neural networks to perceive and reason about liquids , we must first have labeled data to train on .getting pixel - wise labels for real - world data can be difficult , so in this paper we opt to use a realistic liquid simulator . in this way we can acquire ground truth pixel labels while generating images that appear as realistic as possible .we train three different types of convolutional neural networks ( cnns ) on this generated data to detect and track the liquid : single - frame cnn , multi - frame cnn , and lstm - cnn .( 6.5,6.5 ) ( 0.0,0.0 ) we generate data using the 3d - modeling application blender and the library elbeem for liquid simulation , which is based on the lattice - boltzmann method for efficient , physically accurate liquid simulations .we separate the data generation process into two steps : simulation and rendering . during simulation ,the liquid simulator calculates the trajectory of the surface mesh of the liquid as the cup pours the liquid into the bowl .we vary 4 variables during simulation : the type of cup ( cup , bottle , mug ) , the type of bowl ( bowl , dog dish , fruit bowl ) , the initial amount of liquid ( 30% full , 60% full , 90% full ) , and the pouring trajectory ( slow , fast , partial ) , for a total of 81 simulations .each simulation lasts exactly 15 seconds for a total of 450 frames ( 30 frames per second ) .next we render each simulation .we separate simulation from rendering because it allows us to vary other variables that do nt affect the trajectory of the liquid mesh ( e.g. , camera viewpoint ) , which provides a significant speedup as liquid simulation is much more computationally intensive than rendering . in order to approximate realistic reflections , we mapped a 3d photo sphere image taken in our lab to the inside of a sphere , which we place in the scene surrounding all the objects . 
to prevent overfitting to a static background, we also add a plane in the image in front of the camera and behind the objects that plays a video of activity in our lab that approximately matches with that location in the background sphere .this setup is shown in fig .[ fig : blender_scene ] .the liquid is always rendered as 100% transparent , with only reflections , refractions , and specularities differentiating it from the background .for each simulation , we vary 6 variables : camera viewpoint ( 48 preset viewpoints ) , background video ( 8 videos ) , cup and bowl textures ( 6 textures each ) , liquid reflectivity ( normal , none ) , and liquid index - of - refraction ( air - like , low - water , normal - water ) .the 48 camera viewpoints were generated by varying the camera elevation ( level with the table and looking down at a 45 degree angle ) , camera distance ( 8 m , 10 m , and 12 m ) , and the camera azimuth ( the 8 points of the compass , with north , southwest , and south shown in the top , middle , and bottoms rows of fig .[ fig : data_gen ] ) respectively .we also generate negative examples without liquid . in total , this yields 165,888 possible renders for each simulation .it is infeasible to render them all , so we randomly sample variable values to render .the labels are generated for each object ( liquid , cup , bowl ) as follows .first , all other objects in the scene are set to render as invisible .next , the material for the object is set to render as a specific , solid color , ignoring lighting. the sequence is then rendered , yielding a class label for the object for each pixel .an example of labeled data ( right column ) and its corresponding rendered image ( left column ) is shown in fig .[ fig : data_gen ] .the cup , bowl , and liquid are rendered as red , green and blue respectively .note that this method allows each pixel to have multiple labels , e.g. , some of the pixels in the cup are labeled as both cup and liquid ( magenta in the right column of fig .[ fig : data_gen ] ) .to determine which of the objects , if any , is visible at each pixel , we render the sequence once more with all objects set to render as their respective colors , and we use the alpha channel in the ground truth images to encode the visible class label .( 12.0,7.0 ) ( 0.0,0.0 ) ( 3.0,0.0 ) ( 6.0,0.0 ) ( 9.0,0.0 ) ( 0.0,2.25 ) ( 3.0,2.25 ) ( 6.0,2.25 ) ( 9.0,2.25 ) ( 0.0,4.5 ) ( 3.0,4.5 ) ( 6.0,4.5 ) ( 9.0,4.5 ) ( 1.0,6.85)*rgb * ( 3.8,6.85)*detection * ( 6.8,6.85)*tracking * ( 9.8,6.85)*labels * to evaluate our learning architectures , we generated 10,122 pouring sequences by randomly selecting render variables as described above as well as generating negative sequences ( i.e. , sequences without any water ) , for a total of 4,554,900 training images . both the model files generated by blender and the rendered images for the entire dataset are available for download at the following link : http://rse - lab.cs.washington.edu / lpd/. we test three network layouts for the tasks of detecting and tracking liquids : cnn , mf - cnn , and lstm - cnn .all of our networks are fully - convolutional , that is , there are no fully - connected layers . in place of fully - connected layers used in more standard cnns , we use convolutional layers , which have a similar effect but prevent the explosion of parameters that normally occurs .we use the caffe deep learning framework to implement our networks . 
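the networks above are implemented in caffe ; purely as a rough illustration of such a fully - convolutional layout ( the single - frame variant described next ) , here is a sketch in pytorch , where the channel widths , kernel sizes and input resolution are assumptions rather than the exact configuration used above .

```python
import torch
import torch.nn as nn

class LiquidFCN(nn.Module):
    """Sketch of a single-frame fully-convolutional layout: five conv/relu/pool
    blocks, two 1x1 convolutions standing in for fully-connected layers, and a
    deconvolution (transposed convolution) that upsamples the per-pixel class
    scores back to the input resolution."""
    def __init__(self, n_classes=2, in_channels=3):
        super().__init__()
        widths = [in_channels, 16, 32, 64, 64, 64]          # assumed channel widths
        blocks = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]                     # five blocks -> 1/32 scale
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.Conv2d(64, 128, kernel_size=1),
                                  nn.ReLU(inplace=True),
                                  nn.Conv2d(128, n_classes, kernel_size=1))
        self.upsample = nn.ConvTranspose2d(n_classes, n_classes,
                                           kernel_size=64, stride=32, padding=16)

    def forward(self, x):
        return self.upsample(self.head(self.features(x)))  # per-pixel class scores

net = LiquidFCN()
x = torch.randn(1, 3, 256, 256)      # side lengths divisible by 32 for an exact match
print(net(x).shape)                   # -> torch.Size([1, 2, 256, 256])
```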
[ figure [ fig : network - cnn ] : layout of the single - frame cnn , a chain of convolution layers ( conv1 through conv5 ) followed by two convolution layers in place of fully - connected layers ( fc_conv1 , fc_conv2 ) and a deconvolution layer producing the output ; each of the convolution layers is followed by a rectified linear layer . ] [ figure [ fig : network - lstm ] : layout of the lstm - cnn , in which an lstm layer with recurrent - state and cell - state inputs and outputs is inserted after the convolution stack , and the recurrent input is processed by additional convolution layers ; refer to figure 1 of for more details on the lstm layer . ] the first layout is a standard convolutional neural network ( cnn ) . it takes in an image and outputs probabilities for each class label at each pixel . it has a fixed number of convolutional layers , each followed by a rectified linear layer and a max pooling layer . in place of fully - connected layers , we use two convolutional layers , each followed by a rectified linear layer . the last layer of the network is a deconvolutional layer that upsamples the output of the convolutional layers to be the same size as the input image . this network is shown in fig . [ fig : network - cnn ] . the second layout is a multi - frame cnn .
instead of taking in a single frame, it takes as input multiple consecutive frames and predicts the probability of each class label for each pixel at the last frame .it is similar to the single - frame cnn network shown in fig .[ fig : network - cnn ] except each frame is convolved independently through the first 5 convolution layers , and then the output for each frame is concatenated together channel - wise .this is fed to the two convolutional layers , each followed by a rectified linear layer , and finally a deconvolutional layer .we fix the number of input frames for this layout to 32 for this paper , i.e. , approximately 1 second s worth of data ( 30 frames per second ) , which we empirically determined strikes the best balance between window size and memory utilization .the third layout is similar to the single frame cnn layout , with the first convolutional layer replaced with a lstm layer ( see figure 1 of for a detailed layout of the lstm layer ) .we replace the fully - connected layers of a standard lstm with convolutional layers .the lstm takes as recurrent input the cell state from the previous timestep , its output from the previous timestep , and the output of the network from the previous timestep processed through 3 convolutional layers ( each followed by a rectified linear and max pooling layer ) . during training ,when unrolling the lstm - cnn , we initialize this last recurrent input with the ground truth at the first timestep , but during testing we use the standard recurrent network technique of initializing it with all zeros . fig .[ fig : network - lstm ] shows the layout of the lstm - cnn .we evaluated our networks on 4 experiments : fixed - viewpoint detection , multi - viewpoint detection , fixed - viewpoint tracking , and combined detection & tracking .we define the detection task as , given raw color images , determine where the _ visible _ liquid in the images is .we define the tracking task as , given segmented images ( i.e. , images that have already been run through a detector ) , determine where _ all _ liquid ( visible and occluded ) is in the image .intuitively , detection corresponds to perceiving the liquid , while tracking corresponds to reasoning about where the liquid is given what is ( and has been ) visible .every network was trained using the mini - batch gradient descent method adam with a learning rate of 0.0001 and default momentum values .each network was trained for 61,000 iterations , at which point performance tended to plateau .all single - frame networks were trained using a batch size of 32 ; all multi - frame networks with a window of 32 and batch size of 1 ; and all lstm networks with a batch size of 5 .for all experiments except the third ( fixed - viewpoint tracking ) , the input images were scaled to resolution .the error signal was computed using the softmax with loss layer built into caffe .we empirically determined , however , that naively training a network in this setup results in it predicting no liquid present in any scene at all due to the significant positive - negative class imbalance ( most of the pixels in each image are non - liquid pixels ) . 
to counteract this we employed two strategiesthe first was to pre - train the network on crops of the image around liquid pixels .since our networks are fully - convolutional , they can have variable sized inputs and outputs , which means a network pre - trained in this manner can be immediately trained on full images without needing any modification .the second strategy was to weight the gradients from the error signal based on the class of the ground truth pixel : 1.0 for positive pixels and 0.1 for negative pixels .this decreases the effect of the non - liquid pixels and prevents the network from predicting no liquid in the scene .we report the precision and recall of each network on a hold - out test set , evaluated on pixel - wise classifications .we also report the precision and recall for various amounts of `` slack , '' i.e. , we count a pixel labeled as liquid correct if it is within pixels of a ground truth liquid pixel , where is the amount of slack .this better evaluates the network in cases where it s predictions are only a few pixels off , which is a relatively small error given the resolution of the images .we evaluated all three network types on a fixed - viewpoint detection task .we define fixed - viewpoint in this context to mean data generated as described in section [ sec : data_gen ] for which the camera elevation is level with the table and the azimuth is either north ( as shown in the top row of fig . [ fig : data_gen ] ) or south ( 180 degrees opposite ) .the networks were given the full rendered rgb image as input ( similar to the left column in fig .[ fig : data_gen ] ) and the output was a classification at each pixel as liquid or not liquid . to counteract the class imbalance , we employed visible liquid image crop pre - training for each network ( we initialized the image crop lstm - cnn with the trained weights of the image crop single - frame cnn ) .we then trained the final network for each type on full images initializing it with the weights of the image crop network . during training , the lstm - cnn was unrolled for 32 timesteps .for the second experiment , we expanded the data used to include all 48 viewpoints , presenting a non - trivial increase in difficulty for the networks .our goal was to test the generalizability of the networks across a much wider variation in viewpoints .for this reason , we focused only on testing the best performing network , the lstm - cnn ( see section [ sec : results1 ] for results from experiment 1 ) . also to test generalizability , we only trained the network on a subset of the 48 viewpoints , and tested on the remaining .we used all data generated using the 8 m and 12 m camera viewpoint distances for training and data generated using the 10 m camera distance for testing .we also employed the gradient weighting scheme described above to counteract the class imbalance .the lstm - cnn was trained in the same manner as in experiment 1 . for tracking only, the networks were given pre - segmented input images , with the goal being to track the liquid when it is not visible .an example of this input is shown in the first row of the right column from fig .[ fig : data_gen ] , with the exception that the occluded liquid ( magenta and cyan ) were not shown .because these input images are more structured , we lowered the resolution to .the output was the pixel - wise classification of liquid or not liquid , including pixels where the liquid was occluded by other objects in the scene . 
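the gradient - weighting strategy described above can be emulated , for example , with a per - class loss weight ; the sketch below ( pytorch , using the 1.0 / 0.1 weights quoted above and toy tensors ) is only a rough equivalent of scaling the gradients by the ground - truth class , since the loss normalization differs slightly from a raw gradient scaling .

```python
import torch
import torch.nn as nn

# scores: (N, 2, H, W) network output; labels: (N, H, W) with 1 = liquid, 0 = not liquid
scores = torch.randn(4, 2, 64, 64, requires_grad=True)
labels = (torch.rand(4, 64, 64) > 0.95).long()    # sparse positives, as in real frames

# down-weight the abundant non-liquid pixels (0.1) relative to liquid pixels (1.0)
class_weights = torch.tensor([0.1, 1.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
loss = criterion(scores, labels)
loss.backward()
```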
during training , the lstm - cnn was unrolled for 160 timesteps .we reduced the number of initial convolution layers on the input from 5 to 3 for each of the three networks . due to the structured nature of the input , each networkwas trained directly on full images with gaussian - random weight initialization .we used the data from the same viewpoints ( level with the table and azimuth at north or south ) as in experiment 1 .for the last experiment , we combine detection and tracking into a single task , i.e. , given raw color images , determine where _ all _ liquid in the scene is ( visible and occluded ) .our goal is to determine if it is possible to do both tasks with one network , and for this reason , we evaluate only the lstm - cnn .we initialized the network with the weights of the trained lstm - cnn from experiment 1 and trained it on full images . as in experiment 2 , we employed the gradient weighting scheme described above to counteract the class imbalance .we used the data from the same viewpoints as in experiment 1 and 3 .( 10.0,7.7 ) ( 0.0,0.0 ) ( 2.0,0.0 ) ( 4.0,0.0 ) ( 6.0,0.0 ) ( 8.0,0.0 ) ( 0.0,1.55 ) ( 2.0,1.55 ) ( 4.0,1.55 ) ( 6.0,1.55 ) ( 8.0,1.55 ) ( 0.0,3.1 ) ( 2.0,3.1 ) ( 4.0,3.1 ) ( 6.0,3.1 ) ( 8.0,3.1 ) ( 0.0,4.65 ) ( 2.0,4.650 ) ( 4.0,4.65 ) ( 6.0,4.65 ) ( 8.0,4.65 ) ( 0.0,6.2 ) ( 2.0,6.2 ) ( 4.0,6.2 ) ( 6.0,6.2 ) ( 8.0,6.2 ) ( 0.5,7.9)*input * ( 2.4,7.9)*labels * ( 4.6,7.9)*cnn * ( 6.25,7.9)*mf - cnn * ( 8.05,7.9)*lstm - cnn * 3.0 cm ( 3.0,2.5 ) ( 0.0,0.0 ) 3.0 cm ( 3.0,2.5 ) ( 0.0,0.0 ) 3.0 cm ( 3.0,2.5 ) ( 0.0,0.0 ) 3.0 cm ( 3.0,2.5 ) ( 0.0,0.0 ) fig .[ fig : results ] shows qualitative results for the three networks on the liquid detection task .the frames in this figure were randomly selected from the training set , and it is clear from the figure that all three networks detect the liquid at least to some degree . show a quantitative comparison between the three networks . as expected, the multi - frame cnn outperforms the single - frame .surprisingly , the lstm - cnn performs much better than both by a significant margin .these results strongly suggest that detecting transparent liquid must be done over a series of frames , rather than a single frame .[ fig : results_multiview_detection ] shows the results from multi - viewpoint detection for the lstm - cnn .as expected , the 8-fold increase in number of viewpoints leads to lower performance as compared to fig .[ fig : results_detection_lstm ] , but overall it is clearly still able to detect the liquid reasonably well .interestingly , there is less spread between the various levels of slack than in fig .[ fig : results_detection_lstm ] , meaning the network benefits less from increased slack , suggesting that it is less precise than the fixed - view lstm - cnn , which makes sense given the much larger variation in viewpoints . 
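the slack - based scoring described above can be computed , for instance , by dilating the masks before counting matches ; the sketch below uses a chebyshev ( 3x3 ) dilation as the notion of `` within pixels '' , which is an assumption since the distance metric is not specified here , and treats recall symmetrically .

```python
import numpy as np
from scipy.ndimage import binary_dilation

def slack_precision_recall(pred, gt, slack=0):
    """Pixel-wise precision/recall where a predicted liquid pixel counts as
    correct if it lies within `slack` pixels of a ground-truth liquid pixel
    (and symmetrically for recall). `pred` and `gt` are boolean HxW masks."""
    struct = np.ones((3, 3), dtype=bool)
    gt_dil = binary_dilation(gt, structure=struct, iterations=slack) if slack else gt
    pred_dil = binary_dilation(pred, structure=struct, iterations=slack) if slack else pred
    tp_p = np.logical_and(pred, gt_dil).sum()    # predicted pixels near the truth
    tp_r = np.logical_and(gt, pred_dil).sum()    # truth pixels near a prediction
    precision = tp_p / max(pred.sum(), 1)
    recall = tp_r / max(gt.sum(), 1)
    return precision, recall

# toy usage: a single predicted pixel two pixels away from the single truth pixel
pred = np.zeros((8, 8), bool); pred[2, 2] = True
gt = np.zeros((8, 8), bool);   gt[2, 4] = True
print(slack_precision_recall(pred, gt, slack=2))   # -> (1.0, 1.0)
```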
[ figure : the graphs indicate the precision and recall for each of the three networks and the colored lines indicate the variation in the number of slack pixels we allowed for prediction . ] for tracking , we evaluated the performance of the networks on locating both visible and invisible liquid , given segmented input ( i.e. , each pixel classified as liquid , cup , bowl , or background ) . because the viewpoint was fixed level with the bowl , the only visible liquid the network was given was liquid as it passed from cup to bowl . the panels of fig . [ fig : results_tracking ] show the performance of each of the three networks . as expected , the lstm - cnn has the best performance . interestingly , the multi - frame cnn performs better than expected , given that it only sees approximately 1 second 's worth of data and has no memory capability . fig . [ fig : results_detection_tracking ] shows the results of combined detection and tracking for the lstm - cnn . given a raw color image , the network predicted where both the visible and occluded liquid was . comparing this to the rest of fig . [ fig : results_tracking ] , it is clear that the network was able to do quite well , despite using raw , unstructured input , unlike the other networks in that figure . this strongly suggests that lstm - cnns are best suited not only for detecting liquids , but also for tracking them . [ figure [ fig : real_setup ] : the robot setup with the thermal and rgb cameras ; panel labels : thermal , rgb , threshold . ] [ figure [ fig : real_results ] : qualitative results on real data ; column labels : input , labels , lstm - cnn . ] fig . [ fig : real_results ] shows qualitative results of the lstm - cnn trained on a small dataset collected on a real robot in our lab . we used a thermal infrared camera calibrated to our rgb camera in combination with heated water to acquire ground truth labels for data collected using a real robot . the advantage of this method is that heated water appears identical to room temperature water on a standard color camera , but is easily distinguishable on a thermal camera . this allows us to label the `` hot '' pixels as liquid and all other pixels as not liquid . fig . [ fig : real_setup ] shows our robot setup with the thermal and rgb cameras . it is clear from fig . [ fig : real_results ] that our methods , to at least a limited degree , apply to real world data and not just data generated by a liquid simulator . the results in section [ sec : results ] show that it is possible for deep learning to detect and track liquids in a scene , both independently and combined , and also over a wide variation in
viewpoints . unlike prior work on image segmentation, these results clearly show that single images are not sufficient to reliably perceive liquids .intuitively , this makes sense , as a transparent liquid can only be perceived through its refractions , reflections , and specularities , which vary significantly from frame to frame , thus necessitating aggregating information over multiple frames .we also found that lstm - based cnns are best suited to not only aggregate this information , but also to track the liquid as it moves between containers .lstms work best , due to not only their ability to perform short term data integration ( just like the mf - cnn ) , but also to remember states , which is crucial for tracking the presence of liquids even when they re invisible . from the results shown in fig .[ fig : results ] and in the video , it is clear that the lstm cnn can at least roughly detect and track liquids .nevertheless , unlike the task of image segmentation , our ultimate goal is not to perfectly estimate the potential location of liquids , but to perceive and reason about the liquid such that it is possible to manipulate it using raw sensory data . for this ,a rough sense of where the liquid is in a scene and how it is moving might suffice .neural networks , then , have the potential to be a key component for enabling robots to handle liquids using robust , closed - loop controllers .we are currently on expanding the real robot results from section [ sec : real_results ] . as stated in section [ sec : methodology ] , it can be difficult to get the ground truth pixel labels for real data , which is why we chose to use a realistic liquid simulator in this paper .however , our method of combing a thermal camera with heated water to get the ground truth makes it feasible to apply the techniques in this paper to data collected on a real robot . for future workwe plan to collect more data on the real robot using this technique and do a thorough analysis of the results .another avenue of future work we are currently pursuing is extending these techniques to control problems .the results here clearly show that deep neural networks can effectively be used to detect and , to some extent at least , reason about liquids .the next logical step is to utilize neural networks to manipulate liquids via a robot .one potential algorithm to accomplish this is guided policy search ( gps ) , which learns a control policy for a task from raw sensory data .the advantage of an algorithm like gps is that it works well on high - dimensional sensory input where collecting large amounts of data may be infeasible ( as is often the case on a real robotic system ) . in future workwe plan to apply a similar algorithm to the problem of robotic liquid control from raw sensory data .this work was funded in part by the national science foundation under contract number nsf - nri-1525251 and by the intel science and technology center for pervasive computing ( istc - pc ) .
|
recent advances in ai and robotics have claimed many incredible results with deep learning , yet no work to date has applied deep learning to the problem of liquid perception and reasoning . in this paper , we apply fully - convolutional deep neural networks to the tasks of detecting and tracking liquids . we evaluate three models : a single - frame network , multi - frame network , and a lstm recurrent network . our results show that the best liquid detection results are achieved when aggregating data over multiple frames and that the lstm network outperforms the other two in both tasks . this suggests that lstm - based neural networks have the potential to be a key component for enabling robots to handle liquids using robust , closed - loop controllers . perception , deep learning , liquids , manipulation
|
consider the problem of testing the two hypothesis against which are given in ( [ principal null hypothesis ] ) and ( [ principal alternative hypothesis ] ) respectively and corresponding to the stochastic model ( [ modelprincipal ] ) .we assume that the lan ( [ lan ] ) of the model ( [ modelprincipal ] ) is established , for example refer to .+ let a -consistent estimate of the parameter where purpose is to construct another estimate of the parameter such that the following fundamental equality is fulfilled where is a specified bounded random function . in the sequel ,the functions + and are assumed to be twice differentiable .our goal , is to find an estimate satisfying ( [ eqaaa1 ] ) pertaining to the tangent space , such that , for , the following equation holds where and the script denotes the inner product .+ with the connection with the equality ( [ eqaaa1 ] ) , the new estimate is then given by imposing that the value satisfied the following identity clearly , the equation ( [ contrainte ] ) has unknown values , so it has an infinity of solutions , after modification of the -th component of the first estimate , we shall propose an element in tangent space which satisfies the equality ( [ contrainte ] ) .we obtain then a new estimate of the unknown parameter , where and such that : for s , + the use of the notation explains that we obtain the new estimate of the parameter when we change in the expression of the estimate the component with respect to the first estimate corresponding to the step of the estimation .it follows from the equality ( [ eqaaa1 ] ) combined with the constraint ( [ contrainte ] ) that by imposing the following condition and with the use of the equality ( [ contrainte ] ) combined with ( [ gradient 1 ] ) , we deduce that in summary , we define the modified estimate by with a same reasoning as the previous case and after modifying the -th component with respect to the second estimate , we shall define a new estimate such that for we obtain under the following condition it follows from the equality ( [ contrainte ] ) combined with ( [ gradient 2 ] ) , that in summary , we obtain the modified estimate the estimate ( respectively , ) is called a modified estimate in -th component with respect to the first estimate ( respectively , in -th component with respect to second estimate ) , we denote this estimate by ( m.e . ) . for each step of the estimation corresponding a value of the position or of the component where the estimate was modified .throughout , is a -consistent estimate of the unknown parameter the conditions ( [ gradient 1 ] ) and ( [ gradient 2 ] ) are not sufficient to get the consistency of the modified estimate ( m.e . ) . in order to get its consistency , we need to resort to one of the following additional conditions . 1 . 2 . where and are two constantes , such that and our first result concerning the consistency of the proposed estimate is summarized in the following proposition .[ consistence ] under ( [ gradient 1 ] ) and ( ( [ gradient 2 ] ) and , respectively ) , the estimate ( , respectively ) is a -consistent estimator of the unknown parameter . 
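the displayed equations define the modified estimate for the model at hand ; purely as a schematic illustration of the general idea ( adjust a single component of a preliminary estimate so that a given constraint holds exactly , while leaving the other components untouched ) , one might proceed numerically as follows , where the constraint , the bracket width and the toy numbers are hypothetical and do not reproduce the construction above .

```python
import numpy as np
from scipy.optimize import brentq

def modify_component(theta_hat, constraint, j, target=0.0, width=1.0):
    """Return a copy of the preliminary estimate whose j-th component is
    adjusted so that the scalar function `constraint` equals `target` exactly.
    The bracket of half-width `width` around the preliminary value is a
    hypothetical choice; it must contain a sign change for brentq to succeed."""
    theta = np.asarray(theta_hat, dtype=float).copy()

    def residual(x):
        trial = theta.copy()
        trial[j] = x
        return constraint(trial) - target

    theta[j] = brentq(residual, theta[j] - width, theta[j] + width)
    return theta

# toy example: force the components of a first-stage estimate to sum to one
theta0 = np.array([0.2, 0.5, 0.4])
print(modify_component(theta0, constraint=np.sum, j=2, target=1.0))   # [0.2 0.5 0.3]
```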
in practice , it is not easy to verify the condition ( respectively , ) , in the case when the unknown parameter is univariate , a sufficient condition will be stated in lemma ( [ sufficientcondition ] ) , in this case , we need the following assumption : 1 .: for all real sequence with values in the interval ] therefore we choose the function in order to get this condition .for instance , we shall choose where * with this choice of the function , the condition remains satisfied , in fact , we can remark that then for all u with we have therefore , we shall choose consider the following time series model with conditional heteroscedasticity it is assumed that the model ( [ model with conditional heteroscedasticity ] ) is ergodic and stationary . it will be assumed that the conditions ( b.1 ) , ( b.2 ) and ( b.3 ) are satisfied , where * ( b.1 ) : the fourth order moment of the stationary distributions of ( [ model with conditional heteroscedasticity ] ) exists . * ( b.2 ) : there exists a positive constants and such that for all with + , * ( b.3 ) : for a location family , there exists a square integrable function , and a strictly positive real , where , such that , where and are two positive integers such that we consider the problem of testing the null hypothesis against the alternative hypothesis such that remark that , correspond to ( linearity of ( [ model with conditional heteroscedasticity ] ) ) and ( non linearity of ( [ model with conditional heteroscedasticity ] ) ) with the comparison to the equality ( [ modelprincipal ] ) , we have note that when is large , we have under the conditions ( a.1 ) , ( b.1 ) , ( b.2 ) , and ( b.3 ) , the lan was established in ( * ? ? ?* theorem 4 ) , an efficient test is obtained and its power function is derived . in this case , the central sequence is given by the following equality such that under where the proposed test is then given by by the subsisting by its -consistent estimator in the expression of the central sequence , we shall state the following proposition : [ archecartsecond]suppose that the conditions , , and hold and s are centered i.i.d . and .we have where .\end{aligned}\ ] ] throughout , and are the statistics test and the constant respectively obtained with the subsisting of the unspecified parameter by its modified estimate in the expression of the test ( [ lastexpresion ] ) and the constant appearing in the expression of the log likelihood ratio ( [ lan ] ) respectively . + we assume in the problem of testing the two hypothesis against that the lan of the the model ( [ modelprincipal ] ) is established , in order to prove the optimality of the proposed test . to this end ,we need the following assumption : 1 . there exists a -estimate of the unknown parameter and a random bounded function , such that it is now obvious from the previous definitions that we can state the following theorem : [ optimality ] under lan and the conditions ( [ gradient 1 ] ) ( respectively , ( [ gradient 2 ] ) ) , ( , respectively ) and the asymptotic power of under is equal to to furthermore , is asymptotically optimal .we shall now apply this last theorem in order to conduct simulations corresponding to the representation of the derived asymptotic power function .the concerned model is the nonlinear time series contiguous to ar(1 ) processes with an extension to arch processes .in this section , we assume that s are centered i.i.d . 
and in this case , we have and we treat the case when the unknown parameter under , the considering time series model can also rewritten to evaluate the performance of our estimator , we provide simulations with comment in this section . in the casewhen the parameter is known , the test is optimal and its power is asymptotically equal to for more details see ( * ? ? ?* theorem 3 ) . in a general case ,when the parameter is unspecified , firstly , we estimate it with the least square estimates secondly , with the use of the ( m.e . ) under the conditions ( [ gradient 1 ] ) and , the modified estimate exists and remains -consistent , making use of ( [ perturbation1 ] ) in connection with the proposition ( [ link between the central sequences in ar1 ] ) it follows: the substitution of the parameter by its estimator in ( [ lastexpresion ] ) , we obtain the following statistics test it follows from theorem ( [ optimality ] ) that is optimal with an asymptotic power function equal to + we choose the function like this + in our simulations , the true value of the parameter is fixed at and the sample sizes are fixed at and for a level , the power relative for each test estimated upon replicates , we represent simultaneously the power test with a true parameter the empirical power test which is obtained with the replacing the true value by its estimate ( m.e . ) corresponding to the equality ( [ me ] ) , and the empirical power test which is obtained with the subsisting the true value by its least square estimator lse ( an estimator with no correction ) , we remark that , the two representations with the true value and the modified estimate m.e .are close for large . with the substitution of the parameter by its modified estimate in ( [ lastexpresion ] ), we obtain the following test such that in our simulations , the true value of the parameter is fixed at and the sample sizes are fixed at and for a level , the power relative for each test estimated upon replicates .we choose the functions and like this + we represent simultaneously the power test with a true parameter and the empirical power test which is obtained with the subsisting the true value by its estimate ( m.e . ) corresponding to the equality ( [ me]),we represent simultaneously the power test with a true parameter the empirical power test which is obtained with the subsisting the true value by its estimate ( m.e . ) corresponding to the equality ( [ me ] ) , and the empirical power test which is obtained with the subsisting the true value by its least square estimator lse ( estimator with no correction ) , we remark that , when is large , we have a similar conclusion as the previous case .we mention that the limiting distributions appearing in proposition ( [ ar1ecartfirst ] ) and proposition ( [ archecartsecond ] ) depend on the unknown quantity , i.e. , in practice is not specified , in general . 
to circumvent this difficulty, we use the efron s bootstrap in order to evaluate , more precisely , the interested reader may refer to the following references : for the description of the bootstrap methods , , for the bootstrap methods in ar(1 ) time series models and for the arch models .consider the following fundamental decomposition: firstly , we have , secondly we can deduce from ( [ perturbation1 ] ) that: is bounded , we can remark that from , there exists some constante such that from ( [ gradient 1 ] ) and since the function is continuous on it follows that the random variable then the couple converges in probability to the couple since the function is continuous on it result from ( [ dephasage ] ) , that the random variable therefore consider again the equality ( [ fundamental decomposition of estimate ] ) , since the function is continuous on it results from ( [ convergence of the error ] ) that converges in probability to as + notice that the last previous convergences in probability follow immediately with the use of the continuous mapping theorem , for more details , see or . by following the same previous reasoning , we shall prove the consistency of the estimate note that is -consistent estimate of the parameter and where is bounded in probability in in fact , it follows from ( [ fundamental decomposition of estimate ] ) that since and using the condition , it results that + where is bounded in probability in + we deduce that notice that with a similar argument and with changing , and ( [ gradient 1 ] ) by , and ( [ gradient 2 ] ) respectively , we obtain in order to prove lemma [ sufficientcondition ] , we need to stated the following classical lemmas : + [ probabilitycomparaison ] let be a sequence of a positive random variables on the probability space a sequence of a positive ( strictly ) reals such that , then we have , for each firstly , we remark that , we have in fact , we suppose there exists then for each we have which implies that hence a contradiction . with the use of the -additivity , we obtain in this case we denote by the -consistent estimator of + let , from the triangle inequality combined with the lemma ( [ probabilitycomparaison]),we obtain : firstly , we have secondly , we have where is a point between and then there exists a sequence with values in the interval $ ] , such that this implies that + this last inequality enable us to concluded that is -consistency estimator of , it follows from applied on the equality ( [ boundedconssitency second derivative ] ) that thus we obtain [ lemme convergence in probability ] let be a probability space , is a sequence of real random variables on .if converges in probability to a constant , then , there exists a sequence of random variable , with , such that , converges in probability to . s are centered i.i.d . and making use of the results of ( * ? 
?* theorem 2 ) , we have the estimated central sequence is by taylor expansion with order we have : where is a point between and and note that since the estimator is -consistent and with the use of lemma ( [ bounded in probability ] ) , it results that from the assumption it follows that finally we deduce that , this implies that is between and and is the second derivative of from the assumption , we have since the estimator is -consistent , it result that this implies that with the use of ( [ sqrtsequences ] ) , the equality ( [ firstboundedequality ] ) can also rewritten follows from the assumption combined with the ergodicity and the stationarity of the model that , the random variable converges in probability to the constant , as , where ,\ ] ] therefore from the lemma ( [ lemme convergence in probability ] ) , there exists a random variable such that we deduce from the equality ( [ lastconsistencyerodictheorem ] ) and the -consistence of the estimator , that where recall that the second derivative is equal to this implies that the assumption is satisfied .the assumption remains satisfied and the proof is similar as the proof of proposition ( [ ar1ecartfirst ] ) , in this case , for all we have by a simple calculus and since the the function is the density of the standard normal distribution , it is easy to prove that the quantity is bounded , therefore , there exists a positive constant such that , then with the choice with , it results that by the use of the ergodicity of the model and since the model is with finite second moments , it follows that the random variable where is some constant , this implies that the condition is straightforward . from the conditions ( [ gradient 1 ] ) ( ( [ gradient 2 ] ) , respectively ) , ( , respectively ) , it results the existence and the -consistency of the modified estimate estimate corresponding to the equation ( [ perturbation1 ] ) ( ( [ perturbation2 ] ) , respectively ) .the combinaison of the condition and the proposition ( [ m.e ] ) enable us to get under the following equality last equation implies that with the estimate central and central sequences are equivalent , in the expression of the test ( [ test ] ) , the replacing of the central sequence by the estimate central sequence has no effect .lan implies the contiguity of the two hypothesis ( see , ( * ? ? ?* corrolary 4.3 ) ) , by le cam third lemma s ( see for instance , ( * ? ? ?* theorem 2 ) ) , under , we have it follows from the convergence in probability of the estimate to , the continuity of the function and the application of the continuous mapping theorem see , for instance ( ) or , that asymptotically , the power of the test is not effected when we replace the unspecified parameter by it s estimate , hence the optimality of the test .the power function of the test is asymptotically equal to the proof is similar as ( * ? ? ?* theorem 3 ) .le cam , l. ( 1960 ) .locally asymptotically normal families of distributions .certain approximations to families of distributions and their use in the theory of estimation and testing hypotheses . , * 3 * , 3798 .
|
the main purpose of this paper is to provide an asymptotically optimal test . the proposed statistic is of neyman - pearson type when the parameters are estimated with a particular kind of estimator . it is shown that the proposed estimators enable us to achieve this end . two particular cases , the ar(1 ) and arch models , are studied and the asymptotic power function is derived .
|
there are a variety of ways to model the presence of secrecy in a communication system .shannon considered the availability of secret key shared between the transmitter ( alice ) and receiver ( bob ) , using it to apply a one - time pad to the message .wyner introduced the idea of physical - layer security with the wiretap channel and secrecy capacity , exploiting the difference in the channels to bob and eve ( the eavesdropper ) .maurer derived secrecy by assuming that alice , bob , and eve have access to correlated random variables .in such models , the measure of security is usually the conditional entropy , or `` equivocation '' , of the message ; maximum equivocation corresponds to perfect secrecy . in this work ,we replace equivocation with an operationally motivated measure of secrecy .we want to design our coding and encryption schemes so that if eve tries to reproduce the source sequence , she will suffer a certain level of distortion .more precisely , the measure of secrecy is the minimum average distortion attained by the cleverest ( worst - case ) eavesdropper .occasionally , we will refer to eve s minimum average distortion as the payoff : alice and bob want to maximize the payoff over all code designs .[ node distance=1cm , minimum height=7mm , minimum width=14mm , arw/.style=->,>=stealth ] ( source ) ; ( alice ) [ right = 9 mm of source ] alice ; ( ch ) [ right = 9 mm of alice ] ; ( bob ) [ right = of ch , yshift=8 mm ] bob ; ( eve ) [ right = of ch , yshift=-8 mm ] eve ; ( shat ) [ right = of bob ] ; ( t ) [ right = of eve ] ; at ( [ xshift=4mm , yshift=8.5 mm ] bob.center ) ; ( source ) to node[midway , above , yshift=-1mm] ( alice ) ; ( alice ) to node[midway , above , yshift=-1mm] ( ch ) ; ( ch.15 ) to node[pos=0.3,above , yshift=-1mm] ( bob.west ) ; ( ch.345 ) to node[pos=0.3,below] ( eve.west ) ; ( bob ) to node[midway , above , yshift=-.5mm] ( shat ) ; ( eve ) to node[midway , above , yshift=-1mm] ( t ) ; ( bob ) to node [ midway , right , xshift=-2.5mm , yshift=0.5 mm ] ( eve ) ; ( [ xshift=-1cm , yshift=-5 mm ] eve.center ) rectangle ( [ xshift=1.8cm , yshift=6 mm ] bob.center ) ; this measure of secrecy has been considered previously by yamamoto in , but our setup differs from in a few ways , the most salient of which is `` causal disclosure '' .we assume bob , the legitimate receiver , is producing actions that are revealed publicly ( in particular , to eve ) in a causal manner .we might view eve as an adversary who is trying to predict bob s current and future actions based on both the actions that she has already witnessed and the output of her wiretap , and subsequently act upon her predictions ( see figure [ source_channel_fig ] ) . to further motivate the causal disclosure feature of our model , consider the effect of removing it ;that is , consider yamamoto s problem in .alice must communicate a source sequence losslessly to bob over a noiseless channel and the secrecy resource is shared secret key .eve observes the message but is _ not _ given causal access to bob s reproduction sequence .as shown in , the solution to this problem is that any positive rate of secret key is enough to cause eve unconditionally maximal distortion we see that secrecy is alarmingly inexpensive .similarly , if the secrecy resource is physical - layer security instead of shared secret key , it can be shown that a positive secrecy capacity is enough to force maximal distortion . however , with causal disclosure in play a tradeoff between secrecy capacity and payoff emerges . 
as a side remark , one might observe that causal disclosure is consistent with the spirit of kerckhoffs s principle .in addition to assuming causal disclosure , we further diverge from by considering the presence of a wiretap channel instead of shared secret key ; the problem of shared secret key with causal disclosure was solved in . in , it was found that the optimal tradeoff between the rate of secret key and payoff earned is achieved by constructing a message that effectively consists of a fully public part and a fully secure part .more specifically , the optimal encoder publicly reveals a distorted version of the source sequence and uses the secret key to apply a one - time pad to the supplement . with this insight, we use the broadcast channel to effectively create two separate channels one public and one secure as is done in .however , we modify by removing the requirement that eve must decode the public message , thereby rendering the message public only in intention , not in reality .we show that freely giving the `` public '' message away dramatically decreases eve s distortion ; thus , her equivocation of the public message becomes important .upon using channel coding to transform the broadcast channel into effective public and secure channels , we show that the weak secrecy provided by the channel encoder allows us to use the source code from to link the source and channel coding operations together digitally .it turns out that a strong secrecy guarantee would not improve the results .separation allows us to obtain a lower bound on the achievable payoff .we also provide an example of the lower bound and obtain an upper bound .most of the proofs are omitted . before proceeding , we briefly juxtapose our measure of secrecy with equivocation. equivocation does not give much insight into the structure of eve s knowledge and does not depend on the actions that eve makes .in contrast , looking at eve s distortion tells us something about the quality of her reconstruction if her aim is to replicate a source sequence and produce actions accordingly .there are other instances in the literature where an operational definition of security is used .for example , in , merhav et al . looked at the expected number of guesses needed to correctly identify a message .the system under consideration , shown in figure [ source_channel_fig ] , operates on blocks of length .all alphabets are finite , and both the source and channel are memoryless .the communication flow begins with alice , who observes a sequence , i.i.d . according to , and produces an input to the broadcast channel .the memoryless broadcast channel is characterized by , but since the relevant calculations only involve the marginals and , we need only consider . at one end of the channel , bob receives and generates a sequence of actions , , with the requirement that the block error probability be small . at the other terminal of the channel , eve receives and generates ; in generating , she has access to the full sequence and the past actions of bob , .in essence , we can view bob and eve as playing a public game that commences after they receive the channel outputs . in each move, they are allowed to see each other s past moves and produce an estimate of the next source symbol accordingly .since bob s reproduction must be almost lossless , his moves are restricted and he does not benefit from knowing eve s past actions . 
for similar reasons ,revealing to eve at step has exactly the same consequences as revealing ; henceforth , we consider the causal disclosure to be . a more general version of the game would allow for distortion in bob s estimate ( see ) , but in this work we focus on lossless communication . in the next two definitions ,refer to figure [ source_channel_fig ] for an illustration of the setup . for blocklength ,a source - channel code consists of an encoder and a decoder : l f:^n^n p_x^n|s^n + g : ^n^n .note that we do not restrict the encoder to be deterministic . for any source - channel code, we can calculate the probability of block error and the payoff earned against the worst - case adversary , as defined in the following : fix a value function ( or , distortion measure ) .we say that a payoff is achievable if there exists a sequence of source - channel codes such that =0\ ] ] and \geq\pi.\ ] ] eve s average distortion , i.e. the lhs of ( [ paydefn ] ) , is defined exactly as in rate - distortion theory for separable distortion measures .although it is not explicit in ( [ paydefn ] ) , we assume that eve has full knowledge of the source - channel code and the source distribution .the first result is a lower bound on the maximum achievable payoff .[ innerbnd ] fix , , and .a payoff is achievable if the inequalities rcl i(s;u ) & < & i(v;y ) + h(s|u ) & < & i(w;y|v)-i(w;z|v ) + & & _ t(u)[(s , t(u ) ) ] hold for some distribution . theorem [ innerbnd ] is obtained in part by transforming ( via channel coding ) the noisy broadcast channel into noiseless public and secure channels .however , the result can be strengthened considerably by taking into account eve s equivocation of the public message .the source - channel code used to achieve theorem [ innerbnd ] remains the same ; only the analysis is strengthened .we illustrate and discuss this further in section [ example ] , where we give a brief proof sketch . our main result is the following theorem . 
[improved ] fix , , and .a payoff is achievable if the inequalities rcl i(s;u ) & < & i(v;y ) + h(s|u ) & < & i(w;y|v)-i(w;z|v ) + & & _ + ( 1- ) _t(u)[(s , t(u ) ) ] hold for some distribution , where \ ] ] and ^+}{i(s;u)}.\ ] ] we obtain the lower bound in theorem [ improved ] by concatenating a source code and a channel code and matching the rates of the two codes .we first describe what constitutes a good channel code , and the secrecy guarantees that come with it .the channel code is made up of an encoder and decoder as shown in figure [ channel_fig ] .[ node distance=1cm , minimum height=7mm , minimum width=14mm , arw/.style=->,>=stealth ] ( fc ) ; ( mp ) [ left = of fc , yshift=2 mm ] ; ( ms ) [ left = of fc , yshift=-2 mm ] ; ( ch ) [ right = of fc ] ; ( bob ) [ right = of ch , yshift=6 mm ] ; ( eve ) [ right = of ch , yshift=-6 mm ] ; ( mphat ) [ right = of bob , yshift=2 mm ] ; ( mshat ) [ right = of bob , yshift=-2 mm ] ; ( mp ) to node[midway , above , yshift=-1mm] ( mp -| fc.west ) ; ( ms ) to node[midway , below , yshift=1mm] ( ms -| fc.west ) ; ( fc ) to node[midway , above , yshift=-1mm] ( ch ) ; ( ch.15 ) to node[midway , above , yshift=-2mm] ( bob.west ) ; ( ch.345 ) to node[midway , above , yshift=-.8mm] ( eve.west ) ; ( bob.east ( mphat ) ; ( bob.east |- mshat ) to node[midway , below , yshift=1mm] ( mshat ) ; the input to the encoder is a pair of messages destined for the channel decoder , with representing a public message and a secure message .the channel decoder outputs the pair .we allow the channel encoder to use private randomization .a channel code consists of a channel encoder and channel decoder : l f_c:_p_s^n p_x^n|m_p , m_s + g_c:^n _ p_s , where and . keeping in mind criteria ( [ errdefn ] ) and ( [ paydefn ] ) and our desire to form public and private channels, we might ask : what constitutes a good channel code ?first , the legitimate channel decoder must recover and with vanishing probability of error .second , we need a guarantee that we have indeed created a private channel .ideally , we want to guarantee that the _ a priori _ distribution on matches the _ a posteriori _ distribution given both and .if this holds , we are assured that even if the adversary is able to view the public channel perfectly ( i.e. , recover ) , his optimal strategy for determining is to choose a random message according to the a priori distribution .later we will exploit the adversary s inability to exactly recover , but for now we suppose that it is freely available .we turn to the notion of secrecy capacity and cast our requirement in terms of entropy : we want , or . more precisely , the _ normalized _ mutual information should vanish in a good channel code .although this measure of secrecy is so - called `` weak secrecy '' , it turns out that having strong secrecy would not improve the payoff for the source encoder that we use .we make a further technical requirement ( c.f . ) that good channel codes must satisfy for our purposes , considering the particular source encoder that we employ .the channel code must work not only for independent and uniformly distributed , but more generally in the case that , conditioned on , is almost uniform . 
to be precise ,we require }{{\mathbb{p}}[m_s = m_s'|m_p = m_p]}\leq 2^{n\cdot\delta_n}\ ] ] to hold for some such that as .the source encoder we employ will produce message pairs that satisfy this condition , regardless of the source distribution .the pair of rates is achievable if , for all satisfying ( [ condunif ] ) for every , there exists a sequence of channel codes such that =0\ ] ] and the following theorem gives an achievable region , the proof of which comes from modifying the work done in .the idea is to include enough private randomness in the channel encoder so that the adversary effectively uses his full decoding capabilities to resolve the randomness , leaving no room to additionally decode part of the secret message .the amount of randomness required is the mutual information provided by the adversary s channel .[ bcc ] the pair is achievable if rcl r_p & < & i(v;y ) , + r_s & < & i(w;y|v)-i(w;z|v ) for some .a source code consists of a source encoder and decoder .the encoder observes a memoryless source and produces a pair of messages , , with representing a public message and a secure message .we allow the source encoder to use private randomization .a source code consists of an encoder and a decoder : l f_s:^n_p_s p_m_p , m_s|s^n + g_s:_p_s ^n , where and .as shown in figure [ source_fig ] , the output of the source encoder is effectively passed through a channel .[ node distance=1cm , minimum height=7mm , minimum width=14mm , arw/.style=->,>=stealth ] ( source ) ; ( fs ) [ right = of source ] ; ( dummy1 ) [ right = 1.5 cm of fs.345 ] ; ( dummy2 ) [ right = 1.8 cm of fs.15 ] ; ( gs ) [ right = 2.5 cm of fs ] ; ( shat ) [ right = of gs ] ; ( ch ) [ below = 0.5 cm of gs ] ; ( zn ) [ right = 8.5 mm of ch ] ; ( source ) to node[midway , above , yshift=-1mm] ( fs ) ; ( fs.15 ) to node[near start , above , yshift=-1mm] ( fs.15 -| gs.165 ) ; ( fs.345 ) to node[near start , below , yshift=1mm] ( fs.345 -| gs.195 ) ; ( dummy1 ) |- ( ch.190 ) ; ( dummy2 ) |- ( ch.170 ) ; ( gs ) to node[midway , above , yshift=-.5mm] ( shat ) ; ( ch ) to node[midway , above , yshift=-1mm] ( zn ) ; in light of the previous subsection , we want to consider sequences of channels that provide weak secrecy when the output of the source encoder satifies ( [ condunif ] ) .define to be the set of such that , for all satisfying ( [ condunif ] ) for every , we can view as the resource of physical - layer security .notice that a sequence of good channel codes yields a sequence of channels in to the adversary .we now consider what payoff can be achieved if rates are imposed on the messages , and is imposed . by considering the availability of a structure and a noiseless channel from alice to bob , we are effectively divorcing the goals of source and channel coding so that each can be considered separately .fix and .the triple is achievable if there exists a sequence of source codes such that =0,\ ] ] and , for all , \geq\pi.\ ] ] we give a region of achievable . in , the regionis characterized when the secrecy resource is shared secret key instead of physical - layer security .[ sourcethm ] fix and . 
then is achievable if the inequalities rcl r_p & > & i(s;u ) + r_s & > & h(s|u ) + & & _ t(u)[(s , t(u ) ) ] hold for some .the lower bound in theorem [ innerbnd ] follows from theorems [ bcc ] and [ sourcethm ] .the main idea in the proof of theorem [ sourcethm ] is to use the public message to specify a sequence that is correlated with , and use the secure message to encode the supplement that is needed to fully specify the source sequence .the source encoder is defined in such a way that , conditioned on the public message , the adversary views the source as if it were generated by passing through a memoryless channel . with this perspective, the past will no longer help the adversary ; eve s best strategy is to choose a function that maps to .although we omit the full proof of theorem [ sourcethm ] , we provide the crucial lemma that shows how the weak secrecy provided by a good channel code is used in analyzing the payoff .the result of lemma [ connect ] ( below ) is that we can view eve as having full knowledge of and and no knowledge of , which fulfills our goal of creating a secure channel and a public channel . in other words , using channel coding to create physical - layer security in the form of a structure allows us to show that , from eve s perspective , knowledge of is no more helpful than in easing the distortion . to parse the statement of the lemma, simply look at the arguments of . [ connect ] if satisfies ( [ condunif ] ) for every , and + , then for all , rcl + & & _ t(i , s^i-1,m_p)- ( ) for sufficiently large , where as .let .introduce the random variable + ] solves and is as in theorem [ sourceex ] .when we established the lower bound in theorem [ innerbnd ] , we did so with the structure of a public channel and secure channel in mind ; however , as mentioned , the public channel may not be truly public .we made the assumption that the adversary is freely given ; indeed , the proof of lemma [ connect ] illustrates this in ( [ connect_discussion ] ) , where we suffer a loss in our payoff analysis by including as an input to the adversary s strategy .we can strengthen the analysis of theorem [ innerbnd ] by taking into account the equivocation of the public message . for blocklength , the equivocation of the public message vanishes at a certain time due to the adversary s ongoing accumulation of past source symbols .before time , the payoff is $ ] ( i.e. , the unconditional payoff ) .after time , the payoff is as in theorem [ innerbnd ] . denoting this payoff by , we can now achieve ( theorem [ improved ] ) the ratio is found to be ^+}{i(s;u)}.\ ] ] figure [ secrecy_plot ] shows the difference between theorem [ innerbnd ] and theorem [ improved ] , as well as a comparison to the curves that correspond to no encoding and unconditional payoff .unconditional payoff refers to the distortion that eve suffers if her only knowledge is the source distribution , and no encoding refers to simply taking the source as the input to the channel and bypassing the encoder .the example is for a bernoulli source with bias .if we assume that eve has full knowledge of the public message , then we see that for , say , , the distortion guaranteed by the weaker theorem is even worse than if no encoding was used .this illustrates the importance of eve s equivocation of the public message .the upper bound is established by using ideas from the converses in and .[ outerbnd ] fix , , and . 
if a payoff is achievable , then the inequalities rcl h(s ) & & i(w;y ) + h(s|u ) & & [ i(w;y|v)-i(w;z|v)]^+ + & & _ t(u)[(s , t(u ) ) ] must hold for some distribution + .by considering source and channel operations separately , we have given results on how well communication systems can perform against a worst - case adversary when the secrecy resource is physical - layer security and the adversary has causal access to the source .we have seen that a guarantee of weak secrecy can be used in conjunction with our operationally - relevant measure of secrecy .this research was supported in part by the national science foundation under grants ccf-1016671 , ccf-1116013 , and ccf-1017431 , and also by the air force office of scientific research under grant fa9550 - 12 - 1 - 0196 .
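to make the distortion - based measure of secrecy concrete , the following python sketch evaluates the single - letter payoff term used throughout the results above : the average distortion of an eavesdropper who , for each value of an auxiliary observation u , picks the single reconstruction that minimizes her conditional expected distortion . the joint distribution and the hamming distortion below are invented toy inputs and are not the bernoulli example of the paper .

```python
import numpy as np

# joint pmf of (S, U): rows are source symbols s, columns are the values u
# available to the eavesdropper (numbers invented for this sketch)
p_su = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])

def d(s, t):
    return 0.0 if s == t else 1.0          # Hamming distortion

n_s, n_u = p_su.shape

# payoff term E_U[ min_t E[ d(S, t) | U ] ]: eve best-responds to each u
payoff = 0.0
for u in range(n_u):
    p_u = p_su[:, u].sum()
    best = min(sum(p_su[s, u] / p_u * d(s, t) for s in range(n_s))
               for t in range(n_s))
    payoff += p_u * best

# unconditional payoff: eve knows only the marginal distribution of S
p_s = p_su.sum(axis=1)
unconditional = min(sum(p_s[s] * d(s, t) for s in range(n_s)) for t in range(n_s))

print(payoff, unconditional)   # payoff <= unconditional: side information can only help eve
```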
|
imperfect secrecy in communication systems is investigated . instead of using equivocation as a measure of secrecy , the distortion that an eavesdropper incurs in producing an estimate of the source sequence is examined . the communication system consists of a source and a broadcast ( wiretap ) channel , and lossless reproduction of the source sequence at the legitimate receiver is required . a key aspect of this model is that the eavesdropper s actions are allowed to depend on the past behavior of the system . achievability results are obtained by studying the performance of source and channel coding operations separately , and then linking them together digitally . although the problem addressed here has been solved when the secrecy resource is shared secret key , it is found that substituting secret key for a wiretap channel brings new insights and challenges : the notion of weak secrecy provides just as much distortion at the eavesdropper as strong secrecy , and revealing public messages freely is detrimental .
|
this paper studies incentive schemes to drive self - interested users toward the system objective .the operation of networks by non - cooperative , self - interested users in general leads to a suboptimal performance . as a result, different forms of incentive schemes to improve the performance have been investigated in the literature .one form of incentive schemes widely studied in economics and engineering is pricing ( or more generally , transfer of utilities ) .pricing can induce efficient use of network resources by aligning private incentives with social objectives .although pricing has a solid theoretical foundation , implementing a pricing scheme can be impractical or cumbersome in some cases .let us consider a wireless internet service as an example .a service provider can limit access to its network resources by charging an access fee .however , charging an access fee requires a secure and reliable method to process payments , which creates burden on both sides of users and service providers .there also arises the issue of allocative fairness when a service provider charges for the internet service . in the presence of the income effect, uniform pricing will bias the allocation of network resources towards users with high incomes .because the internet can play the role of an information equalizer , it has been argued in a public policy debate that access to the internet should be provided as a public good by a public authority rather than as a private good in a market .another method to provide incentives is to use repeated interaction .repeated interaction can encourage cooperative behavior by adjusting future payoffs depending on current behavior . a repeated game strategy can form a basis of an incentive scheme in which monitoring and punishment burden is decentralized to users ( see , for example , ) .however , implementing a repeated game strategy requires repeated interaction among users , which may not be available .for example , users interacting in a mobile network change frequently in nature . in this paper, we study an alternative form of incentive schemes based on intervention , which was proposed in our previous work . in an incentive scheme based on intervention, a network is augmented with an intervention device that is able to monitor the actions of users and to take an action that affects the payoffs of users .intervention directly affects the network usage of users , unlike pricing which uses an outside instrument to affect the payoffs of users .thus , an incentive scheme based on intervention can provide an effective and robust method to provide incentives in that users can not avoid intervention as long as they use network resources .moreover , it does not require long - term relationship among users , which makes it applicable to networks with a dynamically changing user population . 
as a first step toward the study of incentive schemes based on intervention , we focus in this paper on the case of perfect monitoring , where the intervention device can immediately observe the actions chosen by users without errors .we derive analytical results assuming that there exist actions of the intervention device that are most and least preferred by all the users and the intervention device , regardless of the actions of users .we then illustrate our results with an example based on the cournot model .we consider a network where users and an intervention device interact .the set of the users is denoted by .the action space of user is denoted by , and the action of user is denoted by , for all .an action profile is represented by a vector .an action profile of the users other than user is written as so that can be expressed as .the intervention device observes the actions chosen by the users immediately , and then it chooses its own action .the action space of the intervention device is denoted by , and its action is denoted by . for convenience , we sometimes call the intervention device user 0 . the set of the users and the intervention device is denoted by . the actions of the intervention device and the users jointly determine their payoffs. the payoff function of user is denoted by .that is , represents the payoff that user receives when the intervention device chooses action and the users choose an action profile .in particular , the payoff of the intervention device , , can be interpreted as the system objective .since the intervention device can choose its action knowing the actions chosen by the users , a strategy for it can be represented by a function , which is called an intervention rule .the set of all possible intervention rules is denoted by .suppose that there is a network manager who determines the intervention rule used by the intervention device .we assume that the manager can commit to an intervention rule , for example , by using a protocol embedded in the intervention device .the game played by the manager and the users is called an intervention game .the sequence of events in an intervention game can be listed as follows .the manager chooses an intervention rule .the users choose their actions , knowing the intervention rule chosen by the manager .the intervention device observes the action profile and takes an action .the payoff function of user provided that the manager has chosen an intervention rule is given by , where an intervention rule induces a simultaneous game played by the users , whose normal form representation is given by we can predict actions chosen by the users given an intervention rule by applying the solution concept of nash equilibrium to the induced game . an intervention rule _ sustains _ an action profile if is a nash equilibrium of the game , i.e. , an action profile is _ sustainable _ if there exists an intervention rule that sustains .let be the set of action profiles sustained by .then the set of all sustainable action profiles is given by .a pair of an intervention rule and an action profile is said to be attainable if sustains .the manager s problem is to find an attainable pair that maximizes the payoff of the intervention device among all attainable pairs .[ def : ie ] is an _ intervention equilibrium _ if and for all such that . 
is an _ optimal intervention rule _ if there exists an action profile such that is an intervention equilibrium .intervention equilibrium is a solution concept for intervention games , based on a backward induction argument .an intervention equilibrium can be considered as a subgame perfect equilibrium applied to an intervention game , since the induced game is a subgame of an intervention game .it is implicitly assumed that the manager can induce the users to choose the best nash equilibrium for the system in case of multiple nash equilibria .one possible explanation for this is that the manager recommends to the users an action profile sustained by the intervention rule he chooses so that the action profile becomes a focal point .the manager s problem of finding an optimal intervention rule can be expressed as this section , we derive analytical results about sustainable action profiles and intervention equilibria imposing the following assumption . [ass : permon ] there exist such that for all , and can be interpreted as the minimal and maximal intervention actions of the intervention device , respectively . for given ,the users and the intervention device receive the highest ( resp .lowest ) payoff when the intervention device takes the minimal ( resp .maximal ) intervention action .this allows the intervention device to reward or punish all the users at the same time .we first characterize the set of sustainable action profiles , .the following class of intervention rules is useful to characterize . is an _ extreme intervention rule with target action profile _ if note that an extreme intervention rule uses only the two extreme points of . with an extreme intervention rule ,the intervention device chooses the most preferred action for the users when they follow the target action profile while choosing the least preferred action when they deviate .hence , an extreme intervention rule provides the strongest incentive for sustaining a given target action profile , which leads us to the following lemma . if , then .suppose that .then there exists an intervention rule such that for all , for all .then we obtain for all , for all , where the first and the third inequalities follow from .let be the set of all extreme intervention rules , i.e. , . also , define . by applying lemma 1 ,we can obtain the following results .[ prop : char ] ( i ) if and only if for all , for all .+ ( ii ) .+ ( iii ) if is an intervention equilibrium , then is also an intervention equilibrium .\(i ) suppose that for all , for all .then sustains , and thus .the converse follows from lemma 1 .\(ii ) follows from , while follows from lemma 1 .\(iii ) suppose that is an intervention equilibrium. then by definition [ def : ie ] , sustains , and for all such that . since , by lemma 1hence , . on the other hand , since , we have by . 
therefore , , and thus for all such that .this proves that is an intervention equilibrium .theorem 1 shows that there is no loss of generality in three senses when we restrict attention to extreme intervention rules .first , in order to test whether there exists an intervention rule that sustains a given action profile , it suffices to consider only the extreme intervention rule having the action profile as its target action profile .second , the set of action profiles that can be sustained by an intervention rule remains the same when we consider only extreme intervention rules .third , if there exists an optimal intervention rule , we can find an optimal intervention rule among extreme intervention rules .note that the role of extreme intervention rules is analogous to that of trigger strategies in repeated games with perfect monitoring . to generate the set of equilibrium payoffs, it suffices to consider trigger strategies that trigger the most severe punishment in case of a deviation . under assumption[ ass : permon ] , the maximal intervention action plays a similar role to mutual minmaxing in that it provides the strongest threat to deter a deviation .the next theorem provides a necessary and sufficient condition under which an extreme intervention rule together with its target action profile constitutes an intervention equilibrium .[ prop : charie ] is an intervention equilibrium if and only if and for all .suppose that is an intervention equilibrium. then sustains , and thus .also , for all such that .choose any .then by lemma 1 , sustains , and thus .suppose that and for all . to prove that is an intervention equilibrium , we need to show ( i ) sustains , and ( ii ) for all such that .since , ( i ) follows from lemma 1 . to prove ( ii ) , choose any such that .then , where the first inequality follows from .theorem [ prop : charie ] implies that if we obtain an action profile such that , we can use it to construct an intervention equilibrium and thus an optimal intervention rule .in this section , we discuss an example to illustrate the results in section 3 . consider a wireless network with two users and an intervention device interfering with each other .the action of user is its usage level , where ] .the payoff of user is given by the product of the quality received and its usage level , the system objective is given by social welfare , which is defined as the sum of the payoffs of the users , note that if there is no intervention device ( i.e. , if is held fixed at 0 ) , the example is identical to the cournot duopoly model with a linear demand function and zero production cost .the corresponding cournot duopoly game achieves the symmetric social optimum at while it has the unique cournot - nash equilibrium at , as depicted in figure [ fig : cournot ] .hence , the goal of the manager is to improve upon the inefficient outcome by introducing the intervention device in the network . given the structure of the intervention game in this example , the capability of the intervention device is determined by its maximum intervention level . in the following ,we investigate sustainable action profiles and those that constitute an intervention equilibrium as we vary . if the intervention device can not affect the payoffs of the users ( ) , the non - cooperative outcome is the only sustainable action profile that is consistent with the self - interest of the users . 
on the other hand ,if the intervention device can apply a sufficiently high intervention level ( ) , it has the ability to degrade the quality to zero no matter what action profile the users choose .since the payoffs of the users are non - negative , the punishment from using is strong enough to make every action profile sustainable . we can also find a condition on that enables to sustain the symmetric social optimum . with , is sustainable and thus is an intervention equilibrium by theorem 2 .figure [ fig : a0vary ] plots the set for six different values of with parameters , and .we can see that expands as increases , starting from a single point when to the entire space when .when , only the action profile that is closest to among those in constitutes an intervention equilibrium .when , the action profiles in that satisfies constitute an intervention equilibrium , as all of them yield the maximum social welfare .
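the sustainability check behind this example is easy to reproduce numerically . the python sketch below uses an assumed payoff structure ( usage level times a channel quality that decreases linearly in the total usage and in the intervention level , so that zero intervention recovers the standard cournot duopoly above ) ; the actual payoff expressions and parameter values of the paper are not reproduced , so the printed numbers are only indicative . the function checks , by grid search over unilateral deviations , whether the extreme intervention rule with a given target profile and maximal intervention level sustains that profile .

```python
import numpy as np

def payoff(i, x0, x):
    """Assumed payoff of user i: usage times quality, where the quality
    max(0, 1 - x0 - x1 - x2) is degraded by the intervention level x0."""
    quality = max(0.0, 1.0 - x0 - x[0] - x[1])
    return x[i] * quality

def sustainable(target, a0_max, grid=np.linspace(0.0, 1.0, 1001)):
    """True if the extreme intervention rule with this target profile sustains it:
    no unilateral deviation, met with the maximal intervention level a0_max,
    pays more than complying (which is met with zero intervention)."""
    for i in (0, 1):
        on_path = payoff(i, 0.0, target)
        for xi in grid:
            deviation = list(target)
            deviation[i] = xi
            if payoff(i, a0_max, deviation) > on_path + 1e-12:
                return False
    return True

nash = (1.0 / 3.0, 1.0 / 3.0)          # cournot-nash equilibrium of the no-intervention game
social_optimum = (0.25, 0.25)          # symmetric social optimum

print(sustainable(nash, 0.0))              # True even without any intervention capability
print(sustainable(social_optimum, 0.0))    # False: each user gains by over-using
for a0_max in np.linspace(0.0, 1.0, 101):    # smallest punishment level (coarse grid)
    if sustainable(social_optimum, a0_max):  # that sustains the social optimum
        print(a0_max)
        break
```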
|
this paper studies a class of incentive schemes based on intervention , where there exists an intervention device that is able to monitor the actions of users and to take an action that affects the payoffs of users . we consider the case of perfect monitoring , where the intervention device can immediately observe the actions of users without errors . we also assume that there exist actions of the intervention device that are most and least preferred by all the users and the intervention device , regardless of the actions of users . we derive analytical results about the outcomes achievable with intervention , and illustrate our results with an example based on the cournot model .
|
in the context of geobiochemical models , so - called production - destruction equations are frequently encountered .these models describe the time - evolution of non - negative quantities and often take into account some type of mass conservation .the underlying ode systems describing the time - evolution of non - negative quantities can usually be written in the form for , with production terms and destruction terms such that for all and .usually , we also have for all and some mass conservation property such as . in vector form, we write with matrix - valued functions such that for all . setting ,this is written more shortly in the standard quasi - linear form .while many interesting biochemical reactions fit into this framework , it also includes certain space - discretized partial differential equations , e.g. the heat equation discretized by second - order differences and the first - order upwind - discretized advection equation . in the context of shallow water flows discretized by the discontinuous galerkin method , a production - destruction approach as in guarantees non - negativity of the water height for any time step size while still preserving conservativity . in that work ,the production - destruction equations where specifically formulated in order to account for the production and destruction terms which influence the cell - wise water volume .generally , numerical methods discretizing ( [ eq : pq1 ] ) are supposed to be positivity preserving , conservative and of sufficiently high order . while positivity preservation and conservativity may be directly carried over from the context of odes to that of pdes ,the issue of consistency and convergence is more subtle for pdes . in this work ,we hence take a closer look at the local discretization error of patankar - type methods applied to systems arising from linear pdes .the forward euler method applied to ( [ eq : pq1 ] ) will obviously be positivity preserving if we have , but this requires a very severe time step restriction on for stiff systems . to avoid this ,a variant was proposed by patankar , originally in the context of source terms in heat transfer .this method given by is unconditionally positivity preserving but not mass conserving .in addition , while ( [ eq : eulpata ] ) is of order one in the ode sense , consistency is lost for stiff problems such as the discretized heat equation .in fact , the semi - discrete 1d heat equation for , with spatial periodicity , i.e. and fits in the form ( [ eq : pq1 ] ) with diagonal destruction matrix . the patankar - euler scheme ( [ eq : eulpata ] ) , written out per component , now reads .this scheme is unconditionally positivity preserving as well as unconditionally contractive in the maximum norm .however , inserting exact pde solution values in the scheme , we obtain . hence , taylor development shows that for small and the leading term in these local truncation errors is given by .it follows that the scheme will only be convergent in case of a very severe time step restriction of . in order to obtain an unconditionally positive and additionally mass conservative scheme for production - destruction equations, the modification has been proposed in . for linear problems with constant matrix , such as the linear heat equation, this modified method now reduces to the implicit euler method , so consistency in pde sense is not a problem there . 
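the schemes discussed so far are straightforward to implement for a production - destruction system . the python sketch below writes one step of the patankar - euler scheme and of its mass - conservative modification in the per - component form commonly used in the production - destruction literature ; the two - constituent linear test problem , the step size and the initial state are assumptions made for this illustration and do not come from this paper .

```python
import numpy as np

def production(c):
    """Assumed linear test problem: d c1/dt = c2 - 5 c1, d c2/dt = 5 c1 - c2.
    p[i, j] is the rate at which constituent j is converted into constituent i."""
    return np.array([[0.0, c[1]],
                     [5.0 * c[0], 0.0]])

def patankar_euler_step(c, tau):
    """Patankar-Euler: destruction terms are weighted by c_new/c_old, which keeps
    the update positive for any tau, but the scheme is not mass conservative."""
    p = production(c)
    d = p.T                       # destruction matrix, d[i, j] = p[j, i]
    return (c + tau * p.sum(axis=1)) / (1.0 + tau * d.sum(axis=1) / c)

def modified_patankar_euler_step(c, tau):
    """Modified Patankar-Euler: production terms are weighted as well, leading to
    a small linear system; the update is positive and conserves the total mass."""
    p = production(c)
    d = p.T
    n = len(c)
    a = np.eye(n)
    for i in range(n):
        a[i, i] += tau * d[i, :].sum() / c[i]
        for j in range(n):
            if j != i:
                a[i, j] -= tau * p[i, j] / c[j]
    return np.linalg.solve(a, c)   # column sums of a equal one, hence conservativity

c = np.array([0.9, 0.1])
tau = 0.25
for _ in range(40):
    c = modified_patankar_euler_step(c, tau)
print(c, c.sum())                  # c approaches the steady state, c.sum() stays 1
```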
in ,also a second - order method has been proposed .we will refer to this method as mpark2 .this scheme does not fit directly in the vector production - loss formulation and thus has to be written per component , starting with the quasi - linear form .the mpark2 method is then based on the trapezoidal rule with an euler - type prediction to provide the internal stage value and reads as shown in , this scheme is unconditionally positivity preserving and mass conserving , and the order is two in the ode sense .however , it is unknown whether there will be order reduction for stiff problems , in particular for semi - discrete problems obtained from pdes after space discretization . regarding the local discretization error , consistency of can be proven for sufficiently smooth exact solutions .this is dealt with in the next section .we will study error recursions for the mpark2 method applied to linear problems with constant coefficients .these are naturally non - linear for this method , even for linear equations . for a linear problem with , we will first write ( [ eq : mprk ] ) in vector form by introducing the diagonal matrix . then( [ eq : mprk ] ) can be written compactly as along with this , we also consider the scheme with the exact solution inserted , where and .subtraction of ( [ eq : mprkv ] ) from ( [ eq : pmprkv ] ) gives a recursion for the global discretization errors of the form , with amplification matrix and local errors given by with the matrix given by .the difference between and its counterpart resulting from the implicit trapezoidal rule can be determined from thus , the term represents the difference in local errors between the implicit trapezoidal rule and the mpark2 scheme . for the diagonal matrices, we have and .in addition , if we assume then is bounded for , i.e. .semi - discrete linear advection and linear diffusion a reasonable assumption in the case of the semi - discrete linear advection with ] as shown in fig .[ fig : smooth ] and table [ tab : loc_err ] . as the diagonal matrix in the definition of can be bounded by , a smoothness condition on the exact solution of the type guarantees and hence a local error of third order . the left part of fig .[ fig : smooth ] depicts the situation for an initial solution which satisfy the smoothness condition . here , the quantities approximate 1 for with .consequently , the difference between the quantities and is small leading to basically constant smoothness indicators as shown in table [ tab : loc_err ] .this table also lists the local error in the first mpark2 step . in accordance with the designed order of convergence , this local error behaves as . on the other hand , for an initial solution of , table [ tab : loc_err ] shows an order reduction to about in addition to an increasing value of the indicator . 
as depicted in fig . [ fig : smooth ] , this behavior is due to the fact that for , we also have . values at nearby grid points will tend to 1 for while zeros of remain unchanged . this leads to the boundary layer effect for visible in the right part of fig . [ fig : smooth ] as well as a locally large difference in curvature between and . ( table [ tab : loc_err ] : effect of the smoothness condition ( [ eq : cond_smooth ] ) on the error of consistency for the mpark2 scheme . ) so far , these investigations show a local discretization error of order for sufficiently smooth and positive solutions . this positivity requirement includes thin - layer approaches for the shallow water equations , where a thin film of water is retained also in regions marked as dry . however , we should remark that for a full convergence analysis , stability has to be proven as well . this necessitates boundedness of products of amplification matrices which seems to be quite difficult to prove due to the non - linearity of the method . finally , we also consider a modification to the mpark2 scheme which follows more closely the approach in . this modification is based on a direct correction of the explicit part of the implicit trapezoidal rule and reads as with determined by the correction to the quantity which may have negative components . more precisely , is given by if and otherwise . we will denote this scheme by mpark2ex . due to this switch in case of vanishing components , we can not expect an overall second order of convergence as the update reduces to two steps of the implicit euler scheme if . however , for a test case of an advected wave mimicking wetting and drying , i.e. advection of the initial condition , this method behaves much better than mpark2 as shown in fig . [ fig : patswe ] . a comparison of the patankar - type schemes is carried out for the upwind - discretized linear advection on grid points using spatial periodicity up to a final time of . as shown on the left of fig . [ fig : patswe ] , using a time step of corresponding to a courant number of does not exhibit significant differences between the schemes mpark2 and mpark2ex , also in comparison to the implicit trapezoidal rule . however , a larger time step of corresponding to a courant number of shows the drawback of mpark2 on the right part of fig . [ fig : patswe ] . while mpark2 does not account for the vanishing solution in the interval and the implicit trapezoidal rule clearly yields negative values , the modified scheme mpark2ex seems to combine the best features of both methods . the solution is non - negative in the whole computational domain and very accurate in the almost dry regions . hence , this method seems quite promising and should be further investigated , in particular with respect to its stability . ( figure [ fig : patswe ] , left and right panels : comparison of mpark2 to mpark2ex . )
Series in Computational Methods in Mechanics and Thermal Sciences, Hemisphere Pub., New York, Washington, 1980. Meister, A., Ortleb, S.: On unconditionally positive implicit time integration for the DG scheme applied to shallow water flows. International Journal for Numerical Methods in Fluids *76*, 69–94 (2014)
|
We study the local discretization error of Patankar-type Runge-Kutta methods applied to semi-discrete PDEs. For a known two-stage Patankar-type scheme, the local error in the PDE sense for linear advection or diffusion is shown to be of the maximal order for sufficiently smooth and positive exact solutions. However, in a test case mimicking a wetting-drying situation as in the context of shallow water flows, this scheme yields large errors in the drying region. A more realistic approximation is obtained by a modification of the Patankar approach incorporating an explicit testing stage into the implicit trapezoidal rule. Patankar-type Runge-Kutta schemes for linear PDEs. S. Ortleb, W. Hundsdorfer
|
weather forecasting is usually based on outputs of numerical weather prediction ( nwp ) models that represent the dynamical and physical behaviour of the atmosphere . based on actual weather conditions ,the equations are employed to extrapolate the state of the atmosphere , but may strongly depend on initial conditions and other uncertainties of the nwp model .a methodology that accounts for such shortcomings is the use of ensemble forecasts by running the nwp model several times with different initial conditions and/or model formulations .forecast ensembles play an important role when it desired to develop methods that transfer from deterministic to probabilistic forecasting , since information about forecast mean and variance may be extracted from several individual forecasts . in practise , however , ensemble prediction systems are not able to capture all sources of uncertainty , thus they often exhibit dispersion errors ( underdispersion ) and biases .statistical postprocessing models correct the forecasts in accordance with recent forecast errors and observations and yield full predictive probability distributions ( see e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?there are two widely used approaches in statistical postprocessing that yield full predictive distributions based on a forecast ensemble and verifying observations . in the bayesian model averaging ( bma , ) approach each ensemble member is associated with a kernel density ( after suitable bias correction ) and the individual densities are combined in a mixture distribution with weights that express the skill of the individual ensemble members .ensemble model output statistics ( emos , ) combines the ensemble members in a multiple linear regression approach with a single predictive distribution . and apply the postprocessing methods to weather quantities , where a normal distribution can be assumed as underlying model , such as temperature and pressure . for the application to other weather quantitiesalternative distributions are required .an overview on existing variants of emos and bma can for example be found in and . in cases where various probabilistic forecasts from different sources are available ,combining these forecasts can improve the predictive ability further .the individual forecasts might for example ( as in our application ) come from different competing statistical postprocessing models .the most widely used method to combine the individual predictive distributions is the linear pool ( lp ) , see for example , and references therein for reviews on the topic . show that the linear pool results in an overdispersed predictive distribution , regardless whether the individual components are calibrated or not .a more flexible and non - linear approach is the spread - adjusted linear pool ( slp ) .for example , and empirically observed overdispersion of the linear combined forecasts in case of approximately neutrally dispersed gaussian components and they introduced a non - linear spread - adjusted combination approach , that was generalized and discussed by .further , and propose another flexible non - linear aggregation method , the beta - transformed linear pool ( blp ) , resulting in highly improved dispersion properties .although weather prediction implies temporal structures and dependencies , we were unable to find explicit applications of time series methods in the context of ensemble postprocessing methods . 
on the contrary , there are several approaches to apply time series models directly on observations of a weather quantity or other environmental and climatological quantities .to name only a few , apply an model to transformed wind speed data in order to simulate and predict wind speed and wind power observations . investigate the problems when fitting arma models to meteorological time series and , as an example , present an application to time series of the palmer drought index . fit specific arma models to surface pressure data . apply the box - jenkins modelling technique to monthly activity of temperature inversions .we propose to apply time series models in the context of ensemble postprocessing . to account for an autoregressive ( ar ) structure in timewe construct a predictive distribution based on an ar - adjusted forecast ensemble ( local ar - emos ) .as the standard local emos predictive distribution shows signs of underdispersion and our local ar - emos distribution on the contrary clearly exhibits overdispersion we propose to combine ar - emos and emos with a spread - adjusted linear pool .the structure of the paper is the following : section [ sec : methods ] presents our proposed ar modification of the forecast ensemble and briefly reviews the emos model and the spread - adjusted linear combination of probabilistic forecasts .sections [ sec : applicationtemp ] and [ sec : applicationsingle ] illustrate some possible applications of our basic ar - modification method in a case study with temperature forecasts of the european center for medium range weather forecasts ( ecmwf ) .we end the paper with some concluding remarks and a discussion of further extensions of our method . introduced the ensemble model output statistics ( emos ) model for the case , that a normal distribution can be assumed as model for the weather quantity . in all following sections we use a notation depending on the time index , to stress the fact that we explicitly model temporal dependencies through our autoregressive adjustment approach .the predictive distribution is obtained by fitting the following multiple linear regression model to the observation of the weather quantity and the forecast ensemble : where are real valued regression coefficients that can be interpreted as bias - correction coefficients , and is a normally distributed error term with . for convenience we define , yielding the representation of the emos model . in case of an exchangeable ensemble ,the multiplicative bias correction parameters are chosen to be equal , that is , for . in this case , the emos model is given as where and we can define . the variance of is parameterized as linear function of the ensemble variance to account for dispersion errors in the raw ensemble : where and are nonnegative coefficients .models and both yield the following predictive emos distribution : the parameters of the distribution are estimated from a training period preceding the forecast time and re - estimated for each forecast time using a rolling training period of fixed length . to estimate the parameters by optimizing the continuous ranked probability score ( crps , ) over the training data .this estimation procedure is implemented in the r package ensemblemos .two variants of estimating the emos parameters exist , the global and the local emos approach . for global emos only one set of parameters is estimated , while for global emos a separate set is estimated at each station . 
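As an illustration of the minimum-CRPS estimation just described, the following is a minimal sketch, in Python rather than the R package ensembleMOS mentioned above, of fitting the exchangeable-member EMOS form with predictive mean a + b·(ensemble mean) and predictive variance c + d·(ensemble variance). The closed-form CRPS of a normal distribution is standard; the optimizer, the square-root parameterization that keeps c and d nonnegative, and all function and variable names are choices of this sketch, not of the paper or of ensembleMOS.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def fit_emos(ens, obs):
    """Fit mu = a + b*mean(ens), var = c + d*var(ens) by minimum CRPS.

    ens: (n_days, m) training ensemble, obs: (n_days,) verifying observations.
    c and d are kept nonnegative by optimizing their square roots."""
    f_mean, f_var = ens.mean(axis=1), ens.var(axis=1)

    def mean_crps(theta):
        a, b, sc, sd = theta
        mu = a + b * f_mean
        sigma = np.sqrt(sc**2 + sd**2 * f_var) + 1e-8
        return crps_normal(mu, sigma, obs).mean()

    res = minimize(mean_crps, x0=np.array([0.0, 1.0, 1.0, 1.0]),
                   method="Nelder-Mead")
    a, b, sc, sd = res.x
    return a, b, sc**2, sd**2

# Rolling-training usage (placeholder arrays): refit for every forecast day.
# a, b, c, d = fit_emos(ens_train, obs_train)
# mu_today  = a + b * ens_today.mean()
# var_today = c + d * ens_today.var()
```

In the local variant this fit is simply repeated station by station on that station's own training data, whereas the global variant pools all stations into one fit.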
for our case study we consider a station - wise approach and therefore employ the local emos variant .+ let again denote an ensemble of forecasts for a univariate ( normally distributed ) weather quantity at a fixed location .let denote a deterministic - style forecast of with corresponding forecast error simple examples for are provided by any ensemble member itself , the raw ensemble mean , or the raw ensemble median denoted by .if is a one - step - ahead forecast made at origin , then may be seen as appropriate , if the time series can be viewed as a white noise process .if there is some indication that this property is violated , one may readily assume that the series follows a weakly stationary ar process , i.e. + \varepsilon(t)\ ; , \ ] ] where is white noise .combing ( [ e1 ] ) and ( [ e2 ] ) gives , where \ ] ] can be seen as an ar modified forecast based on the actual forecast and past values and , .the coefficients , , , may be obtained by fitting an process to the observed error series from a training period , where the order of the process can automatically be chosen by applying an appropriate criterion .this includes the incidence , in which case is a simple bias correction of . for the actualfitting we employ yule - walker estimation as carried out by the r function ar , see also ( * ? ? ?* section 3.6 ) .order selection is done by the aic criterion , despite the circumstance that the estimated coefficients are not the maximum likelihood estimates .the described approach differs from the times series methods introduced e.g. by in the sense that it does _ not _ aim at directly modelling a weather quantity itself .the basic ar modification method can be employed for different purposes in ensemble postprocessing , depending on the need of the user .for example it can simply be used to obtain an ar - modified raw ensemble , or to build a postprocessed predictive distribution based on the modified ensemble .the following sections illustrate some possible applications of ar modified ensemble forecasts by means of a given data set , where we analyze the aggregated predictive performance and the performance at a single station .an ar - emos predictive distribution for temperature is introduced and shown to improve upon the well - established local emos method with respect to certain verification scores .see e.g. as a reference for applied statistical methods for atmospheric sciences and for a comprehensive review of probabilistic forecasting .let denote the class of non - random cumulative distribution functions ( cdfs ) that admit a lebesque density , have support on and are strictly increasing on .further , denote the considered cdfs with lebesgue densities , where is an arbitrary but finite integer value. then the spread - adjusted linear pool ( slp ) combined predictive distribution with spread - parameter has cdf ( see ) and density where are non - negative weights with , is the spread - adjustment parameter and is the unique median of . further and are defined via the relationship and , respectively . note that for neutrally dispersed or overdispersed components , a value of may be appropriate , while for underdispersed components , a value is suggested .a value of corresponds to the standard linear pool .typically a common spread parameter is used for all components , although the method can be generalized to have spread parameters varying with the components . however, this may be appropriate only in case the dispersion properties of the components differ to a high extent . 
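Before turning to the case-study data, here is a minimal, self-contained sketch of the AR modification of a deterministic-style forecast described earlier in this section: fit an AR(p) process to the series of past forecast errors by Yule-Walker, select the order with an AIC-type criterion, and add the predicted error to the raw forecast. It mimics, but does not reproduce, the R function ar used in the paper; the hand-rolled AIC approximation, the maximum order, the sign convention of the error, and the function names are assumptions of this sketch. If the selected order is zero, the correction reduces to a simple bias correction, as noted above.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def acov(x, lag):
    """Sample autocovariance at a given lag (1/n normalization)."""
    x = x - x.mean()
    n = len(x)
    return (x[lag:] * x[:n - lag]).sum() / n

def fit_ar_yule_walker(e, max_order=5):
    """Fit AR(p) to the error series e by Yule-Walker; choose p by an
    AIC-type criterion.  Returns (order, coefficients, series mean)."""
    n, best = len(e), None
    gam = np.array([acov(e, k) for k in range(max_order + 1)])
    for p in range(max_order + 1):
        if p == 0:
            phi, sigma2 = np.array([]), gam[0]
        else:
            phi = solve_toeplitz(gam[:p], gam[1:p + 1])
            sigma2 = gam[0] - phi @ gam[1:p + 1]
        aic = n * np.log(max(sigma2, 1e-12)) + 2.0 * p
        if best is None or aic < best[0]:
            best = (aic, p, phi)
    _, p, phi = best
    return p, phi, e.mean()

def ar_modified_forecast(f_next, e_train, max_order=5):
    """One-step AR correction: predict the next forecast error from its own
    past and add it to the raw deterministic-style forecast f_next."""
    p, phi, mu = fit_ar_yule_walker(e_train, max_order)
    x = e_train - mu
    e_hat = mu + sum(phi[k] * x[-(k + 1)] for k in range(p))
    return f_next + e_hat

# usage on placeholder data, with e(t) = obs(t) - forecast(t):
# e_train = obs_train - ens_mean_train
# f_tilde = ar_modified_forecast(ens_mean_tomorrow, e_train)
```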
+the data set for our case study comprises an ensemble with members ( and one control forecast not used here ) of the european centre for medium - range weather forecasts ( ecmwf , see e.g. ) .initialized at 00 utc , the forecasts are issued on a grid with 31 km resolution , and they are available for forecast horizons in 3 hour steps up to 144 hours . in germany 00 utc corresponds to 1 am local time , and to 2 am local time during the daylight saving period . for our analysiswe consider -h ahead forecasts for 2-m surface temperature in germany along with the verifying observations at different stations in the time period ranging from 2010 - 02 - 02 to 2011 - 04 - 30 .although there is a total of 518 stations in the full data set , only stations with complete observations for the variables were retained . to use the forecasts in combination with globally distributed surface synoptic observations ( synop ) data ,they are bilinearly interpolated from the four surrounding grid points to the locations that correspond to actual observation stations .the observation data from stations in germany were provided by the german weather service ( dwd ) .figure [ fstations ] shows the locations of the 383 stations within germany , where the station frankfurt a.m. is marked as a filled circle . for each of the 383 stations we compute the series of forecast errors of the ensemble mean , where ranges over the whole time period . to check for independence ,we apply the ljung - box test based on lag 1 , which is available in r as function box.test .all 383 computed p - values are not greater than 0.046 ( the largest occurring value ) , indicating substantial autocorrelation in the forecast error series for each station .figure [ f3 ] further illustrates this point by showing the series of temperature observations together with the ensemble mean , the corresponding forecast errors , and the autocorrelation function ( acf ) of the series of forecast errors for the randomly chosen station ruppertsecken in rhineland - palatinate . to account for autocorrelation in a series of forecast errors ,the following subsections present an approach to obtain a predictive distribution based on the ar modification method ( ar - emos ) , while section [ sec : applicationsingle ] discusses ar - emos for the station frankfurt a.m. with regard to a much longer than before verification period of 3650 days .when the mean of the raw ensemble is considered as a deterministic - style forecast , a corresponding ar modified forecast can be generated according to the procedure described in section [ sec : ar ] . for computing the order and the coefficients of the ar process, a training period of days previous to the forecast day is considered . for each day out of the remaining days from the forecast ( verification ) period , the ar coefficients are newly estimated , where the training period is shifted accordingly ( rolling training period ) . when choosing an appropriate length of training period , there is usually a trade - off .longer training periods yield more stable estimates , but may fail to account for temporal changes , while shorter periods can adapt more successfully to changes over time . 
to learn about an appropriate training length , a variety of possible values , namely , were considered in a preliminary analysis , and for each station the mean absolute error of the ar modified forecast has been computed .as it turned out , a training length of days provided a favorable performance with respect to the mae averaged over the considered stations .this number of days will be used as training length for fitting the ar process in all further analyses ..maes for days averaged over 383 stations . [cols="<,>,>,>,>,>,>",options="header " , ] figure [ f2 ] shows the verification rank histogram of the raw ensemble and the pit histograms of the three methods over 3650 days , behaving similar to the pit histograms computed over all 383 stations in figure [ f1 ] . particularly the raw ensemble and local emos exhibit an additional forecast bias .while the raw ensemble tends to underestimate the temperatures at frankfurt a.m. , local emos tends to overestimate it .the hump shape of the local ar - emos predictive distribution already seen in figure [ f1 ] , is even more pronounced when investigating the pit values of a single station . as in figure[ f1 ] , our slp combination is closest to uniformity , although the first bin is more occupied than in the overall pit histogram of figure [ f1 ] .these observations are in line with the results of table [ t5 ] , showing the variance of the pit values and the root mean variance as a measure of sharpness .similar to the situation presented in the left panel , the variance of the local emos pit values is larger than , indicating underdispersion , while the variance of the local ar - emos pit values is smaller than , indicating overdispersion . when considering the pit values only for the station frankfurt a.m. , the variance of the pit values is even smaller than in the overall case ( table [ t4 ] ) , corresponding to the more pronounced hump - shape .the variance of the pit values of the slp combination differs only slightly from , indicating that the pit histogram is close to uniformity .the sharpness properties of the predictive distribution directly correspond to those in table [ t4 ] . while local emos is sharpest and local ar - emos is least sharpest , our slp combination has a sharpness value between the other two .the diebold - mariano test statistic introduced in section [ sec : dmtest ] for comparing the predictive performance of local emos with our derived slp combination ( here applied to the crps series at the station frankfurt ) is for , thereby indicating a highly significant nonequal predictive performance of the two methods .in this work , we propose a basic ar - modification method that accounts for potential autoregressive structures in forecast errors of the raw ensemble .the ar - modification is straightforward to compute by standard ` r ` functions to fit ar processes and can be utilized in the context of ensemble forecasts in different ways .it can simply be employed to obtain an adjusted raw ensemble of forecasts that is corrected for autoregressive structures or it can be used to construct different types of predictive distributions . to this end , our proposed modification is simple and yet effective . 
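The two diagnostics used in this evaluation, the variance of the PIT values (which equals 1/12 for a perfectly calibrated, i.e. uniform, PIT) and the Diebold-Mariano statistic computed on daily CRPS series, can be sketched as follows. The truncation lag and the simple autocovariance-based variance estimate in the DM statistic are choices of this sketch and need not coincide with the estimator used in the paper; the arrays in the usage comments are placeholders, not real data.

```python
import numpy as np
from scipy.stats import norm

def pit_values(mu, sigma, obs):
    """PIT of normal predictive distributions; uniform on [0, 1] if
    calibrated, with variance 1/12.  A larger variance signals
    underdispersion, a smaller one overdispersion (hump-shaped histogram)."""
    return norm.cdf(obs, loc=mu, scale=sigma)

def diebold_mariano(s1, s2, max_lag=1):
    """DM statistic for equal predictive performance from two score series
    (e.g. daily CRPS of two postprocessing methods).  Uses a truncated
    autocovariance estimate of the variance of the mean score difference;
    asymptotically standard normal under the null."""
    d = np.asarray(s1) - np.asarray(s2)
    n, dbar = len(d), np.mean(s1 - s2) if False else np.mean(d)
    dc = d - dbar
    var = (dc @ dc) / n
    for k in range(1, max_lag + 1):
        var += 2.0 * (dc[k:] @ dc[:-k]) / n
    return dbar / np.sqrt(var / n)

# usage with placeholder verification arrays:
# pit = pit_values(mu_emos, sigma_emos, obs);  print(pit.var(), 1.0 / 12.0)
# t = diebold_mariano(crps_emos, crps_slp)
# p = 2.0 * (1.0 - norm.cdf(abs(t)))
```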
in our case studywe suggest to built an emos - like predictive distribution based on the ar - adjusted ensemble and in a second step obtain an aggregated predictive distribution that comprises of the state - of - the - art emos predictive distribution and our ar - emos variant .as emos is a well - established standard postprocessing procedure that is easy to compute , our proposed extension can be easily constructed .further , the approach is neither restricted to a specific postprocessing model nor to temperature forecasts .a modification of the approach allowing to combine the method with other standard postprocessing models such as bma should be straightforward .our ar - modification approach allows for a flexible bias - correction based on fitting ar process to the error series .the method identifies cases where a correction for autoregressive structures is indicated and performs a simple bias - correction based on past values in cases where no substantial autocorrelation is present . in the considered casestudy all our derived variants based on the ar - modification approach improve on the standard emos method .while the improvement of the ar - emos predictive distribution on the standard emos distribution is only small and ar - emos still lacks calibration , the slp combined predictive distribution based on emos and ar - emos shows a pit histogram close to uniformity and the improvement on standard emos with respect to the verification scores is highly significant . in line with the recently increased interest in multivariate postprocessing models that yield physically coherent forecasts , it should be of interest to extend our method to this field of research . an approach that allows to retain dependence structures with low computational costis the ensemble copula coupling ( ecc ) method introduced by .ecc is able to recover temporal , spatial and inter - variable dependencies present in the raw ensemble by reordering samples from the predictive distributions according to the rank structure of the raw ensemble .it is a flexible and computationally efficient method , as one needs to compute simply the rank order structure of the raw ensemble .a combination e.g. of our slp predictive distribution with ecc would be straightforward and easy to compute .this procedure can account for spatial dependence structures between the stations not considered by our station - wise approach and it may even be able to recover additional temporal dependencies from the raw ensemble , that are not explicitly modelled by our ar - approach that only considers the autoregressive structure in the errors . while ecc is a nonparametric method that can recover different types of multivariate structures simultaneously ,there are several parametric approaches to incorporate spatial or inter - variable dependencies . and for example propose spatially adaptive extensions of the basic bma method , while , and investigate different ways of extending emos to incorporate spatial dependencies .there is also an interest to investigate inter - variable dependencies . 
For example, a bivariate EMOS model for wind vectors and a bivariate BMA model for temperature and wind speed have been developed. Further, a general multivariate setting has been investigated that allows to combine arbitrary univariate postprocessing distributions within a Gaussian copula framework. The development of multivariate postprocessing models is a very active area of research, and an extension of our proposed AR modification that also incorporates spatial or inter-variable dependencies should be highly beneficial.
We are grateful to the European Centre for Medium-Range Weather Forecasts (ECMWF) and the German Weather Service (DWD) for providing forecast and observation data, respectively. We wish to thank Tilmann Gneiting for useful discussions and helpful comments.
Gneiting T, Raftery AE, Westveld III AH, Goldman T. 2005. Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. _Monthly Weather Review_ *133*: 1098–1118.
Grimit EP, Gneiting T, Berrocal V, Johnson NA. 2006. The continuous ranked probability score for circular variables and its application to mesoscale forecast ensemble verification. _Quarterly Journal of the Royal Meteorological Society_ *132*: 2925–2942.
Hemri S, Scheuerer M, Pappenberger F, Bogner K, Haiden T. 2014. Trends in the predictive performance of raw ensemble weather forecasts. _Geophysical Research Letters_ *41*: 9197–9205, doi:10.1002/2014GL062472.
Kleiber W, Raftery AE, Baars J, Gneiting T, Mass C, Grimit EP. 2011. Locally calibrated probabilistic temperature forecasting using geostatistical model averaging and local Bayesian model averaging. _Monthly Weather Review_ *139*: 2630–2649.
Milionis AE, Davies TD. 1994. Box-Jenkins univariate modelling for climatological time series analysis: an application to the monthly activity of temperature inversions. _International Journal of Climatology_ *14*: 569–579.
Möller A, Lenkoski A, Thorarinsdottir TL. 2013. Multivariate probabilistic forecasting using ensemble Bayesian model averaging and copulas. _Quarterly Journal of the Royal Meteorological Society_ *139*: 982–991.
Scheuerer M, König G. 2014. Gridded, locally calibrated, probabilistic temperature forecasts based on ensemble model output statistics. _Quarterly Journal of the Royal Meteorological Society_ *140*: 2582–2590.
|
To address the uncertainty in the outputs of numerical weather prediction (NWP) models, ensembles of forecasts are used. To obtain such an ensemble of forecasts, the NWP model is run multiple times, each time with different formulations and/or initial or boundary conditions. To correct for possible biases and dispersion errors in the ensemble, statistical postprocessing models are frequently employed. These statistical models yield full predictive probability distributions for a weather quantity of interest and thus allow for a more accurate assessment of forecast uncertainty. For temperature forecasts, this paper proposes to combine the state-of-the-art ensemble model output statistics (EMOS) with an ensemble that is adjusted by an AR process fitted to the respective error series, using a spread-adjusted linear pool (SLP). The basic ensemble modification technique we introduce may be used simply to adjust the ensemble itself as well as to obtain a full predictive distribution for the weather quantity. As demonstrated for temperature forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, the proposed procedure improves upon the basic (local) EMOS method.
|
one of the characteristic features of geophysical flows ( see for instance ) is stratification ( the other one is rotation ) . in this manuscript, we study some problems related to suspensions of heavy particles in incompressible slightly compressible- fluids .our aim is a better understanding of mixing phenomena between the two phases , the fluid and solid one .we especially study this problem because ( turbulent ) mixing with stratification plays a fundamental role in the dynamics of both oceanic and atmospheric flows . in this study , we perform the analysis of some models related to the transport of heavy dilute particles , with special emphasis on their mixing . observe that mixing is very relevant near the surface and the bottom of the ocean , near topographic features , near polar and marginal seas , as well as near the equatorial zones . especially in coastal waters , precise analysis of transport and dispersion is needed to study biological species , coastal discharges , and also transport of contaminants .the other main motivation of our study is a better understanding of transport of particles ( e.g. dust and pollution ) in the air .this happens -for instance- in volcanic eruptions or more generally by natural and/or human generation of jets / plumes of particles in the atmosphere . following , in the physical regimes we will consider , it is appropriate to use the eulerian approach , that is the solid - phase ( the particles ) will be modeled as a continuum .this choice is motivated by the presence of a huge number of particles and because we are analyzing the so called `` fine particle '' regime ( that is the stokes number is much smaller than one ) . in this regime, a lagrangian approach could be computationally expensive , and the eulerian approach may offer more computationally efficient alternatives .we will explain the precise assumptions that make this _ ansatz _ physically representative and we will also study numerically the resulting models , with and without large scales further approximation .in particular , we will model the particles as dust , investigating a model related to _dusty gases _ , and which belongs to the hierarchy of reduced multiphase models , as reviewed by balachandar .these models represent a good approximation when the number of fine particles to be traced is very large and a direct numerical simulation ( dns ) of the fluid with a lagrangian tracer for each particle would be too expensive . as well explained in ,the point - like eulerian approach for multiphase fluid - particle dynamics becomes even more efficient in the case of large eddy simulations ( les ) , because the physical diameter of the particles has to be compared with the large eddy length - scale and not with the smaller kolmogorov one .we will use the dusty gas model in a physical configuration that is very close to that modeled by the boussinesq system , and this explains why we compare our numerical results with those reported in .observe that the dusty gas model reduces to the boussinesq system with a large prandtl number if : a ) the fluid velocity is divergence - free ; and b ) the relative ratio of solid and fluid bulk densities is very small ( see sec .[ sec : models ] and eq . 
) .the approach we will use for multiphase fluids is well - described in marble .more precisely , when the _ stokes time _which is the characteristic time of relaxation of the particle velocity with respect to the surrounding fluid is small enough and the number of particles is very large , it could be reasonable to use the eulerian approach ( instead of the lagrangian ) . in eulerian models both the carrier and the dispersed phaseare treated as interpenetrating fluid media , and consequently both the particulate solid - phase and fluid - phase properties are expressed by a continuous field representation .originally we started studying these models in order to simulate ash plumes coming from volcanic eruptions , see , but here we will show that the same approach could be also used to study some problems coming from other geophysical situations , at least for certain ranges of physical parameters . our model is evaluated in a two dimensional _ dam - break problem _ , also known as the _ lock - exchange problem_. this problem , despite being concerned with a ) a simple domain ; b ) nice initial and boundary conditions ; and c ) smooth gravity external forcing , contains shear - driven mixing , internal waves , interactions with boundaries , and convective motions .the dam - break problem setup has long served as a paradigm configuration for studying the space - time evolution of gravity currents ( cf .consequently , we set up a canonical benchmark problem , for which an extensive literature is available : the vertical barrier separating fluid and fluid with particles is abruptly removed , and counter - propagating gravity currents initiate mixing. the time evolution can be quite complex , showing shear - driven mixing , internal waves interacting with the velocity , and gravitationally - unstable transients .this benchmark problem has been investigated experimentally and numerically for instance in .both the impressive amount of data and the physical relevance of the problem make it an appropriate benchmark and a natural first step in the thorough assessment of any approximate model to study stratification .the results we obtain validate the proposed model as appropriate to simulate dilute suspensions of ash in the air . in addition , we found that new peculiar phenomena appear , which are generated by compressibility .even if the behavior of the simulations is qualitatively very close to that of the incompressible case , the ( even very slightly ) compressible character of the fluid produces a more complex behavior , especially in the first part of the simulations . to better investigate the efficiency and limitations of the numerical solver, the numerical tests will be performed by using both dns and les .complete discussion of the numerical results will be given in section [ sec : numerical_results ] .* plan of the paper : * in section [ sec : models ] we present the reduced multiphase model we will consider , with particular attention to the correct evaluation of physical parameters that make the approximation effective . in section [ sec : numerical_results ] we present the setting of the numerical experiments we performed .particular emphasis is posed on the initial conditions and on the interpretation and comparison of the results with those available in the literature .in order to study multiphase flows and especially ( even compressible ) flows with particles , some approximate and reduced models have been proposed in the literature . 
in the case of dilute suspensions ,a complete hierarchy of approximate models is available ( see ) on the basis of two critical parameters determining the level of interaction between the liquid and solid phase : the fractional volume occupied by the dispersed - phase and the mass loading , ( that is the ratio of mass of the dispersed to carrier phase ) . when they are both small , the dominant effect on the dynamics of the dispersed - phase is that of the turbulent carrier flow ( _ one - way coupled _ ) .when the mass of the dispersed - phase is comparable with that of the carrier - phase , the back - influence of the dispersed - phase on the carrier - phase dynamics becomes relevant ( _ two - way coupled _ ) .when the fractional volume occupied by the dispersed - phase increases , interactions between particles become more important , requiring a _ four - way coupling_. in the extreme limit of very large concentration , we encounter the granular flow regime . here , we consider rather heavy particles such that ( air ) , or ( liquid ) , where in the sequel the subscript `` '' stands for solid , while `` '' stands for fluid . herea hat denotes material densities ( as opposed to bulk densities ) : in particular , we suppose . a rather small particle / volume concentration must be assumed ( to have dilute suspensions ) , that is where is the volume occupied by the particles over the total volume . when is smaller than , particle - particle collisions and interactions can be neglected and the particle - phase can be considered as a pressure - less and non - viscous continuum . in this situationthe particles move approximately with the same velocity of the surrounding fluid , and the theory has been developed by carrier ( see a review in marble ) . with these assumptionsthe bulk densities and are of the same order of magnitude , about in the case dust - in - air ( two - way coupling ) . in the case of water with particlesthe ratio is of the order of , hence particles behave very similarly to passive tracers ( almost one - way coupling ) .another assumption required by marble s analysis is that particles can be considered _ point - like _ , if their typical diameter is smaller than the smallest scale of the problem under analysis , that is the kolmogorov length ( dns ) , or the smallest resolved les length - scale ( les ) . to describe the gas / fluid - particle drag, we observe that it depends in a strong nonlinear way on the local flow variables and especially on the relative reynolds number : where is the gas dynamic viscosity coefficient and and are the fluid and solid phase velocity field , respectively . on the other hand , for a point - like single particle and in the hypothesis of small velocities difference ( ), the drag force ( per volume unit ) acting on a single particle depends just linearly on the difference of velocities : where is the _ particle relaxation time or stokes time _ , which is the time needed to a particle to equilibrate to a change of fluid velocity , and is the fluid kinematic viscosity .in particular , in the case of water with particles we have , while in the case of a gas and hence in order to measure the lack of equilibrium between the two phases , we have to compare with the smallest time of the dynamics . 
in the turbulent regime ,the smallest time is the kolmogorov smallest eddy s turnover time ( dns ) ( cf .frisch ) or analogously ( les ) .it is possible to characterize this situation by using as non - dimensional parameter the stokes number which is defined by comparing the stokes time with the fastest time - scale of the problem under analysis .if ( the `` fine particle regime '' ) , we say that we have _ kinematic equilibrium _ between the two phases and so we can use in a consistent way the dusty gas model . in order to have also_ thermal equilibrium _ between the two phases , one has to assume that the _ thermal relaxation time _ ) is small , that is : comparing the kinetic and thermal relaxation times , we get the stokes thermal time i.e. , the particle prandtl number , where is the solid - phase specific heat - capacity at constant volume and is its thermal conductivity . to ensure that the dusty gas model is physically reasonable ,both kinematic and thermal equilibrium must hold , that is , both stokes numbers should be less than .this implies that we have a single velocity for both phases and also a single temperature field . to check that our assumptions are fulfilled, we first show that if the stokes number is small , then also the thermal stokes number remains small . indeed , using the typical value of the dynamic viscosity ( water ) or ( air ) , specific heat capacity and thermal conductivity , we can evaluate the particle prandtl number in both cases : hence formula shows that .summarizing , we used the following assumptions : 1 .continuum assumption for both the gaseous and solid phase ; 2 . the solid - phase is dispersed ( ) , thus it is pressure - less and non - interacting ; 3 .the relative reynolds number between the solid and gaseous phases is smaller than one so that it is appropriate to use the stokes law for drag ; 4 . the stokes number is smaller than one so that the eulerian approach is appropriate ; 5 .all the phases , either solid or gaseous , have the same velocity and temperature fields ( local thermal and kinematic equilibrium ) .we showed that this assumption is accurate if the stokes number is much smaller than one . in this regime ,the equations for the balance of mass , momentum , and energy are : \ , , \end{aligned } \right.\ ] ] where is the mixture density , is the internal mixture energy , and is the gravity acceleration pointing in the downward vertical direction .the stress - tensor is ,\ ] ] with the dynamic viscosity , possibly depending on the temperature , and the spatial dimension of the problem .the fourier law for the heat transfer assumes , where is the fluid thermal conductivity .we denote by and the fluid and solid phase specific heat - capacity at constant volume , respectively .system is completed by using the constitutive law . in the case of air and particles ( the one for which we will present the simulations ) , where is the air gas constant . +* remark 1 .* the correct law would be , but in our dilute setting is very small , which justifies the approximation .a different constitutive law must be used in the presence of water or other fluids . + * remark 2 . 
*note that the constant particle pressure is justified by the lack of particle - particle forces .note that in the case of uniform particle distribution ( ) , the equations reduce to the compressible navier - stokes equations , with density multiplied by a factor .some numerical experiments ( with ) were performed in , where the dusty gas model was applied to volcanic eruptions , i.e. a flow with vanishing initial solid density and particles injected into the atmosphere from the volcanic vent . denoting by s the solid - phase mass - fraction, we can rewrite the system with just one flow variable ( ) as follows : \ , , \end{aligned } \right.\ ] ] in the following , we will also assume to have an iso - entropic flow with a perfect gas ( which is a reasonable approximation for the air , see for example ) .we can thus substitute the energy equation ( [ eq : equilibriumeulerian2bis]-d ) by the constitutive law where and is the streamline starting at for ( we have not been able to find this expression in the literature ; for its full derivation see ) . in particular , a simple calculation shows that .moreover , since ( where and are motivated by the small density variations compared with a constant temperature ) , we can consequently study the following system ( with ; and are constants determined from the initial conditions ) : here the iso - entropic assumption is justified . indeed , since the reynolds number is typically much greater than 1 , and the prandtl number is of the order of , the two dissipation terms and ( corresponding to the conduction of heat and its dissipation by mechanical energy ) can be neglected . moreover , since and the temperature fluctuations are small , we can disregard the heat transfer from solid to fluid phase .observe that if , , and if we use the boussinesq approximation , we get from the following system : which is exactly the boussinesq equations , except that there is no diffusion for the density perturbation ( i.e. , infinite prandtl number ) .thus , numerical results concerning are comparable with results from the classical boussinesq equations , see .to validate the eulerian model for multiphase flows , we use it to perform both dns and les of a dam - break ( lock - exchange ) problem .since we want to compare our results with accurate results available in the literature , we use a setting which is very close to that in , in terms of both equations and initial conditions .in particular , we consider a two dimensional rectangular domain and with an aspect ratio large enough ( ) in order to obtain high shear across the interface , and to create kelvin - helmholtz ( kh ) instability .we use this setting because in a domain with large aspect ratio , the density interface has more space to tilt and stretch . for this test case ,the typical velocity magnitude is ( for further details see e.g. ) , where is the layer thickness and the volumetric fraction of denser material times . from now on , with a slight abuse of notation ,we denote by the modulus of the gravity acceleration . in our simulationwe set , from which we get we use the characteristic length - scale to non - dimensionalize all the equations in . in order to have when , we need to set .moreover , we choose a dimensional system where the initial solid bulk density is , which yields .the froude number is for all the simulations , so we are free to choose a such that .we set . 
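Since most of the numerical values of this setup are not reproduced above, the short sketch below only illustrates, with clearly hypothetical placeholder inputs, how the derived scales of the dam-break configuration are typically obtained: the reduced gravity, the gravity-wave velocity scale, the buoyancy period, and the resulting Reynolds, Froude and Mach numbers. The definitions are the standard ones and may not coincide exactly with the non-dimensionalization chosen in the paper.

```python
import numpy as np

# Placeholder dimensional inputs (NOT the values used in the paper).
g       = 9.81        # gravity [m/s^2]
rho_f   = 1.2         # carrier-fluid density [kg/m^3]
rho_b   = 1.212       # bulk density of the particle-laden half [kg/m^3]
h       = 0.5         # dense-layer thickness [m]
H, L    = 1.0, 8.0    # domain height and length [m]
nu      = 1.5e-5      # kinematic viscosity [m^2/s]
gamma, p0 = 1.4, 1.0e5  # for the speed of sound of the carrier gas

g_red  = g * (rho_b - rho_f) / rho_f        # reduced gravity
u_b    = np.sqrt(g_red * h)                 # gravity-wave velocity scale
T_buoy = 2.0 * np.pi / np.sqrt(g_red / H)   # buoyancy (Brunt-Vaisala) period
c_s    = np.sqrt(gamma * p0 / rho_f)        # speed of sound

print("Re =", u_b * H / nu)                 # Reynolds number based on H
print("Fr =", u_b / np.sqrt(g * H))         # Froude number
print("Ma =", u_b / c_s)                    # Mach number (quasi-incompressible)
print("buoyancy period [s] =", T_buoy)
```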
in these non - dimensional units ,the reynolds number is , we set the dynamic viscosity such that the maximum reynolds number we consider is one of the inherent time - scales in the system is the ( brunt - visl ) buoyancy period which is the natural time related to gravity waves . in order to have a quasi - incompressible flow , we set . using our non - dimensional variables , the perfect gas relationship is and the speed of sound is .we want and , so we set and .experiments are performed at different resolutions ( from about , up to about grid cells ) , see the next section for details .the initial condition is a state of rest , in which the fluid with particles on the left is separated from the fluid ( without particles ) on the right by a sharp transition layer .since the tilting of the density interface puts the system gradually into motion , the system can be started from a state of rest . due to the ( slight ) compressibility of the fluid some peculiar phenomenaoccur close to the initial time .these effects are not present in the incompressible case , cf . the discussion below .we consider the isolated problem , so that the iso - entropic approximation is valid and consequently we supplement system with the following boundary conditions : the boundary condition for the density perturbation is no - flux , while free - slip for the velocity : where is the unit outward normal vector , while is a tangential unit vector on . in the two dimensional setting we use for the numerical simulation ( the two dimensional rectangular domain -l/2,l/2[\times]0,h[$ ] ) the boundary conditionsbecome : we considered as initial datum the classical situation used in the dam - break problem , with all particles confined in the left half of the physical domain ( with uniform distribution ) , while a uniform fluid fills the whole domain .moreover , we have an initial uniform temperature and pressure distribution . suddenly the wall dividing the two phases is removed and we observe the evolution .even if our numerical code is compressible , we started with this setting , widely used to study incompressible cases , since we are in the physical regime of quasi - incompressibility .the compressibility is mostly measured by the mach number . for air we have a typical velocity , hence the mach number of air in this condition is around 0.01 , as we choose for our simulations . on the other hand , for water we would obtain and . nevertheless , as we will see especially in fig .[ fig : bge ] , even this very small perturbation creates a new instability and new phenomena for times very close to .in particular , new effects appear for .these effects seem limited to the beginning of the evolution .the characteristic time of the stratification ( for a dns ) is defined as ( see ) where is the density difference between the ground level and the height of the upper boundary wall . in particular , we know that for the gaseous - phase , the stable solution is the barotropic stratification , due to the gravity acceleration : and in the case of perfect gases we recover the fact that the typical stratification height for the atmosphere ( ) is while for water in the iso - thermal case we would obtain since is small in both cases , we can use the following approximation : for a domain with volume and mass , in the incompressible case the stable stationary configuration is with vanishing velocity and . 
on the contrary , in our slightly compressible case , the stable stationary configuration is : the length has to be compared with the height of the domain , in order to evaluate the importance of stratification .for instance , if we use realistic values of density , pressure , and gravity acceleration for air ( to come back to dimensional variables ) we get that the height of the domain is , while for water we get . in the case of airwe obtain density variations due to gravity which are of the order of , while for water they should be of the order of .this explains that in the case of water , the dominant variations of density , which are of the order of , are those imposed by the initial configuration of particles . on the other hand , in the case of particles in air , the one we are mostly interested to, the two phenomena create fluctuations which are comparable in magnitude , and this can be seen in fig .[ fig : bge_irreversible ] .in particular , in fig . [fig : bge_irreversible ] , one can see that the fluctuations created by the non - stratified initial condition affect the behavior of the background potential energy defined below . in the case of air, we have that , and thus the effects of these instabilities ( due to the initial heterogeneity ) will be observed before the mixing effects , which are dominant in the rest of the evolution . on the other hand , this effect can not be seen by analyzing just the mixed fraction , see fig .[ fig : mass - fraction ] and the discussion below. we will also compare the results obtained from dns with those obtained by different les models , as discussed later on .the accuracy of the les models is evaluated through _ a posteriori _ testing .the main measure used is the background / reference potential energy ( rpe ) , which represents an appropriate measure for mixing in an enclosed system .rpe is the minimum potential energy that can be obtained through an adiabatic redistribution of the masses . to compute rpe ,we use directly the approach in , since the problem is two - dimensional and computations do not require too much time where is the height of fluid of density in the minimum potential energy state . to evaluate , we use the following formula : where is the heaviside function .it is convenient to use the non - dimensional background potential energy which shows the relative increase of the rpe with respect to the initial state by mixing .further discussion of the energetics of the dam - break problem can be found in . with these considerations we are now able to compute the maximum particle diameter fulfilling our hypothesis ( ) .first , we must evaluate the smallest time - scale of the dynamics .as described in tab .[ tab : resolutions ] , we used three different resolutions .the ultra - res resolution can be considered as a dns , so the smallest time - scale of the simulation is the kolmogorov time , while the smallest length - scale is .the other two resolutions have been used for les : we have and for the mid - res and low - res resolutions , respectively . by using the relationship , we found and , respectively . in tab .[ tab : diameter ] we report the dimensional maximum particle diameter for which the dusty gas hypothesis is fulfilled ( cf . eqs . ) at various resolutions ..the dimensional maximum particle diameter fulfilling the dusty gas hypothesis . 
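Before presenting the simulations, here is a minimal sketch of the sorting-based evaluation of the background (reference) potential energy used below as the mixing measure: the density field is adiabatically re-stacked so that density decreases with height, the potential energy of that re-stacked state is integrated, and RPE* is the relative increase with respect to the initial state. The uniform 2-D mesh layout and the array conventions are assumptions of this sketch, not the OpenFOAM implementation used in the paper.

```python
import numpy as np

def reference_potential_energy(rho, dx, dz, g=9.81):
    """Background/reference potential energy of a 2-D density field on a
    uniform mesh: adiabatically re-stack the fluid so that density decreases
    with height (heaviest cells at the bottom), then integrate g*rho*z."""
    nx, nz = rho.shape                     # rho[i, k]: column i, level k
    cells = np.sort(rho.ravel())[::-1]     # heaviest cells first -> bottom
    rho_sorted = cells.reshape(nz, nx)     # row 0 is the bottom layer
    z = (np.arange(nz) + 0.5) * dz         # cell-centre heights
    return g * dx * dz * (rho_sorted.sum(axis=1) @ z)

def rpe_star(rho_t, rho_0, dx, dz, g=9.81):
    """Non-dimensional background potential energy: relative increase of the
    RPE with respect to the initial state, used as a mixing measure."""
    rpe0 = reference_potential_energy(rho_0, dx, dz, g)
    return (reference_potential_energy(rho_t, dx, dz, g) - rpe0) / rpe0
```

Within a horizontal layer the ordering of cells is irrelevant for the potential energy, so sorting all cell densities once and refilling the domain bottom-up is equivalent to the height function formulation with the Heaviside function mentioned above.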
[ cols="^,^,^,^ " , ] together with the dns simulation done on the ultra - res mesh and the four les done on low - res and mid - res meshes , we also performed two under - resolved simulations without sgs model , denoted by low - res dns * and mid - res dns*. to illustrate the complexity of the mixing process that we investigate , in fig . [fig : one ] we present snapshots of dns for the density of particles concentration at different times ( it is represented in a linear color scale for ) .we notice that the results are similar to those obtained in .thus , the dns time evolution of the density perturbation will be used as benchmark for other numerical simulations , since ( as in ) the number of grid points is large enough to resolve all the relevant scales and to consider simulations at ultra - res as a dns . , b ) , c ) , d ) , in ultra - res dns at . ] , b ) , c ) , d ) , in ultra - res dns at . ] , b ) , c ) , d ) , in ultra - res dns at . ] , b ) , c ) , d ) , in ultra - res dns at . ]we study this problem varying both the mesh resolution ( cf .table [ tab : resolutions ] ) and the sgs les model ( smagorinsky and one equation eddy model ) . fig .[ fig : mesh ] displays snapshots of the solid - phase bulk densities at time for the three different mesh resolutions : dns at ultra - res , dns * at mid - res , and dns * at low - res . fig .[ fig : lesmodel ] displays snapshots of the solid - phase bulk density at time . to generate the plots in fig .[ fig : lesmodel ] , we use two les models ( the smagorinsky and the one equation eddy model ) at two coarse resolutions ( mid - res and low - res ) . to assess the quality of the les results, we used the dns at ultra - res as benchmark .[ fig : lesmodel ] shows that the les models yield similar results . from fig .[ fig : lesmodel ] we can deduce that , even if the overall qualitative behavior is reproduced in four les simulations , the results obtained at low - res are rather poor and only the bigger vortices are reproduced . on the other hand ,the les results at mid - res are in good agreement with the dns and the one equation eddy model seems to be better performing when looking at the smaller vortices .the two les models required a comparable computational time and a comparison based on more quantitative arguments will be discussed later on , see fig .[ fig : mixed - mass ] and [ fig : bge ] and discussion therein . evaluated with different resolutions , ( a ) low - res dns * , ( b ) mid - res dns * , ( c ) ultra - res dns . ] evaluated with different resolutions , ( a ) low - res dns * , ( b ) mid - res dns * , ( c ) ultra - res dns . ] evaluated with different resolutions , ( a ) low - res dns * , ( b ) mid - res dns * , ( c ) ultra - res dns . ] evaluated with different les models : ( a ) low - res smagorinsky , ( b ) low - res one eq .eddy , ( c ) mid - res smagorinsky , ( d ) mid - res one eq .eddy , ( e ) ultra - res dns . ] evaluated with different les models : ( a ) low - res smagorinsky , ( b ) low - res one eq .eddy , ( c ) mid - res smagorinsky , ( d ) mid - res one eq .eddy , ( e ) ultra - res dns . ] evaluated with different les models : ( a ) low - res smagorinsky , ( b ) low - res one eq .eddy , ( c ) mid - res smagorinsky , ( d ) mid - res one eq .eddy , ( e ) ultra - res dns . ] evaluated with different les models : ( a ) low - res smagorinsky , ( b ) low - res one eq .eddy , ( c ) mid - res smagorinsky , ( d ) mid - res one eq .eddy , ( e ) ultra - res dns . 
] evaluated with different les models : ( a ) low - res smagorinsky , ( b ) low - res one eq .eddy , ( c ) mid - res smagorinsky , ( d ) mid - res one eq .eddy , ( e ) ultra - res dns . ] figs .[ fig : one]-[fig : lesmodel ] show that , just as in the case of the boussinesq equations , the system rapidly generates the kelvin - helmholtz billows along the interface of gravity waves , which are counter - propagating .these waves are reflected by the side walls and gradually both billows grow by entraining the surrounding fluid . later the mixing increases so much that individual billows can not be seen anymore . in order to check whether our dns results are an appropriate benchmark for the les results , we compare our ultra - res dns results with those in . since we chose analogous initial conditions and since our two - phase model is comparable with the boussinesq equations ( cf ., we expect similar qualitative results for all the flow variables . in fig .[ fig : mass - fraction ] we compare our ultra - res dns results with those from using the mixed mass fraction , which is a quantity measuring the mixing .the mixed mass fraction is defined as the fraction of volume were the density perturbation is partially mixed . in particular , in our simulations with homogeneous meshes , it is obtained evaluating the percentage of cells such that ( cf .the plots in fig .[ fig : mass - fraction ] show that the two simulations yield similar results , as expected .the main difference is in the time interval , where our simulation seems to mix slightly more than the simulation from . as we will discuss later ,this is probably due to the mixing induced by the creation of stratification . for the various low resolution les models .the dns results ( solid ) serve as benchmark . ] in fig .[ fig : mixed - mass ] we plot the evolution of the mixed mass fraction for all our simulations .[ fig : mixed - mass ] yields the following conclusions : at the low - res , the one equation eddy model performs the best , followed by the dns * , and the smagorinsky model ( in this order ) . at the mid - res , the smagorinsky model performs the best , followed by the one equation eddy model , and the dns * ( in this order ) .the main measure used in the assessment of the accuracy of the models employed to predict mixing in the dam - break problem is the non - dimensional background potential energy rpe * defined in , cf . . ) , for the les models at various resolutions .the dns results ( solid ) serve as benchmark .the time is normalized with . ]figure [ fig : bge ] plots the background energy of the various les models .the dns results serve as benchmark . fig .[ fig : bge ] yields the following conclusions : at the low - res , the one equation eddy model performs the best , followed by the dns * , and the smagorinsky model ( in this order ) . at the mid - res , the one equation eddy model again performs the best , followed by the dns * , and the smagorinsky model ( in this order ) .apart from the above les model assessment , we also observe that new important phenomena appear in the compressible case : while in the incompressible case the rpe is monotonically increasing , in our investigation it is initially decreasing , it then reaches a minimum , and it finally starts to increase monotonically , as expected . 
in order to better understand this phenomenon, we have to compare the background energy of the homogeneous initial condition with that of the stratified initial condition .evaluating the initial potential energy ( ) , the available energy ( ) , and the background energy ( ) for the homogeneous initial density of the solid - phase , we get : if we consider the initial distribution of fluid and particles in the stratified case , with , and , we get where is the heaviside step function . evaluating the same energies ( as those in ) for the stratified density distribution considered and using and , we get and also these analytical computations show that the rpe of the stratified state is smaller than that of the homogeneous state . in the next section we will discuss this issue in more detail . in this section ,we compare the results of the previous sections with some low - res simulations obtained from the same test case , by using system , i.e. without the assumption of a barotropic fluid .the simulations with model are more time - consuming and so we performed them only at low - res ( simulations with finer mesh resolution are in preparation and their results will appear in the forthcoming report ) .the barotropic assumption is based on the fact that the thermal and kinematic diffusion ( and ) in eq .are negligible , so that the entropy of the system is constant along streamlines , i.e. ( cf . for the one - phase case and for the multiphase case ) : this is a reversibility assumption .indeed , the background energy can be considered as a sort of entropy , measuring the potential energy dispersed in the mixing .the fact that the transformation is reversible allows the background energy to decrease . on the contrary , if we remove this assumption , coming back to the full multiphase model ( including the energy equation ) , we find that the background energy becomes monotone , see fig .[ fig : bge_irreversible ] .this figure suggests that the barotropic assumption may be not completely justified during the initial time - interval needed to adjust from the homogeneous to the stratified condition ( probably this transformation can not be considered fully iso - entropic ) .nevertheless , the barotropic assumption seems justified after the time .moreover , the stratified initial condition makes the simulation more stable and accurate , but also less diffusive , even at low - res .the rpe * is monotonically increasing when using model ( low - res irreversible ) and , starting with the stratified initial condition , decreases the mixing and brings it closer to that of the dns . ''represents rpe * starting from the initial condition , while the line with `` - -- - '' represents the same quantity starting from the homogeneous initial state . the solid line and the line with `` - -- - '' are the rpe * obtained with the barotropic model with homogeneous initial state , with the ultra - res dns and the low - res dns * , respectively .] note that the low - res dns * irreversible with homogeneous initial data and the ultra - res dns start from the same datum .even if the low - res dns * is under - resolved , the behavior of the rpe * is correct and it is monotonically increasing . the behavior , at the beginning of the evolution , is closer to the dns than the behavior of the les described in fig . [fig : bge ] , obtained from the barotropic model . on the other hand , after this transient time the behavior becomes comparable with that of the previous low - res barotropic simulation ( low - res dns * vs. 
low - res dns * irreversible and homogeneous ) .the comparison of the results obtained at various resolutions and with different les models for the barotropic and non - barotropic equations deserves further investigation and we plan to perform it in the near future .we examined a two - dimensional dam - break problem were the instability is due to the presence of a dilute suspension of particles in half of the domain .the reynolds number based on the typical gravity wave velocity and on the semi - height of the domain is , the froude number is , the mach number is , and the prandtl number is .the particle concentration is , and the stokes number is smaller than ( fine particles ) .the importance of stratification , measured as the density gradient times the domain height ( ) , is about a few percent ( ) .even if the problem is quasi - incompressible and quasi - isothermal , we used a full compressible code , with a barotropic constitutive law .we employed a homogeneous and orthogonal mesh with three different grid refinements ranging from to cells . _a posteriori _ tests confirm that the finer grid can resolve all the scales of the problem .the code that we used was derived from the openfoam ` c++ ` libraries .we compared our quasi - isothermal two - phase simulations with the analogous mono - phase problem , where the mixing occurs between the same fluid at two different temperatures , as reported in . as we showed in section[ sec : models ] , this is possible since the two physical problems become mathematically equivalent in the regimes under study .as expected , we found a good agreement between the two sets of numerical results .we reported the evolution of the background ( or reference ) potential energy ( rpe ) , a scalar quantity measuring the mixing between the two fluids .the main contributions of this report are the following : we implemented a multiphase eulerian model ( that can be used in more complex physical situations , with more than two phases , and also involving chemical reactions between species , as in volcanic eruptions ) .we also showed the effectiveness of the numerical results obtained programming with an open - source code .more importantly , we discovered that peculiar effects due to compressibility influence the mixing . in the literature we found that the mono - phase , incompressible boussinesq test case has a monotonically increasing rpe . on the other hand , in our numerical experiments with slightly compressible two - phase flow, we found that the rpe initially decreases because of the stratification instability , and then it increases monotonically because of the mixing between the particles and the surrounding fluid . indeed ,even if the flow is quasi - incompressible ( ) , it turns out that stratification effects are not negligible .we reported the preliminary results in the two - dimensional case .we plan to perform three - dimensional numerical simulations of the same problem in a future study . #10=0=0 0 by1pt#1 # 10= m. cerminara , l.c .berselli , t. esposti ongaro , and m.v .direct numerical simulation of a compressible multiphase flow through the eulerian approach . in _ direct and large - eddy simulation ix , _ vol .12 of _ ercoftac series_. springer , 2013 . at press .t. esposti ongaro , c. cavazzoni , g. erbacci , a. neri , and m.v .salvetti . a parallel multiphase flow code for the 3d simulation of explosive volcanic eruptions ._ parallel comput ._ , 33(7 - 8):541560 , 2007 .t. zgkmen , t. iliescu , p. fischer , a. srinivasan , and j. 
duan . large eddy simulation of stratified mixing in two - dimensional dam - break problem in a rectangular enclosed domain . _ ocean modelling _ , 16:106 - 140 , 2007 .
|
in this paper we study the motion of a fluid with several dispersed particles whose concentration is very small ( smaller than ) , with possible applications to problems coming from geophysics , meteorology , and oceanography . we consider a very dilute suspension of heavy particles in a quasi - incompressible fluid ( low mach number ) . in our case the stokes number is small and , as pointed out in the theory of multiphase turbulence , we can use an eulerian model instead of a lagrangian one . the assumption of low concentration allows us to disregard particle - particle interactions , but we take into account the effect of particles on the fluid ( two - way coupling ) . in this way we can study the physical effect of particle inertia ( and not only passive tracers ) , with a model similar to the boussinesq equations . the resulting model is used in both direct numerical simulations and large eddy simulations of a dam - break ( lock - exchange ) problem , which is a well - known academic test case . + * keywords : * dilute suspensions , eulerian models , direct and large eddy simulations , slightly compressible flows , dam - break ( lock - exchange ) problem . + * msc 2010 classification : * primary : 76t15 ; secondary : 86 - 08 , 86a04 , 35q35 .
|
clarifying a line of argumentation by references , citations as a legacy mapping and orientation tool have been in use by knowledge organization for a long time .their respective importance has led to the birth of new fields of study like scientometrics and altmetrics , permeating funding decisions and ranking efforts . at the same time , citations embody scholarly courtesy as well as a form of social behavior , maintaining or violating norms . due to this , as is often the case when individual and social patterns of action are contrasted , one can suspect that factors not revealed to the observer of a single individual may point at underlying group norms when communities of individuals are scrutinized . to understand our own behavior as a species , it is important to detect any such influence . lately , the idea that multiple versions of probabilities do exist brought new ideas to the foreground .eventually the testing of a second probability alternative has made it clear that by its use , rules that were known to apply to the subatomic world of quantum mechanics only start making sense in the atomic world too .examples include decision theory and cognition , economy , biology , and language . with the above unexpected development in the history of science , and departing from earlier work in social network research , we turned to citation studies to find supporting evidence for signs of quantum - likeness in co - author behaviour , captured by longitudinal datasets .our working hypothesis was that in citation patterns , a more fundamental layer would correspond to research based on shared interest between the author and her / his predecessors called _ latent homophily _ , whereas a more ephemeral second layer would link in current trends in science . due to this , e.g. for a funding agency to find citation patterns going back to latent homophily as a single source would amount to better founded decisions , with such a pattern playing the role of a knowledge nugget .consequently , ruling out latent homophily would correspond to a sieve filtering out cases where correlations in the data go back to more than latent homophily , one important step in an anticipated workflow to dig for such nuggets by stratification in citations .the notion of the citation network was famously developed by and since then it has evolved in many different directions .incidentally , had already proposed the use of `` network charts '' of papers for the study of the history of science , but see also and for a newfound interest in algorithmic historiography .although fruitful for analysis at a less aggregated level , these maps provide the possibility to visualize the network structure of single citing / cited papers of up to , say , the lower hundreds of papers before becoming too complex to overview . to remedy this ,aggregated forms of citation networks have been developed , most notably bibliographic coupling , ` co - mentions ' of literary authors , and the more established concept of ` co - citation ' of papers .eventually , over time these aggregated forms of measurement were extended to analyse network structures of authors . 
by today, possibilities include the coverage of source titles and , for bibliographic coupling to reveal the networks based on address data such as department , institution and country , are limited only to the kind of structured data available in the database used for sampling .common for many of these efforts is that the network structure is used to map or represent bibliometric data for descriptive purposes in visualization , while attempts at analyzing the relationships dynamically in more causal ways have not been considered to the same extent .a notable exception is for an overview of a third mode of aggregated co - studies , namely co - authorship studies that incorporate complex systems research and social network analysis . to address a different subject area ,graphical models capture the qualitative structure of the relationships among a set of random variables .the conditional independence implied by the graph allows a sparse description of the probability distribution .therefore by combining co - authorship and citation data we propose to view co - author and citation graphs as examples of such graphical models .however , not all random variables can always be observed in a graphical model : there can be hidden variables .ruling these out is a major challenge .take , for instance , obesity , which was claimed to be socially contagious .is it not possible that a latent variable was at play that caused both effects : becoming friends and obesity the above assumption of latent homophily , asks whether there is a limit to the amount of correlation between friends , at the same time being separable from other sources different from friendship . or ,do some smokers become connected because they had always smoked , or because copying an example may bring social rewards ? to cite a methodological parallel , in quantum physics , the study of nonlocal correlations also focuses on classes of entanglement that can not be explained by local hidden variable models these are known as bell scenarios , initially stated as a paradox by einstein , podolsky and rosen in their so - called epr paper . as is well known , the epr paper proposed a thought experiment which presented then newborn quantum theory with a choice : either supraluminal speed for signaling is part of nature but not part of physics , or quantum mechanics is incomplete .thirty years later , in a modified version of the same thought experiment , bell s theorem suggested that two hypothetical observers , now commonly referred to as alice and bob , perform independent measurements of spin on a pair of electrons , prepared at a source in a special state called a spin singlet state .once alice measures spin in one direction , bob s measurement in that direction is determined with certainty , as being the opposite outcome to that of alice , whereas immediately before alice s measurement bob s outcome was only statistically determined ( i.e. , was only a probability , not a certainty ) .this is an unusually strong correlation that classical models with an arbitrary predetermined strategy ( that is , a local hidden variable ) can not replicate .recently , algebraic geometry offered a new path to rule out local hidden variable models following from bell s theorem . by describing probabilistic models as multivariate polynomials, we can generate a sequence of semidefinite programming relaxations which give an increasingly tight bound on the global solution of the polynomial optimization problem . 
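to make the idea of a moment ( lasserre - type ) relaxation concrete , the following minimal sketch , which is not part of the original study and uses purely illustrative numbers and variable names , checks at level 1 of the hierarchy whether a handful of observed first and second moments of ±1 - valued author - reference indicators admit a positive semidefinite moment matrix ; if the semidefinite program is infeasible , no latent - variable model reproducing exactly these moments can exist , which is the logic used throughout this paper .

```python
import numpy as np
import cvxpy as cp

# hypothetical first and second moments of four +/-1 valued author-reference
# indicators x1..x4; only the pairs observed across co-author links are fixed,
# the remaining second moments are left as free variables of the relaxation.
means = np.array([0.2, -0.1, 0.3, 0.0])                           # E[x_i], illustrative
observed = {(0, 2): 0.9, (0, 3): 0.8, (1, 2): 0.7, (1, 3): -0.8}  # E[x_i x_j]

n = 4
M = cp.Variable((n + 1, n + 1), symmetric=True)              # level-1 moment matrix
constraints = [M >> 0, M[0, 0] == 1]
constraints += [M[0, i + 1] == means[i] for i in range(n)]
constraints += [M[i + 1, i + 1] == 1 for i in range(n)]      # x_i^2 = 1 for +/-1 variables
constraints += [M[i + 1, j + 1] == c for (i, j), c in observed.items()]

problem = cp.Problem(cp.Minimize(0), constraints)            # pure feasibility test
problem.solve()
print(problem.status)   # an infeasible status means the latent model is rejected
```

with these particular numbers the first level turns out to be infeasible , so even this weakest necessary condition already rules the model out ; on real data one would populate the moments from the author - reference states described below and , if needed , move to higher levels of the hierarchy ( for instance with the ncpol2sdpa package ) for tighter tests .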
depending on the solution, one might be able to reject a latent variable model with a high degree of confidence . in our case ,alice and bob decide about references to be picked in complete isolation , yet their decisions , in spite of being independent from each other s , may be still correlated .if we identify the source of the shared state preceding their decisions as they make their choices , we can observe correlations between author pairs , and conclude that their patterns of citing behaviour can not be explained alone by the fact that they have always liked each other .in other words , experimental findings may rule out latent homophily as a single source of correlations in certain scenarios . in a bell scenario , this means that alice and bob can agree on a strategy beforehand ( latent hidden variable ) , but at the end of the day , their observed correlations are so strong that they could only be caused by shared entanglement .due to these conceptual overlaps , we believe there is value in introducing this algebraic geometric framework to citation analysis for the following reasons : * it can indicate the presence of peer influence ( e.g. intellectual fashion , social pressures etc . ) interfering with scientific conviction . also , following and offering a different angle on it , this would correspond to correlations that can not be explained by latent homophily alone . singling out such casescould be a methodological step forward for citation studies ; * in our model , latent homophily corresponds to what we call a latent hidden variable model in bell scenarios in quantum information theory .rejecting such a model indicates entanglement in quantum mechanics , promising a next stepping stone for methodological progress in the study of citation patterns ; * given that entanglement in qm goes back to non - classical correlations , it would be a valuable finding that given such outcome , classical and non - classical correlations both contribute to patternedness in citation data .this provides a new research alliance prospect between citation studies and quantum theory based approaches , e.g. new trends in computational linguistics or decision theory .to translate the above to experiment design , we must discuss how latent homophily manifests in citation networks and why we want to restrict our attention to static models .we shall be interested in citation patterns of individual authors who have co - authored papers previously .social ` contagion ' means that authors will cite similar papers later on if they previously co - authored a paper . on the other hand, latent homophily means that some external factor such as shared scientific interest can explain the observed correlations on its own .given an influence model in which a pair of authors make subsequent decisions , if we allow the probability of transition to change in between time steps , then arbitrary correlations can emerge .static latent homophily means that the impact of the hidden variable is constant over time , that is , the transition probabilities do not change from one time step to the other .we restrict our attention to such models , this being a necessary technical assumption for the algebraic geometric framework . 
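as an illustration of the windowed +1 / -1 state assignment just described , here is a small self - contained sketch ; the toy citation records and the dictionary layout are hypothetical and stand in for the actual dataset , which is not reproduced here .

```python
from collections import defaultdict

# hypothetical citation records: author -> {observation window: set of cited papers}
citations = {
    "alice": {0: {"p1", "p2"}, 1: {"p2"}, 2: {"p2", "p3"}},
    "bob":   {0: {"p1"},       1: {"p1", "p3"}, 2: {"p3"}},
}
papers = sorted({p for rec in citations.values() for s in rec.values() for p in s})
windows = sorted({t for rec in citations.values() for t in rec})

# state of an author-reference node in a window: +1 if the author cites the paper
# within that window, -1 otherwise, so that states can flip back to -1 later on
states = defaultdict(dict)
for author, rec in citations.items():
    for t in windows:
        cited = rec.get(t, set())
        for p in papers:
            states[(author, p)][t] = +1 if p in cited else -1

print(states[("alice", "p3")])   # {0: -1, 1: -1, 2: 1}
print(states[("bob", "p1")])     # {0: 1, 1: 1, 2: -1}
```

the resulting ±1 time series per author - reference node , together with the co - authorship edges , are exactly the observables whose expectation values are constrained in the relaxation discussed next .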
in practice, this means that an author does not get more or less inclined over time to cite a particular paper .a straightforward way to analyze correlations is to look at citation patterns between authors .departing from a set of authors in an initial period , we can study whether the references an author makes influence the subsequent references of her or his coauthors as defined in the initial period . in this sense, we define a graph where each node is an author - reference .two nodes are connected if the authors have co - authored a paper at some initial time step .a node is assigned a binary state , reflecting whether that author - reference pair is actually present .the influence model is outlined in fig . [ influence ] and cause the edges in the co - author network and are also the sole influence in changes whether an author - reference pair changes in subsequent time steps . ]we can not , however , look at all the references that an author made until the end of some time period .if we assign + 1 to the condition that an author - reference pair exists , i.e. the author cited the paper until the end of the specified period , this node state will never flip back to -1 . in other words ,given sufficient time , all node states would become + 1 , revealing very little about correlations .therefore we assign a + 1 state to a node if the author cites a paper _ within _ the observation period .if during the next period he or she does not cite it , it will flip back to -1 . inwhat follows , we follow the formalism as described by , which , for an individual time step , also closely resembles the study of bell scenarios by semidefinite programming in quantum information theory .suppose we are looking at a pair of authors , for alice and for bob .let be the probability that node flips from to , and the probability of the reverse transition .the initial probability of being in the state is .we define the same probabilities for with and .the state of node at time step is , and the sequence denotes the states until some time step ; similarly for .further suppose that depends on some hidden variable and on .a random variable depends on both hidden variables and it represents edges between time steps , that is , describes our graph structure .the probability of a sequence of possible transitions is as follows : where and are counters of the transitions : similarly for .let be the parameter vector .we are ready to move towards a geometric description of the problem .let us take observables on and these can be the indicator functions of all possible outcomes , for instance .we define the expectation values of these observables as where the constraints on the variables are such that they must be probabilities , therefore we have the equalities in together with the constraints in are all polynomials . if there is a hidden variable model , the constraints can be satisfied . if not , the problem is infeasible and we must reject the hidden variable model . identifying the feasibility of this problemis a hard task , and we provide a relaxation .this relaxation will approximate the feasible set from the outside : that is , if the relaxation is an infeasible problem , the original one too must be infeasible .therefore by the same relaxation one can reject hidden variable models . 
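since the explicit formulas in the preceding paragraph did not survive extraction , we record one plausible reconstruction before turning to the relaxation itself . writing $p_a$ for the probability that node $a$ flips from $-1$ to $+1$ , $q_a$ for the reverse transition , and $n^a_{\sigma\sigma'}$ for the number of observed $\sigma\to\sigma'$ transitions ( these conventions are our assumption , not a verbatim restoration ) , the probability of a pair of state sequences reads

\[
P\bigl(x_a^{1:T},x_b^{1:T}\mid x_a^{0},x_b^{0},h_a,h_b\bigr)=
p_a^{\,n^a_{-+}}(1-p_a)^{\,n^a_{--}}\,q_a^{\,n^a_{+-}}(1-q_a)^{\,n^a_{++}}\;
p_b^{\,n^b_{-+}}(1-p_b)^{\,n^b_{--}}\,q_b^{\,n^b_{+-}}(1-q_b)^{\,n^b_{++}},
\]

with $n^a_{\sigma\sigma'}=\sum_{t=1}^{T}\mathbf 1\,[x_a^{t-1}=\sigma ,\ x_a^{t}=\sigma']$ and analogous counters for $b$ ; the static latent homophily assumption is that $p_a , q_a$ depend on the hidden variable $h_a$ ( and $p_b , q_b$ on $h_b$ ) but not on time .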
to explain how it works ,suppose we are interested in finding the global optimum of the following constrained polynomial optimization problem : such that here and are polynomials in .we can think of the constraints as a semialgebraic set .lasserre s method gives a series of semidefinite programming ( sdp ) relaxations of increasing size that approximate this optimum through the moments of .for polynomial optimization problems of noncommuting variables this amounts to the exclusion of hidden variable theorems in networked data , and that we can verify the strength of observed correlations . even in this formulation, there is an implicit constraint on a moment : the top left element of the moment matrix is 1 . given a representing measure ,this means that .it is actually because of this that a dual variable appears in the dual formulation : such that } , \mathrm{deg}\sigma_0\leq 2d.\ ] ] in fact , we can move to the right - hand side , where the sum - of - squares ( sos ) decomposition is , being a trivial sos multiplied by the constraint , that is , by 1 .we normally think of the constraints that define as a collection of polynomial constraints underlying a semialgebraic set , and then in the relaxation we construct matching localizing matrices .we can , however , impose more constraints on the moments .for instance , we can add a constraint that .all of these constraints will have a constant instead of an sos polynomial in the dual .this sdp hierarchy and the sos decomposition have found extensive use in analyzing quantum correlations , and given the notion of local hidden variables in studying nonlocality , there is a natural extension to studying causal structures in general . for a static latent homophily model ,we are interested in the following sos decomposition : such that ,\end{aligned}\ ] ] where contains the observables extracted from the data , and and encode our model .if this problem is infeasible , we can rule out a local hidden variable model as imposed by the constraints .[ cols="<,<,^,^,^,^,^ " , ] clearly , the earliest period was the sparsest .the sdp solver detected dual infeasibility , therefore we could rule out latent homophily as the single source of correlations . on this time scale , however , assuming that the network remained static is unrealistic .therefore , we repeated the test with a span of thirty , ten , and five years . for the thirty- andthe ten - year spans , we analyzed every subsequent fifth year as the starting year .due to sparse data in the first years , all analysis in this part started with 1949 .thus , for instance , we analyzed 19491979 , followed by 19541984 , and so on .this gave us a total of twenty time intervals , with only one case , the ten - year period of 19491959 allowing the possibility of latent homophily .for the five - year intervals , we started with 1959 , again , for reasons of data sparsity . 
then we analyzed intervals starting with every third year , so , for instance , 19591964 , followed by 19621967 , and so on .this gave us another seventeen data points , with only two intervals , 19591964 and 19651970 , not being able to rule out latent homophily .our result indirectly confirms that contagion in the practice of citation is a distinct possibility .if citation patterns continue spreading , over time everybody will cite more or less the same papers .this in turn explains the phenomenon of sleeping beauties : since dominant authors do not cite such articles , everybody else ignores them .secondly , we recall that in its simplest form , bell s theorem states that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics , i.e. it rules out such variables as a viable explanation of quantum mechanics .therefore we hypothesized that if we can find entanglement in our data , with local hidden variables as their source ruled out , patterns in the sample must be _ quantum - like _ for non - obvious reasons . ruling out bell inequalities as the source of entanglement in our results points to such non - classical correlations at work in the dataset .citation and coauthor networks offer an insight into the dynamics of scientific progress . to understand this dynamics, we treated such a network as the representation of a causal structure , a logical process captured in a graph , and inquired from a causal perspective if authors form groups primarily due to their prior shared interest , or if their favourite topics are ` contagious ' and spread through co - authorship . following an algebraic geometric methodology that relies on a sequence of semidefinite programming ( sdp ) relaxations, we analyzed a sample citation network for the impact of latent hidden variables . using the sdp relaxations, we were able to rule out latent homophily , or shared prior interest as the source of correlations , hinting at that citation patterns in fact spread .statistical sampling on the author pairs was akin to making repeated measurements with bipartite bell scenarios in quantum mechanics .the finding that shared prior interest as a latent variable can not account on its own for citation patterns calls for a related analysis into the nature of ` contagious ' influences including fashionable topics , reputation etc ., affecting the outcome .this confirmation and the algebraic geometric framework to compute it are novel concepts in scientometrics .we hope this work will act as a stepping stone for further research .peter wittek and sndor darnyi were supported by the european commission seventh framework programme under grant agreement number fp7 - 601138 pericles .the dataset was compiled by nasrine olson and gustaf nelhans ( university of bors ) .asano , m. , basieva , i. , khrennikov , a. , ohya , m. , tanaka , y. , and yamato , i. ( 2012 ) . a quantum - like model of _ escherichia coli _s metabolism based on adaptive dynamics . in _ proceedings of qi-12 , 6th international quantum interaction symposium _ , pages 6067 .blacoe , w. , kashefi , e. , and lapata , m. ( 2013 ) . a quantum - theoretic approach to distributional semantics . in _ proceedings of naacl - hlt-13 , conference of the north american chapter of the association for computational linguistics : human language technologies _ , pages 847857 .cohen , t. , widdows , d. , schvaneveldt , r. , and rindflesch , t. ( 2010 ) . 
logical leaps and quantum connectives : forging paths through predication space . in _ proceedings of qi-10 , 4th symposium on quantum informatics for cognitive , social , and semantic processes _ , pages 1113 .darnyi , s. and wittek , p. ( 2012 ) . connecting the dots : mass , energy , word meaning , and particle - wave duality . in _ proceedings of qi-12 , 6th international quantum interaction symposium _ , pages 207217 .ver steeg , g. and galstyan , a. ( 2011 ) . a sequence of relaxations constraining hidden variable models . in _ proceedings of uai-11 , 27thconference on uncertainty in artificial intelligence _ , pages 717726 .wittek , p. , lim , i. k. , and rubio - campillo , x. ( 2013 ) . quantum probabilistic description of dealing with risk and ambiguity in foraging decisions . in_ proceedings of qi-13 , 7th international quantum interaction symposium _ , pages 296307 .zahedi , z. , costas , r. , and wouters , p. ( 2014 ) .how well developed are altmetrics ? a cross - disciplinary analysis of the presence of alternative metrics " in scientific publications ., 101(2):14911513 .
|
citation and coauthor networks offer an insight into the dynamics of scientific progress . we can also view them as representations of a causal structure , a logical process captured in a graph . from a causal perspective , we can ask questions such as whether authors form groups primarily due to their prior shared interest , or if their favourite topics are ` contagious ' and spread through co - authorship . such networks have been widely studied by the artificial intelligence community , and recently a connection has been made to nonlocal correlations produced by entangled particles in quantum physics : the impact of latent hidden variables can be analyzed by the same algebraic geometric methodology that relies on a sequence of semidefinite programming ( sdp ) relaxations . following this trail , we treat our sample coauthor network as a causal graph and , using sdp relaxations , rule out latent homophily , that is , prior shared interest alone , as the sole source of the observed patternedness . by introducing algebraic geometry to citation studies , we add a new tool to existing methods for the analysis of content - related social influences .
|
this colloquium focuses on fluctuation relations and in particular on their quantum versions .these relations constitute a research topic that recently has attracted a great deal of attention . at the microscopic level ,matter is in a permanent state of agitation ; consequently many physical quantities of interest continuously undergo random fluctuations .the purpose of statistical mechanics is the characterization of the statistical properties of those fluctuating quantities from the known laws of classical and quantum physics that govern the dynamics of the constituents of matter .a paradigmatic example is the maxwell distribution of velocities in a rarefied gas at equilibrium , which follows from the sole assumptions that the micro - dynamics are hamiltonian , and that the very many system constituents interact via negligible , short range forces . besides the fluctuation of velocity ( or energy ) at equilibrium , one might be interested in the properties of other fluctuating quantities , e.g. heat and work , characterizing non - equilibrium transformations . imposed by the reversibility of microscopic dynamical laws , the fluctuation relations put severe restrictions on the form that the probability density function ( pdf ) of the considered non - equilibrium fluctuating quantities may assume .fluctuation relations are typically expressed in the form , \label{eq : ft - general}\ ] ] where is the probability density function ( pdf ) of the fluctuating quantity during a nonequilibrium thermodynamic transformation referred to for simplicity as the forward ( ) transformation , and is the pdf of during the reversed ( backward , ) transformation .the precise meaning of these expressions will be amply clarified below .the real - valued constants , contain information about the _ equilibrium _starting points of the and transformations .figure [ fig : histogram ] depicts a probability distribution satisfying the fluctuation relation , as measured in a recent experiment of electron transport through a nano - junction .we shall analyze this experiment in detail in sec .[ sec : exp ] . ) . left panel : probability distribution of number of electrons , transported through a nano - junction subject to an electrical potential difference .right panel : the linear behavior of ] ( the superscript denoting the heisenberg picture with respect to the unperturbed dynamics . 
)moreover , kubo derived the general relation between differently ordered thermal correlation functions and deduced from it the celebrated quantum fluctuation - dissipation theorem , reading : where , denotes the fourier transform of the symmetrized , stationary equilibrium correlation function , and the fourier transform of the response function .note that the fluctuation - dissipation theorem is valid also for many - particle systems independent of the respective particle statistics .besides offering a unified and rigorous picture of the fluctuation - dissipation theorem , the theory of kubo also included other important advancements in the field of non - equilibrium thermodynamics .specifically , we note the celebrated onsager - casimir reciprocity relations ( ; ) .these relations state that , as a consequence of _ microreversibility _ , the matrix of transport coefficients that connects applied forces to so - called fluxes in a system close to equilibrium consists of a symmetric and an anti - symmetric block .the symmetric block couples forces and fluxes that have same parity under time - reversal and the antisymmetric block couples forces and fluxes that have different parity .most importantly , the analysis of opened the possibility for a systematic advancement of response theory , allowing in particular to investigate the existence of higher order fluctuation - dissipation relations , beyond linear regime .this task was soon undertaken by , who pointed out a hierarchy of irreversible thermodynamic relationships .these higher order fluctuation dissipation relations were investigated in detail by stratonovich for markovian system , and later by for non - markovian systems , see ( * ? ? ?i ) and references therein . even for arbitrary systemsfar from equilibrium the linear response to an applied force can likewise be related to tailored two - point correlation functions of corresponding stationary nonequilibrium fluctuations of the underlying _ unperturbed _ , stationary nonequilibrium system .these authors coined the expression `` fluctuation theorems '' for these relations .as in the near thermal equilibrium case , also in this case higher order nonlinear response can be linked to corresponding higher order correlation functions of those nonequilibrium fluctuations . at the same time , in the late seventies of the last century provided a single compact _classical _ expression that contains fluctuation relations of all orders for systems that are at thermal equilibrium when unperturbed .this expression , see eq .( [ eq : bk - gen - functional - identity ] ) below , can be seen as a fully nonlinear , exact and universal fluctuation relation .this formula , eq .( [ eq : bk - gen - functional - identity ] ) below , soon turned out useful in addressing the problem of connecting the deterministic and the stochastic descriptions of _ nonlinear _ dissipative systems .as it often happens in physics , the most elegant , compact and universal relations , are consequences of general physical symmetries . in the case of fluctuation relation follows from the time reversal invariance of the equations of microscopic motion , combined with the assumption that the system initially resides in thermal equilibrium described by the classical analogue of the gibbs state , eq .( [ eq : varrho_0 ] ) . proved eq .( [ eq : bk - gen - functional - identity ] ) below for classical systems .their derivation will be reviewed in the next section . 
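since its explicit expression did not survive extraction , the symmetrized quantum fluctuation - dissipation theorem recalled at the beginning of this section can be written , in one common ( callen - welton ) convention adopted here as an assumption about the intended notation , as

\[
\tilde\psi_{BB}(\omega)=\hbar\,\coth\!\left(\frac{\beta\hbar\omega}{2}\right)\tilde\chi''_{BB}(\omega),
\]

where $\tilde\psi_{BB}(\omega)$ is the fourier transform of the symmetrized , stationary equilibrium correlation function $\tfrac12\langle\{B^{H}(t),B^{H}(0)\}\rangle_\beta-\langle B\rangle_\beta^{2}$ and $\tilde\chi''_{BB}(\omega)$ is the imaginary ( dissipative ) part of the fourier - transformed response function ; in the classical limit $\hbar\coth(\beta\hbar\omega/2)\to 2/(\beta\omega)$ , the classical theorem is recovered .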
the quantum version ,( [ eq : q - j - gen - functional - identity ] ) , was not reported until very recently . in sec .[ subsec : work - not - observable ] we shall discuss the fundamental obstacles that prevented and who also studied this very quantum problem .a new wave of activity in fluctuation relations was initiated by the works of and on the statistics of the entropy produced in non - equilibrium steady states , and of on the statistics of work performed by a transient , time - dependent perturbation . since then, the field has generated grand interest and flourished considerably .the existing reviews on this topic mostly cover classical fluctuation relations , while the comprehensive review by provides a solid , though in parts technical account of the state of the art of quantum fluctuation theorems .with this work we want to present a widely accessible introduction to quantum fluctuation relations , covering as well the most recent decisive advancements .particularly , our emphasis will be on ( i ) their connection to the linear and non - linear response theory , sec .[ sec : nonlinear ] , ( ii ) the clarification of fundamental issues that relate to the notion of `` work '' , sec .[ sec : fund ] , ( iii ) the derivation of quantum fluctuation relations for both , closed and open quantum systems , sec .[ sec : qft ] and [ sec : xft ] , and also ( iv ) their impact for experimental applications and validation , sec .[ sec : exp ] .two ingredients are at the heart of fluctuation relations .the first one concerns the initial condition of the system under study .this is supposed to be in thermal equilibrium described by a canonical distribution of the form of eq .( [ eq : varrho_0 ] ) .it hence is of _ statistical _ nature .its use and properties are discussed in many textbooks on statistical mechanics .the other ingredient , concerning the _ dynamics _ of the system is the principle of microreversibility .this point needs some clarification since microreversibility is customarily understood as a property of autonomous ( i.e. , non - driven ) systems described by a time - independent hamiltonian ( * ? ? ?xv ) . on the contrary , here we are concerned with non - autonomous systems , governed by explicitly time - dependent hamiltonians . in the followingwe will analyze this principle for classical systems in a way that at first glance may appear rather formal but will prove indispensable later on . the analogous discussion of the quantum case will be given next in sec . [ subsec : q - microrev ] .we deal here with a classical system characterized by a hamiltonian that consists of an unperturbed part and a perturbation due to an external force that couples to the conjugate coordinate .then the total system hamiltonian becomes where denotes a point in the phase space of the considered system . in the followingwe assume that the force acts within a temporal interval set by a starting time and a final time .the instantaneous force values are specified by a function , which we will refer to as the _ force protocol_. in the sequel , it will turn out necessary to clearly distinguish between the function and the value that it takes at a particular instant of time . for these systemsthe principle of microreversibility holds in the following sense .the solution of hamilton s equations of motion assigns to each initial point in phase space a point at the later time ] which is a function of the initial point and a functional of the force protocol . 
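the formulas in this passage also did not survive extraction ; one reconstruction that is consistent with the surviving fragments and with the sign conventions used further below ( our reading , not a verbatim restoration ) is

\[
H(\mathbf z,\lambda_t)=H_0(\mathbf z)-\lambda_t Q(\mathbf z),
\qquad
\varphi_{\tau-t,0}[\varepsilon\mathbf z_\tau;\widetilde\lambda]=\varepsilon\,\varphi_{t,0}[\mathbf z_0;\lambda],
\]

where $\varphi_{t,0}[\mathbf z_0;\lambda]$ is the phase - space point reached at time $t$ from $\mathbf z_0$ under the protocol $\lambda$ , $\mathbf z_\tau=\varphi_{\tau,0}[\mathbf z_0;\lambda]$ , $\varepsilon(\mathbf q,\mathbf p)=(\mathbf q,-\mathbf p)$ is the time - reversal map , and $\widetilde\lambda_t=\lambda_{\tau-t}$ is the temporally reversed protocol ; the second relation is the statement of microreversibility for driven systems that is used repeatedly in what follows .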
] and until time to .the time - reversed final condition evolves , under the protocol from time until to =\varepsilon \varphi_{t,0}[\mathbf{z}_0;\lambda] ] denote its temporal evolution .depending on the initial condition different trajectories are realized . under the above stated assumption that at time the system is prepared in a gibbs equilibrium ,the initial conditions are randomly sampled from the distribution with .consequently the trajectory becomes a random quantity .next we introduce the quantity : = \int_{0}^{\tau}\mathrm{d}t \lambda_t \dot q_t , \label{eq : ex - work - integral}\ ] ] where is the time derivative of ) ] .the only random element entering the work , eq .( [ eq : ex - work ] ) , is the initial phase point which is distributed according to eq .( [ eq : rho_0 ] ) .therefore ] on is contained in the term ] and integrating over , one recovers the bochkov - kuzovlev identity , eq .( [ eq : bk - identity ] ) .an alternative definition of work is based on the comparison of the _ total _ hamiltonians at the end and the beginning of a force protocol , leading to the notion of `` inclusive work '' in contrast to the `` exclusive work '' defined in eq .( [ eq : ex - work ] ) .the latter equals the energy difference referring to the unperturbed hamiltonian . accordingly , the inclusive work is the difference of the total hamiltonians at the final time and the initial time : =h(\mathbf{z}_\tau , \lambda_\tau)-h(\mathbf{z}_0 , \lambda_0 ) .\label{eq : in - work}\ ] ] in terms of the force and the conjugate coordinate , the inclusive work is expressed as . ] : &=\int_0^\tau dt \dot \lambda_t \frac{\partial h(\mathbf{z}_t,\lambda_t)}{\partial \lambda_t } \label{eq : in - work - integral}\\ & = - \int_{0}^{\tau } \mathrm{d}t \dot \lambda_t q_t \nonumber\\ & = w_0 [ \mathbf{z}_0;\lambda]-\lambda_tq_t|_{0}^{\tau}.\nonumber\end{aligned}\ ] ] for the sake of simplicity we confine ourselves to the case of an even conjugate coordinate . in the corresponding way , as described in appendix [ app : bk ] , we obtain the following relation between generating functionals of forward and backward processes in analogy to eq .( [ eq : bk - gen - functional - identity ] ) , reading while on the left hand side the time evolution is controlled by the forward protocol and the average is performed with respect to the initial thermal distribution , on the right hand side the time evolution is governed by the reversed protocol and averaged over the reference equilibrium state . here formally describes thermal equilibrium of a system with the hamiltonian at the inverse temperature .the partition function is defined accordingly as .note that in general the reference state is different from the actual phase space distribution reached under the action of the protocol at time , i.e. , ,\lambda_0) ] denotes the point in phase space that evolves to in the time to under the action of .setting we obtain where is the free energy difference between the reference state and the initial equilibrium state .as a consequence of eq .( [ eq : j - identity ] ) we have which is yet another expression of the second law of thermodynamics . equation ( [ eq : j - identity ] ) was first put forward by , and is commonly referred to in the literature as the `` jarzynski equality '' . 
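as a numerical illustration of the jarzynski equality just stated ( not taken from the colloquium ; all parameter values are arbitrary ) , the following self - contained python sketch samples canonical initial conditions for a one - dimensional oscillator whose stiffness is ramped from $k_0$ to $k_1$ , integrates hamilton 's equations , identifies the inclusive work with the total energy change , and compares $\langle e^{-\beta W}\rangle$ with $e^{-\beta\Delta F}$ , where $\Delta F=(2\beta)^{-1}\ln(k_1/k_0)$ for this model .

```python
import numpy as np

rng = np.random.default_rng(0)
beta, k0, k1, tau, dt, nsamp = 1.0, 1.0, 2.0, 1.0, 1e-3, 20000
nsteps = int(tau / dt)

# canonical initial conditions for H(x,p;k0) = p^2/2 + k0*x^2/2 (unit mass)
x = rng.normal(0.0, 1.0 / np.sqrt(beta * k0), nsamp)
p = rng.normal(0.0, 1.0 / np.sqrt(beta), nsamp)
E0 = 0.5 * p**2 + 0.5 * k0 * x**2

def k(t):                                   # the force protocol: a linear stiffness ramp
    return k0 + (k1 - k0) * t / tau

for i in range(nsteps):                     # velocity-Verlet integration of Hamilton's equations
    a = -k(i * dt) * x
    x = x + p * dt + 0.5 * a * dt**2
    p = p + 0.5 * (a - k((i + 1) * dt) * x) * dt

W = (0.5 * p**2 + 0.5 * k1 * x**2) - E0     # inclusive work = change of the total energy
dF = np.log(k1 / k0) / (2.0 * beta)         # exact free energy difference for this model
print(np.exp(-beta * W).mean(), np.exp(-beta * dF))   # the two numbers should nearly coincide
```

because the dynamics are hamiltonian and the initial state is canonical , the agreement does not depend on how fast the ramp is ; only sampling noise and the finite time step limit the accuracy , and the average work exceeds $\Delta F$ in accordance with the second - law inequality mentioned above .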
in close analogy to the bochkov - kuzovlev approach the pdf of the inclusive work can be formally expressed as =\int \mathrm{d}\mathbf{z}_0 \rho(\mathbf{z}_0 , \lambda_0)\delta[w - h(\mathbf{z}_\tau,\lambda_\tau)+h(\mathbf{z}_0 , \lambda_0 ) ] .\label{eq : jpw}\end{aligned}\ ] ] its fourier transform defines the characteristic function of work : &=\int \mathrm{d}w e^{iuw}p[w;\lambda]\nonumber\\ & = \int \mathrm{d}\mathbf{z}_0 e^{iu[h(\mathbf{z}_\tau,\lambda_\tau)-h(\mathbf{z}_0 , \lambda_0 ) ] } e^{-\beta h(\mathbf{z}_0,\lambda_0)}/z(\lambda_0 ) \nonumber\\ = \int \mathrm{d}\mathbf{z}_0 & \exp\left[iu\int_0^\tau \mathrm{d}t \dot \lambda_t\frac{\partial h(\mathbf{z}_t,\lambda_t)}{\partial \lambda_t}\right ] \frac{e^{-\beta h(\mathbf{z}_0,\lambda_0)}}{z(\lambda_0)}. \label{eq : jgu}\end{aligned}\ ] ] using the microreversibility principle , eq . ( [ eq : microreversibility ] ) , we obtain in a way similar to eq .( [ eq : bk - w - fluc - theo ] ) the ( inclusive ) work fluctuation relation : }{p[-w ; \widetilde \lambda]}=e^{\beta ( w-\delta f ) } , \label{eq : j - w - fluc - theo}\ ] ] where the probability ] and integrating over .equations ( [ eq : j - gen - functional - identity ] , [ eq : j - identity ] , [ eq : j - w - fluc - theo ] ) continue to hold also when is odd under time reversal , with the provision that is replaced by .we here point out the salient fact that , within the inclusive approach , a connection is established between the _ nonequilibrium _ work and the difference of free energies , of the corresponding _ equilibrium states _ and .most remarkably , eq . ( [ eq : w > deltaf ] ) says that the average ( inclusive ) work is always larger than or equal to the free energy difference , no matter the form of the protocol ; even more surprising is the content of eq .( [ eq : j - identity ] ) saying that the equilibrium free energy difference may be inferred by measurements of nonequilibrium work in many realizations of the forcing experiment .this is similar in spirit to the fluctuation - dissipation theorem , also connecting an equilibrium property ( the fluctuations ) , to a non - equilibrium one ( the linear response ) , with the major difference that eq .( [ eq : j - identity ] ) is an exact result , whereas the fluctuation - dissipation theorem holds only to first order in the perturbation .note that as a consequence of eq .( [ eq : j - w - fluc - theo ] ) the forward and backward pdf s of exclusive work take on the same value at .this property has been used in experiments in order to determine free energy differences from nonequilibrium measurements of work .equations ( [ eq : j - identity ] , [ eq : j - w - fluc - theo ] ) have further been employed to develop efficient numerical methods for the estimation of free energies . both the crooks fluctuation theorem , eq .( [ eq : j - w - fluc - theo ] ) , and the jarzynski equality , eq . ( [ eq : j - identity ] ) , continue to hold for any time dependent hamiltonian without restriction to hamiltonians of the form in eq .( [ eq : h ] ) .indeed no restriction of the form in eq .( [ eq : h ] ) was imposed in the seminal paper by . 
in the original works of and ,( [ eq : j - identity ] ) and ( [ eq : j - w - fluc - theo ] ) were obtained directly , without passing through the more general formula in eq .( [ eq : j - gen - functional - identity ] ) .notably , neither these seminal papers , nor the subsequent literature , refer to such general functional identities as eq .( [ eq : j - gen - functional - identity ] ) .we introduced them here to emphasize the connection between the recent results , eqs .( [ eq : j - identity ] ) and ( [ eq : j - w - fluc - theo ] ) , with the older results of , eqs .( [ eq : bk - identity ] , [ eq : bk - w - fluc - theo ] ) .the latter ones were practically ignored , or sometimes misinterpreted as special instances of the former ones for the case of cyclic protocols ( ) , by those working in the field of non - equilibrium work fluctuations .only recently pointed out the differences and analogies between the inclusive and exclusive approaches .as we evidenced in the previous section , the studies of and are based on different definitions of work , eqs .( [ eq : ex - work ] , [ eq : in - work ] ) , reflecting two different viewpoints . from the `` exclusive '' viewpoint of change in the energy of the unforced system is considered , thus the forcing term of the total hamiltonian is not included in the computation of work . from the `` inclusive '' point of view the definition of work ,( [ eq : in - work ] ) , is based on the change of the total energy including the forcing term . in experiments and practical applications of fluctuation relations ,special care must be paid in properly identifying the measured work with either the inclusive ( ) or exclusive ( ) work , bearing in mind that represents the prescribed parameter progression and is the measured conjugate coordinate .the experiment of is very well suited to illustrate this point . in that experimenta prescribed torque was applied to a torsion pendulum whose angular displacement was continuously monitored .the hamiltonian of the system is where is the canonical momentum conjugate to , is the hamiltonian of the thermal bath to which the pendulum is coupled via the hamiltonian , and is a point in the bath phase space .using the definitions of inclusive and exclusive work , eqs .( [ eq : ex - work - integral ] , [ eq : in - work - integral ] ) , and noticing that plays the role of and that of , we find in this case and .note that the work , obtained by monitoring the pendulum degree of freedom only , amounts to the energy change of the total pendulum+bath system .this is true in general .writing the total hamiltonian as with being the hamiltonian of the system of interest , one obtains because and do not depend on time , and as a consequence of hamilton s equations of motion . introducing the notation , for the dissipated work , one deduces that the jarzynski equality can be re - expressed in a way that looks exactly as the bochkov - kuzovlev identity , namely : this might let one believe that the dissipated work coincides with .this , however , would be incorrect .as discussed in and explicitly demonstrated by and constitute distinct stochastic quantities with different pdf s .the inclusive , exclusive and dissipated work coincide only in the case of cyclic forcing .we point out that the inclusive work , and free energy difference , as defined in eqs .( [ eq : in - work ] , [ eq : f ] ) , are to use the expression coined by not `` true physical quantities . 
'' that is to say they are not invariant under gauge transformations that lead to a time - dependent shift of the energy reference point . to elucidate this ,consider a mechanical system whose dynamics are governed by the hamiltonian .the new hamiltonian where is an arbitrary function of the time dependent force , generates the _ same _ equations of motion as . however , the work that one obtains from this hamiltonian differs from the one that follows from , eq .( [ eq : in - work ] ) : likewise we have , for the free energy difference evidently the jarzynski equality , eq .( [ eq : j - identity ] ) , _ is invariant under such gauge transformations _ , because the term appearing on both sides of the identity in the primed gauge , would cancel ; explicitly this reads : thus , there is no fundamental problem associated with the gauge freedom .however one must be aware that , in each particular experiment , the very way by which the work is measured implies a specific gauge . consider for example the torsion pendulum experiment of .the inclusive work was computed as : .the condition that this measured work is related to the hamiltonian of eq .( [ eq : h - torsion ] ) via the relation , eq .( [ eq : in - work ] ) , is equivalent to , see eq .( [ eq : wm = w ] ) .if this is required for all then the stricter condition is implied , restricting the remaining gauge freedom to the choice of a constant function .this residual freedom however is not important as it does neither affect work nor free energy .we now consider a different experimental setup where the support to which the pendulum is attached is rotated in a prescribed way according to a protocol , specifying the angular position of the support with respect to the lab frame .the dynamics of the pendulum are now described by the hamiltonian if the work done by the elastic torque on the support is recorded then the requirement singles out the gauge , leaving only the freedom to chose the unimportant constant .note that when , the pendulum obeys exactly the same equations of motion in the two examples above , eqs .( [ eq : h - torsion ] , [ eq : h - torsion-2 ] ) .the gauge is irrelevant for the law of motion but is essential for the energy - hamiltonian connection .the issue of gauge freedom was first pointed out by , who questioned whether a connection between work and hamiltonian may actually exist . since thenthis topic had been highly debated , but neither the gauge invariance of fluctuation relations nor the fact that different experimental setups imply different gauges were clearly recognized before .thus far we have reviewed the general approach to work fluctuation relations for classical systems .the question then naturally arises of how to treat the quantum case . obviously , the hamilton function is to be replaced by the hamilton operator , eq .( [ eq : h - quantum ] ) .the probability density is then replaced by the density matrix , reading where is the partition function and denotes the trace over the system hilbert space .the free energy is obtained from the partition function in the same way as for classical systems , i.e. 
, less obvious is the definition of work in quantum mechanics .originally , defined the exclusive quantum work , in analogy with the classical expression , eqs .( [ eq : ex - work - integral ] , [ eq : ex - work ] ) , as the operator where the superscript denotes the heisenberg picture : \ , \mathcal b\ , u_{t,0}[\lambda ] .\label{eq : heisenberg}\ ] ] here is an operator in the schrdinger picture and ] to emphasize that , like the classical evolution ] may of course only depend on the part of the protocol including times from up to . ] the time derivative is determined by the heisenberg equation . in case of a time - independent operator it becomes /\hbar ] agrees with only if commutes at different times =0 ] where we used the concatenation rule =u_{t,0}[\lambda]u_{0,\tau}[\lambda] ] of the propagator ] and |i\rangle ] , eq .( [ eq : schroedinger ] ) , hence it evolves according to \varrho_n u^{\dagger}_{t,0}[\lambda ] .\label{eq : rho - n}\ ] ] at time a second measurement of yielding the eigenvalue with probability = \tr \,\pi_m^{\lambda_\tau } \varrho_n(\tau ) \ , .\label{eq : p(m|n)}\ ] ] is performed .the pdf to observe the work is thus given by : = \sum_{m , n } \delta(w-[e_m^{\lambda_\tau}-e_n^{\lambda_0}])p_{m|n}[\lambda]p_n^0 .\label{eq : p[w;lambda]}\ ] ] the work pdf has been calculated explicitly for a forced harmonic oscillator and for a parametric oscillator with varying frequency .the characteristic function of work ] , ] .equation ( [ eq : g[u;lambda ] ] ) was derived first by in the case of nondegenerate and later generalized by to the case of possibly degenerate . ] ) , remains valid for any initial state , with the provision that the average is taken with respect to representing the diagonal part of in the eigenbasis of . ] the product of the two exponential operators can be combined into a single exponent under the protection of the time ordering operator to yield } ] and integrating over .given the fact that the characteristic function is determined by a two - time quantum correlation function rather than by a single time expectation value is another clear indication that work is not an observable but instead characterizes a process .as discussed in the tasaki - crooks relation , eq .( [ eq : q - j - w - fluc - theo - lambda ] ) and the quantum version of the jarzynski equality , eq . ( [ eq : q - j - identity ] ) , continue to hold even if further projective measurements of any observable are performed within the protocol duration .these measurements , however do alter the work pdf . the jarzynski equality can also immediately been obtained from the characteristic function by setting , in eq .( [ eq : g[u;lambda ] ] ) . in order to obtain this resultit is important that the hamiltonian operators at initial and final times enter into the characteristic function , eq .( [ eq : g[u;lambda ] ] ) , as arguments of two factorizing exponential functions , i.e. in the form . in general ,this of course is different from a single exponential } ] , eq .( [ eq : p(m|n ) ] ) , is given by the simple expression : =|\langle \psi_m^{\lambda_\tau}|u_{\tau,0}[\lambda]|\psi_n^{\lambda_0}\rangle|^2 \label{eq:<psi|u|psi > } ] , see eq .( [ eq : q - microreversibility ] ) , the following symmetry relation is obtained for the conditional probabilities : =p_{n|m}[\widetilde \lambda ] .\label{eq : pmn = pnm}\ ] ] note the exchanged position of and in the two sides of this equation . 
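the two - point - measurement statistics described above are straightforward to reproduce numerically . the following sketch ( our own illustration , with arbitrary parameters and $\hbar=1$ ) builds the time - ordered propagator for a linearly ramped two - level hamiltonian , forms the transition probabilities $p_{m|n}=|\langle\psi_m^{\lambda_\tau}|U_{\tau,0}[\lambda]|\psi_n^{\lambda_0}\rangle|^{2}$ , and checks the quantum jarzynski equality $\langle e^{-\beta W}\rangle=Z(\lambda_\tau)/Z(\lambda_0)$ .

```python
import numpy as np
from scipy.linalg import expm, eigh

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Delta, l0, l1, tau, nsteps, beta = 1.0, 0.2, 2.0, 3.0, 2000, 0.7

def H(l):                                     # driven two-level Hamiltonian (hbar = 1)
    return 0.5 * Delta * sx + 0.5 * l * sz

dt = tau / nsteps                             # time-ordered propagator for the linear ramp
U = np.eye(2, dtype=complex)
for i in range(nsteps):
    l_mid = l0 + (l1 - l0) * (i + 0.5) * dt / tau
    U = expm(-1j * H(l_mid) * dt) @ U

E0, V0 = eigh(H(l0))                          # eigenbasis at the first measurement
E1, V1 = eigh(H(l1))                          # eigenbasis at the second measurement
Z0, Z1 = np.exp(-beta * E0).sum(), np.exp(-beta * E1).sum()

avg = 0.0                                     # two-point-measurement average of exp(-beta*W)
for n in range(2):
    for m in range(2):
        p_mn = abs(V1[:, m].conj() @ U @ V0[:, n]) ** 2
        avg += (np.exp(-beta * E0[n]) / Z0) * p_mn * np.exp(-beta * (E1[m] - E0[n]))

print(avg, Z1 / Z0)                           # identical up to rounding, for any unitary U
```

since the identity only uses the unitarity of $U_{\tau,0}[\lambda]$ and the canonical form of the initial occupation probabilities , the agreement holds for any ramp speed and any trotter step , which makes it a convenient sanity check when implementing the scheme for larger systems .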
from eq .( [ eq : pmn = pnm ] ) the tasaki - crooks fluctuation theorem is readily obtained for a canonical initial state , using eq .( [ eq : p[w;lambda ] ] ) . using instead an initial microcanonical state at energy , described by the density matrix - functionhas to be understood as a sharply peaked function with infinite support . ]/\omega(e,\lambda_0)\ , , \ ] ] where , we obtain : }{p[e+w ,- w;\widetilde \lambda]}=e^{[s(e+w,\lambda_\tau)-s(e,\lambda_0)]/k_b}\ , , \label{eq : mcft}\end{aligned}\ ] ] where , denotes boltzmann s thermodynamic equilibrium entropy .the corresponding classical derivation was provided by .a classical microcanonical version of the jarzynski equality was put forward by for non - hamiltonian iso - energetic dynamics .it was recently generalized to energy controlled systems by . in the previous sections [ subsec : workpdf ] , [ subsec : workcharfun ] , [ subsec : q - genfun ], we studied a quantum mechanical system at canonical equilibrium at time . during the subsequent action of the protocolit is assumed to be completely isolated from its surrounding apart from the influence of the external work source and hence to undergo a unitary time evolution .the quality of this approximation depends on the relative strength of the interaction between the system and its environment , compared to typical energies of the isolated system as well as on the duration of the protocol . in general , though , a treatment that takes into account possible environmental interactions is necessary .as will be shown below , the interaction with a thermal bath does _ not _ lead to a modification of the jarzynski equality , eq .( [ eq : q - j - identity ] ) , nor of the quantum work fluctuation relation , eq .( [ eq : q - j - w - fluc - theo - lambda ] ) , both in the cases of weak and strong coupling ; a main finding which holds true as well for classical systems . in this sectionwe address the weak coupling case , while the more intricate case of strong coupling is discussed in the next section .we consider a driven system described by the time dependent hamiltonian , in contact with a thermal bath with time independent hamiltonian , see fig .[ fig : opensystem ] .the hamiltonian of the compound system is where the energy contribution stemming from is assumed to be much smaller than the energies of the system and bath resulting from and .the parameter that is manipulated according to a protocol solely enters in the system hamiltonian , . ]is coupled to a bath [ represented by ] via the interaction hamiltonian .the compound system is in vanishingly weak contact with a super - bath , that provides the initial canonical state at inverse temperature , eq .( [ eq : varrho - coupling]).,width=264 ] the compound system is assumed to be initially ( ) in the canonical state where is the corresponding partition function .this initial state may be provided by contact with a super - bath at inverse temperature , see fig .[ fig : opensystem ] .it is then assumed that either the contact is removed for or that the super - bath is so weakly coupled to the compound system that it bears no influence on its dynamics over the time span to .because the system and the environmental hamiltonians commute with each other , their energies can be simultaneously measured .we denote the eigenvalues of as , and those of as .in analogy with the isolated case we assume that at time a joint measurement of and is performed , with outcomes , .a second joint measurement of and at yields the outcomes , . 
in analogy to the energy change of an isolated system , the differences of the eigenvalues of system and bath hamiltonians yield the energy changes of system and bath , and , respectively , in a single realization of the protocol , i.e. , in the weak coupling limit, the change of the energy content of the total system is given by the sum of the energy changes of the system and bath energies apart from a negligibly small contribution due to the interaction hamiltonian .the work performed on the system coincides with the change of the total energy because the force is assumed to act only directly on the system . for the same reason ,the change of the bath energy is solely due to an energy exchange with the system and hence , can be interpreted as negative heat , .accordingly we have ) , the averaged quantity \delta e= \tr\ , \varrho_\tau \mathcal{h}_s(\lambda_\tau)- \tr\ , \varrho(\lambda_0 ) \mathcal{h}_s ( \lambda_0) ] that the system energy changes by and the heat is exchanged , under the protocol : =\sum_{m , n,\mu,\nu}&\delta[\delta e - e_m^{\lambda_\tau}+e_n^{\lambda_0 } ] \delta [ q + e^b_\mu - e^b_\nu ] \nonumber \\ & \times p_{m\mu|n\nu}[\lambda]p^0_{n\nu } \ , , \label{eq : p(e , q)}\end{aligned}\ ] ] where ] can be expressed in terms of the projectors on the common eigenstates of , and the unitary evolution generated by the total hamiltonian . by taking the fourier transform of ] obeys the tasaki - crooks relation : }{p[-w;\widetilde \lambda]}=e^{\beta ( w-\delta f_s ) } .\label{eq : q - j - fluctheo - marginal}\ ] ] subsequently the jarzynski equality , , is also satisfied . thus the fluctuation relation , eq .( [ eq : q - j - w - fluc - theo - lambda ] ) , and the jarzynski equality , eq .( [ eq : q - j - identity ] ) , keep holding , unaltered , also in the case of weak coupling .this result was originally found upon assuming a markovian quantum dynamics for the reduced system dynamics . with the above derivation we followed in which one does not rely on a markovian quantum evolution andconsequently the results hold true as well for a general non - markovian reduced quantum dynamics of the system dynamics . in the case of strong coupling ,the system - bath interaction energy is non - negligible , and therefore it is no longer possible to identify the heat as the energy change of the bath . how to define heat in a strongly coupled driven system and whether it is possible to define it at all currently remain open problems .this , however does not hinder the possibility to prove that the work fluctuation relation , eq .( [ eq : q - j - fluctheo - marginal ] ) , remains valid also in the case of strong coupling .for this purpose it suffices to properly identify the work done on , and the free energy of an open system , without entering the issue of what heat means in a strong - coupling situation . 
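for orientation , the weak - coupling energy balance described in words above ( whose formula was lost ) can be summarized , in our notation and under the stated assumption of a negligible interaction energy , as

\[
w=\Delta E_s+\Delta E_B=\Delta E_s-q ,\qquad q\equiv-\Delta E_B ,
\]

so that $\Delta E_s=w+q$ expresses the first law for a single realization ; the strong - coupling discussion that follows keeps the identification of $w$ with the change of the total energy but abandons this splitting of the remainder into system energy and heat .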
as for the classical case ,[ subsec : inc - exc ] , the system hamiltonian is the only time dependent part of the total hamiltonian .therefore the work done on the open quantum system , coincides with the work done on the total system , as in the weak coupling case treated in the previous section , sec .[ subsec : weak ] .consequently , the work done on an open quantum system in a single realization is where are the eigenvalues of the total hamiltonian .regarding the proper identification of the free energy of an open quantum system , the situation is more involved because the bare partition function can not take into account the full effect of the environment in any case other than the limiting situation of weak coupling . for strong couplingthe equilibrium statistical mechanical description has to be based on a partition function of the open quantum system that is given as the ratio of the partition functions of the total system and the isolated environment , i.e. : where and with , denoting the trace over the bath hilbert space and the total hilbert space , respectively .it must be stressed that in general , the partition function of an open quantum system differs from its partition function in absence of a bath : the equality is restored , though , in the limit of of weak coupling .the free energy of an open quantum system follows according to the standard rule of equilibrium statistical mechanics as in this way the influences of the bath on the thermodynamic properties of the system are properly taken into account . besides , eq .( [ eq : q - fs - strong ] ) complies with all the grand laws of thermodynamics . for a total system initially prepared in the gibbs state , eq .( [ eq : varrho - coupling ] ) , the tasaki - crooks fluctuation theorem , eq .( [ eq : q - j - w - fluc - theo - lambda ] ) , applies yielding }{p[-w;\widetilde \lambda]}= \frac{y(\lambda_\tau)}{y(\lambda_0)}e^{\beta w } .\ ] ] since does not depend on time , the salient relation holds , leading to : }{p[-w;\widetilde \lambda]}= \frac{\mathcal z_s(\lambda_\tau)}{\mathcal z_s(\lambda_0)}e^{\beta w}= e^{\beta ( w- \delta f_s)}\ ] ] where is the proper free energy difference of an open quantum system . since coincides with the work performed on the open system the tasaki - crooks theorem , eq .( [ eq : q - j - fluctheo - marginal ] ) , is recovered in the case of strong coupling .the transport of energy and matter between two reservoirs that stay at different temperatures and chemical potentials represents an important experimental set - up , see also sec .[ sec : exp ] below , as well as a central problem of non - equilibrium thermodynamics . here, the two - measurement scheme described above in conjunction with the principle of microreversibility leads to fluctuation relations similar to the tasaki - crooks relation , eq . 
( [ eq : q - j - w - fluc - theo - lambda ] ) , for the probabilities of energy and matter exchanges .the resulting fluctuation relations have been referred to as `` exchange fluctuation theorems '' , to distinguish them from the `` work fluctuation theorems '' .the first quantum exchange fluctuation theorem was put forward by .it applies to two systems initially at different temperatures that are allowed to interact over the lapse of time , via a possibly time - dependent interaction .this situation was later generalized by and , to allow for the exchange of energy and particles between several interacting systems initially at different temperatures and chemical potentials ; see the schematic illustration in fig .[ fig : xft ] .( symbolized by the small circle ) , which is switched on at time and switched off at . during the on - period the reservoirs exchange energy and matter with each other .the resulting net energy change of the -th reservoir is and its particle content changes by .initially the reservoirs have prescribed temperatures , , and chemical potentials , , of the particle species that are exchanged , respectively.,width=188 ] the total hamiltonian consisting of subsystems is : where is the hamiltonian of the -th system , and describes the interaction between the subsystems , which sets in at time and ends at time .consequently for , and in particular .as before , it is very important to distinguish between the values at a specific time and the whole protocol .initially , the subsystems are supposed to be isolated from each other and to stay in a factorized grand canonical state }}/{\xi_i}\ , , \label{eq : varrho_0-factorized}\ ] ] with the chemical potential , inverse temperature , and grand potential , respectively , of subsystem . here , and denote the particle number operator and the trace of the subsystem , respectively .we also assume that in absence of interaction the particle numbers in each subsystem are conserved , i.e. , =0 ] , =0 ] for any .accordingly , one may measure all the s and all the s simultaneously . adopting the two - measurement scheme discussed above in the context the work fluctuation relation ,we make a first measurement of all the s and all the s at .accordingly , the wave function collapses onto a common eigenstate of all these observable with eigenvalues , .subsequently , this wave function evolves according to the evolution ] completely describes the effect of the interaction protocol =\sum_{m , n } \prod_i \delta(\delta e_i- e^i_m+ e^i_n ) \nonumber \\ \times \delta(\delta n_i- n^i_m+ n^i_n ) p_{m|n}[\mathcal v]p_n^0\ , , \end{aligned}\ ] ] where ] is the initial distribution of energies and particles . herethe symbols and , are short notations for the individual energy and particle number changes of all subsystems and , respectively . assuming that the total hamiltonian commutes with the time reversal operator at any instant of time , and using the time reversal property of the transition probabilities , eq .( [ eq : pmn = pnm ] ) , one obtains that }{p[-\delta \mathbf e,-\delta \mathbf n ; \widetilde{\mathcal v } ] } = \prod_i e^{\beta_i [ \delta e_i-\mu_i \delta n_i]}. \label{eq : xft - general}\end{aligned}\ ] ] this equation was derived by , and expresses the exchange fluctuation relation for the case of transport of energy and matter . 
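to make the structure of this relation concrete , the following python sketch ( ours , purely illustrative ) enumerates the two - measurement statistics for the simplest special case : pure heat exchange between two resonant qubits prepared at different temperatures , with a constant interaction switched on during the protocol and no particle exchange . the printed ratio of forward and reversed probabilities is compared with the exponential factor on the right - hand side ; because the protocol is time - symmetric and all hamiltonians are real , the backward process coincides with the forward one here , so a single evolution operator suffices .

```python
import numpy as np
from scipy.linalg import expm
from collections import defaultdict

# Two resonant qubits (hbar = k_B = 1) exchanging energy through a constant
# interaction that is on during [0, tau]; heat-exchange special case, no particles.
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])       # sigma_+
I2 = np.eye(2)
omega, g, tau = 1.0, 0.3, 2.0
beta1, beta2 = 0.5, 2.0                       # subsystem 1 is the hotter one

H1 = 0.5 * omega * np.kron(sz, I2)
H2 = 0.5 * omega * np.kron(I2, sz)
V = g * (np.kron(sp, sp.T) + np.kron(sp.T, sp))      # exchange coupling
U = expm(-1j * (H1 + H2 + V) * tau)

# Factorized Gibbs initial state; a joint measurement of H1 and H2 at t = 0
# collapses onto the computational basis, which diagonalizes both.
e1, e2 = np.diag(H1), np.diag(H2)
p0 = np.exp(-beta1 * e1 - beta2 * e2)
p0 /= p0.sum()

# Joint distribution of the subsystem energy changes (dE1, dE2).
P = defaultdict(float)
for n in range(4):
    for m in range(4):
        key = (round(e1[m] - e1[n], 9), round(e2[m] - e2[n], 9))
        P[key] += abs(U[m, n]) ** 2 * p0[n]

# Exchange fluctuation relation (heat-only case):
#   P(dE1, dE2) / P(-dE1, -dE2) = exp(beta1*dE1 + beta2*dE2)
for (d1, d2), p in sorted(P.items()):
    q = P.get((-d1, -d2), 0.0)
    if p > 1e-12 and q > 1e-12:
        print(f"dE1={d1:+.1f} dE2={d2:+.1f}  ratio={p / q:.6f}  "
              f"predicted={np.exp(beta1 * d1 + beta2 * d2):.6f}")
```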
in the case of a single isolated system ( ), it reduces to the tasaki - crooks work fluctuation theorem , eq .( [ eq : q - j - w - fluc - theo - lambda ] ) , upon rewriting and assuming that the total number of particles is conserved also when the interaction is switched on .i.e. , =0 ] . in order to determine the backward probability the same type of experiments has to be repeated with the inverted protocol , starting from an equilibrium state at inverse temperature and at those parameter values that are reached at the end of the forward protocol . suggest an experiment that follows exactly the procedure described above .they propose to implement a quantum harmonic oscillator by optically trapping an ion in the quadratic potential generated by a laser trap , using the set - up developed by . in principle, the set - up of allows , on one hand , to drive the system by changing in time the stiffness of the trap , and , on the other hand , to probe whether the ion is in a certain fock state ; i.e. , in an energy eigenstate of the harmonic oscillator. the measurement apparatus may be understood as a single fock state `` filter '' , whose outcome is yes " or `` no '' , depending on whether the ion is or is not in the probed state .thus the experimentalist probes each possible outcome , where denotes the fock states at time and , respectively .then , the relative frequency of the outcome occurrence is recored by repeating the driving protocol many times always preparing the system in the same canonical initial state . in this waythe joint probabilities ^0 ] , eq .( [ eq : pmn = pnm ] ) and compare their experimental values with the known theoretical values .another suitable quantum system to test quantum fluctuation relations are quantum versions of nanomechanical oscillator set - ups that with present day nanotechnology are at the verge of entering the quantum regime . in these systemswork protocols can be imposed by optomechanical means .currently , the experiment proposed by has not yet been carried out .an analogous experiment could , in principle , be performed in a circuit quantum electrodynamics ( qed ) set - up as the one described in ( , ) .the set - up consists of a cooper pair box qubit ( a two states quantum system ) that can be coupled to and de - coupled from a superconducting 1d transmission line , where the latter mimics a quantum harmonic oscillator . with this architectureit is possible to implement various functions with very high degree of accuracy . among themthe following tasks are of special interest in the present context : ( i ) creation of pure fock states , i.e. , the energy eigenstates of the quantum harmonic oscillator in the resonator .( ii ) measurement of photon statistics , i.e. , measurements of the population of each quantum state of the oscillator .( iii ) driving the resonator by means of an external field . report , for example , on the creation of the ground fock state , followed by a driving protocol ( a properly engineered microwave pulse applied to the resonator ) that `` displaces '' the oscillator and creates a coherent state , whose photon statistics ] is actually the conditional probability to find the state at time , given that the system was in the state at time .thus , by preparing the oscillator in the fock state instead of the ground state , and repeating the same driving and readout as before , the matrix ] , eq .( [ eq : pmn = pnm ] ) , which in turn implies the work fluctuation relation , see sec .[ subsec : microrev+condprobs ] . 
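as a rough companion to these proposals ( not a model of any specific set - up ) , the sketch below enumerates the two - point - measurement work statistics for a driven two - level system , a stand - in for the oscillator , and checks the tasaki - crooks ratio and the jarzynski average against their predicted values . the ramp , coupling and temperature are arbitrary choices of ours ; the point is only to show which quantities such an experiment has to estimate .

```python
import numpy as np
from scipy.linalg import expm
from collections import defaultdict

# Driven two-level system (hbar = k_B = 1); the level splitting is ramped from
# gap0 to gap1 over a time tau while a fixed transverse coupling g is present.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
beta, gap0, gap1, g, tau, steps = 1.0, 1.0, 2.5, 0.4, 3.0, 2000

def H(lam):
    return 0.5 * lam * sz + 0.5 * g * sx

def evolve(protocol):
    # Time-ordered product over midpoint times; midpoint sampling keeps the
    # discretized forward and backward evolutions exactly time-reversal related.
    U, dt = np.eye(2, dtype=complex), tau / steps
    for k in range(steps):
        U = expm(-1j * H(protocol((k + 0.5) * dt)) * dt) @ U
    return U

def forward(t):
    return gap0 + (gap1 - gap0) * t / tau

def backward(t):
    return forward(tau - t)          # time-reversed protocol

U_F, U_B = evolve(forward), evolve(backward)
E0, V0 = np.linalg.eigh(H(gap0))     # energies / eigenstates at t = 0
E1, V1 = np.linalg.eigh(H(gap1))     # energies / eigenstates at t = tau
p0 = np.exp(-beta * E0)
Z0 = p0.sum()
p0 /= Z0
p1 = np.exp(-beta * E1)
Z1 = p1.sum()
p1 /= Z1
dF = -np.log(Z1 / Z0) / beta         # equilibrium free energy difference

pF, pB = defaultdict(float), defaultdict(float)
for n in range(2):
    for m in range(2):
        w = round(E1[m] - E0[n], 9)                                   # work value of the jump n -> m
        pF[w] += abs(V1[:, m].conj() @ U_F @ V0[:, n]) ** 2 * p0[n]   # forward protocol
        pB[w] += abs(V0[:, n].conj() @ U_B @ V1[:, m]) ** 2 * p1[m]   # backward protocol, keyed by -W_backward

print("Jarzynski: <exp(-beta W)> =", sum(np.exp(-beta * w) * p for w, p in pF.items()),
      "  exp(-beta dF) =", np.exp(-beta * dF))
for w in sorted(pF):                 # Tasaki-Crooks: p_F(W)/p_B(-W) = exp(beta (W - dF))
    print(f"W={w:+.4f}  ratio={pF[w] / pB[w]:.6f}  predicted={np.exp(beta * (w - dF)):.6f}")
```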
at variance with the proposal of , in this case the initial statewould not be randomly sampled from a canonical state , but would be rather created deterministically by the experimenter .the theoretical values of transition probabilities for this case corresponding to a displacement of the oscillator were first reported by , see also . provided an analytical expression for the characteristic function of work and investigated in detail the work probability distribution function and its dependence on the initial state , such as for example canonical , microcanonical , and coherent state .so far we have addressed possible experimental tests of the tasaki - crooks work fluctuation theorem , eq .( [ eq : q - j - w - fluc - theo - lambda ] ) , for isolated systems .the case of open systems , interacting with a thermal bath , poses extra difficulties related to the fact that in order to measure the work in this case one should make two measurements of the energy of the total macroscopic system , made up of the system of interest and its environment .this presents an extra obstacle that at the moment seems difficult to surmount except for a situation at ( i ) weak coupling and ( ii ) , then yielding , together with eq .( [ eq : firstlaw ] ) . like the quantum work fluctuation relations, the quantum exchange fluctuation relations are understood in terms of two point quantum measurements . in an experimental test , the net amount of energy and/or number of particles ( depending which of the three exchange fluctuation relations , eqs .( [ eq : xft - general ] , [ eq : xft - heat ] , [ eq : xft - matter ] ) , is studied ) has to be measured in each subsystems twice , at the beginning and at the end of the protocol. however typically these are macroscopic reservoirs , whose energy and particle number measurement is practically impossible .thus , seemingly , the verification of the exchange fluctuation relations would be even more problematic than the validation of the quantum work fluctuation relations . indeed , while experimental tests of the work fluctuation relations have not been reported yet , experiments concerning quantum exchange fluctuation relations have been already performed . in the followingwe shall discuss two of them , one by and the other by . in doingso we demonstrate how the obstacle of energy / particle content measurement of macroscopic reservoirs was circumvented . have recently performed an experimental verification of the particle exchange fluctuation relation , eq .( [ eq : xft - matter ] ) , using bi - directional electron counting statistics .the experimental set - up consists of two electron reservoirs ( leads ) at the same temperature .the two leads are connected via a double quantum dot , see fig .[ fig : bidirectional - counting ] . ) and same temperature ( ) are connected through a double quantum dot ( small circles ) , whose quantum state is continuously monitored .the state ( 1,0 ) , i.e. , `` one electron in the left dot , no electrons in the right dot '' is depicted .the transition from this state to the state ( 0,1 ) signals the exchange of one electron from subsystem 1 to subsystem 2 . 
and denote the hamiltonian and electron number operators of the subsystems , respectively.,width=188 ] when an electric potential difference is applied to the leads , a net flow of electrons starts transporting charges from one lead to the other , via lead - dot and dot - dot quantum tunnelings .the measurement apparatus consists of a secondary circuit in which a current flows due to an applied voltage .thanks to a properly engineered coupling between secondary circuit and the double quantum dot , the current in the circuit depends on the quantum state of the double dot .the latter has four relevant states , which we shall denote as corresponding respectively to : no electrons in the left dot and no electrons in the right dot , no electrons in the left dot and one electron in the right dot , etc . , .each of these states leads to a different value of the current in the secondary circuit . in the experimentan electric potential difference is applied to the two leads for a time . during this timethe state of the double quantum dot is monitored by registering the current in the secondary circuit .this current was found to switch between the four values corresponding to the four quantum states mentioned above .the outcome of the experiment is a sequence of current values , with taking only four possible values . in other terms , the outcome of the experiment consists of a sequence ( with ) of joint eigenvalues of two commuting observables specifying the occupation of the left ( ) and right ( ) dots by single electrons at the time of the measurement .the presence of an exchange of entries within one time step of the form signals the transfer of one electron from left to right , and vice versa the transfer from right to left .thus , given a sequence , the total number ] , eq .( [ eq : xft - matter ] ) , was satisfied with the actual temperature of the leads replaced by an effective temperature , see fig [ fig : histogram ] .the renormalization of temperature was explained as an effect due to an exchange of electrons occurring between the dots and the secondary circuit .the question however remains of how to connect this experiment in which the flux of electrons through an interface is monitored and the theory , leading to eq .( [ eq : xft - matter ] ) , which instead prescribes only two measurements of total particle numbers in the reservoirs .the answer was given in , who showed that the exchange fluctuation relation , eq .( [ eq : xft - general ] ) , remains valid , if in addition to the two measurements of total energy and particle numbers occurring at and , the evolution of a quantum system is interrupted by means of projective quantum measurements of any observable that commutes with the quantum time reversal operator .in other words , while the forward and backward probabilities are affected by the occurrence of intermediate measurement processes , their ratio remains unaltered . 
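the counting step just described is easy to emulate . in the toy sketch below a three - state markov chain stands in for the monitored charge states ( 0,0 ) , ( 1,0 ) and ( 0,1 ) ; the transition probabilities are invented for illustration , with a single - cycle affinity playing the role of the applied bias . the net number of left - to - right transfers is read off each window exactly as in the experiment , and for long observation windows the histogram ratio is expected to approach the affinity in the sense of the steady - state fluctuation relation ; no quantitative correspondence with the actual device is implied .

```python
import numpy as np

rng = np.random.default_rng(0)

# Detectable charge states of the double dot: 0 = (0,0), 1 = (1,0), 2 = (0,1).
# Toy one-step transition matrix for the monitored trace (invented numbers);
# the forward cycle (0,0)->(1,0)->(0,1)->(0,0) is slightly favoured, mimicking
# a small bias across the junction.  The matrix is doubly stochastic, so the
# uniform initial condition below is already the stationary state.
P = np.array([[0.900, 0.055, 0.045],
              [0.045, 0.900, 0.055],
              [0.055, 0.045, 0.900]])
cum = np.cumsum(P, axis=1)
cum[:, -1] = 1.0                                  # guard against rounding in the sampler
affinity = 3 * np.log(0.055 / 0.045)              # single-cycle affinity ("eV / k_B T")

n_windows, window = 20000, 2000                   # independent observation windows
state = rng.integers(0, 3, size=n_windows)
q = np.zeros(n_windows, dtype=int)                # net left -> right transfers per window

for _ in range(window):
    nxt = (rng.random(n_windows)[:, None] < cum[state]).argmax(axis=1)
    q += (state == 1) & (nxt == 2)                # (1,0) -> (0,1): one electron moved right
    q -= (state == 2) & (nxt == 1)                # (0,1) -> (1,0): one electron moved back
    state = nxt

vals, counts = np.unique(q, return_counts=True)
hist = dict(zip(vals.tolist(), counts.tolist()))

# For long windows, ln[P(q)/P(-q)]/q is expected to approach the affinity.
print("affinity:", affinity)
for k in sorted(v for v in hist if v > 0 and -v in hist):
    print(f"q={k:+d}:  ln[P(q)/P(-q)]/q = {np.log(hist[k] / hist[-k]) / k:.3f}")
```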
in the experiment of one does not need to measure the initial and final content of particles in the reservoirs because the number of exchanged particles is inferred from the sequence of intermediate measurements outcomes .thus , thanks to the fact that quantum measurements do not alter the fluctuation relation , one may overcome the problem of measuring the energy and number of particles of the macroscopic reservoirs , by monitoring instead the flux through a microscopic junction .as discussed in the introduction , the original motivation for the study of fluctuation relations was to overcome the limitations of linear response theory and to obtain relations connecting higher order response functions to fluctuation properties of the unperturbed system .as an indirect and partial confirmation of the fluctuation relations higher order static fluctuation - response relations can be tested experimentally . such a validation was recently accomplished in coherent quantum transport experiments by ( , ) , where the average current and the zero - frequency current noise power generated in an aharonov - bohm ring were investigated as a function of an applied dc voltage , and magnetic field . in the nonlinear response regime , the current and noise power may be expressed as power series of the applied voltage : where the coefficients depend on the applied magnetic field . the steady state fluctuation theorem , eq .( [ eq : ssft ] ) , then predicts the following fluctuation relations where , , and analogous definitions for and .the first equation in ( [ eq : s012 ] ) is the johnson - nyquist relation . in the experiment by good quantitative agreement with the first and the third expressions in ( [ eq : s012 ] ) was established , whereas , for the time being , only qualitative agreement was found with the second relation .the higher order static fluctuation dissipation relations ( [ eq : s012 ] ) were obtained from a steady state fluctuation theorem for particle exchange under the simplifying assumption that no heat exchange occurs .then the probability of transferring particles is related to the probability of the reverse transfer by where is the so - called affinity . if both sides are multiplied by and integrated over a comparison of equal powers of applied voltage yields eq .( [ eq : s012 ] ) . an alternative approach , that also allows to include the effect of heat conduction , is offered by the fluctuation theorems for currents in open quantum systems .this objective has been put forward by and also by , based on a generating function approach in the spirit of eq .( [ eq : q - j - gen - functional - identity ] ) .in closing this colloquium we stress that the known fluctuation relations are based on two facts : ( a ) microreversibility for non - autonomous hamiltonian systems eq .( [ eq : q - microreversibility ] ) , and ( b ) the special nature of the initial equilibrium states which is expressible in either micro - canonical , canonical or grand - canonical form , or products thereof .the final state reached at the end of a protocol though is in no way restricted .it evolves from the initial state according to the governing dynamical laws under a prescribed protocol . in generalthis final state may markedly differ from any kind of equilibrium state . for quantum mechanical systemsit also is of utmost importance to correctly identify the work performed on a system as the difference between the energy of the system at the end and the beginning of the protocol . 
in case of open systems the difference of the energies of the total system at the end and beginning of the protocolcoincides with the work done on the open system as long as the forces exclusively act on this open system . with the free energy of an open system determined as the difference of free energies of the total system and that of the isolated environment the quantum and classical jarzynski equality and the tasaki - crooks theorem continue to hold true even for systems strongly interacting with their environment .deviations from the fluctuation relations however must be expected if protocol forces not only act on the system alone but as well directly on the environmental degrees of freedom ; for example , if a time - dependent system - bath interaction protocol is applied .the most general and compact formulation of quantum work fluctuation relations also containing the onsager - casimir reciprocity relations and nonlinear response to all orders , is the andrieux - gaspard relation , eq .( [ eq : q - j - gen - functional - identity ] ) which represents the proper quantum version of the classical bochkov - kuzovlev formula , eq .( [ eq : bk - gen - functional - identity ] ) .these relations provide a complete theoretical understanding of those nonequilibrium situations that emerge from arbitrary time - dependent perturbations of equilibrium initial states .less understood are exchange fluctuation relations with their important applications to counting statistics .the theory there so far is restricted to situations where the initial state factorizes into grand - canonical states of reservoirs at different temperatures or chemical potentials .the interaction between these reservoirs is turned on and it is assumed that it will lead to a steady state within the duration of the protocol .experimentally , it is in general difficult to exactly follow this prescription and therefore a comparison of theory and experiment is only meaningful for the steady state .alternative derivations of exchange relations for more realistic , non - factorizing initial states would certainly be of much interest . in this context , the issue of deriving quantum fluctuation relations for open systems that initially are in nonequilibrium steady quantum transport states constitutes a most interesting challenge .likewise , from the theoretical point of view little is known thus far about quantum effects for transport in presence of time dependent reservoirs , for example using a varying temperature and/or chemical potentials .the experimental applications and validation schemes involving nonlinear quantum fluctuation relations still are in a state of infancy , as detailed with sec .[ sec : exp ] , so that there is plenty of room for advancements . the major obstacle for the experimental verification of the work fluctuation relation is posed by the necessity of performing quantum projective measurements of energy .besides the proposal of employing trapped ions , we suggested here the scheme of a possible experiment employing circuit - qed architectures . 
in regard to exchange fluctuation relations instead, the main problem is related to the difficulty of measuring microscopic changes of macroscopic quantities pertaining to heat and matter reservoirs .continuous measurements of fluxes seemingly provide a practical and efficient loophole for this dilemma .the idea that useful work may be obtained by using information has established a connection between the topical fields of quantum information theory and quantum fluctuation relations . and used fluctuation relations and information theoretic measures to derive landauer s principle .a generalization of the jarzynski equality to the case of feedback controlled systems was provided in the classical case by , and in the quantum case by .recently gave bounds on the entropy production in terms of quantum information concepts .in similar spirit , presented a method by relating relative quantum entropy to the quantum jarzynski fluctuation identity in order to quantify multi - partite entanglement within different thermal quantum states .a practical application of the jarzynski equality in quantum computation was showed by . in conclusion ,the authors are confident in their belief that this topic of quantum fluctuation relations will exhibit an ever growing activity within nanosciences and further may invigorate readers to pursue still own research and experiments as this theme certainly offers many more surprises and unforeseen applications .the authors thank for providing the data for fig . [fig : histogram ] .this work was supported by the cluster of excellence nanosystems initiative munich ( nim ) and the volkswagen foundation ( project i/83902 ) .we report below the steps leading to eq .( [ eq : bk - gen - functional - identity ] ) e^{-\beta w_0}\right\rangle_{\lambda}=\nonumber\\ = & \int \!\mathrm{d}\mathbf{z}_0 \frac{e^{-\beta[h_0(\mathbf{z}_0)+w_0]}}{z(t_0)}\exp \left[\int_{0}^{\tau}\ ! \mathrm{d } s\ , u_s b(\varphi_{s,0}[\mathbf{z}_0;\lambda])\right ] \nonumber\\ = & \int\!\mathrm{d}\mathbf{z}_\tau \rho_0(\mathbf{z}_\tau ) \exp \left[\int_{0}^{\tau}\!\!\!\ ! \mathrm{d}s \ , u_s b(\varepsilon\varphi_{\tau - s,0}[\varepsilon \mathbf{z}_\tau;\varepsilon_q\widetilde \lambda])\right ] \nonumber\\ = & \int \ !\mathrm{d}\mathbf{z}_\tau ' \rho_0(\mathbf{z}'_\tau ) \exp \left[\int_{0}^{\tau}\!\!\!\!\mathrm{d}r\ , u_{\tau - r } \varepsilon_b b(\varphi_{r,0}[\mathbf{z}'_\tau;\varepsilon_q\widetilde \lambda])\right ] \ , , \label{eq : bkderivation}\end{aligned}\ ] ] where the first equality provides an explicit expression for the l.h.s . of eq .( [ eq : bk - gen - functional - identity ] ) . in going from the second to the third line we employed the expression of work in eq .( [ eq : ex - work ] ) , the microreversibility principle ( [ eq : microreversibility ] ) and made the change of variable .the jacobian of this transformation is unity , because the time evolution in classical mechanics is a canonical transformation . a further change of variables , whose jacobian is unity as well , and the change , yields the expression in the last line , that coincides with the right hand side of eq .( [ eq : bk - gen - functional - identity ] ) . 
in the last line we used the property , inherited by from the assumed time reversal invariance of the hamiltonian , .in order to prove the quantum principle of microreversibility , we first discretize time and express the time evolution operator ] .therefore , = \nonumber \\ & \tr \,\theta^\dagger u_{\tau,0}[\widetilde \lambda ] \theta e^{i u \mathcal h(\lambda_\tau ) } \theta^\dagger u^{\dagger}_{\tau,0}[\widetilde\lambda ] \theta e^{-i u \mathcal h(\lambda_0 ) } e^{-\beta \mathcal h(\lambda_0 ) } \theta ^\dagger\theta\;,\end{aligned}\ ] ] where we inserted under the trace . using eq .( [ eq : theta - e^ih - theta ] ) , we obtain =\\ & \tr \,\theta^\dagger u_{\tau,0}[\widetilde\lambda ] e^{-i u^ * \mathcal h(\lambda_\tau ) } u_{\tau,0}^{\dagger}[\widetilde\lambda ] e^{i u^ * \mathcal h(\lambda_0 ) } e^{-\beta \mathcal h(\lambda_0)}\theta.\nonumber\end{aligned}\ ] ] the anti - linearity of implies , for any trace class operator : using this we can write =\tr \,e^{-\beta \mathcal h(\lambda_0 ) } e^{-i u \mathcal h(\lambda_0 ) } u_{\tau,0}[\widetilde\lambda ] e^{i u \mathcal h(\lambda_\tau ) } u^{\dagger}_{\tau,0}[\widetilde\lambda ] \ ; .\ ] ] using the cyclic property of the trace one then obtains the important result & \nonumber \\ = \tr \ , u^{\dagger}_{\tau,0}[\widetilde\lambda]&e^{i(-u+i\beta ) \mathcal{h}(\lambda_0 ) } u_{\tau,0}[\widetilde\lambda ]e^{-i(-u+i\beta ) \mathcal{h}(\lambda_0 ) } e^{-\beta \mathcal{h}(\lambda_0 ) } \nonumber\\ = & \mathcal z(\lambda_\tau ) g[-u+i\beta;\widetilde \lambda]\ ; .\end{aligned}\ ] ]
|
two fundamental ingredients play a decisive role in the foundation of fluctuation relations : the principle of microreversibility and the fact that thermal equilibrium is described by the gibbs canonical ensemble . building on these two pillars we guide the reader through a self - contained exposition of the theory and applications of quantum fluctuation relations . these are exact results that constitute the fulcrum of the recent development of nonequilibrium thermodynamics beyond the linear response regime . the material is organized in a way that emphasizes the historical connection between quantum fluctuation relations and ( non-)linear response theory . we also attempt to clarify a number of fundamental issues which were not completely settled in the prior literature . the main focus is on ( i ) work fluctuation relations for transiently driven closed or open quantum systems , and ( ii ) fluctuation relations for heat and matter exchange in quantum transport settings . recently performed and proposed experimental applications are presented and discussed .
|
symmetric positive definite matrices arise in many areas in a variety of guises : covariances , kernels , graph laplacians , or otherwise .a basic computation with such matrices is evaluation of the bilinear form , where is a matrix function and , are given vectors .if , we speak of computing a _ bilinear inverse form ( bif ) _ .for example , with ( canonical vector ) is the diagonal entry of the inverse . in this paper , we are interested in efficiently computing bifs , primarily due to their importance in several machine learning contexts , e.g. , evaluation of gaussian density at a point , the woodbury matrix inversion lemma , implementation of mcmc samplers for determinantal point processes ( dpp ) , computation of graph centrality measures , and greedy submodular maximization ( see section [ sec : motiv.app ] ) .when is large , it is preferable to compute iteratively rather than to first compute ( using cholesky ) at a cost of operations .one could think of using conjugate gradients to solve approximately , and then obtain .but several applications require precise bounds on numerical estimates to ( e.g. , in mcmc based dppsamplers such bounds help decide whether to accept or reject a transition in each iteration see section [ sec : mcdpp ] ) , which necessitates a more finessed approach .gauss quadrature is one such approach .originally proposed in for approximating integrals , gauss- and _ gauss - type quadrature _( i.e. , gauss - lobatto and gauss - radau quadrature ) have since found application to bilinear forms including computation of . also show that gauss and ( right ) gauss - radau quadrature yield lower bounds , while gauss - lobatto and ( left ) gauss - radau yield upper bounds on the bif . however , despite its long history and voluminous existing work ( see e.g. , ) , our understanding of gauss - type quadrature for matrix problems is far from complete .for instance , it is not known whether the bounds on bifs improve with more quadrature iterations ; nor is it known how the bounds obtained from gauss , gauss - radau and gauss - lobatto quadrature compare with each other ._ we do not even know how fast the iterates of gauss - radau or gauss - lobatto quadrature converge ._ * contributions . *we address all the aforementioned problems and make the following main contributions : =1em we show that the lower and upper bounds generated by gauss - type quadrature monotonically approach the target value ( theorems [ thm : lowbtwn ] and [ thm : upbtwn ] ; corr .[ cor : monlow ] ) .furthermore , we show that for the same number of iterations , gauss - radau quadrature yields bounds superior to those given by gauss or gauss - lobatto , but somewhat surprisingly all three share the same convergence rate .we prove linear convergence rates for gauss - radau and gauss - lobatto explicitly ( theorems [ thm : rrconv ] and [ thm : lrconv ] ; corr .[ cor : loconv ] ) .we demonstrate implications of our results for two tasks : ( i ) scalable markov chain sampling from a dpp ; and ( ii ) running a greedy algorithm for submodular optimization . in these applications, quadrature accelerates computations , and the bounds aid early stopping . indeed , on large - scale sparse problems our methods lead to even several orders of magnitude in speedup .[ [ related - work . ] ] related work .+ + + + + + + + + + + + + there exist a number of methods for efficiently approximating matrix bilinear forms . 
and use extrapolation of matrix moments and interpolation to estimate the 2-norm error of linear systems and the trace of the matrix inverse . the extrapolation method to bifs and show that the derived one - term and two - term approximations coincide with gauss quadrature , hence providing lower bounds .further generalizations address for a hermitian matrix .in addition , other methods exist for estimating trace of a matrix function or diagonal elements of matrix inverse .many of these methods may be applied to computing bifs .but they do not provide intervals bounding the target value , just approximations .thus , a black - box use of these methods may change the execution of an algorithm whose decisions ( e.g. , whether to transit in a markov chain ) rely on the bif value to be within a specific interval .such changes can break the correctness of the algorithm .our framework , in contrast , yields iteratively tighter lower and upper bounds ( section [ sec : main ] ) , so the algorithm is guaranteed to make correct decisions ( section [ sec : algos ] ) .bifs are important to numerous problems .we recount below several notable examples : in all cases , efficient computation of bounds on bifs is key to making the algorithms practical . * determinantal point processes . *a determinantal point process ( dpp ) is a distribution over subsets of a set ( ) . in its _ l - ensemble _ form ,a dppuses a positive semidefinite kernel , and to a set assigns probability where is the submatrix of indexed by entries in .if we restrict to , we obtain a -dpp .dpp s are widely used in machine learning , see e.g. , the survey . exact sampling from a ( -)dpprequires eigendecomposition of , which is prohibitive .for large , metropolis hastings ( mh ) or gibbs sampling are preferred and state - of - the - art . therein the core task is to compute transition probabilities an expression involving bifs which are compared with a random scalar threshold . for mh , the transition probabilities from a current subset ( state ) to are for ; and for . in a -dpp ,the moves are swaps with transition probabilities for replacing by ( and ) .we illustrate this application in greater detail in section [ sec : mcdpp ] .dpps are also useful for ( repulsive ) priors in bayesian models .inference for such latent variable models uses gibbs sampling , which again involves bifs . * submodular optimization , sensing .* algorithms for maximizing submodular functions can equally benefit from efficient bif bounds .given a positive definite matrix , the set function is _ submodular _: for all ] , it holds that .finding the set ] between observed and unobserved variables . greedy algorithms for maximizing monotone or non - monotone submodular functions rely on marginal gains of the form for and \backslash s ] is _exact_. for gauss quadrature , we can recursively build the _ jacobi matrix _ and obtain from its spectrum the desired weights and nodes .theorem [ thm : gauss ] makes this more precise .[thm : gauss ] the eigenvalues of form the nodes of gauss quadrature ; the weights are given by the squares of the first components of the eigenvectors of . if has the eigendecomposition , then for gauss quadrature thm .[ thm : gauss ] yields given and , our task is to compute and the jacobi matrix .for bifs , we have that , so becomes , which can be computed recursively using the lanczos algorithm . 
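as a rough illustration of this recursion ( not code from the paper ; all names are ours ) , the following python sketch runs plain lanczos without reorthogonalization , assembles the jacobi matrix from the three - term recurrence , and reads the gauss estimate of the bilinear inverse form off the ( 1,1 ) entry of its inverse . for a symmetric positive definite matrix the printed estimates approach the exact value from below .

```python
import numpy as np

def lanczos_coeffs(A, u, n):
    """n steps of the Lanczos recurrence on (A, u).

    Returns the recurrence coefficients alpha_1..alpha_n and beta_1..beta_n that
    define the Jacobi matrix J_n; beta_n is kept because the Gauss-Radau
    modification sketched further below needs it.  No reorthogonalization.
    """
    alphas, betas = [], []
    q_prev = np.zeros_like(u, dtype=float)
    q = u / np.linalg.norm(u)
    beta = 0.0
    for _ in range(n):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    return np.array(alphas), np.array(betas)

def tridiag(diag, off):
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

def gauss_lower_bound(A, u, n):
    """n-point Gauss estimate of u^T A^{-1} u: ||u||^2 [J_n^{-1}]_{1,1}; a lower bound for SPD A."""
    alphas, betas = lanczos_coeffs(A, u, n)
    J = tridiag(alphas, betas[:-1])
    e1 = np.zeros(n)
    e1[0] = 1.0
    return (u @ u) * (e1 @ np.linalg.solve(J, e1))

# sanity check against a direct solve
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T + 200 * np.eye(200)          # a well-conditioned SPD test matrix
u = rng.standard_normal(200)
print("exact:", u @ np.linalg.solve(A, u))
for n in (2, 4, 8, 16):
    print(n, gauss_lower_bound(A, u, n))
```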
for gauss - radau and gauss - lobatto quadrature we can compute modified versions of jacobi matrices ( for left gauss - radau ) , ( for right gauss - radau ) and ( for gauss - lobatto ) based on . the corresponding nodes and weights , andthus the approximation of gauss - radau and gauss - lobatto quadratures , are then obtained from these modified jacobi matrices , similar to gauss quadrature . aggregating all these computations yields an algorithm that iteratively obtains bounds on .the combined procedure , _ gauss quadrature lanczos ( gql ) _ , is summarily presented as algorithm [ algo : gql ] .the complete algorithm may be found in appendix [ append : sec : gauss ] .[lem : bounds ] let , , , and be the -th iterates of gauss , left gauss - radau , right gauss - radau , and gauss - lobatto quadrature , respectively , as computed by alg .[ algo : gql ] . then , and provide lower bounds on , while and provide upper bounds .* initialize * : , , update using a lanczos iteration solve for the modified jacobi matrices , and .compute , , and with sherman - morrison formula .it turns out that the bounds given by gauss quadrature have a close relation to the approximation error of conjugate gradient ( cg ) applied to a suitable problem . since we know the convergence rate of cg, we can obtain from it the following estimate on the _ relative error _ of gauss quadrature .[ thm : gaussconv ] the -th iterate of gauss quadrature satisfies the relative error bound where is the condition number of . in other words , thm .[ thm : gaussconv ] shows that the iterates of gauss quadrature have a linear ( geometric ) convergence rate .in this section we summarize our main theoretical results . as before , detailed proofs may be found in appendix [ app : sec : proofs ] .the key questions that we answer are : ( i ) do the bounds on generated by gql improve monotonically with each iteration ; ( ii ) how tight are these bounds ; and ( iii ) how fast do gauss - radau and gauss - lobatto iterations converge ?our answers not only fill gaps in the literature on quadrature , but provide a theoretical base for speeding up algorithms for some applications ( see sections [ sec : motiv.app ] and [ sec : algos ] ) . our first result shows that both gauss and right gauss - radau quadratures give iteratively better lower bounds on .moreover , with the same number of iterations , right gauss - radau yields tighter bounds .[ thm : lowbtwn ] let .then , yields better bounds than but worse bounds than ; more precisely , combining theorem [ thm : lowbtwn ] with the convergence rate of relative error for gauss quadrature ( thm . [ thm : gaussconv ] ) we obtain the following convergence rate estimate for right gauss - radau .[ thm : rrconv ] for each iteration , the right gauss - radau iterate satisfies our second result compares gauss - lobatto with left gauss - radau quadrature .[ thm : upbtwn ] let .then , gives better upper bounds than but worse than ; more precisely , this shows that bounds given by both gauss - lobatto and left gauss - radau become tighter with each iteration .for the same number of iterations , left gauss - radau provides a tighter bound than gauss - lobatto . 
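the gauss - radau part of the procedure can be sketched in the same style , reusing `lanczos_coeffs` and `tridiag` from the previous snippet . the prescribed node is attached to the jacobi matrix through the usual rank - one modification of its last diagonal entry ; prescribing a node at an over - estimate b of the largest eigenvalue yields the lower bound , prescribing one at an under - estimate a of the smallest eigenvalue yields the upper bound , as in the lemma above . gauss - lobatto is omitted for brevity , and the exact spectrum is used below only to pick valid endpoints ; in practice cheap estimates suffice .

```python
import numpy as np

def gauss_radau_bounds(A, u, n, a, b):
    """Lower and upper bounds on u^T A^{-1} u after n Lanczos steps.

    Reuses lanczos_coeffs / tridiag from the previous sketch.  Requires
    0 < a <= lambda_min(A) and b >= lambda_max(A); the rule with prescribed node
    b (right Gauss-Radau) gives the lower bound, the rule with node a (left
    Gauss-Radau) the upper bound.
    """
    alphas, betas = lanczos_coeffs(A, u, n)
    J = tridiag(alphas, betas[:-1])
    en = np.zeros(n)
    en[-1] = 1.0
    e1 = np.zeros(n + 1)
    e1[0] = 1.0
    out = []
    for z in (b, a):
        # Append one row/column so that z becomes an eigenvalue of the modified
        # Jacobi matrix (rank-one modification of its last diagonal entry).
        delta = np.linalg.solve(J - z * np.eye(n), betas[-1] ** 2 * en)
        J_mod = tridiag(np.append(alphas, z + delta[-1]), betas)
        out.append((u @ u) * (e1 @ np.linalg.solve(J_mod, e1)))
    return out[0], out[1]                # (lower, upper)

# the two bounds sandwich the exact value and tighten with n
rng = np.random.default_rng(1)
B = rng.standard_normal((300, 300))
A = B @ B.T + 300 * np.eye(300)
u = rng.standard_normal(300)
lam = np.linalg.eigvalsh(A)              # only used here to pick valid endpoints
print("exact:", u @ np.linalg.solve(A, u))
for n in (3, 6, 12, 24):
    print(n, gauss_radau_bounds(A, u, n, a=0.9 * lam[0], b=1.1 * lam[-1]))
```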
combining the above two theorems , we obtain the following corollary for all four gauss - type quadratures . [ cor : monlow ] with increasing , and give increasingly better lower bounds and and give increasingly better upper bounds , that is , our next two results state linear convergence rates for left gauss - radau quadrature and gauss - lobatto quadrature applied to computing the bif . [ thm : lrconv ] for each , the left gauss - radau iterate satisfies where . theorem [ thm : lrconv ] shows that the error again decreases linearly , and it also depends on the accuracy of , our estimate of the smallest eigenvalue that determines the range of integration . using the relations between left gauss - radau and gauss - lobatto , we readily obtain the following corollary . [ cor : loconv ] for each , the gauss - lobatto iterate satisfies where . * remarks * all aforementioned results assumed that is strictly positive definite with simple eigenvalues . in appendix [ append : sec : general ] , we show similar results for the more general case that is only required to be symmetric , and lies in the space spanned by eigenvectors of corresponding to distinct positive eigenvalues . next , we empirically verify the theoretical results shown above . we generate a random symmetric matrix with density , where each entry is either zero or standard normal , and shift its diagonal entries to make its smallest eigenvalue , thus making positive definite . we set and . we randomly sample from a standard normal distribution . figure [ fig : conv ] illustrates how the lower and upper bounds given by the four quadrature rules evolve with the number of iterations . figure [ fig : conv ] ( b ) and ( c ) show the sensitivity of the rules ( except gauss quadrature ) to estimating the extremal eigenvalues . specifically , we use and . the plots in figure [ fig : conv ] agree with the theoretical results . first , all quadrature rules are seen to yield iteratively tighter bounds . the bounds obtained by the gauss - radau quadrature are superior to those given by gauss and gauss - lobatto quadrature ( also numerically verified ) . notably , the bounds given by all quadrature rules converge very fast ; within 25 iterations they yield reasonably tight bounds . it is valuable to see how the bounds are affected if we do not have good approximations to the extremal eigenvalues and . since gauss quadrature does not depend on the approximations and , its bounds remain the same in ( a),(b),(c ) . left gauss - radau depends on the quality of , and , with a poor approximation , takes more iterations to converge ( figure [ fig : conv](b ) ) . right gauss - radau depends on the quality of ; thus , if we use as our approximation , its bounds become worse ( figure [ fig : conv](c ) ) . however , its bounds are never worse than those obtained by gauss quadrature . finally , gauss - lobatto depends on both and , so its bounds become worse whenever we lack good approximations to or . nevertheless , its quality is lower - bounded by left gauss - radau as stated in thm . [ thm : upbtwn ] . our theoretical results show that gauss - radau quadrature provides good lower and upper bounds on bifs . more importantly , these bounds get iteratively tighter at a linear rate , finally becoming exact ( see appendix [ app : sec : proofs ] ) . however , in many applications motivating our work ( see section [ sec : motiv.app ] ) , we do not need exact values of bifs ; bounds that are tight enough suffice for the algorithms to proceed .
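for completeness , a sparse analogue of the synthetic set - up just described can be run along the following lines , reusing `gauss_radau_bounds` from the previous snippet . the size , density and spectral estimates below are placeholders of ours , not the values behind figure [ fig : conv ] ; the qualitative behaviour ( a shrinking sandwich around the exact value , and a looser upper bound when the estimate of the smallest eigenvalue is poor ) is what the figure illustrates .

```python
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import eigsh, spsolve

# Sparse analogue of the synthetic experiment; reuses gauss_radau_bounds from the
# previous sketch.  Size, density and the estimates a_hat, b_hat are placeholders.
rng = np.random.default_rng(2)
n = 2000
M = sparse.random(n, n, density=0.01, random_state=rng, data_rvs=rng.standard_normal)
S = (M + M.T) / 2
norm_S = abs(eigsh(S, k=1, which='LM')[0][0])          # spectral norm of S
A = (S + (1.0 + norm_S) * sparse.eye(n)).tocsc()       # positive definite, lambda_min >= 1
u = rng.standard_normal(n)

exact = u @ spsolve(A, u)
b_hat = 1.01 * (1.0 + 2.0 * norm_S)                    # safe over-estimate of lambda_max
for a_hat in (0.99, 0.1):                              # good vs poor under-estimate of lambda_min
    print("a_hat =", a_hat)
    for k in (5, 10, 20):
        lo, up = gauss_radau_bounds(A, u, k, a=a_hat, b=b_hat)
        print(f"  {k:2d} iterations: {lo:.6f} <= {exact:.6f} <= {up:.6f}")
```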
as a result ,all these applications benefit from our theoretical results that provide iteratively tighter bounds .this idea translates into a _ retrospective _ framework for accelerating methods whose progress relies on knowing an interval containing the bif . whenever the algorithm takes a step ( _ transition _ ) that depends on a bif ( e.g. , as in the next section , a state transition in a sampler if the bif exceeds a certain threshold ) , we compute rough bounds on its value . if the bounds suffice to take the critical decision ( e.g. , decide the comparison ) , then we stop the quadrature .if they do not suffice , we take one or more additional iterations of quadrature to tighten the bound .algorithm [ algo : framework ] makes this idea explicit .proceed with the original algorithm retrospectively run one more iteration of left and(or ) right gauss - radau to obtain tighter bounds .make the correct transition with bounds we illustrate our framework by accelerating : ( i ) markov chain sampling for ( -)dpps ; and ( ii ) maximization of a ( specific ) nonmonotone submodular function .first , we use our framework to accelerate iterative samplers for determinantal point processes .specifically , we discuss mh sampling ; the variant for gibbs sampling follows analogously .the key insight is that all state transitions of the markov chain rely on a comparison between a scalar and a quantity involving the bilinear inverse form . given the current set , assume we propose to add element to .the probability of transitioning to state is . to decide whether to accept this transition ,we sample ; if then we accept the transition , otherwise we remain at .hence , we need to compute just accurately enough to decide whether .to do so , we can use the aforementioned lower and upper bounds on .let and be lower and upper bounds for this bif in the -th iteration of gauss quadrature . if , then we can safely accept the transition , if , then we can safely reject the transition .only if , we can not make a decision yet , and therefore retrospectively perform one more iteration of gauss quadrature to obtain tighter upper and lower bounds and .we continue until the bounds are sharp enough to safely decide whether to make the transition .note that in each iteration we make the same decision as we would with the exact value of the bif , and hence the resulting algorithm ( alg .[ algo : gaussdpp ] ) is an exact markov chain for the dpp . in each iteration, it calls alg .[ algo : dpp_judge ] , which uses step - wise lazy gauss quadrature for deciding the comparison , while stopping as early as possible .randomly initialize pick , uniformly randomly compute bounds , on the spectrum of compute bounds , on the spectrum of run one gauss - radau iteration to get and for .* return * * true * * return * * false * if we condition the dppon observing a set of a fixed cardinality , we obtain a -dpp .the mh sampler for this process is similar , but a state transition corresponds to swapping two elements ( adding and removing at the same time ) . assume the current set is . if we propose to delete and add to , then the corresponding transition probability is again , we sample , but now we must compute two quantities , and hence two sets of lower and upper bounds : , for in the -th gauss quadrature iteration , and , for in the -th gauss quadrature iteration . 
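before turning to the swap move , the insertion decision of algorithms [ algo : gaussdpp ] and [ algo : dpp_judge ] can be sketched as follows ( a loose python rendering of ours , not the authors code ) . a generator tightens the gauss - radau bounds one lanczos step at a time , and the comparison with the random threshold is made as soon as the interval allows it ; the schur - complement identity behind the acceptance quantity is standard , proposal normalisations are omitted , and the kernel in the toy usage is arbitrary . the swap move discussed next follows the same pattern with two such bound streams .

```python
import numpy as np

def radau_bound_stream(A, u, a, b, max_iter):
    """Yield (lower, upper) bounds on u^T A^{-1} u, one Lanczos step at a time.

    Requires 0 < a <= lambda_min(A) and b >= lambda_max(A); the rule with node b
    gives the lower bound, the rule with node a the upper bound.  The Lanczos
    state is kept between yields, so tightening the bounds by one step needs only
    one more matrix-vector product with A plus small dense solves.
    """
    scale = u @ u
    alphas, betas = [], []
    q_prev, q, beta = np.zeros_like(u, dtype=float), u / np.linalg.norm(u), 0.0
    for _ in range(max_iter):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        n = len(alphas)
        J = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        e1 = np.zeros(n + 1)
        e1[0] = 1.0
        if beta < 1e-12:                       # Krylov space exhausted: Gauss rule is exact
            val = scale * np.linalg.solve(J, e1[:n])[0]
            yield val, val
            return
        q_prev, q = q, w / beta
        ests = []
        for z in (b, a):                       # prescribed node: b -> lower, a -> upper
            en = np.zeros(n)
            en[-1] = 1.0
            delta = np.linalg.solve(J - z * np.eye(n), beta ** 2 * en)
            J_mod = np.block([[J, beta * en[:, None]],
                              [beta * en[None, :], np.array([[z + delta[-1]]])]])
            ests.append(scale * np.linalg.solve(J_mod, e1)[0])
        yield ests[0], ests[1]

def accept_add(L, S, i, eps, a, b):
    """Retrospective MH decision for the move S -> S + {i} in a DPP sampler.

    By the Schur-complement identity det(L_{S+i}) / det(L_S) = L_ii - b_i^T L_S^{-1} b_i
    with b_i = L[S, i]; the move is accepted iff this quantity exceeds the uniform
    threshold eps (proposal factors are left out).  Quadrature is refined only
    until the comparison is decided.
    """
    if not S:
        return L[i, i] >= eps
    lo = up = None
    for lo, up in radau_bound_stream(L[np.ix_(S, S)], L[S, i], a, b, max_iter=len(S)):
        if L[i, i] - up >= eps:     # even the pessimistic value clears the threshold
            return True
        if L[i, i] - lo < eps:      # even the optimistic value falls short
            return False
    return L[i, i] - lo >= eps      # bounds have met; compare the (now exact) value

# toy usage: one insertion proposal on a random positive definite kernel
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
L = X @ X.T + 0.1 * np.eye(100)
lam = np.linalg.eigvalsh(L)         # global spectral bounds also bound any L_S
print(accept_add(L, S=[1, 5, 9, 12], i=20, eps=rng.random(),
                 a=0.9 * lam[0], b=1.1 * lam[-1]))
```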
then if we have , we can safely accept the transition ; and if we can safely reject the transition ; otherwise , we tighten the bounds via additional gauss - radau iterations. * refinements .* we could perform one iteration for both and , but it may be that one set of bounds is already sufficiently tight , while the other is loose .a straightforward idea would be to judge the tightness of the lower and upper bounds by their difference ( gap ) , and decide accordingly which quadrature to iterate further .but the bounds for and are not symmetric and contribute differently to the transition decision .in essence , we need to judge the relation between and , or , equivalently , the relation between and .since the left hand side is `` easy '' , the essential part is the right hand side . assuming that in practice the impact is larger when the gap is larger , we tighten the bounds for if , and otherwise tighen the bounds for .details of the final algorithm with this refinement are shown in appendix [ append : sec : kdppalgo ] . as indicated in section [ sec : motiv.app ] , a number of applications , including sensing and information maximization with gaussian processes , rely on maximizing a submodular function given as .in general , this function may be non - monotone . in this case , an algorithm of choice is the double greedy algorithm of .the double greedy algorithm starts with two sets and and serially iterates through all elements to construct a near - optimal subset . at iteration , it includes element into with probability , and with probability it excludes from .the decisive value is determined by the marginal gains and : + / [ \delta_i^+]_+ + [ \delta_i^-]_+ . \end{aligned}\ ] ] for the log - det function , we obtain where . in other words , at iteration the algorithm uniformly samples , and then checks if + \le ( 1-p)[\delta_i^+]_+,\ ] ] and if true , adds to , otherwise removes it from .this essential decision , whether to retain or discard an element , again involves bounding bifs , for which we can take advantage of our framework , and profit from the typical sparsity of the data .concretely , we retrospectively compute the lower and upper bounds on these bifs , i.e. , lower and upper bounds and on , and and on . if + \le ( 1-p)[l_i^+]_+ ] we safely remove from ; otherwise we compute a set of tighter bounds by further iterating the quadrature . 
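a corresponding sketch for the double greedy pass , reusing `radau_bound_stream` from the snippet above : for the log - det objective both marginal gains reduce to bilinear inverse forms , and each element is decided with bounds that are refined only while the comparison is still open . the fallback to midpoint estimates after a fixed number of refinements is our simplification ( an exact implementation would keep refining , or keep the lanczos state as in algorithm [ algo : gql ] ) , so this sketch may deviate from the exact algorithm in that corner case .

```python
import numpy as np

def gain_bounds(L, S, i, a, b, steps):
    """Bounds on the marginal gain log det(L_{S+i}) - log det(L_S).

    Runs up to `steps` iterations of radau_bound_stream (from the DPP sketch
    above) on the bilinear form L[i,S] L_S^{-1} L[S,i]; a, b bound the spectrum of L.
    """
    c = L[i, i]
    if not S:
        return np.log(c), np.log(c)
    for lo_bif, up_bif in radau_bound_stream(L[np.ix_(S, S)], L[S, i], a, b, max_iter=steps):
        pass                                      # keep the tightest (last) pair
    lower = np.log(c - up_bif) if c > up_bif else -np.inf
    upper = np.log(c - lo_bif)                    # lo_bif < exact < c, so this is finite
    return lower, upper

def double_greedy_logdet(L, a, b, max_refine=30, seed=0):
    """Double greedy for f(S) = log det(L_S), deciding each element with lazy bounds.

    The exact rule adds i to X iff p [Delta_i^-]_+ <= (1-p) [Delta_i^+]_+; here the
    comparison is made with bounds and refined only while it is undecided.
    """
    rng = np.random.default_rng(seed)
    relu = lambda x: max(x, 0.0)
    N = L.shape[0]
    X, Y = [], list(range(N))
    for i in range(N):
        p = rng.random()
        Y_minus = [j for j in Y if j != i]
        for k in range(1, max_refine + 1):
            l_add, u_add = gain_bounds(L, X, i, a, b, k)        # bounds on Delta_i^+
            l_g, u_g = gain_bounds(L, Y_minus, i, a, b, k)      # gain of putting i back into Y\{i}
            l_rem, u_rem = -u_g, -l_g                           # Delta_i^- = f(Y\{i}) - f(Y)
            if p * relu(u_rem) <= (1 - p) * relu(l_add):        # safe to keep i (add to X)
                X.append(i)
                break
            if p * relu(l_rem) >= (1 - p) * relu(u_add):        # safe to discard i (drop from Y)
                Y.remove(i)
                break
        else:
            # Bounds never separated: fall back to midpoints (our simplification).
            if p * relu(0.5 * (l_rem + u_rem)) <= (1 - p) * relu(0.5 * (l_add + u_add)):
                X.append(i)
            else:
                Y.remove(i)
    return X                                                    # X == Y at this point

# toy usage on a random positive definite kernel
rng = np.random.default_rng(4)
Z = rng.standard_normal((60, 8))
L = Z @ Z.T + 0.5 * np.eye(60)
lam = np.linalg.eigvalsh(L)
S = double_greedy_logdet(L, a=0.95 * lam[0], b=1.05 * lam[-1])
print(len(S), np.linalg.slogdet(L[np.ix_(S, S)])[1])
```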
as before , the bounds for and may not contribute equally to the transition decision .we can again apply the refinement mentioned in section [ sec : mcdpp ] : if + - [ l_i^-]_+)\le ( 1-p)([u_i^+]_+ - [ l_i^+]_+) ] .there also exists methods for efficiently constructing sparse inverse matrix .if happens to be an sdd matrix , we can use techniques introduced in to construct an approximate sparse inverse in near linear time .in this paper we present a general and powerful computational framework for algorithms that rely on computations of bilinear inverse forms .the framework uses gauss quadrature methods to lazily and iteratively tighten bounds , and is supported by our new theoretical results .we analyze properties of the various types of gauss quadratures for approximating the bilinear inverse forms and show that all bounds are monotonically becoming tighter with the number of iterations ; those given by gauss - radau are superior to those obtained from other gauss - type quadratures ; and both lower and upper bounds enjoy a linear convergence rate .we empirically verify the efficiency of our framework and are able to obtain speedups of up to a thousand times for two popular examples : maximizing information gain and sampling from determinantal point processes .[ [ acknowledgements ] ] acknowledgements + + + + + + + + + + + + + + + + this research was partially supported by nsf career award 1553284 and a google research award .62 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 anari , nima , gharan , shayan oveis , and rezaei , alireza .onte carlo markov chain algorithms for sampling strongly rayleigh distributions and determinantal point processes . in _ colt _ , 2016 .atzori , luigi , iera , antonio , and morabito , giacomo .the internet of things : a survey . _ computer networks _ , 540 ( 15):0 27872805 , 2010 .bai , zhaojun and golub , gene h. bounds for the trace of the inverse and the determinant of symmetric positive definite matrices ._ annals of numerical mathematics _ , pp . 2938 , 1996 .bai , zhaojun , fahey , gark , and golub , gene h. some large - scale matrix computation problems ._ journal of computational and applied mathematics _ , pp . 7189 , 1996 .bekas , constantine , kokiopoulou , effrosyni , and saad , yousef .an estimator for the diagonal of a matrix ._ applied numerical mathematics _ , pp . 12141229 , 2007 .bekas , constantine , curioni , alessandro , and fedulova , irina .low cost high performance uncertainty quantification . in _ proceedings of the 2nd workshop on high performance computational finance _ , 2009 .belabbas , mohamed - ali and wolfe , patrick j. spectral methods in machine learning and new strategies for very large datasets ._ proceedings of the national academy of sciences _ , pp . 369374 , 2009 .benzi , michele and golub , gene h. bounds for the entries of matrix functions with applications to preconditioning ._ bit numerical mathematics _ , pp . 417438 , 1999 .benzi , michele and klymko , christine .total communicability as a centrality measure ._ j. complex networks _ , pp .124149 , 2013 .bonacich , phillip .power and centrality : a family of measures. _ american journal of sociology _ , pp . 11701182 , 1987 .boutsidis , christos , mahoney , michael w. , and drineas , petros .an improved approximation algorithm for the column subset selection problem . in _ soda _ , pp . 968977 , 2009 .brezinski , claude .error estimates for the solution of linear systems ._ siam journal on scientific computing _ , pp . 
764781 , 1999 .brezinski , claude , fika , paraskevi , and mitrouli , marilena .estimations of the trace of powers of positive self - adjoint operators by extrapolation of the moments ._ electronic transactions on numerical analysis _ , pp .144155 , 2012 .buchbinder , niv , feldman , moran , naor , joseph , and schwartz , roy . a tight linear time( 1/2)-approximation for unconstrained submodular maximization . in _ focs _ , 2012 .dong , shao - jing and liu , keh - fei .stochastic estimation with noise. _ physics letters b _ , pp . 130136 , 1994 .estrada , ernesto and higham , desmond j. network properties revealed through matrix functions ._ siam review _ , pp . 696714 , 2010 .fenu , caterina , martin , david r. , reichel , lothar , and rodriguez , giuseppe .network analysis via partial spectral factorization and gauss quadrature ._ siam journal on scientific computing _ , pp .a2046a2068 , 2013 .fika , paraskevi and koukouvinos , christos .stochastic estimates for the trace of functions of matrices via hadamard matrices ._ communications in statistics - simulation and computation _ , 2015 .fika , paraskevi and mitrouli , marilena .estimation of the bilinear form for hermitian matrices. _ linear algebra and its applications _ , 2015 .fika , paraskevi , mitrouli , marilena , and roupa , paraskevi .estimates for the bilinear form with applications to linear algebra problems . _electronic transactions on numerical analysis _ , pp . 7089 , 2014 .freericks , james k. transport in multilayered nanostructures . _ the dynamical mean - field theory approach , imperial college , london _ , 2006 .frommer , andreas , lippert , thomas , medeke , bjrn , and schilling , klaus . _ numerical challenges in lattice quantum chromodynamics : joint interdisciplinary workshop of john von neumann institute for computing , jlich , and institute of applied computer science , wuppertal university , august 1999 _ , volume 15 .springer science & business media , 2012 .gauss , carl f. _ methodus nova integralium valores per approximationem inveniendi_. apvd henricvm dieterich , 1815 .gautschi , walter .a survey of gauss - christoffel quadrature formulae . in _eb christoffel _ , pp .springer , 1981 .gillenwater , jennifer , kulesza , alex , and taskar , ben .near - optimal map inference for determinantal point processes . in _ nips _, 2012 .gittens , alex and mahoney , michael w. revisiting the nystrm method for improved large - scale machine learning ._ icml _ , 2013 .golub , gene h. some modified matrix eigenvalue problems ._ siam review _ , pp . 318334 , 1973 .golub , gene h. and meurant , grard .matrices , moments and quadrature ii ; how to compute the norm of the error in iterative methods ._ bit numerical mathematics _ , pp .687705 , 1997 .golub , gene h. and meurant , grard ._ matrices , moments and quadrature with applications_. princeton university press , 2009 .golub , gene h. and welsch , john h. calculation of gauss quadrature rules . _ mathematics of computation _ , pp . 221230 , 1969 .golub , gene h. , stoll , martin , and wathen , andy .approximation of the scattering amplitude and linear systems ._ elec . tran . on numerical analysis _ , pp . 178203 , 2008 .hestenes , magnus r. and stiefel , eduard. methods of conjugate gradients for solving linear systems ._ j. research of the national bureau of standards _ , pp .409436 , 1952 .hough , j. 
ben , krishnapur , manjunath , peres , yuval , and virg , blint .determinantal processes and independence ._ probability surveys _ , 2006 .kang , byungkon .fast determinantal point process sampling with application to clustering . in _nips _ , pp . 23192327 , 2013 .krause , andreas , singh , ajit , and guestrin , carlos .near - optimal sensor placements in gaussian processes : theory , efficient algorithms and empirical studies ._ jmlr _ , pp . 235284 , 2008 .kulesza , alex and taskar , ben .determinantal point processes for machine learning ._ arxiv:1207.6083 _ , 2012 .kwok , james t. and adams , ryan p. priors for diversity in generative latent variable models . in _ nips _ , pp . 29963004 , 2012 .lanczos , cornelius ._ an iteration method for the solution of the eigenvalue problem of linear differential and integral operators_. united states governm .press office los angeles , ca , 1950 .lee , christina e. , ozdaglar , asuman e. , and shah , devavrat . solving systems of linear equations : locally and asynchronously ._ arxiv _ , abs/1411.2647 , 2014 .leskovec , jure , lang , kevin j. , dasgupta , anirban , and mahoney , michael w. statistical properties of community structure in large social and information networks . in _ www _ , pp . 695704 , 2008 .lin , lin , yang , chao , lu , jianfeng , and ying , lexing . a fast parallel algorithm for selected inversion of structured sparse matrices with application to 2d electronic structure calculations ._ siam journal on scientific computing _ , pp . 13291351 , 2011 .lin , lin , yang , chao , meza , juan c. , lu , jianfeng , ying , lexing , and e , weinan .an algorithm for selected inversion of a sparse symmetric matrix ._ acm transactions on mathematical software _ , 2011 .lobatto , rehuel ._ lessen over de differentiaal - en integraal - rekening : dl . 2 integraal - rekening _ , volume 1 .van cleef , 1852 .meurant , grard .the computation of bounds for the norm of the error in the conjugate gradient algorithm ._ numerical algorithms _ , pp . 7787 , 1997 .meurant , grard .numerical experiments in computing bounds for the norm of the error in the preconditioned conjugate gradient algorithm . _ numerical algorithms _ , pp . 353365 , 1999 .meurant , grard . _the lanczos and conjugate gradient algorithms : from theory to finite precision computations _ ,volume 19 .siam , 2006 .minoux , michel . accelerated greedy algorithms for maximizing submodular set functions . in _optimization techniques _ , pp . 234243 .springer , 1978 .mirzasoleiman , baharan , badanidiyuru , ashwinkumar , karbasi , amin , vondrk , jan , and krause , andreas .lazier than lazy greedy . in _ aaai _ , 2015 .nemhauser , george l .. , wolsey , laurence a. , and fisher , marshall l. an analysis of approximations for maximizing submodular set functions _ mathematical programming _ , pp .265294 , 1978 . page , lawrence , brin , sergey , motwani , rajeev , and winograd , terry ._ the pagerank citation ranking : bringing order to the web ._ stanford infolab , 1999 .radau , rodolphe .tude sur les formules dapproximation qui servent calculer la valeur numrique dune intgrale dfinie ._ j. de mathmatiques pures et appliques _ , pp . 283336 , 1880 .rasmussen , carl e. and williams , christopher k. i. _ gaussian processes for machine learning_. mit press , cambridge , ma , 2006 .rockov , veronika and george , edward i. determinantal priors for variable selection , 2015 .scott , john ._ social network analysis_. sage , 2012 .sherman , jack and morrison , winifred j. 
adjustment of an inverse matrix corresponding to a change in one element of a given matrix . _ the annals of mathematical statistics _ , pp . 124127 , 1950 .shewchuk , jonathan r. an introduction to the conjugate gradient method without the agonizing pain , 1994 .sidje , roger b. and saad , yousef .rational approximation to the fermi dirac function with applications in density functional theory ._ numerical algorithms _ , pp .455479 , 2011 .stoer , josef and bulirsch , roland . _ introduction to numerical analysis _ ,volume 12 .springer science & business media , 2013 .sviridenko , maxim , vondrk , jan , and ward , justin .optimal approximation for submodular and supermodular optimization with bounded curvature . in _ soda _ , 2015 .tang , jok m. and saad , yousef .a probing method for computing the diagonal of a matrix inverse ._ numerical linear algebra with applications _ , pp .485501 , 2012 .wasow , wolfgang r. a note on the inversion of matrices by random walks ._ mathematical tables and other aids to computation _ , pp .7881 , 1952 .wilf , herbert s. _ mathematics for the physical sciences_. wiley , new york , 1962 .we present below a more detailed summary of material on gauss quadrature to make the paper self - contained .we ve described that the riemann - stieltjes integral could be expressed as : = q_{n } + r_{n } = { \sum\nolimits}_{i=1}^n \omega_i f(\theta_i ) + { \sum\nolimits}_{i=1}^m \nu_i f(\tau_i ) + r_{n}[f],\ ] ] where denotes the degree approximation and denotes a remainder term .the weights , and nodes are chosen such that for all polynomials of degree less than , denoted , we have _ exact _ interpolation = q_{n} ] , and form the nodes for gauss quadrature ( see , e.g. , ( * ? ? ?* ch . 6 ) ) .consider the two _ monic polynomials _ whose roots serve as quadrature nodes : where for consistency .we further denote , where the sign is taken to ensure on ] , is canonical unit vector , and is the tridiagonal matrix this matrix is known as the _ jacobi matrix _ , and is closed related to gauss quadrature .the following well - known theorem makes this relation precise .[ append : thm : gauss ] the eigenvalues of form the nodes of gauss - type quadratures .the weights are given by the squares of the first elements of the normalized eigenvectors of .thus , if has the eigendecomposition , then for gauss quadrature thm .[ append : thm : gauss ] yields [ [ specialization . ] ] specialization .+ + + + + + + + + + + + + + + we now specialize to our main focus , , for which we prove more precise results . in this case , becomes {1,1} ] , letting and invoking the sherman - morrison identity we obtain the recursion : {1,1 } = [ j_i^{-1}]_{1,1 } + \frac{\beta_i^2 ( [ j_i]_{1})^2}{\alpha_{i+1 } - \beta_i^2 [ j_i]_i},\end{aligned}\ ] ] where ] can be recursively computed using a cholesky - like factorization of . for gauss - radau quadrature , we need to modify so that it has a prescribed eigenvalue . more precisely , we extend to for left gauss - radau ( for right gauss - radau ) with on the off - diagonal and ( ) on the diagonal , so that ( ) has a prescribed eigenvalue of ( ) . for gauss - lobatto quadrature ,we extend to with values and chosen to ensure that has the prescribed eigenvalues and . for more detailed on the construction , see . for all methods ,the approximated values are calculated as {1,1} ] by gauss - type quadratures can be expressed as = { f^{(2n+m)}(\xi)\over ( 2n+m ) ! 
} i[\rho_m\pi_n^2],\ ] ] for some ] ; but with different values of and we obtain different ( but fixed ) signs for ] ; for left gauss - radau and , so we have \le 0 ] ; while for gauss - lobatto we have , and , so that \le 0 ] , which shows that is exact for . for left and right gauss - radau quadrature, we have , , and , while all other elements of the -th row or column of are zeros .thus , the eigenvalues of are , and again equals . as a result , the remainder satisfies = { f^{(2n)}(\xi)\over ( 2n ) ! }i[(\lambda - \tau_1)\pi_n^2 ] = 0,\ ] ] from which it follows that both and are exact . the convergence rate in thm .[ append : thm : cgconv ] and the final exactness of iterations in lemma [ append : lem : exact ] does not necessarily indicate that we are making progress at each iterations .however , by exploiting the relations to cg we can indeed conclude that we are making progress in each iteration in gauss quadrature .[ append : thm : monogauss ] the approximation generated by gauss quadrature is monotonically nondecreasing , i.e. , at each iteration is taken to be orthogonal to the -th krylov space : .let be the projection onto the complement space of .the residual then satisfies where the last inequality follows from .thus is monotonically nonincreasing , whereby is monotonically decreasing and thus is monotonically nondecreasing . before we proceed to gauss - radau ,let us recall a useful theorem and its corollary .[ append : thm : lanczospoly ] let be the vector generated by alg .[ algo : gql ] at the -th iteration ; let be the lanczos polynomial of degree . then we have from the expression of lanczos polynomial we have the following corollary specifying the sign of the polynomial at specific points .assume .if is odd , then ; for even , , while for any .since is similar to , its spectrum is bounded by and from left and right .thus , is positive semi - definite , and is negative semi - definite . taking into considerationwe will get the desired conclusions .we are ready to state our main result that compares ( right ) gauss - radau with gauss quadrature .[ append : thm : lowbtwn ] let .then , gives better bounds than but worse bounds than ; more precisely , we prove inequality using the recurrences satisfied by and ( see alg .[ algo : gql ] ) _ upper bound : ._ the iterative quadrature algorithm uses the recursive updates it suffices to thus compare and .the three - term recursion for lanczos polynomials shows that where is the original lanczos polynomial , and is the modified polynomial that has as a root . noting that , we see that . moreover , from thm .[ append : thm : monogauss ] we know that the s are monotonically increasing , whereby .it follows that and from this inequality it is clear that ._ lower - bound : . _ since and , we readily obtain combining thm . [append : thm : lowbtwn ] with the convergence rate of relative error for gauss quadrature ( thm .[ append : thm : gaussconv ] ) immediately yields the following convergence rate for right gauss - radau quadrature : [ append : thm : rrconv ] for each , the right gauss - radau iterates satisfy this results shows that with the same number of iterations , right gauss - radau gives superior approximation over gauss quadrature , though they share the same relative error convergence rate .our second main result compares gauss - lobatto with ( left ) gauss - radau quadrature .[ append : thm : upbtwn ] let . 
then , gives better upper bounds than but worse than ; more precisely , we prove these inequalities using the recurrences for and from alg .[ append : algo : gql ] . _ _ : from alg .[ append : algo : gql ] we observe that . thus we can write and as to compare these quantities , as before it is helpful to begin with the original three - term recursion for the lanczos polynomial , namely in the construction of gauss - lobatto , to make a new polynomial of order that has roots and , we add and to the original polynomial to ensure since , , and are all greater than , . to determine the sign of polynomials at ,consider the two cases : 1 .odd . in this case , , and ; 2 .even . in this case , , and .thus , if , where the signs take values in , then , and .hence , must hold , and thus given that for . using with , an application of monotonicity of the univariate function for to the recurrences defining and yields the desired inequality . _ _ : from recursion formulas we have establishing thus amounts to showing that ( noting the relations among , and ) : where the last inequality is obviously true ; hence the proof is complete . in summary, we have the following corollary for all the four quadrature rules : [ append : cor : monlow ] as the iteration proceeds , and gives increasingly better asymptotic lower bounds and and gives increasingly better upper bounds , namely directly drawn from thm .[ append : thm : monogauss ] , thm .[ append : thm : lowbtwn ] and thm .[ append : thm : upbtwn ] . before proceeding further to our analysis of convergence rates of left gauss - radau and gauss - lobatto ,we note two technical results that we will need .[ append : lem : delta ] let and be as in alg .[ algo : gql ] .the difference satisfies . from the lanczos polynomials in the definition of left gauss - radau quadrature we have rearrange this equation to write , which can be further rewritten as lemma [ append : lem : delta ] has an implication beyond its utility for the subsequent proofs : it provides a new way of calculating given the quantities and ; this saves calculation in alg .[ append : algo : gql ] .the following lemma relates to , which will prove useful in subsequent analysis .[ append : lem : comp ] let and be computed in the -th iteration of alg .[ algo : gql ] .then , we have the following : we prove by induction . since , and we know that .assume that is true for all and considering the -th iteration : to prove , simply observe the following with aforementioned lemmas we will be able to show how fast the difference between and decays .note that gives an upper bound on the objective while gives a lower bound .[ append : lem : diffconv ] the difference between and decreases linearly .more specifically we have where and is the condition number of , i.e. , .we rewrite the difference as follows where .next , recall that . since lower bounds , we have thus , we can conclude that now we focus on the term . using lemma [ append : lem :delta ] we know that .hence , finally we have [ append : thm : lrconv ] for left gauss - radau quadrature where the preassigned node is , we have the following bound on relative error : where .write .since , using lemma [ append : lem : diffconv ] to bound the second term we obtain from which the claim follows upon rearrangement . 
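the gauss - quadrature lower bounds analyzed in this appendix can be computed with a short lanczos loop . the following python sketch is an illustration added for this copy ( not code from the paper ) : it builds the jacobi matrix from plain lanczos steps on a symmetric positive definite matrix and returns the successive approximations of the bilinear inverse form ; the test matrix , the random seed and the function name are arbitrary choices of the sketch .

```python
import numpy as np

def gauss_lower_bounds(A, u, num_iter=15):
    """Lanczos-based Gauss-quadrature approximations of u^T A^{-1} u.
    The i-th returned value is ||u||^2 * [J_i^{-1}]_{1,1}, with J_i the
    tridiagonal Jacobi matrix after i Lanczos steps started at u/||u||;
    by the monotonicity result above these are increasing lower bounds."""
    norm_u2 = float(u @ u)
    q = u / np.sqrt(norm_u2)
    q_prev = np.zeros_like(q)
    alphas, betas, bounds = [], [], []
    beta = 0.0
    for _ in range(num_iter):
        w = A @ q - beta * q_prev
        alpha = float(q @ w)
        w = w - alpha * q                     # no reorthogonalization in this short sketch
        alphas.append(alpha)
        J = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        e1 = np.zeros(len(alphas)); e1[0] = 1.0
        bounds.append(norm_u2 * float(np.linalg.solve(J, e1)[0]))
        beta = float(np.linalg.norm(w))
        if beta < 1e-12:                      # Krylov space exhausted
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    return np.array(bounds)

# usage: the bounds increase toward the exact bilinear form
rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))
A = M @ M.T + 300.0 * np.eye(300)             # SPD test matrix
u = rng.standard_normal(300)
print(gauss_lower_bounds(A, u)[-1], u @ np.linalg.solve(A, u))
```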
due to the relations between left gauss - radau and gauss - lobatto, we have the following corollary : [ append : cor : loconv ] for gauss - lobatto quadrature , we have the following bound on relative error : where .in this section we consider the case where lies in the column space of several top eigenvectors of , and discuss how the aforementioned theorems vary . in particular , note that the previous analysis assumes that is positive definite . with our analysis in this sectionwe relax this assumption to the more general case where is symmetric with simple eigenvalues , though we require to lie in the space spanned by eigenvectors of corresponding to positive eigenvalues .we consider the case where is symmetric and has the eigendecomposition of where s are eigenvalues of increasing with and s are corresponding eigenvectors .assume that lies in the column space spanned by top eigenvectors of where all these eigenvectors correspond to positive eigenvalues .namely we have and . since we only assume that is symmetric , it is possible that is singular and thus we consider the value of , where is the pseudo - inverse of . due to the constraints on we have where .namely , if lies in the column space spanned by the top eigenvectors of then it is equivalent to substitute with , which is the truncated version of at top eigenvalues and corresponding eigenvectors .another key observation is that , given that lies only in the space spanned by , the krylov space starting at becomes this indicates that lanczos iteration starting at matrix and vector will finish constructing the corresponding krylov space after the -th iteration .thus under this condition , alg .[ algo : gql ] will run at most iterations and then stop . at that time, the eigenvalues of are exactly the eigenvalues of , thus they are exactly of . using similar proof as in lemma [ append : lem : exact ] , we can obtain the following generalized exactness result . , and are exact for , namely the monotonicity and the relations between bounds given by various gauss - type quadratures will still be the same as in the original case in section [ sec : main ] , but the original convergence rate can not apply in this case because now we probably have , making undefined .this crash of convergence rate results from the crash of the convergence of the corresponding conjugate gradient algorithm for solving .however , by looking at the proof of , e.g. , , and by noting that , with a slight modification of the proof we actually obtain the bound ^ 2 \|{\varepsilon}^0\|_a^2,\end{aligned}\ ] ] where is a polynomial of order . by using properties of chebyshev polynomials and following the original proof ( e.g. , or ) we obtain the following lemma for conjugate gradient .let be as before ( for conjugate gradient ) .then , following this new convergence rate and connections between conjugate gradient , lanczos iterations and gauss quadrature mentioned in section [ sec : main ] , we have the following convergence bounds .[ append : cor : spec_conv ] under the above assumptions on and , due to the connection between gauss quadrature , lanczos algorithm and conjugate gradient , the relative convergence rates of , , and are given by where and is a lowerbound for nonzero eigenvalues of .we present the details of the function dpp - judgegauss( ) ( mentioned in section [ sec : mcdpp ] ) in alg .[ append : algo : dpp_judge ] . 
[ algorithm dpp - judgegauss( ) : the pseudocode listing is garbled in this copy ; in outline it repeatedly runs one more gauss - radau iteration to tighten the lower and upper bounds and returns _ true _ or _ false _ once the comparison is decided . ] we present details of a _ retrospective markov chain monte carlo ( mcmc ) _ in alg . [ append : algo : gausskdpp ] and alg . [ append : algo : kdpp_judge ] that samples for efficiently drawing samples from a -dpp , by accelerating it using our results on gauss - type quadratures . [ the listings are garbled in this copy ; in outline : randomly initialize the state , pick elements uniformly at random , get lower and upper bounds of the spectrum of the kernel , and run one more iteration of gauss - radau to get tighter bounds until the judge routine can return _ true _ or _ false _ . ] we present details of _ retrospective stochastic double greedy _ in alg . [ append : algo : gaussdg ] and alg . [ append : algo : dg_judge ] that efficiently select a subset that approximately maximize . [ the listings are garbled in this copy ; the judge routine runs one more iteration of gauss - radau to get tighter lower and upper bounds until it can return _ true _ or _ false _ . ]
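the retrospective pattern shared by these routines can be sketched generically : draw the uniform random number first , then tighten the gauss - radau bounds only until the comparison is decided . the sketch below is a hypothetical illustration added for this copy ; ` refine_bounds ` and the toy refiner are assumptions , standing in for one more gauss - radau iteration on the actual quantities .

```python
import random

def retrospective_accept(refine_bounds, max_iter=100):
    """Decide the event 'u < p' for u ~ Uniform(0, 1) without ever
    computing p exactly: refine_bounds() must return successively
    tighter (lower, upper) bounds on p, e.g. one more Gauss-Radau step."""
    u = random.random()
    lower, upper = 0.0, 1.0
    for _ in range(max_iter):
        lower, upper = refine_bounds()
        if u <= lower:            # certain acceptance
            return True
        if u >= upper:            # certain rejection
            return False
    return u <= 0.5 * (lower + upper)   # budget exhausted: fall back

# toy usage: bounds that close in geometrically on p = 0.3
def make_refiner(p=0.3, gap=0.5, shrink=0.7):
    state = {"gap": gap}
    def refine():
        state["gap"] *= shrink
        return max(0.0, p - state["gap"]), min(1.0, p + state["gap"])
    return refine

print(retrospective_accept(make_refiner()))
```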
|
we present a framework for accelerating a spectrum of machine learning algorithms that require computation of _ bilinear inverse forms _ , where is a positive definite matrix and a given vector . our framework is built on gauss - type quadrature and easily scales to large , sparse matrices . further , it allows retrospective computation of lower and upper bounds on , which in turn accelerates several algorithms . we prove that these bounds tighten iteratively and converge at a linear ( geometric ) rate . to our knowledge , ours is the first work to demonstrate these key properties of gauss - type quadrature , which is a classical and deeply studied topic . we illustrate empirical consequences of our results by using quadrature to accelerate machine learning tasks involving determinantal point processes and submodular optimization , and observe tremendous speedups in several instances .
|
a major obstacle for understanding the nature of the faintest x - ray sources is poor statistics , which prevents the usual practice of spectral analysis such as fitting with known models .even for apparently bright sources , as we investigate the sources in detail , we frequently need to divide source photons in various phases or states of the sources and the success of such an analysis is often limited by statistics .the common practice to extract spectral properties of x - ray sources with poor statistics is to calculate x - ray hardness or color of the sources . in this conventional method ,the full - energy range is divided into two or three sub - bands and the detected source photons are counted separately in each band .the ratio of these counts is defined as x - ray hardness or color , to serve as an indicator of the spectral properties of the source . in principle, one can constrain a meaningful x - ray hardness or color to a source if at least one source photon is detected in each of at least two bands . however , the equivalent requirement for total counts in the full - energy band can be rather demanding and it is strongly spectral dependent . fig .1 shows an example of a x - ray color - color diagram . in the figure, we divide the full - energy range ( 0.3 - 8.0 kev ) into three sub - energy bands ; 0.3 - 0.9 ( s ) , 0.9 - 2.5 ( m ) , and 2.5 - 8.0 kev ( h ) .the ratio of the net source counts in s to m band is defined as the soft x - ray color ( -axis in the figure ) , and the ratio of the net counts in m to h band as the hard x - ray color ( -axis ) .the energy range in this example is the region where the chandra acis - s detectors are sensitive , but the technique described in this paper is not bounded to any particular energy range , provided the energy distribution of photons can be obtained .the grid pattern in the figure represents the true location for the sources with the power - law spectra governed by two parameters : power - law index ( ) and absorbing column depths ( ) along the line of the sight .the grid pattern is drawn for an ideal detector response , which is constant over the full - energy band .the grid pattern appears to be properly spaced to the changes of the spectral parameters and such an arrangement suggests that this color - color diagram may be an ideal way to classify sources with poor statistics .however , the appearance can be deceiving , which is hinted by the uneven sizes of error bars in the figure .the error bar near each grid node represents the central 68% of simulation results from the spectral shape at the grid node .each simulation has 1000 source counts in the full - energy range with no background counts , and thus the size of the symbol represents the size of typical error bars for 1000 count sources in the diagram . in the figure , there are two kinds of error bars for some spectral shapes ( e.g. = 0 and = cm ) ; the thick error bars represent the central 68% distribution of the simulation results that have `` proper '' color values ( defined here as at least one photon in each of all three bands ) , and the thin error bars show the 68% distribution of all the simulation trials ( 10,000 ) for each spectral model , i.e. a distribution of the central 6827 trials .the two error bars should be identical if all the trials produce a proper soft and hard color . 
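a minimal sketch of the conventional color computation described above , assuming the photon list is given as net energies in kev ; the band edges follow the 0.3 / 0.9 / 2.5 / 8.0 kev choice quoted in the text , and the function name is an invention of this sketch .

```python
import numpy as np

def conventional_colors(energies_kev):
    """Soft and hard X-ray colors from the three fixed sub-bands quoted
    above: S = 0.3-0.9, M = 0.9-2.5, H = 2.5-8.0 keV.  A color is
    undefined (returned as inf) whenever its denominator band is empty,
    which is exactly the selection effect discussed in the text."""
    e = np.asarray(energies_kev, dtype=float)
    s = np.count_nonzero((e >= 0.3) & (e < 0.9))
    m = np.count_nonzero((e >= 0.9) & (e < 2.5))
    h = np.count_nonzero((e >= 2.5) & (e <= 8.0))
    soft = s / m if m > 0 else np.inf
    hard = m / h if h > 0 else np.inf
    return soft, hard

# usage: a five-photon source gives proper colors only if every
# denominator band happens to be populated
print(conventional_colors([0.5, 0.7, 1.1, 1.8, 3.2]))
```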
the right panel in fig .1 shows the minimum counts in the full - energy band to have at least one count in each of three bands .the figure indicates that the required minimum counts are dramatically different among the models in the grid . for the power - law spectra with = 0 and cm, more than 500 counts are required in order to have reasonable color values , while in the case of = 2 and cm , a few counts are sufficient . because of statistical fluctuations , for some models , even 1000 counts in the full - energy range do not guarantee positive net counts in all three bands , which explains why some trials fail to produce proper colors .the spectral - model dependence of error bars in this kind of color - color diagram is inevitable and the dependence is determined by the choice of the sub - energy bands for a full - energy range . in this example , the sub bands are chosen so that the diagram is mostly sensitive near = 2 and cm .in fact , it is not possible to select sub bands to have uniform sensitivity over different spectral shapes .1 clearly indicates that the conventional color - color diagram ( cccd ) is heavily biased , which is very unfortunate when trying to extract unknown spectral properties from the sources .the heavy requirement on the total counts for certain spectral shapes defeats the purpose of the diagram since the large total counts may allow nominal spectral analysis such as spectral fitting with known models .one might have to repeat the analysis with different choices of sub - bands to explore all the interesting possibilities of spectral shapes .in order to overcome the selection effects originating from the predetermined sub - energy bands , we propose to use the energy value to divide photons into predetermined fractions .we choose fractions to take full advantage of the given statistics , such as 50% ( median ) , 33% & 67% ( tercile ) , and 25% & 75% ( quartile ) , although any quantile may be used .we use the corresponding energy values - quantiles - as an indicator of the x - ray hardness or color of the source .in particular , we will show below that the median , , is an improved substitute for the conventional x - ray hardness .let be the energy below which the net counts is of the total counts and we define quantile where and is the lower and upper boundary of the full - energy band respectively ( 0.3 and 8.0 kev in this example ) .the algorithm for estimation of quantiles and their errors relevant to x - ray astronomy is given in the appendix .unlike the conventional x - ray hardness or colors , for calculating quantiles , there is no spectral dependence of required minimum counts .the required minimum only depends on types of quantiles : two counts for median and terciles , and three for quartiles .2 shows an example of quantile - based color - color diagrams ( qccds ) using the median for the -axis and the ratio of two quartiles / for the -axis ( see 5 for the motivation of the choice of the axes ) . the grid pattern in fig . 2is drawn for the same spectral parameters as in fig . 1 with five additional cases for = cm .note that the approximate axes of and are rotated 90 from those in fig . 
1 .the error bars are drawn for the central 68% of the same 10,000 simulation results of the spectral shape at each grid node , and each simulation run contains 1000 source counts with no background counts in the full - energy band .the relatively similar size of the error bars for 1000 count sources in the figure indicates that there is no spectral dependent selection effect .note that there is no need of distinction for thin and thick error bars in this figure because all the trials produce proper quantiles .the grid pattern of power - law spectra in fig . 2 appears to be less intuitively arranged than the one in fig . 1 .however , we believe that the proximity of any two spectra in the quantile - based diagram accurately exhibits the similarity of the two spectral shapes ( as folded through the detector response , which is constant in this example ) .for example , in the case for = 0 in fig . 2 , the separation in phase space by various values is much smaller than in the case for 1 .note that the column depth ( ) in the considered range can change mainly soft x - rays ( 2 kev ) and the spectrum for = 0 is less dominated by soft x - rays than that for 1 . in other words ,the overall effects of on the spectral shape would be much smaller in the case for = 0 than in the case for 1 .therefore , the grid spacing in fig .2 indeed reveals the appropriate statistical power needed to discern these spectral shapes , despite the degeneracy arising from utilizing only three variables ( quantiles ) extracted from a spectrum .now let us consider more realistic examples .we introduce the chandra acis - s response function ; we use the pre - flight calibration data for energy resolution 100 250 ev fwhm .see http://cxc.harvard.edu / cal / acis/. ] and the energy dependent effective area from pimms version 2.3 .3 shows the simulation results of the spectra with 100 and 50 source counts with no background counts .the error bars again indicate the interval that encloses the central 68% of the simulation results for each model .the results are shown for both the conventional and new color - color diagrams in the figure .the grid patterns in the figures are for the same power - law models and they look different from the previous examples because of the chandra acis - s response function .the analysis of faint sources is often limited by background fluctuations .in contrast to fig . 3 , we allow for large background counts in the source region ( e.g. a point source in a bright diffuse emission region ) .the top two panels in fig .4 shows the simulation results of the spectra with 100 source counts and 50 background counts in the source region , and the bottom two panel shows the case of 50 source counts and 25 background counts . for background subtraction, we set the background region to be five times larger than the source region .the background region contains 250 ( top panels ) and 125 background ( bottom panels ) counts respectively .note these are relatively high backgrounds for a typical chandra source and thus illustrate the worst case of a point source superimposed in background diffuse emission .the background photons are sampled from a power - law spectrum with = 0 and = 0 , which is folded through the chandra acis - s response function . in the case of the cccd in figs . 
3 and 4, one can notice that some of the error bars lie away from their true location ( grid node ) .the severe requirement for the total counts ( ) for some of the spectral models results in only a lower or upper limit for the color in many simulation runs .cases with proper colors for these models are greatly influenced by statistical fluctuations because of low source counts in one or two sub - bands , and thus the estimated colors fail to produce the true value .it is evident that this conventional diagram is more sensitive towards 2 and cm . in the case of the new quantile - based diagram, the error bars stay at the correct location and the size of the error bars are relatively uniform across the model phase space , indicating that the phase space is properly arranged .the quantile - based diagram shows more consistent results regardless of background .when we explore the spectral properties of sources with an even smaller number of detected photons , the only available tool is the use of a single hardness ratio , which requires only two sub - energy bands .here we use net counts in a 0.3 - 2.0 kev soft band and net counts in a 2.0 - 8.0 kev hard band . two popular definitions for hardness ratio exist in the literature : hr = and hr= .we compare the median with the conventional hardness ratio using the first definition .the left panel in fig . 5 shows the performance of the conventional hardness ratio as a function of the total counts folded through the acis - s response .the plot contains three spectral shapes with = 2 .the shaded regions represent the central 68% of the simulation results for a given total net counts ( except for the case of or 0 ) .the simulation is done for the case without background ( top panels ) and for the case that the source region contains an equal number of source and background counts ( bottom panels ) , where we perform a background subtraction identical to the one described in the previous section .the background follows the same spectrum as in the previous examples .similar to the cccd , the required minimum of the total counts for the conventional hardness ratio depends on spectral shape .the right panels in fig . 5 show the fraction of the simulations that have at least one count in both the soft and hard bands ( otherwise hr = 0 or ) .the plot indicates that one can not assign a proper hardness ratio value in a substantial fraction of cases when the total net counts are less than 100 . in the case of cm , many simulation runs result in ( too soft ) , and for cm , ( too hard ) .a set of predetermined bands keeps the hardness ratio meaningful only within a certain range of spectral shapes . in order to compensate for such a limitation , one needs to repeat the analysis with different choices of bands , which in turn will have a different limitation . even for the cases with a proper hardness ratio value ( the left panel in fig . 5 ) , the hardness ratio distribution of two spectral types (= and cm ) overlaps substantially when the total net counts are less than 40 in the case of high background . below 10 counts , it is difficult to relate the distributions to their true value .6 shows the performance of the new hardness defined by the median .first , there is no loss of events .second , three spectra are well separated down to less than 10 counts regardless of background . 
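the qualitative behaviour summarized in figs . 5 and 6 can be reproduced with a few lines . the sketch below is an illustration added for this copy : it draws photon energies from a bare power law ( no detector response or background , unlike the simulations in the text ) , and because the paper s exact hardness - ratio definition is garbled here , a hard - to - soft count ratio is assumed for it .

```python
import numpy as np

E_LO, E_UP = 0.3, 8.0          # full band used in the text (keV)

def hardness_ratio(e):
    """Conventional hardness with a 2.0 keV soft/hard split.  The exact
    definition used in the paper is garbled in this copy; the hard/soft
    count ratio assumed here reproduces the quoted '0 or infinity'
    failure modes when one band is empty."""
    s = np.count_nonzero((e >= E_LO) & (e < 2.0))
    h = np.count_nonzero((e >= 2.0) & (e <= E_UP))
    return np.inf if s == 0 else h / s

def median_hardness(e):
    """Quantile-based hardness m = (E50 - E_lo) / (E_up - E_lo); it is
    defined for as few as one detected photon."""
    return (np.median(e) - E_LO) / (E_UP - E_LO)

def sample_powerlaw(n, gamma, rng):
    """Inverse-transform sampling of dN/dE ~ E^-gamma on [E_LO, E_UP]
    (assumes gamma != 1); a bare stand-in with no detector response."""
    a, b = E_LO ** (1.0 - gamma), E_UP ** (1.0 - gamma)
    return (a + rng.random(n) * (b - a)) ** (1.0 / (1.0 - gamma))

rng = np.random.default_rng(1)
for counts in (100, 20, 5, 2):
    e = sample_powerlaw(counts, 2.0, rng)
    print(counts, round(hardness_ratio(e), 2), round(median_hardness(e), 3))
```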
even at two or three net counts ,each distribution is distinct and well related to its true value .the newly defined hardness and color - color diagrams using the median and other quantiles are far superior to the conventional ones .one can choose specific quantiles and their combinations relevant to one s needs .we believe the new phase space by the median and quartiles is very useful in general , as summarized in fig .7 . for a given spectrum ,various quantiles are not independent variables , unlike the counts in different energy bands . however , the ratio of two quartiles is mostly independent from the median , which makes them good candidate parameters for color - color diagrams .in essence , the median shows how dense the first ( or second ) half of the spectrum is and the quartile ratio / shows a similar measure of the middle half of the spectrum . for the -axis in fig . 7, one can simply use ( ) on a log scale to explore the wide range of the hardness . however , a simple log scale compresses the phase space for relatively hard spectra ( ; note ) .our expression log( ) when 0 , and when 1 . ] , log( ) , shows both the soft and hard phase space equally well . in fig . 7, a flat spectrum lies at =0 and =1 in the diagram ( ) .the spectrum changes from soft to hard as one goes from left to right in the diagram , and it changes from concave - upwards to concave - downwards , as one goes from bottom to top . the examples in the previous sections explore a soft part of this phase diagram , which is modeled by a power law . in the case of a narrow energy range , where an emission or absorption line is expected on a relatively flat continuum , one can use the qccd to explore a spectral line feature even with limited statistics that normally forbids normal spectral fitting . in this case , the median ( -axis ) in the qccd can be a good measure of line shift , and the quartile ratio ( -axis ) can be a good measure of line broadening . the right panel in fig .7 shows the overlay of the grid patterns for typical spectral models - power law , thermal bremsstrahlung , and black body emission . for high count sources, one can use such a diagram to find relevant models for the spectra before detailed analysis .finally , we investigate how the detector energy resolution affects the performance of the quantile - based diagram .the left panel in fig .8 shows the grid patterns of the power - law emission models for detectors with various energy resolutions but with the same detection efficiency of the chandra acis - s detector . starting with the energy resolution of the chandra acis - s detector in the previous examples ( figs . 3 , 4 , 5 and 6 ;fwhm 150 ev at 1.5 kev and 200 ev at 4.5 kev ) , we have successively decreased the energy resolution by multiplying the energy resolution at each energy by a constant factor .each pattern is labeled by the energy resolution ( ) at 1.5 kev .as the energy resolution decreases , the grid pattern shrinks .the pattern will shrink down to a point ( )=(0,1 ) in the diagram for a detector with no energy resolution . 
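the shrinking of the grid with degrading energy resolution can be illustrated by smearing photon energies with a gaussian whose fwhm is a fixed fraction of the energy . the sketch below is an assumption - laden illustration added for this copy : the toy spectra , the clipping treatment of out - of - band energies and the axis convention ( x = log10(m/(1 - m ) ) , y = 3 q25/q75 ) are not taken from the paper , whose exact expressions are garbled here ; the printed separation between the two toy sources shrinks as the resolution degrades .

```python
import numpy as np

E_LO, E_UP = 0.3, 8.0

def quantile_point(e):
    """(x, y) location in a quantile diagram; the axis convention used
    here is an assumption of this sketch, not the paper's expression."""
    e25, e50, e75 = np.percentile(e, [25, 50, 75])
    q25 = (e25 - E_LO) / (E_UP - E_LO)
    q50 = (e50 - E_LO) / (E_UP - E_LO)
    q75 = (e75 - E_LO) / (E_UP - E_LO)
    return np.log10(q50 / (1.0 - q50)), 3.0 * q25 / q75

def smear(e, frac_fwhm, rng):
    """Gaussian energy smearing with fwhm = frac_fwhm * E, folded back
    into the band by clipping (a crude stand-in for a real response)."""
    sigma = frac_fwhm * e / 2.355
    return np.clip(e + rng.normal(0.0, sigma), E_LO, E_UP)

rng = np.random.default_rng(2)
n = 200000
soft = np.exp(rng.uniform(np.log(E_LO), np.log(2.0), n))   # soft toy source
hard = np.exp(rng.uniform(np.log(2.0), np.log(E_UP), n))   # hard toy source
for frac in (0.1, 0.5, 1.0, 2.0):
    p_soft = quantile_point(smear(soft, frac, rng))
    p_hard = quantile_point(smear(hard, frac, rng))
    print(frac, np.round(p_soft, 2), np.round(p_hard, 2))
```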
the right panel in fig .8 shows the 68% of the simulation results for a detector with = 100% at 1.5 kev .the size of the 68% distribution in this example is more or less similar to that in the case of the regular chandra acis - s detector with = 10% ( the top - right panel in the fig .the similarity is due to the fact that , unlike the grid pattern , the dispersion of the simulation results ( or the error size of a data model point ) for each model is mainly due to statistical fluctuations for low count sources . in summary ,as the energy resolution decreases , the relative distance between various models in the diagram decreases , but the error size of the data remains roughly the same , and thus the spectral sensitivity of the diagram decreases .note that the overall size of the grid patterns in the left panel of fig .8 is more or less similar when % ( at 1.5 kev ) .we expect that for a detector with energy resolution , the overall size is dependent on a quantity since quantiles are determined over the full energy range .therefore , the quantile - based diagram is quite robust against finite energy resolutions of typical x - ray detectors .this work is supported by nasa grant ar4 - 5003a for our on - going chandra multiwavelength plane ( champlane ) survey of the galactic plane .babu , g. j. 1999 `` breakdown theory for estimators based on bootstrap and other resampling schemes '' in asymptotics , nonparameterics , and time series ( eg .s. ghosh ) , stat .158 , dekker : ny , pp 669 - 681 .harrell , f. e. & davis , c. e. , 1982 , biometrika , 69 , 635 - 640 kim , d. w. , fabbiano , g. , & trinchieri , g. 1992 , apj , 393 , 134 kim , d. w. , et al . , 2003 apj , in press ( astro - ph/0308493 ) maritz , j. s. & jarrett , r. g. , 1978 , journal of the american statistical association , 73 , 194 - 196 netzer , h. , turner , t. j. , george , i. m. 1994 , apj , 435 , 106 prestwich , a. h. , irwin , j. a. , kilgard , r. e. , krauss , m. i. ; zezas , a. , primini , f. , kaaret , p. , boroson , b. 2003 , apj , 595 , 719 schulz , n. s. , hasinger , g. , & trumper , j. 1989 , a&a , 225 , 48 wilcox , r. r. , _ introduction to robust estimation and hypothesis testing _ , academic press , 1997 many routines have been developed to estimate quantiles and the related statistics .we use a simple interpolation technique based on order statistics to estimate quantiles . for real data, we also need to have a reliable estimate for the errors of quantile values , for which we employ the technique by . for simplicity, we assume that we measure the energy value of each photon .in the literature , many quantile estimation algorithms often assume that the lowest value ( energy ) of the given distribution is the ( equivalent ) lower bound of the distribution and the highest value the upper bound . in x - ray astronomy , the lower ( ) and upper bound ( ) of the energy range is usually set by the instrument or user selection , where these bounds may or may not be the lowest and highest energies of the detected photons .we can explicitly impose this boundary condition by assigning 0% and 100% quantiles to and respectively .we sort the detected photons in ascending order of energy , and we assign to the energy of the -th photon as and is the total number of net counts . using = and = , one can interpolate any quantiles from the above relation of and . 
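the interpolation just described can be written in a few lines . the sketch below is added for this copy ; the midpoint convention ( r - 0.5)/n assigned to the r - th sorted photon is an assumption , chosen because it reproduces the one - photon and zero - photon limits quoted in the text .

```python
import numpy as np

def quantile_energies(energies, fractions, e_lo=0.3, e_up=8.0):
    """Order-statistics quantile estimate with explicit band boundaries:
    the r-th of N sorted photons is assigned the cumulative fraction
    (r - 0.5) / N (an assumed midpoint convention), E_0% = e_lo and
    E_100% = e_up are imposed, and the requested fractions are obtained
    by linear interpolation.  With zero photons the result reduces to a
    flat spectrum, and with one photon the median equals its energy."""
    e = np.sort(np.asarray(energies, dtype=float))
    n = e.size
    grid_frac = np.concatenate(([0.0], (np.arange(1, n + 1) - 0.5) / n, [1.0]))
    grid_e = np.concatenate(([e_lo], e, [e_up]))
    return np.interp(fractions, grid_frac, grid_e)

# usage: quartiles and median from only three photons
print(quantile_energies([0.8, 1.4, 3.1], [0.25, 0.5, 0.75]))
```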
with the definitions of = and = , the above interpolation is very robust .one can even calculate quantiles without any detected photons ( although not meaningful ) , the result of which is identical to the case of a flat spectrum . in the case of only one detected photon at ,the above relation reduces to = .therefore , the distribution of the median from one count sources with the same spectra is the source spectrum itself .the above interpolation for quantile essentially uses only two energy values , and , where . in order to take advantage of other energy values , one can use more sophisticated techniques like that of . in many cases , the harrell - davis method estimates quantiles with smaller uncertainties than the simple order statistics technique , but because of smoothing effects in the harrell - davis method , there are cases that the simple order statistics performs better , such as distributions containing discontinuous breaks . in real data , the finite detector resolution tends to smooth out any discontinuity in the spectra .our simulation shows that about 10 - 15% better performance is achieved by the harrell - davis method compared to the simple order statistics for the cases of 50 source photons and 25 background photons .the results in the paper are generated by the order statistics technique .once quantile values are acquired , one can always rely on simulations to estimate their uncertainty ( or error ) using a suspected spectral model with parameters derived from the qccd .if no single model stands out as a primary candidate , one can derive a final uncertainty of the quantiles by combining the results of multiple simulations from a number of models . however , error estimation by simulations can be slow , cumbersome , and even redundant , considering that the quantile errors are more sensitive to the number of counts than the choice of spectral shapes ( cf . figs . 3 and 4 ) .a quick and rough error estimate is often sufficient , and so we introduce a simple quantile error estimate technique the maritz - jarrett method .the results ( error bars and shades in fig . 1 to 6 ) in the main text indicate the interval that encloses the central 68% of the simulation results , and were not driven by the maritz - jarrett method .the maritz - jarrett method uses a technique similar to the harrell - davis method of quantile estimation .both methods rely on l - estimators ( linear sums of order statistics ) using beta functions .we sort the photons in the ascending order of energy from to .then , we apply the maritz - jarrett method with small modifications ] is the integer part of . ] as follows . for, we set we then define using the incomplete beta function , where and ( the gamma function ) . using and , we calculate the l - estimators , the error estimate by the maritz - jarrett method requires at least 3 counts for medians , 5 counts for terciles , and 6 counts for quartiles ( ) .9 shows the accuracy of the error estimates by the maritz - jarrett method for = 2 and cm . in the case of no background ,the maritz - jarrett method is accurate when the total net counts are greater than 30 .it tends to overestimate the errors when the total net counts are below 30 . 
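for reference , the standard maritz - jarrett estimate can be coded compactly ; the sketch below is added for this copy and does not reproduce the small modifications mentioned in the text , but it does respect the minimum counts quoted there ( 3 for the median , 5 for terciles , 6 for quartiles ) .

```python
import numpy as np
from scipy.special import betainc   # regularized incomplete beta I_x(a, b)

def maritz_jarrett_error(energies, q):
    """Standard Maritz-Jarrett standard-error estimate for the q-th
    sample quantile.  Needs m - 1 >= 1 and n - m >= 1 with
    m = floor(q * n + 0.5), i.e. at least 3 counts for the median,
    5 for terciles and 6 for quartiles."""
    x = np.sort(np.asarray(energies, dtype=float))
    n = x.size
    m = int(np.floor(q * n + 0.5))
    a, b = m - 1, n - m
    if a < 1 or b < 1:
        raise ValueError("too few counts for a Maritz-Jarrett estimate")
    i = np.arange(1, n + 1)
    w = betainc(a, b, i / n) - betainc(a, b, (i - 1) / n)
    c1 = np.sum(w * x)
    c2 = np.sum(w * x ** 2)
    return np.sqrt(c2 - c1 ** 2)

# usage: 1-sigma error on the median of ten photon energies (keV)
rng = np.random.default_rng(3)
print(maritz_jarrett_error(rng.uniform(0.3, 8.0, 10), 0.5))
```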
in the case of high background, it tends to underestimate the errors overall since the adopted background subtraction procedure ( see below ) does not inherit background statistics for the error estimates .we find that multiplying by an empirical factor can compensate , approximately , for the underestimation ( : total source counts , : total background counts ) .for the error of the ratio / , since these two quartiles are not independent variables , the simple quadratic combination from two quartile errors overestimates the error of the ratio ( 20 30% for the examples in figs . 3 and 4 ) .one can find more sophisticated techniques such as bayesian statistics to estimate quantile errors in the literature .for example , and the reference therein discuss the limitation of bootstrap estimation and show other techniques such as the half - sample method .for background subtraction , we calculate quantiles at a set of finely stepped fractions separately for photons in the source region and the background region . then , by a simple linear interpolation , we establish the integrated counts ( number of photons with energy greater than ) as a function of energy for both regions .now at each energy , one can subtract the integrated counts of the background region from that of the source region with a proper ratio factor of the area of the two regions .because of statistical fluctuations , the subtracted integrated net counts may not be monotonically increasing from to .therefore we force a monotonic behavior by setting if and .such a requirement can underestimate the quantiles overall , so we repeat the above using and force a monotonic decrease .then , quantiles for the net distribution are given by where is the total net counts . if there is no statistical fluctuation , .for the error estimation , we need to know the energy of each source photon , which will be lost from background subtraction .so we generate a set of energy values for source photons matching the above quantile relation and then apply the above error estimation technique using the maritz - jarrett method .
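the background - subtraction recipe above can be sketched as follows ; this illustration , added for this copy , uses a single running minimum to force monotonicity ( the text works from both ends ) and takes the area ratio as an explicit argument .

```python
import numpy as np

def net_quantiles(src_e, bkg_e, area_ratio, fractions, e_lo=0.3, e_up=8.0):
    """Background-subtracted quantiles: build the integrated counts above
    each energy for the source and the area-scaled background regions,
    subtract, force the net curve to be non-increasing, and read off the
    requested fractions.  area_ratio = (source area) / (background area)."""
    src_e = np.asarray(src_e, dtype=float)
    bkg_e = np.asarray(bkg_e, dtype=float)
    grid = np.linspace(e_lo, e_up, 2000)
    n_src = np.array([np.count_nonzero(src_e > g) for g in grid], float)
    n_bkg = np.array([np.count_nonzero(bkg_e > g) for g in grid], float)
    net = n_src - area_ratio * n_bkg          # integrated net counts above E
    net = np.minimum.accumulate(np.maximum(net, 0.0))
    total = net[0]                            # total net counts in the band
    targets = (1.0 - np.asarray(fractions, float)) * total
    # net is non-increasing in energy, so reverse both arrays for np.interp
    return np.interp(targets, net[::-1], grid[::-1])

# usage with the 5x larger background region adopted in the text
rng = np.random.default_rng(4)
src = rng.uniform(0.3, 8.0, 75)               # photons in the source region
bkg = rng.uniform(0.3, 8.0, 125)              # photons in the background region
print(net_quantiles(src, bkg, 1.0 / 5.0, [0.25, 0.50, 0.75]))
```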
|
we present a new technique called _ quantile analysis _ to classify spectral properties of x - ray sources with limited statistics . the quantile analysis is superior to the conventional approaches such as x - ray hardness ratio or x - ray color analysis to study relatively faint sources or to investigate a certain phase or state of a source in detail , where poor statistics does not allow spectral fitting using a model . instead of working with predetermined energy bands , we determine the energy values that divide the detected photons into predetermined fractions of the total counts such as median ( 50% ) , tercile ( 33% & 67% ) , and quartile ( 25% & 75% ) . we use these quantiles as an indicator of the x - ray hardness or color of the source . we show that the median is an improved substitute for the conventional x - ray hardness ratio . the median and other quantiles form a phase space , similar to the conventional x - ray color - color diagrams . the quantile - based phase space is more evenly sensitive over various spectral shapes than the conventional color - color diagrams , and it is naturally arranged to properly represent the statistical similarity of various spectral shapes . we demonstrate the new technique in the 0.3 - 8 kev energy range using chandra acis - s detector response function and a typical aperture photometry involving background subtraction . the technique can be applied in any energy band , provided the energy distribution of photons can be obtained .
|
in the relatively new field of quantum information science , there have been limited numbers of true breakthroughs .the quantum computer remains a machine on paper only , for a working realization has proved elusive .for one of the most powerful tools that the quantum computer would provide , namely the reduction of the factorization problem to polynomial time by shor s algorithm , the most successful implementation to date has been to factor the number 15 into 3 times 5 [ 1 ] .the quantum computer might remain unrealized for years to come .indeed , kak has suggested that the current quantum circuit model for quantum computers is fundamentally flawed and new models must be developed to tackle the problem [ 2 ] . while there have been limited strides in some areas of quantum information science , one area in particularhas produced realizable solutions in the field of cryptography .quantum key distribution protocols have been successfully implemented and have produced commercially available products . in this paper , we will discuss a new protocol proposed by kak called the `` three stage protocol . ''to kak s protocol , we will introduce a modification which allows for greater security against man in the middle attacks .in addition , we introduce a new single stage protocol which similarly allows for security against such attacks .the usefulness of quantum key distribution lies in the properties of the qubit , the quantum unit of information . since a qubit is an object representing a quantum superposition state , the qubitcan not be copied .this is commonly called the no - cloning theorem [ 3 ] .this property ensures that during qubit data transmission , it is impossible for an evesedropper ( eve ) to simply make copies of the qubits being sent , and thus manipulate these copied qubits to obtain the message .this useful property allows quantum data transmission to be used effectively in key distribution protocols as shown in [ 3 ] .when a private key can be transmitted securly along a quantum channel , then secure classical communication between the two parties ( alice and bob ) can be achieved using the private key ( figure 1 ) .one private key cryptosystem in use today is the vernam cipher , also called the one time pad . according to nielsen and chuang , the security of the private key used in the vernam cipher is sometimes ensured by transmitting it via such low - tech solutions as clandestine meetings or trusted couriers .the need for better transmission protocols is obvious .in [ 4 ] , s. kak proposed a new quantum key distribution protocol based on secret unitary transformations ( figure 2 ) .his protocol , like bb84 , has three stages , but unlike bb84 , it remains quantum across all three stages . in the first stage , alice manipulates the message , which is simply one of two orthogonal stages ( e.g. and ) by means of a unitary transformation , known only to her .bob receives the new state , and in the second stage , applies his own secret transformation , which is both a unitary transformation , and one that commutes with , and sends the result back to alice . in the third stage , alice applies the hermitian conjugate of her transformation , , and sends the result back to bob .since , bob simply applies and obtains the previously unknown state , .the suceptability of both bb84 and kak s three - stage protocol to man in the middle attacks has been documented [ e.g. 5,6 ] , and various methods to counter these attacks have been proposed [ e.g. 
7,8 ] .in such an attack , the eavesdropper , eve , can attempt to thwart the communication between alice and bob in one of the following ways ( figure 3 ) .* eve receives the message from alice by impersonating bob .eve then decodes alice s message , and , now impersonating alice , duplicates this message to bob . in this scenerio , both eve and bob obtain the secret message . *eve impersonates bob and decodes alice s message as in scenerio 1 , but instead of relaying the actual message to bob , eve relays a different message of her own choosing . in this scenerio, only eve obtains the secret message . *eve impersonates bob , but is not able to decode alice s message .instead , she impersonates alice and sends her own message to bob . in this scenerio ,communication between alice and bob is blocked , but no secret message is comprimised .in kak s paper [ 4 ] , he suggests using secret real valued orthogonal transformations to encrypt the qubits . under orthogonal transformations of the same form ( see below ) , the selection of the angles and by alice and bob respectively does not affect the outcome of the protocol .furthermore , both alice and bob do not need to know what each other s angle selection is .the reason that a man in the middle attack can be carried out is that when is assumed to be real valued , it is very easy for eve to find another unitary transformation , , which commutes with .this is the underlying assumption by both perkins [ 5 ] , and basuchowdhuri [ 7 ] .indeed , for a 2x2 real valued unitary transformation ( i.e. an orthogonal transformation ) , there is a limitation on its form . consider a 2x2 transformation : then , for to be orthogonal ( unitary ) , .this gives rise to the following equations : these equations are satisfied only when has one of the two following forms : a rotation , as kak proposed , or \mathbf{u_{2}(\theta ) } = \begin{bmatrix } cos(\theta)&sin(\theta)\\sin(\theta)&-cos(\theta ) \end{bmatrix},\ ] ] a reflection across the line .the three - stage protocol demands that while bob does nt know the value of that alice is using , he must know which of the two above forms of that alice chooses .the reason for this is that while commutes with for any and , and commutes with for any and , does not commute with in general .so we will consider the choice of the form of to be public information .once this information is known , bob simply needs to choose his own angle , and his transformation will be of the same form as .+ + it is easy to see that when the same form is used , and commute ( i.e. ). + for form 1 : \ ] ] applying trigonometric identities , since and have the same form , it is clear that they commute .+ for form 2 : \ ] ] again , applying trigonometric identities , since and always commute given the same form , then for eve to impersonate bob and obtain alice s secret message , she only needs to select any angle and use it in her own transformation where is of the same form as .using , eve can obtain the secret state in the exact same way that bob can obtain it .in addition , eve can relay a message to bob using her own transformation . since bobdoes nt know what alice s transformation is , eve s is a valid substitution .suppose now instead of a orthogonal ( i.e. 
unitary and real value ) transformation , alice chooses a more general complex valued unitary transformation , when alice chooses a of this form , it is more difficult for bob to find another transform , which commutes .we see that when then and these transformations commute only when ( or any 2 multiple ) . for bob to decode alice s message , he must have more information than simply the form of her transformation .he must know also the value of that she has chosen .while this might seem to be a hindrence to the protocol , it allows for much greater security against a man in the middle attack .eve attempts to intercept alice s message to bob by choosing a to impersonate bob s . as we saw earlier , when is real valued, eve can simply pick any angle and generate a transformation that commutes . but with a complex valued , eve can not guarantee a commuting transformation without knowing the value of . consider eve s choice of a without knowledge of . then , .when , as in the variation to kak s protocol described above , bob knows the value of that alice has chosen for her transformation ( assuming as above that the form of is public information ) , then he has full knowledge of . in this situation, alice and bob can forego the second two stages of the protocol and let bob perform the transform to obtain the unknown state ( figure 4 ) .we have simply , .in this situation , there is no need for to be complex valued .we can have , as kak proposed in [ 4 ] , so for eve to intercept the message and properly decode it , she would have to know the value of .the strength of this protocol is dependent on keeping the value of a secret known only to alice and bob .we enhance the security of our protocol by allowing for to change , which blocks any attempt by eve at a statistical analysis of the qubits .we assume that before secure transmission may begin , there is some other secure protocol that alice may use to transmit her initial value of to bob .one example is perkins protocol which uses trusted certificates [ 6 ] .suppose we restrict to the upper half plane of the unit circle .after qubits are successfully transmitted from alice to bob , the qubits to will be used to obtain the new value of .the data bits selected by alice for these qubits will represent an integer such that if is the bit transmitted ( ) , then . when these four qubits are received by bob and decoded , alice and bob adjust their transformations and respectively such that after this , alice transmits more qubits to bob before again changing the value of . in this fashion ,any attempt by eve to obtain the value of with no prior knowledge would be extremely difficult .the author thanks the louisiana board of regents , borsf , under agreement nasa / leqsf(2005 - 2010)-laspace and nasa / laspace under grant nng05gh22h for support during this project . 8 l.m.k .vandersypen , m. steffen , g. breyta , c.s .yannoni , m.h .sherwood , i.l .chuang , experimental realization of shor s quantum factoring algorithm using nuclear magnetic resonance .nature 414 , 883 - 887 ( 20 dec 2001 ) .arxiv : quant - ph/0112176v1 s. kak , are quantum computing models realistic ?acm ubiquity , 7 ( 11 ) : 1 - 9 , 2006 .arxiv : quant - ph/0110040v5 m.a .nielsen , i.l .chuang , quantum computation and quantum information .cambridge university press , 2000 .s. kak , a three - stage quantum cryptography protocol .foundations of physics letters 19 ( 2006 ) , 293 - 296 .arxiv : quant - ph/0503027v2 g. gilbert , m. 
hamrick ( mitre ) , constraints on eavesdropping on the bb84 protocol .arxiv : quant - ph/0106034v2 w. perkins , trusted certificates in quantum cryptography .arxiv : cs/0603046v1 [ cs.cr ] k. svozil , feasibility of the interlock protocol against man - in - the - middle attacks on quantum cryptography . international journal of quantum information , vol .3 , no . 4 ( 2005 )649 - 654 .arxiv : quant - ph/0501062v4 p. basuchowdhuri , classical authentication aided three - stage quantum protocol .arxiv : cs/0605083v1
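as a numerical illustration of the commuting - rotation property that the three - stage protocol relies on , the short sketch below ( added for this copy , operating directly on a real two - component state vector rather than simulating qubits ) applies the four transformations in order and recovers the original state ; the angle values and the sign convention of the rotation matrix are assumptions of the sketch .

```python
import numpy as np

def rotation(theta):
    """Real orthogonal 'form 1' transformation; the standard rotation
    sign convention is assumed here, the explicit matrices being
    garbled in this copy of the text."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta_a, theta_b = 0.83, 2.17                 # secret angles of alice and bob
ua, ub = rotation(theta_a), rotation(theta_b)
x = np.array([1.0, 0.0])                      # state |0> alice wants to send

stage1 = ua @ x                               # alice -> bob
stage2 = ub @ stage1                          # bob -> alice
stage3 = ua.T @ stage2                        # alice applies U_A^dagger
recovered = ub.T @ stage3                     # bob applies U_B^dagger

print(np.allclose(recovered, x))              # True: same-form rotations commute
```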
|
this paper introduces a variation on kak s three - stage quantum key distribution protocol which allows for defence against the man in the middle attack . in addition , we introduce a new protocol , which also offers similar resilience against such an attack .
|
astrometry , that branch of observational astronomy that deals with the precise and accurate estimation of angular positions of light - emitting ( usually point - like ) sources projected against the celestial sphere , is the oldest technique employed in the study of the heavens .repeated measurements of positions , spread over time , allow a determination of the motions and distances of theses sources , with astrophysical implications on dynamical studies of stellar systems and the milky way as a whole . with the advent of solid - state detectors and all - digital techniques applied to radio - interferometry and specialized ground- and space - based missions , astrometry has been revolutionized in recent years , as we have entered a high - precision era in which this technique has started to play an increasingly important role in all areas of astronomy , astrophysics , and cosmology .current technology , based on two - dimensional discrete digital detectors ( such as _ charged coupled devices _ - ccds ) , record a ( noisy ) image ( on an array of photo - sensitive pixels ) of celestial sources , from which it is possible to estimate both their astrometry and photometry , simultaneously .the inference problem associated to the determination of these quantities is at the core of the astrometric endeavor described previously .a number of techniques have been proposed to estimate the location and flux of celestial sources as recorded on digital detectors . in this context , estimators based on the use of a least - squares error function ( ls hereafter ) have been widely adopted .the use of this type of decision rule has been traditionally justified through heuristic reasons and because they are conceptually straightforward to formulate based on the observation model of these problems .indeed , the ls approach was the classical method used when the observations were obtained with analog devices ( which corresponds to a gaussian noise model for the observations , different from that of modern digital detectors , which is characterized instead by a poisson statistics ) and , consequently , the ls method was naturally adopted from the analogous to the digital observational setting . ] . in contemporary astrometry ( gaia , for instance ), stellar positions will be obtained by optimizing a likelihood function ( see , e.g. , , which uses the equivalent of our equations ( [ eq_pre_5 ] ) and ( [ log - like ] ) in sections [ sub_sec_astro_photo ] and [ subsec_achie ] respectively ) , not by ls . nevertheless , since ls methods offer computationally efficient implementations and have shown reasonable performance they are still widely used in astrometry either on general - purpose software packages for the analysis of digital images such as daophot , or on dedicated pipelines , such as that adopted in the sloan digital sky survey survey ( sdss hereafter , ) .for example , in daophot astrometry ( and photometry ) are obtained through a two - step process which involves a ls minimization of a trial function ( e.g. , a bi - dimensional gaussian , see ( * ? ? ?* equation 6 ) , equivalent to our one - dimensional case in equation ( [ eq_subsec_mse_of_ls_1 ] ) , section [ subsec_achie ] ) , and then applying a correction by means of an empirically determined look - up table ( also computed performing a ls on a set of high signal - to - noise ratio images distributed over the field of view of the image , see ( * ? ? 
?* equation 8) ) .this last step accounts for the fact that the psf of the image under analysis may not be exactly gaussian .the sdss pipeline ( ) obtains its centroids also through a two - step process : first it fits a karhunen - love transform ( kl transform hereafter , see , e.g. , ) to a set of isolated bright stars in the field - of - view of the image , and then it uses the base functions determined in this way , to fit the astrometry and photometry for the object(s ) under consideration using a ls minimization scheme ( see ( * ? ? ?* equation ( 5 ) ) ) .both codes , daophot and the sdss pipeline have been extensively used and tested by the astronomical community , giving very reliable results ( see , e.g. , ) . considering that ls methods are still in use in astrometry , and driven by the increase in the intrinsic precision available by the new detectors and instrumental settings , and by the fact that ccds will likely continue to be the detector of choice for the focal - plane in science - quality imaging application at optical wavelengths for both space- as well as ground - borne programs , it is timely to re - visit the pertinence of the use of ls estimators . indeed ,in the digital setting , where we observe discrete samples ( or counts ) on a photon integrating device , there is no formal justification that the ls approach is optimal in the sense of minimizing the mean - square - error ( mse ) of the parameter estimation , in particular for astrometry , which is the focus of this work .the question of optimality ( in some statistical sense ) has always been in the interest of the astronomical community , in particular the idea of characterizing fundamental performance bounds that can be used to analyze the efficiency of the adopted estimation schemes . in this context, we can mention some seminal works on the use of the celebrated cramr - rao ( cr hereafter ) bound in astronomy by ; and .the cr bound is a minimum variance ( mv ) bound for the family of unbiased estimators . in astrometry and photometrythis bound has offered meaningful closed - form expressions that can be used to analyze the complexity of the inference task , and its dependency on key observational and design parameters such as the position of the object in the array , the intensity of the object , the signal - to - noise ratio ( snr hereafter ) , and the resolution of the instrument . in particular , for photometry , used the cr bound to show that the ls estimator is a good estimator , achieving a performance close to the limit in a wide range of observational regimes , and approaching very closely the bound at low snr . in astrometry , on the other hand , have recently studied the structure of this bound and have analyzed its dependency with respect to important observational parameters , under realistic astronomical observing conditions . in those works , closed - form expressions for the bound were derived in a number of important settings ( high spatial resolution , low and high snr ) , and their trends were explored across angular resolution and the position of the object in the array . as an interesting outcome , the analysis of the cr bound allows us to find the optimal pixel resolution of the array for a given setting , as well as providing formal justification to some heuristic techniques commonly used to improve performance in astrometry , like _ dithering _ for undersampled images ( * ? ? 
?the specific problem of evaluating the existence of an estimator that achieves the cr bound has not been covered in the literature , and remains an interesting open problem . on this , have empirically assessed ( using numerical simulations ) the performance of two ls methods and the maximum - likelihood ( ml hereafter ) estimator , showing that their variances follow very closely the cr limit in some specific regimes . in this paper , we analyze in detail the performance of the ls estimator with respect to the cr bound , with the goal of finding concrete regimes , if any , where this estimator approaches the cr bound and , consequently , where it can be considered an efficient solution to the astrometric problem .this application is a challenging one , because estimators based on a ls type of objective function do not have a closed - form expression in astrometry .in fact , this estimation approach corresponds to a non linear regression problem , where the resulting estimator is implicitly defined . as a result, no expressions for the performance of the ls estimator can be obtained analytically .to address this issue , our main result ( theorem [ ls_performances_bounds ] , section [ subsec_mse_of_ls ] ) derives expressions that bound and approximate the variance of the ls estimator .our approach is based on the work by , where the authors tackle the problem of approximating the bias and mse of general estimators that are the solution of an optimization problem . in methodology is given to approximate the variance and mean of implicitly defined estimators , which has been applied to medical imaging and acoustic source localization .the main result of our paper is a refined version of the result presented in , where one of their key assumptions , which is not applicable in our estimation problem , is reformulated . in this process , we derive lower and upper bounds for the mse performance of the ls estimator . using these bounds ,we analyze how closely the performance of the ls estimator approaches the cr bound across different observational regimes .we show that for high snr there is a considerable gap between the cr bound and the performance of the ls estimator .remarkably , we show that for the more challenging low snr observational regime ( weak astronomical sources ) , the ls estimator is near optimal , as its performance is arbitrarily close to the cr bound .the paper is organized as follows .section [ sec_pre ] introduces the problem , notation , as well as some preliminary results .section [ main_sec ] represents the main contribution , where theorem [ ls_performances_bounds ] and its interpretation are introduced .section [ subsec_empirical ] shows numerical analyses of the performance of ls estimator under different observational regimes .finally section [ final ] provides a summary of our results , and some final remarks .in this section we introduce the problem of astrometry as well as concepts and definitions that will be used throughout the paper . 
for simplicity , we focus on the 1-d scenario of a linear array detector , as it captures the key conceptual elements of the problem .the specific problem of interest is the inference of the position of a point source .this source is parameterized by two scalar quantities , the position of the object in the array , and its intensity ( or brightness , or flux ) that we denote by .these two parameters induce a probability over an observation space that we denote by .more precisely , given a point source represented by the pair , it creates a nominal intensity profile in a photon integrating device , typically a ccd , which can be generally written as : where denotes the one dimensional normalized point spread function ( psf ) evaluated on the pixel coordinate , and where is a generic parameter that determines the width ( or spread ) of the light distribution on the detector ( typically a function of wavelength and the quality of the observing site , see section [ subsec_empirical ] ) ( see for more details ) . the profile in equation ( [ eq_pre_1 ] )is not observed directly , but through three sources of perturbations : first , an additive background which accounts for the photon emissions of the open ( diffuse ) sky and the contributions from the noise of the instrument itself ( the read - out noise and dark - current ) modeled by in equation ( [ eq_pre_2b ] ) .second , an intrinsic uncertainty between the aggregated intensity ( the nominal object brightness plus the background ) and the actual measurements , denoted by in what follows , which is modeled by independent random variables that obey a poisson probability law . and , finally , we need to consider the spatial quantization process associated with the pixel - resolution of the detector as specified by in equations ( [ eq_pre_2b ] ) and ( [ eq_pre_3 ] ) is sometimes referred to as the the `` pixel response function '' . ] . including these three effects, we have a countable collection of independent and not identically distributed random variables ( observations or counts ) , where , driven by the expected intensity at each pixel element , given by : and , where represents the expectation value of the argument , and denotes the standard uniform quantization of the real line - array with resolution , i.e. , for all . in practice, the detector has a finite collection of measured elements ( or pixels ) , then a basic assumption here is that we have a good coverage of the object of interest , in the sense that for a given position : note that equation ( [ eq_pre_3 ] ) adopts the idealized situation where every pixel has the exact same response function ( equal to unity ) , or , equivalently , that our flat - field process has been achieved with minimal uncertainty .it also assumes that the intra - pixel response is uniform .the latter is more important in the severely undersampled regime ( see , e.g. , ( * ? ? ?* figure 1 ) ) which is not explored in this paper .however a relevant aspect of data calibration is achieving a proper flat - fielding which can affect the correctness of our analysis and the form of the adopted likelihood function ( see below ) . 
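to make the observation model above concrete , the following short python sketch builds the expected pixel counts for a one - dimensional array and draws one poisson realization , assuming a gaussian psf as adopted later in section [ subsec_empirical ] ; all numerical values ( flux , background , pixel size , psf width ) are illustrative placeholders rather than the paper s settings .

```python
import numpy as np
from scipy.stats import norm

def pixel_means(x_c, F, B, n_pix=40, dx=0.2, sigma=0.5):
    """Expected counts lambda_i = F * g_i(x_c) + B on a 1-D pixel array.

    g_i(x_c) integrates a Gaussian PSF of width sigma over pixel i (the
    'pixel response function' of the text); F is the source flux and B a
    uniform background per pixel.  All default values are illustrative only.
    """
    edges = (np.arange(n_pix + 1) - n_pix / 2.0) * dx     # pixel boundaries
    g = (norm.cdf(edges[1:], loc=x_c, scale=sigma)
         - norm.cdf(edges[:-1], loc=x_c, scale=sigma))    # pixel response g_i
    return F * g + B

rng = np.random.default_rng(0)
lam = pixel_means(x_c=0.03, F=3000.0, B=300.0)            # expected counts per pixel
counts = rng.poisson(lam)                                 # one realization I^n
```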
at the end, the likelihood of the joint random observation vector ( with values in ) , given the source parameters , is given by : where denotes the probability mass function ( pmf ) of the poisson law .we emphasize that equation ( [ eq_pre_5 ] ) assumes that the observations ( contained in the individual pixels , denoted by the index ) are independent ( although not identically distributed , since they follow ) .of course , this is only an approximation to the real situation since it implies , in particular , that we are neglecting any electronic defects or features in the device such as , e.g. , the cross - talk present in multi - port ccds , or read - out correlations such as the odd - even column effect in ir detectors , as well as calibration or data reduction deficiencies ( e.g. , due to inadequate flat - fielding ) that may alter this idealized detection process .a serious attempt is done by manufacturers and observatories to minimize the impact of these defects , either by an appropriate electronic design , or by adjusting the detector operational regimes ( e.g. , cross - talk can be reduced to less than 1 part in by adjusting the readout speed and by a proper reduction process , ) .in essence , we are considering an ideal detector that would satisfy the proposed likelihood function given by equation ( [ eq_pre_5 ] ) , in real detectors the likelihood function could be considerably more complex .we can formulate the astrometric and photometric estimation task , as the problem of characterizing a decision rule , with being a parameter space , where given an observation the parameters to be estimated are .in other words , gives us a prescription ( or statistics ) that would allow us to estimate the underlying parameters ( the estimated parameters are denoted by ) from the available data vector . in the simplest scenario ,in which one is interested in determining a single ( unknown ) parameter ( e.g. , in our case either or , assuming that all other parameters are perfectly well known ) , a commonly used decision rule adopted in statistics to estimate this parameter ( the estimation being ) is to consider the prescription of minimum variance ( denoted by ) , given by : where `` '' represents the argument that minimizes the expression , while is a generic variable representing the parameter to be determined . note that in the last equality we have assumed that is an unbiased estimator of the parameter ( i.e. , that ) , so that under this rule we are implicitly minimizing the mse of the estimate with respect to the hidden true parameter .unfortunately , the general solution of equation ( [ minvar ] ) is intractable , as in principle it requires the knowledge of , which is the essence of the inference problem .an additional issue with equation ( [ minvar ] ) is that , by itself , it does not provide an analytical expression that tells us how to compute in terms of ( e.g. , a function of some sort ) which is trained with a ( `` good '' ) subset of the data itself ( thus approximately removing the ambiguity that the true parameter is , in fact , unknown ) .this heuristic approach is adopted , e.g. , in the sdss pipeline trough the use of the kl transform , which is a good approximation to a matched filter ( also known as the `` north filter '' , see e.g. , ) . 
] . fortunately , there are performance bounds that characterize how far we can be from the theoretical solution in equation ( [ minvar ] ) , and even scenarios where the optimal solution can be achieved in a closed - form ( see section [ subsec_achie ] and appendix [ proof_pro_achie_astrometry ] ) . one of the most significant results in this field is the cr minimum variance bound , which will be further explained below . the cr bound offers a performance bound on the variance of the family of unbiased estimators . more precisely , let be a collection of independent observations that follow a parametric pmf defined on . the parameters to be estimated from will be denoted in general by the vector . let be an unbiased estimator , .] of , and be the likelihood of the observation given . then , the cr bound establishes that if : then , \[ \mathrm{var } ( \tau_i(i^n ) ) \geq \left [ \mathcal{i}_{\bar{\theta}}(n)^{-1 } \right]_{i , i } , \] where \mathcal{i}_{\bar{\theta}}(n ) is the _ fisher information _ matrix given by : \[ \left [ \mathcal{i}_{\bar{\theta}}(n ) \right]_{i , j } = \mathbb{e}_{i^n \sim f^n_{\bar{\theta } } } \left\lbrace \frac{\partial \ln l(i^n ; \bar{\theta})}{\partial \theta_i } \cdot \frac{\partial \ln l(i^n ; \bar{\theta } ) } { \partial \theta_j } \right\rbrace \;\ ; \forall i , j \in \left\{1,\ldots , m \right\ } . \] in particular , for the scalar case ( ) , we have that for all : \[ \mathrm{var } ( \tau(i^n ) ) \geq \left\lbrace \mathbb{e}_{i^n \sim f^n_{\theta } } \left [ \left ( \frac{d \ln l(i^n ; \theta ) } { d\theta } \right)^2 \right ] \right\rbrace^{-1 } , \] where is the collection of unbiased estimators and . returning to our problem in section [ sub_sec_astro_photo ] , have characterized and analyzed the cr bound for the isolated problem of astrometry and photometry , respectively , as well as the joint problem of photometry and astrometry . particularly , we highlight the following results , which will be used later on : ( ) [ pro_fi_photometry_astrometry ] let us assume that is fixed and known , and we want to estimate ( fixed but unknown ) from in equation ( [ eq_pre_5 ] ) . in this scalar parametric context , the fisher information is given by : which from equation ( [ cr_scalar ] ) induces a mv bound for the _ photometry estimation problem_. on the other hand , if is fixed and known , and we want to estimate ( fixed but unknown ) from in equation ( [ eq_pre_5 ] ) , then the fisher information is given by : which from equation ( [ cr_scalar ] ) induces a mv bound for the _ astrometric estimation problem _ , and where denotes the ( astrometric ) cr bound . at this point it is relevant to study if there is any practical estimator that achieves the cr bound presented in equations ( [ fi_photometry ] ) and ( [ fi_astrometry ] ) for the photometry and astrometry problem , respectively . for the photometric case , ( * ? ? ? * their appendix a ) has shown that the classical ls estimator is near - optimal , in the sense that its variance is close to the cr bound for a wide range of experimental regimes , and furthermore , in the low snr regime , when , its variance ( determined in closed - form ) asymptotically achieves the mv bound in equation ( [ fi_photometry ] ) . this is a formal justification for the goodness of the ls as a method for doing isolated photometry in the setting presented in section [ sub_sec_astro_photo ] . an equivalent analysis has not been conducted for the astrometric problem , which is the focus of the next section of this work . we first evaluate if the cr bound for the astrometric problem , from equation ( [ fi_astrometry ] ) , can possibly be achieved by any unbiased estimator . then , we focus on the widely used ls estimation approach , to evaluate its performance in comparison with the astrometric mv bound presented in proposition [ pro_fi_photometry_astrometry ] .
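continuing the sketch above , the astrometric cr bound can be evaluated numerically from the standard fisher information of independent poisson counts , i ( x_c ) = sum_i ( d lambda_i / d x_c )^2 / lambda_i ( x_c ) , a form consistent with the high - snr limit quoted later in the text ; the numerical derivative and the placeholder values are assumptions of this sketch .

```python
def fisher_info_x(x_c, F, B, eps=1e-6, **kw):
    """Fisher information for the source position x_c, assuming independent
    Poisson counts with means lambda_i(x_c): I(x_c) = sum_i lam_i'^2 / lam_i.
    The derivative of lambda_i with respect to x_c is taken numerically."""
    lam = pixel_means(x_c, F, B, **kw)
    dlam = (pixel_means(x_c + eps, F, B, **kw)
            - pixel_means(x_c - eps, F, B, **kw)) / (2.0 * eps)
    return np.sum(dlam**2 / lam)

sigma_CR = 1.0 / np.sqrt(fisher_info_x(0.03, 3000.0, 300.0))  # CR bound (same units as x_c)
```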
concerning achievability, we demonstrate that for astrometry ( i.e. , assuming is known ) there is no estimator that achieves the cr bound in any observational regime we demonstrate that in the low snr limit the ls estimator can asymptotically approach the cr bound . ] . the log - likelihood function associated to equation ( [ eq_pre_5 ] ) in this case is given by : and we have the following result : [ pro_achie_astrometry ] for any fixed and unknown parameter , and any unbiased estimator where follows a poisson pmf ( hereafter , to shorten notation ) from equation ( [ eq_pre_5 ] ) .( the proof is presented in appendix [ proof_pro_achie_astrometry ] ) .the non - achievability condition imposed by proposition [ pro_achie_astrometry ] supports the adoption of _ alternative criteria _ for position estimation , being ml and the classical ls two of the most commonly adopted approaches .the ml estimate of the position is obtained through the following rule : where `` '' represents the argument that maximizes the expression , while is a generic variable representing the astrometric position . imposing the first order condition on this optimization problem, it reduces to satisfying the condition , and , consequently , we can work with the general expression given by equation ( [ eq_proof_pro_achie_astrometry_2 ] ) .we note that a well - known statistical result indicates that in the case of independent and identically distributed samples the ml approach is asymptotically unbiased and efficient ( i.e. , it achieves the cr bound when the number of observations goes to infinity ( * ? ? ?* chapter 7.5 ) ) .however for the ( still independent ) but non - identically distributed setting of astrometry described by equation ( [ eq_pre_5 ] ) , this asymptotic result , to the best of our knowledge , has not been proven , and remains an open problem . on the other hand , a version of the ls estimator ( given the model presented in section [ sub_sec_astro_photo ] )corresponds to the solution of : with and where is given by equation ( [ eq_pre_3 ] ) , then the probability mass function for each individual observation would have been given by .in this case the log - likelihood function ( i.e. , the equivalent of equation ( [ log - like ] ) ) would be given by . therefore , in this scenario , finding the maximum of the log - likelihood would be the same as finding the minimum of the ls , as in equation ( [ eq_subsec_mse_of_ls_1 ] ) .this is a well - established result , described in many statistical books . ] . in a previous paper ( * ? ? ?* section 5 ) , we have carried out numerical simulations using equations ( [ ml1 ] ) and ( [ eq_subsec_mse_of_ls_1 ] ) , and have demonstrated that both approaches are reasonable .however , an inspection of ( * ? ? ?* table 3 ) suggests that the ls method exhibits a loss of optimality at high - snr in comparison with either the ( poisson variance- ) weighted ls or the ml method .this motivates a deeper study of the ls method , to properly understand its behavior and limitations in terms of its mse and possible statistical bias , which is the focus of the following section .the solution to equation ( [ eq_subsec_mse_of_ls_1 ] ) is non - linear ( see figure [ fig_behave ] ) , and it does not have a closed - form expression .consequently , a number of iterative approaches have been adopted ( see , e.g. 
, ) to solve or approximate .hence as is implicit , it is not possible to compute its mean , its variance , nor its estimation error directly .we also note that , since we will be mainly analyzing the behavior of , none of the caveats concerning the properness of the likelihood function ( equation ( [ eq_pre_5 ] ) ) raised in section [ sub_sec_astro_photo ] are relevant in what follows , except for what concerns the adequacy of equation ( [ eq_pre_2b ] ) , which we take as a valid description of the underlying flux distribution . the problem of computing the mse of an estimator that is the solution of an optimization problem has been recently addressed by using a general framework .their basic idea was to provide sufficient conditions on the objective function , in our case , to derive a good approximation for . based on this idea, we provide below a refined result ( specialized to our astrometry problem ) , which relaxes one of the idealized assumptions proposed in ( * ? ? ?* their equation ( 5 ) ) , and which is not strictly satisfied in our problem ( see remark [ remark2 ] in section [ anaint ] ) . as a consequence ,our result offers upper and lower bounds for the bias and mse of , respectively .[ ls_performances_bounds ] let us consider a fixed and unknown parameter , and that .in addition , let us define the residual random variable , and . ] .if there exists such that , then : where and ( the proof is presented in appendix [ proof_ls_performances_bounds ] ) .[ rem_1 ] theorem [ ls_performances_bounds ] is obtained under a bounded condition ( with probability one ) over the random variable . to verify whether this condition is actually met , it is therefore important to derive an explicit expression for .starting from equation ( [ eq_subsec_mse_of_ls_1 ] ) it follows that : \ ] ] and , consequently , .therefore : .\ ] ] then , is not bounded almost surely , since could take any value in with non - zero probability .however , , and its variance in closed - form is : .\ ] ] from this , we can evaluate how far we are from the bounded assumption of theorem [ ls_performances_bounds ] . to do this ,we can resort to _ markov s inequality _ , where .then , for any , we can characterize a critical such that . using this result and theorem [ ls_performances_bounds ], we can bound the conditional bias and conditional mse of using equations ( [ eq_subsec_mse_of_ls_2a ] ) and ( [ eq_subsec_mse_of_ls_2b ] ) , respectively . in section [ subsec_empirical ], we conduct a numerical analysis , where it is shown that the bounded assumption for is indeed satisfied for a number of important realistic experimental settings in astrometry ( with very high probability ) .[ remark2 ] concerning the mse of the ls estimator , equation ( [ eq_subsec_mse_of_ls_2b ] ) offers a lower and upper bound in terms of a _ nominal value _ ( given by equation ( [ nominal ] ) ) , and an interval around it . 
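as a concrete illustration of the ls rule in equation ( [ eq_subsec_mse_of_ls_1 ] ) , the ml rule of equation ( [ ml1 ] ) , and the bounded - residual check discussed in remark [ rem_1 ] , the sketch below ( continuing the earlier python snippets ) computes both estimates numerically and then estimates , by monte carlo , the spread of the relative fluctuation of the second derivative of the objective function ; identifying that fluctuation with the residual variable of the theorem is an assumption made here for illustration , and the critical value follows from the markov / chebyshev step of remark [ rem_1 ] .

```python
from scipy.optimize import minimize_scalar

def ls_estimate(counts, F, B, **kw):
    """argmin_x sum_i (I_i - lambda_i(x))^2 -- no closed form, solved numerically."""
    J = lambda x: np.sum((counts - pixel_means(x, F, B, **kw))**2)
    return minimize_scalar(J, bounds=(-2.0, 2.0), method="bounded").x

def ml_estimate(counts, F, B, **kw):
    """argmax_x sum_i [I_i ln lambda_i(x) - lambda_i(x)] (Poisson log-likelihood)."""
    nll = lambda x: np.sum(pixel_means(x, F, B, **kw)
                           - counts * np.log(pixel_means(x, F, B, **kw)))
    return minimize_scalar(nll, bounds=(-2.0, 2.0), method="bounded").x

def critical_delta(x_c, F, B, eps_prob=0.01, n_mc=2000, h=1e-3, **kw):
    """Monte-Carlo sketch of the bounded condition of theorem 1.

    W is taken here as the relative fluctuation of J''(I^n, x_c) about its mean
    (an assumption, consistent with |W| <= delta < 1 in the theorem); the
    critical delta follows from P(|W| >= delta) <= Var(W) / delta^2 = eps_prob."""
    rng = np.random.default_rng(1)
    lam = pixel_means(x_c, F, B, **kw)
    def Jpp(counts):
        J = lambda x: np.sum((counts - pixel_means(x, F, B, **kw))**2)
        return (J(x_c + h) - 2.0 * J(x_c) + J(x_c - h)) / h**2   # numerical J''
    vals = np.array([Jpp(rng.poisson(lam)) for _ in range(n_mc)])
    W = vals / vals.mean() - 1.0
    return np.sqrt(W.var() / eps_prob)
```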
in the interesting regime where ( this regime approaches the ideal case studied by in which case the variable becomes deterministic ) , we have that is an unbiased estimator , as shown by equation ( [ eq_subsec_mse_of_ls_2a ] ) , and , furthermore : \[ \mathbb{e}_{i^n \sim f_{x_c } } \left\ { ( \tau_{ls}(i^n ) - x_c ) ^2 \right\ } = \sigma^2_{ls}(n ) \geq \sigma_{cr}^2 , \] from equations ( [ eq_subsec_mse_of_ls_2b ] ) and ( [ cr_scalar ] ) . thus , it is interesting to provide an explicit expression for which will be valid for the mse of the ls method in this regime . first we note that , , and therefore : therefore , which implies that : in the next section , we provide a numerical analysis to compare the predictions of equation ( [ eq_subsec_mse_of_ls_4 ] ) with the cr bound computed through equation ( [ fi_astrometry ] ) . we also analyze if this nominal value is representative of the performance of the ls estimator . ( idealized low snr regime ) [ rm_ideal_low_snr ] following the ideal scenario where , we explore the weak signal case in which , considering a constant background across the pixels , i.e. , for all . then adopting equation ( [ eq_subsec_mse_of_ls_4 ] ) we have that : on the other hand , from equation ( [ fi_astrometry ] ) we have that . remarkably in this context , the ls estimator is optimal in the sense that it approaches the cr bound asymptotically when a weak signal is observed , even though we have demonstrated that the cr can not be exactly reached in astrometry . ] . this result is consistent with the numerical simulations in ( * ? ? ? * table 3 ) . ( idealized high snr regime ) [ rm_ideal_high_snr ] for the high snr regime , assuming again that , we consider the case where for all . in this case : \[ \sigma^2_{ls}(n ) \approx \left [ \tilde{f } \ , \frac { \left ( \sum_{i=1}^n ( g_i'(x_c))^2 \right)^2 } { \sum_{i=1}^n ( g_i'(x_c))^2 \ , g_i(x_c ) } \right]^{-1 } \mbox { and } \sigma_{cr}^2 \approx \left [ \tilde{f } \sum_{i=1}^n ( g_i'(x_c))^2/g_i(x_c ) \right]^{-1 } . \] therefore , in this strong signal scenario , there is no match between the variance of the ls estimator and the cr bound , and consequently , we have that . to provide more insight into the nature of this performance gap , in the next proposition we offer a closed - form expression for this mismatch in the high - resolution scenario where the source is oversampled , and the size of the pixel is a small fraction of the width parameter of the psf in equation ( [ eq_pre_1 ] ) . [ pro_gap_cr_ls_hsnr ] assuming the idealized high snr regime , if we have a gaussian - like psf and , then : \[ \sigma^2_{ls}(n ) \approx \frac{8}{3\sqrt{3 } } \ , \sigma_{cr}^2 \approx 1.54 \ , \sigma_{cr}^2 \] ( the proof is presented in appendix [ proof_pro_gap_cr_ls_hsnr ] ) . equation ( [ eq_subsec_mse_of_ls_7 ] ) shows that there is a very significant performance gap between the cr bound and the mse of the ls estimator in the high snr regime . this result should motivate the exploration of alternative estimators that could approach more closely the cr bound in this regime . in this section we explore the implications of theorem [ ls_performances_bounds ] in astrometry through the use of simulated observations . first , we analyze if the bounded condition over adopted in theorem [ ls_performances_bounds ] is a valid assumption for the type of settings considered in astronomical observations . after that , we assess how efficient the ls estimator proposed in equation ( [ eq_subsec_mse_of_ls_1 ] ) is as a function of the snr and pixel resolution , adopting for that purpose theorem [ ls_performances_bounds ] and proposition [ pro_fi_photometry_astrometry ] . to perform our simulations , we adopt some realistic design variables and astronomical observing conditions to model the problem .
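the low- and high - snr behaviour discussed in the two remarks above can be reproduced numerically ; in the sketch below the nominal ls variance is taken in the standard m - estimator ( `` sandwich '' ) form , var_ls ~ sum_i lam_i'^2 lam_i / ( sum_i lam_i'^2 )^2 , which is an assumption of this sketch but reduces to the low- and high - snr expressions quoted in the text .

```python
def nominal_ls_and_cr(x_c, F, B, eps=1e-6, **kw):
    """Nominal LS variance (sandwich form, an assumption consistent with the
    text's limiting expressions) versus the CR bound for the same pixel array."""
    lam = pixel_means(x_c, F, B, **kw)
    dlam = (pixel_means(x_c + eps, F, B, **kw)
            - pixel_means(x_c - eps, F, B, **kw)) / (2.0 * eps)
    var_ls = np.sum(dlam**2 * lam) / np.sum(dlam**2)**2
    var_cr = 1.0 / np.sum(dlam**2 / lam)
    return var_ls, var_cr

for flux in (200.0, 2000.0, 20000.0):               # weak to strong source (illustrative)
    v_ls, v_cr = nominal_ls_and_cr(0.03, flux, 300.0)
    print(flux, np.sqrt(v_ls / v_cr))               # ratio -> 1 at low SNR, > 1 at high SNR
```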
for the psf , various analytical and semi - empirical forms have been proposed , see for instance the ground - based model in and the space - based model in . for our analysis , we adopt in equation ( [ eq_pre_3 ] ) the gaussian psf where , and where is the width of the psf , assumed to be known . this psf has been found to be a good representation for typical astrometric - quality ground - based data . in terms of nomenclature , the , measured in arcsec , denotes the _ full - width at half - maximum _ ( ) parameter , which is an overall indicator of the image quality at the observing site to a constant value .however , in many cases the image quality changes as a function of position in the field - of - view due to optical distortions , specially important in large focal plane arrays .for example , in the case of the sdss ( which consists of 30 ccds of 2048 pix each , covering 2.3 on the sky at a resolution of 0.396 arcsec / pix ) , the may vary up to 15% from center to corner of _ one _ detector ( ( * ? ? ?* section 4.1 ) ) .the impact on the cr bound of these changes has been discussed in some detail by ( * ? ? ?* section 3.4 ) . ] .the background profile , represented by , is a function of several variables , like the wavelength of the observations , the moon phase ( which contributes significantly to the diffuse sky background ) , the quality of the observing site , and the specifications of the instrument itself .we will consider a uniform background across pixels underneath the psf , i.e. , for all . to characterize the magnitude of , it is important to first mention that the detector does not measure photon counts ( or , actually , photo- ) directly , but a discrete variable in `` _ analog to digital units _ ( adus ) '' of the instrument , which is a linear proportion of the photon counts .this linear proportion is characterized by the gain of the instrument in units of /adu . is just a scaling factor , where we can define and as the brightness of the object and background , respectively , in the specific adus of the instrument .then , the background ( in adus ) depends on the pixel size as follows : where is the ( diffuse ) sky background in adu / arcsec ( if is measured in arcsec ) , while and , both measured in , model the dark - current and read - out - noise of the detector on each pixel , respectively .note that the first component in equation ( [ eq_subsec_empirical_1 ] ) is attributed to the site , and its effect is proportional to the pixel size . on the other hand ,the second component is attributed to errors of the integrating device ( detector ) , and it is pixel - size independent .this distinction is central when analyzing the performance as a function of the pixel resolution of the array ( see details in ( * ? ? ?more important is the fact that in typical ground - based astronomical observation , long exposure times are considered , which implies that the background is dominated by diffuse light coming from the sky ( the first term in the rhs of equation ( [ eq_subsec_empirical_1 ] ) ) , and not from the detector ( * ? ? ?4 ) . for the experimental conditions, we consider the scenario of a ground - based station located at a good site with clear atmospheric conditions and the specifications of current science - grade ccds , where adu / arcsec , , , arcsec and /adu ( with these values adu for arcsec using equation ( [ eq_subsec_empirical_1 ] ) ) . 
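the dependence of the background per pixel on the pixel size , as described by equation ( [ eq_subsec_empirical_1 ] ) , can be sketched as follows ; the sky , dark - current , read - out - noise and gain values used here are placeholders , not the paper s adopted numbers .

```python
def background_per_pixel(dx, f_s=1500.0, D=0.0, RON=5.0, G=2.0):
    """B = f_s * dx + (D + RON**2) / G in ADU per pixel: the first (sky) term
    scales with the pixel size dx, the detector term does not.
    All numerical values are illustrative placeholders."""
    return f_s * dx + (D + RON**2) / G
```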
in terms of scenarios of analysis , we explore different pixel resolutions for the ccd array $ ] measured in arcsec .note that a change in will be reflected upon the limits of the integral to compute the pixel response function ( see equation ( [ eq_pre_3 ] ) ) as well as in the calculation of the background level per pixel , according to equation ( [ eq_subsec_empirical_1 ] ) .therefore , a change in is not only a design feature of the detector device , but it implies also a change in the distribution of the background underneath the psf .the impact of this covariant device - and - atmosphere change in the cr bound is explained in detail in ( * ? ? ?* section 4 , see also their figure 2 ) . in our simulations , we also consider different signal strengths , measured in photo- , corresponding to snr respectively and there is a weak dependency of snr on the pixel size , see equation ( 28 ) in . ] .note that increasing implies increasing the snr of the problem , which can be approximately measured by the ratio can be used as a proxy for snr , in what follows we have used the exact expression to compute this quantity , as given by equation ( 28 ) in . ] . on a given detector plus telescope setting ,these different snr scenarios can be obtained by changing appropriately the exposure time ( open shutter ) that generates the image . to validate how realistic is the bounded assumption over in our problem, we first evaluate the variance of from equation ( [ eq_subsec_mse_of_ls_3b ] ) , this is presented in figure [ fig1 ] for different snr regimes and pixel resolutions in the array .overall , the magnitudes are very small considering the admissible range for stipulated in theorem [ ls_performances_bounds ] .also , given that has zero mean , the bounded condition will happen with high probability . complementing this, figure [ fig2 ] presents the critical across different pixel resolutions and snr regimes realizations of the random variable for the different snr regimes and pixel resolutions . ] . for this, we fix a small value of ( in this case ) , and calculate such that with probability . from the curves obtained , we can say that the bounded assumption is holding ( values of in ) for a wide range of representative experimental conditions and , consequently , we can use theorem [ ls_performances_bounds ] to provide a range on the performance of the ls estimator . note that the idealized condition of is realized only for the very high snr regime ( strong signals ) .we adopt equation ( [ eq_subsec_mse_of_ls_2b ] ) which provides an admissible range for the mse performance of the ls estimator . for that we use the critical in figure [ fig2 ] .these curves for the different snr regimes and pixel resolutions are shown in figure [ fig3 ] . following the trend reported in figure [ fig2 ] ,the nominal value is a precise indicator for the ls estimator performance for strong signals ( matching the idealized conditions stated in remark [ rm_ideal_high_snr ] ) , while on the other hand , theorem [ ls_performances_bounds ] does not indicate whether is accurate or not for low snr , as we deviate from the idealized case elaborated in remark [ rm_ideal_low_snr ] .nevertheless , we will see , based on some complementary empirical results reported in what follows , that even for low snr , the nominal predicts the performance of the ls estimator quite well . 
assuming for a moment the idealized case in which , we can reduce the performance analysis to measuring the gap between the nominal value predicted by theorem [ ls_performances_bounds ] ( equation ( [ nominal ] ) ) , and the cr bound in proposition [ pro_fi_photometry_astrometry ] .figure [ fig4 ] shows the relative difference given by . from the figure we can clearly see that , in the low snr regime, the relative performance differences tends to zero and , consequently , the ls estimator approaches the cr bound , and it is therefore an efficient estimator .this matches what has been stated in remark [ rm_ideal_low_snr ] . on the other hand for high snr, we observe a performance gap that is non negligible ( up to relative difference above the cr for , and above the cr for for arcsec ) .this is consistent with what has been argued in remark [ rm_ideal_high_snr ] .note that in this regime , the idealized scenario in which is valid ( see figure [ fig2 ] ) and , thus , , which is not strictly the case for the low snr regime ( although see figure [ fig7 ] , and the discussion that follows ) . to refine the relative performance analysis presented in figure [ fig4 ] , figure [ fig : inter ]shows the feasible range ( predicted by theorem [ ls_performances_bounds ] ) of performance gap considering the critical obtained in figure [ fig2 ] .we report four cases , from very low to very high snr regimes , to illustrate the trends . from this figure, we can see that the deviations from the nominal value are quite significant for the low snr regime , and that , from this perspective , the range obtained from theorem [ ls_performances_bounds ] is not sufficiently small to conclude about the goodness of the ls estimator in this context . on the other hand , in the high snr regime , the nominal comparison can be considered quite precise .the results of the previous paragraph motivate an empirical analysis to estimate the performance of the ls estimator empirically from the data , with the goal of resolving the low snr regime illustrated in figure [ fig : inter ] . for this purpose , realizations were considered for all the snr regimes and pixel sizes , and the performance of the ls estimator was computed using the empirical mse .we used a large number of samples to guarantee convergence to the true mse error as a consequence of the law of large numbers .remarkably , we observe in all cases that the estimated performance matches quite tightly the nominal characterized by theorem [ ls_performances_bounds ] .we illustrate this in figure [ fig7 ] , which considers the most critical low snr regime .consequently , from this numerical analysis , we can resolve the ambiguity present in the low snr regime , and conclude that the comparison with the nominal result reported in figure [ fig4 ] , and the derived conclusion about the ls estimator in the low and high snr regimes , can be considered valid .our simulations also show that the ls estimator is unbiased .overall , these results suggests that theorem [ ls_performances_bounds ] could be improved , perhaps by imposing milder sufficient conditions , in order to prove that is indeed a precise indicator of the mse of the ls estimator at any snr regime . 
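the empirical check described in this paragraph can be sketched as follows , reusing the earlier snippets ; the number of realizations and the observational values are again illustrative .

```python
def empirical_mse(x_c, F, B, n_mc=5000, **kw):
    """Empirical MSE (and bias) of the LS estimator from Monte-Carlo
    realizations of the Poisson counts, to be compared against the nominal
    variance and the CR bound computed above."""
    rng = np.random.default_rng(2)
    lam = pixel_means(x_c, F, B, **kw)
    est = np.array([ls_estimate(rng.poisson(lam), F, B, **kw) for _ in range(n_mc)])
    return np.mean((est - x_c)**2), est.mean() - x_c

mse, bias = empirical_mse(0.03, 300.0, 300.0)   # a low-SNR example with placeholder values
```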
for completeness, we show in figure [ fig_behave ] the behavior of the log - likelihood function ( computed using equation ( [ log - like ] ) ) and the ls function ( computed using equation ( [ eq_subsec_mse_of_ls_1 ] ) ) for our two extreme snr cases of our numerical simulations , namely snr= 12 and snr= 230 .in these figures the true astrometric position is at arcsec (= 400 pix with = 0.2 arcsec ) .these figures clearly show ( particularly the one at low snr ) the non - linear nature of both objective functions .our work provides results to characterize the performance of the widely used ls estimator as applied to the problem of astrometry when derived from digital discrete array detectors .the main result ( theorem [ ls_performances_bounds ] ) provides in closed - form a nominal value ( ) , and a range around it , for the mse of the ls estimator as well as its bias . from the predicted nominal value, we analyzed how efficient is the ls estimator in comparison with the mv cr bound .in particular , we show that the ls estimator is efficient in the regime of low snr ( a point source with a weak signal ) , in the sense that it approximates very closely the cr bound . on the other hand, we show that at high snr there is a significant gap in the performance of the ls estimator with respect to the mv bound .we believe that this sub - optimal behavior is caused by the poissonian nature of the detection process , in which the variance per pixel increases as the signal itself .since the ls method is very sensitive to outliers , the large excursions caused by the large pixel intensity variance at high snr make the ls method less efficient ( from the point of view of its mse ) , than allowed by the cr bound .these performance analyses complement and match what has been observed in photometric estimation , where only in the low snr regime the ls estimator has been shown to asymptotically achieve the cr bound .while our results are valid for an idealized linear ( one - dimensional ) array detector where intra - pixel response changes are neglected , and where flat - fielding is achieved with very high accuracy , our findings should motivate the exploration of alternative estimators in the high snr observational regime .regarding this last point , we note that an inspection of ( * ? ? ?* table 3 ) suggests that either a ( poisson variance- ) weighted ls or a ml approach do not exhibit this loss of optimality at high - snr , and should be preferred to the unweighted ls analyzed in this paper .this effect is clearly illustrated in figure [ fig_mlvsls ] , where we present a comparison between the standard deviation of the ls and ml methods derived from numerical simulations in the very high snr regime ( where the gap between cr and the ls is most significant ) .motivated by these results , a detailed analysis of the ml method will be presented in a forthcoming paper .we are indebted to an anonymous referee that read the draft carefully and in great detail , providing us with several suggestions and comments that have improved the legibility of the text significantly .in particular his / her suggestions have lead to the introduction of figures 7 , 8 and the corresponding discussion in the body of the paper .this material is based on work supported by a grant from conicyt - chile , fondecyt # 1151213 .in addition , the work of j. f. silva and m. orchard is supported by the advanced center for electrical and electronic engineering , basal project fb0008 .j. f. 
silva acknowledges support from a conicyt - fondecyt grant # 1140840 , and r. a. mendez acknowledges support from project ic120009 millennium institute of astrophysics ( mas ) of the iniciativa cientfica milenio del ministerio de economa , fomento y turismo de chile .r.a.m also acknowledges eso / chile for hosting him during his sabbatical - leave during 2014 .we use the well - known fact that the cr bound is achieved by an unbiased estimator , if and only if , the following decomposition holds where is the likelihood of the observation given , is a function of and alone ( i.e. , it does dot depend on the data ) , while is a function of the data exclusively ( i.e. , it does not depend on the parameter ) .furthermore , if the achievability condition in equation ( [ eq_proof_pro_achie_astrometry_1 ] ) is satisfied , then , and is an unbiased estimator of that achieves the cr bound .the proof follows by contradiction , assuming that equation ( [ eq_proof_pro_achie_astrometry_1 ] ) holds . first using equation ( [ eq_pre_5 ] ) , we have that : = \sum_{i=1}^{n } \frac{i_i}{\lambda_i(x_c ) } \cdot \frac{d \lambda_i(x_c)}{d x_c},\end{aligned}\ ] ] the last equality comes from the fact that from the assumption in equation ( [ eq_pre_4 ] ). then replacing equations ( [ eq_proof_pro_achie_astrometry_2 ] ) and ( [ fi_astrometry ] ) in equation ( [ eq_proof_pro_achie_astrometry_1 ] ) : which contradicts the assumption that should be a function of the data alone . furthermore , if we consider the extreme high snr regime , where for all , and the low snr regime , where for all , it follows that : ^{-1 } + x_c\ ] ] and ^{-1 } + x_c,\ ] ] respectively .therefore a contradiction remains even in these extreme snr regimes .recalling from equation ( [ eq_pre_3 ] ) that and assuming a gaussian psf of the form ( see section [ subsec_empirical ] ) by the mean value theorem and the hypothesis of small pixel ( ) , it is possible to state that : and then we have that : with the above approximation we have that : the term inside the summation in equation ( [ eq_proof_gap_3b ] ) can be approximated by an integral due to the small - pixel hypothesis . assuming that the source is well sampled by the detector ( see section [ sub_sec_astro_photo ] , equation ( [ eq_pre_4 ] ) ) we can obtain that : where equation ( [ eq_proof_gap_4b ] ) follows from the fact that the term inside the integral in equation ( [ eq_proof_gap_4 ] ) corresponds to the second moment of a normal random variable of mean and variance . by the same set of arguments used to approximate in equation ( [ eq_proof_gap_4b ] ) , we have that : where equation ( [ s_aprox ] ) follows from the fact that the term inside the integral in equation ( [ eq_proof_gap_5b ] ) is the second moment of a normal random variable of mean and variance .finally , for we proceed again in the same way , namely : where equation ( [ t_aprox ] ) follows from the fact that the term inside the integral in equation ( [ f_aprox ] ) corresponds to the second moment of a normal random variable of mean and variance . 
then adopting equations ( [ eq_proof_gap_4b ] ) , ( [ s_aprox ] ) and ( [ t_aprox ] ) in equations ( [ eq_subsec_mse_of_ls_4 ] ) and ( [ fi_astrometry ] ) , respectively , and assuming a uniform background underneath the psf , we have that : \[ \sigma^2_{ls}(n ) \approx \frac{\sigma^2}{\tilde{f } } \ , \frac{8}{3\sqrt{3 } } \label{ap_ls_f } \label{eq_proof_gap_7b } \] and \[ \sigma_{cr}^2 \approx \frac{\sigma^2}{\tilde{f } } . \label{eq_proof_gap_8 } \] in the last step in equation ( [ eq_proof_gap_7b ] ) and the last step in equation ( [ eq_proof_gap_8 ] ) , we have used the assumption that . we note that the last step in equation ( [ eq_proof_gap_8 ] ) corresponds to the second line of equation ( 45 ) in .
bastian , u. 2004 . gaia technical note , gaia - c3-tn - ari - bas-020 ( http://www.cosmos.esa.int/web/gaia/public-dpac-documents ) . available at http://www.rssd.esa.int/sys/docs/ll_transfers/project=pubdb&id=2939027.pdf ( last accessed on april 2015 ) .
najim , m. 2008 , modeling , estimation and optimal filtering in signal processing , chapter a , 335 - 340 . available at http://onlinelibrary.wiley.com/book/10.1002/9780470611104 , last accessed on june 26th , 2015 .
figure [ fig1 ] caption : the variance of the residual variable ( dimensionless ) as given by equation ( [ eq_subsec_mse_of_ls_3b ] ) , as a function of the pixel resolution ( arcsec ) for different realistic snr scenarios ( function of ) encountered in ground - based astronomical observations . since the admissible range for is the interval ( 0,1 ) , the small computed values indicate that the bounded assumption in theorem 1 can be considered as valid under these conditions .
figure [ fig2 ] caption : the critical ( dimensionless ) such that . in all the scenarios ( snr , and ) , realizations of the random variable are used to estimate the probability distribution for , from frequency counts . as decreases , we have a smaller bias ( see equations ( [ eq_subsec_mse_of_ls_2a ] ) and ( [ bias ] ) ) and a narrower range for the mse of the ls estimator ( equation ( [ eq_subsec_mse_of_ls_2b ] ) ) .
figure [ fig4 ] caption : relative difference between the nominal value in theorem [ ls_performances_bounds ] ( equation ( [ nominal ] ) ) and the cr bound in proposition [ pro_fi_photometry_astrometry ] ( equation ( [ fi_astrometry ] ) ) . results are reported for different snr and pixel sizes . a significant performance gap between the ls technique and the cr bound is found for / ( good sampling of the psf ) at high snr , indicating that , in this regime , the ls method is sub - optimal , in agreement with proposition [ pro_gap_cr_ls_hsnr ] ( see also equation ( [ eq_subsec_mse_of_ls_7 ] ) ) . this gap becomes monotonically smaller as the snr decreases .
figure [ fig7 ] caption : the nominal value , the performance range stipulated in theorem [ ls_performances_bounds ] , and the empirical estimation of the mse from simulations for a low snr regime . the fact that the simulations follow closely the nominal value , even at low snr , justifies the use of the nominal value given by equations ( [ nominal ] ) and ( [ eq_subsec_mse_of_ls_4 ] ) as a benchmark of the ls method at any snr .
figure [ fig_mlvsls ] caption : standard deviation derived from numerical simulations using the ml ( open circles , equation ( [ log - like ] ) ) and the ls method ( open squares , equation ( [ eq_subsec_mse_of_ls_1 ] ) ) for a high snr = 230 ( see , e.g. , right column of figure [ fig_behave ] ) , where the optimality loss ( performance gap ) of the ls method ( proposition [ pro_gap_cr_ls_hsnr ] ) in this regime is clearly seen . the solid line is the nominal value derived from our theorem ( equation ( [ nominal ] ) ) , while the dashed line is the cr limit , given by equation ( [ fi_astrometry ] ) . as we have shown ( section [ subsec_achie ] ) , the cr limit can not be reached in our astrometric setting , but our ml simulations ( open circles ) show that they can follow very closely this limit ( see also ( * ? ? ? * table 3 ) ) . a detailed analytical study of the optimality of the ml method will be presented in a forthcoming paper .
the approach of uses the fact that the objective function in equation ( [ eq_subsec_mse_of_ls_1 ] ) is twice differentiable , which is satisfied in our context . as a short - hand , if we denote by the ls estimator solution , then the first order necessary condition for a local optimum requires that . the other key assumption in is that is in a close neighborhood of the true value . in our case , this has to do with the quality of the pixel - based data used for the inference , which we assume offers a good estimation of the position ( see , e.g. , ) . then using a first order taylor expansion of around , the following key approximation can be adopted ( * ? ? ? * their equation ( 4 ) ) : where . if we consider , then from equation ( [ eq_proof_1 ] ) : the second step in the approximation proposed by is to bound by . for that we introduce the residual variable where . using the fact that is bounded almost surely ( see remark [ rem_1 ] , and section [ anal_bound ] ) : \[ \left| \frac{j'(i^n , x_c ) } { j''(i^n , x_c ) } - \frac{j'(i^n , x_c ) } { \mathbb{e}_{i^n\sim f_{x_c } } \left\ { j''(i^n , x_c ) \right\ } } \right| \leq \left| \frac{j'(i^n , x_c ) } { \mathbb{e}_{i^n\sim f_{x_c } } \left\ { j''(i^n , x_c ) \right\ } } \right| \cdot \max_{w \in ( -\delta , \delta ) } \left|1-\frac{1}{1+w } \right| \leq \frac { \left| j'(i^n , x_c ) \right| } { \mathbb{e}_{i^n\sim f_{x_c } } \left\ { j''(i^n , x_c ) \right\ } } \cdot \frac{\delta}{1-\delta } , \] the last step uses the fact that ( see remark [ rem_1 ] ) . on the other hand , _ jensen s inequality _ guarantees that : where the last inequality comes from equation ( [ eq_proof_3 ] ) . then we use that , and consequently . then from equations ( [ eq_proof_4 ] ) and ( [ eq_proof_2 ] ) , we have that : which leads to equation ( [ eq_subsec_mse_of_ls_2a ] ) . concerning the mse , from the hypothesis on we have that : almost surely . then taking the expected value in equation ( [ eq_proof_6 ] ) and using equation ( [ eq_proof_2 ] ) for the central term , it follows that : which concludes the result .
|
we characterize the performance of the widely - used least - squares estimator in astrometry in terms of a comparison with the cramr - rao lower variance bound . in this inference context the performance of the least - squares estimator does not offer a closed - form expression , but a new result is presented ( theorem [ ls_performances_bounds ] ) where both the bias and the mean - square - error of the least - squares estimator are bounded and approximated analytically , in the latter case in terms of a _ nominal value _ and an interval around it . from the predicted nominal value we analyze how efficient is the least - squares estimator in comparison with the minimum variance cramr - rao bound . based on our results , we show that , for the high signal - to - noise ratio regime , the performance of the least - squares estimator is significantly poorer than the cramr - rao bound , and we characterize this gap analytically . on the positive side , we show that for the challenging low signal - to - noise regime ( attributed to either a weak astronomical signal or a noise - dominated condition ) the least - squares estimator is near optimal , as its performance asymptotically approaches the cramr - rao bound . however , we also demonstrate that , in general , there is no unbiased estimator for the astrometric position that can precisely reach the cramr - rao bound . we validate our theoretical analysis through simulated digital - detector observations under typical observing conditions . we show that the _ nominal value _ for the mean - square - error of the least - squares estimator ( obtained from our theorem ) can be used as a benchmark indicator of the expected statistical performance of the least - squares method under a wide range of conditions . our results are valid for an idealized linear ( one - dimensional ) array detector where intra - pixel response changes are neglected , and where flat - fielding is achieved with very high accuracy .
|
general relativity ( henceforth ` gr ' ) differs markedly in many structural aspects from all other theories of fundamental interactions , which are all formulated as poincar invariant theories in the framework of special relativity ( henceforth ` sr ' ) .the characterisation of this difference has been a central theme not only for physicists , but also for philosophers and historians of science .einstein himself emphasised in later ( 1933 ) recollections the importance of his failure to formulate a viable special - relativistic theory of gravity for the understanding of the genesis of gr .any attempt to give such a characterisation should clearly include a precise description of the constraints that prevent gravity from also fitting into the framework of sr .in modern terminology , a natural way to proceed would be to consider fields according to mass and spin , discuss their possible equations , the inner consistency of the mathematical schemes so obtained , and finally their experimental consequences .since gravity is a classical , macroscopically observable , and long - ranged field , one usually assumes right at the beginning the spin to be integral and the mass parameter to be zero .the first thing to consider would therefore be a massless scalar field .what goes wrong with such a theory ?when one investigates this question , anticipating that something does indeed go wrong , one should clearly distinguish between the following two types of reasonings : * the theory is internally inconsistent . in a trivial sensethis may mean that it is mathematically contradictory , in which case this is the end of the story . on a more sophisticated levelit might also mean that the theory violates accepted fundamental physical principles , like , e.g. , that of energy conservation , without being plainly mathematically contradictory .* the theory is formally consistent and in accord with basic physical principles .however , it is refuted by experiments .note that , generically , it does not make much sense to claim both shortcomings simultaneously , since ` predictions ' of inconsistent theories should not be trusted .the question to be addressed here is whether special - relativistic theories of scalar gravity fall under the first category , i.e. whether they can be refuted on the basis of formal arguments alone without reference to specific experiments .many people think that it can , following a.einstein who accused scalar theories to * violate some form of the principle of universality of free fall , * violate energy conservation . 
the purpose of this paper is to investigate these statements in detail .we will proceed by the standard ( lagrangian ) methods of modern field theory and take what we perceive as the obvious route when working from first principles .as already stressed , the abandonment of scalar theories of gravity by einstein is intimately linked with the birth of gr , in particular with his conviction that general covariance must replace the principle of relativity as used in sr .i will focus on two historical sources in which einstein complains about scalar gravity not being adequate .one is his joint paper with marcel grossman on the so - called ` entwurf theory ' ( , vol.4 , doc.13 , henceforth called the ` entwurf paper ' ) , of which grossmann wrote the `` mathematical part '' and einstein the `` physical part '' .einstein finished with 7 , whose title asks : `` can the gravitational field be reduced to a scalar?''(german original : `` kann das gravitationsfeld auf einen skalar zurckgefhrt werden ? '' ) . in this paragraphhe presented a gedankenexperiment - based argument which allegedly shows that any special - relativistic scalar theory of gravity , in which the gravitational field couples exclusively to the matter via the trace of its energy - momentum tensor , necessarily violates energy conservation and is hence physically inconsistent .this he presented as plausibility argument why gravity has to be described by a more complex quantity , like the of the entwurf paper , where he and grossmann consider ` generally covariant ' equations for the first time .after having presented his argument , he ends 7 ( and his contribution ) with the following sentences , expressing his conviction in the validity of the principle of general covariance : [ quote : einstein1 ] ich mu freilich zugeben , da fr mich das wirksamste argument dafr , da eine derartige theorie [ eine skalare gravitationstheorie ] zu verwerfen sei , auf der berzeugung beruht , da die relativitt nicht nur orthogonalen linearen substitutionen gegenber besteht , sondern einer viel weitere substitutionsgruppe gegenber .aber wir sind schon desshalb nicht berechtigt , dieses argument geltend zu machen , weil wir nicht imstande waren , die ( allgemeinste ) substitutionsgruppe ausfindig zu machen , welche zu unseren gravitationsgleichungen gehrt . has to be abandoned rests on the conviction that relativity holds with respect to a much wider group of substitutions than just the linear - orthogonal ones .however , we are not justified to push this argument since we were not able to determine the ( most general ) group of substitutions which belongs to our gravitational equations . _ ] ( , vol.4 , doc.13 , p.323 ) the other source where einstein reports in more detail on his earlier experiences with scalar gravity is his manuscript entitled `` einiges ber die entstehung der allgemeinen relativittstheorie '' , dated june20th 1933 , reprinted in ( , pp.176 - 193 ) .there he describes in words ( no formulae are given ) how the ` obvious ' special - relativistic generalisation of the poisson equation , [ eq : newtongravity ] together with a ( slightly less obvious ) special - relativistic generalisation of the equation of motion , lead to a theory in which the vertical acceleration of a test particle in a static homogeneous vertical gravitational field depends on its initial horizontal velocity and also on its internal energy content . 
in his own words : [ quote : einstein2 ] solche untersuchungen fhrten aber zu einem ergebnis , das mich in hohem ma mitrauisch machte .gem der klassischen mechanik ist nmlich die vertikalbeschleunigung eines krpers i m vertikalen schwerefeld von der horizontalkomponente der geschwindigkeit unabhngig .hiermit hngt es zusammmen , da die vertikalbeschleunigung eines mechanischen systems bzw .dessen schwerpunktes in einem solchen schwerefeld unabhngig herauskommt von dessen innerer kinetischer energie .nach der von mir versuchten theorie war aber die unabhngigkeit der fallbeschleunigung von der horizontalgeschwindigkeit bzw .der inneren energie eines systems nicht vorhanden . dies pate nicht zu der alten erfahrung , da die krper alle dieselbe beschleunigung in einem gravitationsfeld erfahren .dieser satz , der auch als satz ber die gleichheit der trgen und schweren masse formuliert werden kann , leuchtete mir nun in seiner tiefen bedeutung ein .ich wunderte mich i m hchsten grade ber sein bestehen und vermutete , da in ihm der schlssel fr ein tieferes verstndnis der trgheit und gravitation liegen msse .an seiner strengen gltigkeit habe ich auch ohne kenntnis des resultates der schnen versuche von etvs , die mir wenn ich mich richtig erinnere erst spter bekannt wurden , nicht ernsthaft gezweifelt .nun verwarf ich den versuch der oben angedeuteten behandlung des gravitationsproblems i m rahmer der speziellen relativittstheorie als inadquat .er wurde offenbar gerade der fundamentalsten eigenschaft der gravitation nicht gerecht .[ ... ] wichtig war zunchst nur die erkenntnis , da eine vernnftige theorie der gravitation nur von einer erweiterung des relativittsprinzips zu erwarten war .the important insight at this stage was that a reasonable theory of gravitation could only be expected from an extension of the principle of relativity . 
_ ] ( einstein , 2005 , pp.178 - 179 ) einstein s belief , that scalar theories of gravity are ruled out , placed him in this respect in opposition to most of his colleagues , like nordstrm , abraham , mie , and laue , who took part in the search for a ( special- ) relativistic theory of gravity .( concerning nordstrms theory and the einstein - nordstrm interaction , compare the beautiful discussions by norton .some of them were not convinced , it seems , by einstein s inconsistency argument .for example , even after gr was completed , laue wrote a comprehensive review paper on nordstrms theory , thereby at least implicitly claiming inner consistency .remarkably , this paper of laue s is not contained in his collected writings .on the other hand , modern commentators seem to be content with a discussion of the key rle that einstein s arguments undoubtedly played in the development of gr and , in particular , the requirement of general covariance .in fact , already in his famous vienna lecture ( , vol.4 , doc.17 ) held on september 23rd 1913 , less than half a year after the submission of the entwurf paper , einstein admits the possibility to sidestep the energy - violation argument given in the latter , if one drops the relation between space - time distances as given by the minkowski metric on one hand , and physically measured times and lengths on the other .einstein distinguishes between `` coordinate distances '' ( german original : `` koordinatenabstand '' ) , measured by the minkowski metric , and `` natural distances '' ( german original : `` natrliche abstnde '' ) , as measured by rods and clocks ( , vol.4 , doc.17 , p.490 ) .the relation between these two notions of distance is that of a conformal equivalence for the underlying metrics , where the `` natural '' metric is obtained from the minkowski metric by multiplying it with a factor that is proportional to the square of the scalar gravitational potential .accordingly , the re - publication in january 1914 of the entwurf paper includes additional comments , the last one of which acknowledges this possibility to sidestep the original argument against special - relativistic scalar theories of gravity ( , vol.4 , doc.26 , p.581 ) .this is sometimes interpreted as a `` retraction '' by einstein of his earlier argument ( , vol.4 , doc.13 , p. 342 , editors comment [ 42 ] ) though einstein himself speaks more appropriately of `` evading '' or `` sidestepping '' ( german original : `` entgehen '' ) .in fact , einstein does not say that his original argument was erroneous , but rather points out an escape route that effectively changes the hypotheses on which it was based .indeed , einstein s re - interpretation of space - time distances prevents the poincar transformations from being isometries of space - time , though they formally remain symmetries of the field equations .the new interpretation therefore pushes the theory outside the realm of sr . hence einstein s original claim , that a special - relativistic scalar theory of gravity is inconsistent , is _ not _ withdrawn by that re - interpretation .unfortunately , einstein s recollections do not provide sufficient details to point towards a unique theory against which his original claim may be tested . 
but guided by einstein s remarks and simple first principles one can write down a special - relativistic scalar theory and check whether it really suffers from the shortcomings of the type mentioned by einstein .this we shall do in the main body of this paper .we shall find that , as far as its formal consistency is concerned , the theory is much better behaved than suggested by einstein .we end by suggesting another rationale ( than violation of energy conservation ) , which is also purely intrinsic to the theory discussed here , for going beyond minkowski geometry .in this section we show how to construct a special - relativistic theory for a scalar gravitational field , , coupled to matter .before we do so in a systematic manner , using variational methods in the form of a principle of stationary action , we will mention the obvious first and naive guesses for a poincaré invariant generalisation of formulae ( [ eq : newtongravity ] ) and point out their deficiencies .our conventions for the minkowski metric are ` mostly minus ' , that is , . given a worldline , , where is some arbitrary parameter , its derivative with respect to its eigentime , , is denoted by an overdot , , where . denotes the velocity of light in vacuum ( which we do not set equal to unity ) .there is an obvious way to generalise the left hand side of ( [ eq : newtongravity1 ] ) , namely to replace the laplace operator by minus ( due to our ` mostly minus ' convention ) the d'alembert operator : this is precisely what einstein reported : [ quote : einstein3 ] the simplest thing was , of course , to retain the laplacian scalar potential of gravity and to complete the poisson equation in an obvious way by a term differentiated with respect to time , so that the special theory of relativity was satisfied .( einstein , 2005 , p.177 ) also , the right hand side of ( [ eq : newtongravity1 ] ) needs to be replaced by a suitable scalar quantity ( is not a scalar ) . in sr the energy density is the -component of the energy - momentum tensor , which corresponds to a mass density .hence a sensible replacement for the right - hand side of ( [ eq : newtongravity1 ] ) is : so that ( [ eq : newtongravity1 ] ) translates to the replacement ( [ eq : rhotot ] ) is not discussed in einstein s 1933 recollections , but mentioned explicitly as the most natural one for scalar gravity in einstein s part of the entwurf paper ( , vol.4 , doc.13 , p.322 ) and also in his vienna lecture ( , vol.4 , doc.17 , p.491 ) . in both cases he acknowledges laue as being the one to draw his attention to as being a natural choice for the scalar potential s source .the next step is to generalise ( [ eq : newtongravity2 ] ) . with respect to this problem einstein remarks : [ quote : einstein4 ] the law of motion of the mass point in the gravitational field also had to be adapted to the special theory of relativity . here the path was less uniquely prescribed , since the inertial mass of a body could depend on the gravitational potential . indeed , this was to be expected on account of the law of the inertia of energy .( einstein , 2005 , p.177 ) it should be clear that the structurally obvious choice , for .
] can not work .four - velocities are normed , so that hence ( [ eq : supernaiveeqmot ] ) implies the integrability condition , saying that must stay constant along the worldline of the particle , with renders ( [ eq : supernaiveeqmot ] ) physically totally useless .the reason for this failure lies in the fact that we replaced the three independent equations ( [ eq : newtongravity2 ] ) by four equations .this leads to an over - determination , since the four - velocity still represents only three independent functions , due to the kinematical constraint ( [ eq : fourvelsquare ] ) .more specifically , it is the component parallel to the four - velocity of the four - vector equation ( [ eq : supernaiveeqmot ] ) that leads to the unwanted restriction . the obvious way out it to just retain the part of ( [ eq : supernaiveeqmot ] ) perpendicular to : [ eq : naiveeqmot ] where and is the one - parameter family of projectors orthogonal to the four - velocity , one at each point of the particle s worldline .hence , by construction , this modified equation of motion avoids the difficulty just mentioned .we will call the theory based on ( [ eq : fieldeq ] ) and ( [ eq : naiveeqmot ] ) the _ naive theory_. we also note that ( [ eq : naiveeqmot ] ) is equivalent to where is a spacetime dependent mass , given by here is a constant , corresponding to the value of at gravitational potential , e.g. , . we could now work out consequences of this theory .however , before doing this , we would rather put the reasoning employed so far on a more systematic basis as provided by variational principles .this also allows us to discuss general matter couplings and check whether the matter coupling that the field equation ( [ eq : fieldeq ] ) expresses is consistent with the coupling to the point particle , represented by the equation of motion ( [ eq : naiveeqmot ] ) .this has to be asked for if we wish to implement the equivalence principle in the following form : all forms of matter ( including test particles ) couple to the gravitational field in a universal fashion .we will see that in this respect the naive theory is not quite correct .we stress the importance of coupling schemes , without which there is no logical relation between the field equation and the equation of motion for ( test- ) bodies .this is often not sufficiently taken into account in discussions of scalar theories of gravity ; compare .let us now employ standard variational techniques to establish poincar - invariant equations for the scalar gravitational field , , and for the motion of a test particle , so that the principle of universal coupling is duly taken care of .we start by assuming the field equation ( [ eq : fieldeq ] ) .an action whose euler - lagrange equation is ( [ eq : fieldeq ] ) is easy to guess has the physical dimension of a squared velocity , that of length - over - mass .the pre - factor gives the right hand side of ( [ eq : actionfieldint ] ) the physical dimension of an action .the overall signs are chosen according to the general scheme for lagrangians : kinetic minus potential energy .] : where , given by the first term , is the action for the gravitational field and , given by the second term , accounts for the interaction with matter . to thiswe have to add the action for the matter , , which we only specify insofar as we we assume that the matter consists of a point particle of rest - mass , since in the sequel we never need to distinguish between rest- and dynamical mass. 
from now on will always refer to rest mass . ] and a ` rest ' of matter that needs not be specified further for our purposes here .hence ( rom = rest of matter ) , where we now invoke the principle of universal coupling to find the particle s interaction with the gravitational field. it must be of the form , where is the trace of the particle s energy momentum tensor .the latter is given by so that the particle s contribution to the interaction term in ( [ eq : actionfieldint ] ) is hence the total action can be written in the following form : by construction , the field equation that follows from this action is ( [ eq : fieldeq ] ) , where the energy momentum - tensor refers to the matter without the test particle ( the self - gravitational field of a _ test _particle is always neglected ) .the equations of motion for the test particle then turn out to be [ eq : particlemotion ] three things are worth remarking at this point : * the projector now appears naturally . * the difference between ( [ eq : naiveeqmot ] ) and ( [ eq : particlemotion ] ) is that in the latter it is rather than that drives the four acceleration .this ( only ) difference to the naive theory was imposed upon us by the principle of universal coupling , which , as we have just seen , determined the motion of the test particle .this difference is small for small , since , according to ( [ eq : particlemotion3 ] ) , .but it becomes essential if gets close to , where diverges and the equations of motion become singular .we will see below that the existence of the critical value is not necessarily a deficiency and that it is , in fact , the naive theory which displays an unexpected singular behaviour ( cf .section[sec : naivescalar ] ) . *the universal coupling of the gravitational field to matter only involves the trace of energy - momentum tensor of the latter . as a consequence of the tracelessness of the pure electromagnetic energy - momentum tensor, there is no coupling of gravity to the _ free _ electromagnetic field , like , e.g. , a light wave in otherwise empty space .a travelling electromagnetic wave will not be influenced by gravitational fields .hence this theory predicts no deflection of light - rays that pass the neighbourhoods of stars of other massive objects , in disagreement with experimental observations .note however that the interaction of the electromagnetic field with other matter will change the trace of the energy - momentum tensor of the latter .for example , electromagnetic waves trapped in a material box with mirrored walls will induce additional stresses in the box s walls due to radiation pressure .this will increase the weight of the box corresponding to an additional mass , where is the energy of the radiation field . in this sense_ bound _ electromagnetic fields _ do _ carry weight .let us now focus on the equations of motion specialised to static situations .that is , we assume that there exists some inertial coordinate system with respect to which and hence are static , i.e. , .we have [ prop : scaltheqmotnewtonianform1 ] for static potentials ( [ eq : particlemotion ] ) is equivalent to where here and below we write a prime for and use the standard shorthands , , , and .we write in the usual four - vector component notation : . using and , we have on one side with . and are , respectively , the spatial projections of parallel and perpendicular to the velocity . on the other hand , we have so that where and are the projections of the gradient parallel and perpendicular to respectively . 
equating ( [ eq : thmstatfieldproof1 ] ) and ( [ eq : thmstatfieldproof4 ] ) results in since ( [ eq:3dimeqmot1 ] ) is trivially implied by ( [ eq:3dimeqmot2 ] ) , ( [ eq:3dimeqmot2 ] ) alone is equivalent to ( [ eq : particlemotion ] ) in the static case , as was to be shown .einstein s second quote suggests that he also arrived at an equation like ( [ eq : scaltheqmotstat1 ] ) , which clearly displays the dependence of the acceleration in the direction of the gravitational field on the transversal velocity .we will come back to this in the discussion section. we can still reformulate ( [ eq : scaltheqmotstat1 ] ) so as to look perfectly newtonian ( i.e. equals a gradient field ) .this will later be convenient for calculating the periapsis precession ( cf .sections [ sec : peripressscalarmodel ] and [ sec : peripressnaivescalar ] ) .[ prop : scaltheqmotnewtonianform2 ] let be the rest - mass of the point particle . then ( [ eq : scaltheqmotstat1 ] ) implies where is an integration constant . scalar multiplication of ( [ eq : scaltheqmotstat1 ] ) with leads to which integrates to where is a constant . using this equation to eliminate the on the right hand side of ( [ eq : scaltheqmotstat1 ] ) the latter assumes the form ( [ eq : scaltheqmotstat2 ] )we recall that in quote2 scalar gravity was accused of violating a particular form of the principle of the universality of free fall , which einstein called `` the most fundamental property of gravitation '' . in this sectionwe will investigate the meaning and correctness of this claim in some detail. it will be instructive to compare the results for the scalar theory with that of a vector theory in order to highlight the special behaviour of the former , which , in a sense explained below , is just opposite to what einstein accuses it of .we also deal with the naive scalar theory for comparison and also to show aspects of its singular behaviour that we already mentioned above .suppose that with respect to some inertial reference frame with coordinates the gravitational potential just depends on .let at time a body be released at the origin , , with proper velocity , , and ( so as to obey ( [ eq : fourvelsquare ] ) ) . as usual is the ordinary velocity and .we take the gravitational field to point into the negative direction so that is a function of with positive derivative .note that for which we simply write with the usual abuse of notion ( i.e. taking to mean ) . finally , we normalise such that . the equations of motion ( [ eq : particlemotion1 ] ) now simply read [ eq : eqmotvertdrop ] the first integrals of the first three equations , keeping in mind the initial conditions , are further integration requires the knowledge of , that is , the horizontal motion couples to the vertical one if expressed in proper time .. ] fortunately , the vertical motion does _ not _ likewise couple to the horizontal one , that is , the right hand side of ( [ eq : eqmotvertdrop4 ] ) just depends on .writing it in the form immediately allows integration . for and ( so that ) we get from this the eigentime for dropping from to with follows by one further integration , showing already at this point its independence of the initial horizontal velocity . 
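this independence can also be checked numerically . the short sketch below integrates the free - fall equations with a fourth - order runge - kutta routine under two assumptions that are ours rather than a verbatim transcription of the formulae above : the four - acceleration is taken to be the projection , orthogonal to the four - velocity , of the gradient of the driving potential , and for the ` homogeneous field ' that potential is taken to be linear in the height ( units with c = 1 ) . with these assumptions the proper time of fall comes out the same for different initial horizontal velocities , while the coordinate time does not .

```python
import numpy as np

# assumed covariant equation of motion (our reconstruction, c = 1):
#   du^mu/dtau = (eta^{mu nu} - u^mu u^nu) d_nu Phi ,   Phi(z) = g*z ,
# metric signature (+,-,-,-); the overall sign is fixed so that the
# non-relativistic limit is d^2 z/dt^2 = -g (field pointing downwards).
ETA = np.diag([1.0, -1.0, -1.0, -1.0])
g = 1.0          # field strength (sets the unit of length and time)
h = 0.3          # drop height, in units of c^2/g

def accel(u):
    dPhi = np.array([0.0, 0.0, 0.0, g])          # lower-index gradient of Phi
    return ETA @ dPhi - u * (u @ dPhi)           # projection orthogonal to u

def drop(vx, dtau=1e-4):
    """integrate until the body has fallen by h; return (proper time, coordinate time)."""
    gamma = 1.0 / np.sqrt(1.0 - vx**2)
    u = np.array([gamma, gamma * vx, 0.0, 0.0])  # initial four-velocity
    x = np.zeros(4)                              # (t, x, y, z)
    tau = 0.0
    while x[3] > -h:
        # classical RK4 step for the coupled system (x, u)
        k1x, k1u = u, accel(u)
        k2x, k2u = u + 0.5*dtau*k1u, accel(u + 0.5*dtau*k1u)
        k3x, k3u = u + 0.5*dtau*k2u, accel(u + 0.5*dtau*k2u)
        k4x, k4u = u + dtau*k3u,     accel(u + dtau*k3u)
        x = x + (dtau/6)*(k1x + 2*k2x + 2*k3x + k4x)
        u = u + (dtau/6)*(k1u + 2*k2u + 2*k3u + k4u)
        tau += dtau
    return tau, x[0]

for vx in (0.0, 0.5, 0.9):
    tau, t = drop(vx)
    print(f"v_horizontal = {vx:3.1f}:  proper time = {tau:.4f},  coordinate time = {t:.4f}")
# expected: identical proper times (up to step size), coordinate times scaled by gamma.
```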
herewe wish to be more explicit and solve the equations of motion for the one - parameter family of solutions to ( [ eq : fieldeq ] ) for and a that just depends on , namely , for some constant that has the physical dimension of an acceleration .as already announced we normalise such that .these solutions correspond to what one would call a ` homogeneous gravitational field ' .but note that these solutions are _ not _ globally regular since exists only for and it is the quantity rather than that corresponds to the newtonian potential ( i.e. whose negative gradient gives the local acceleration ) . upon insertion of , ( [ eq: eqmotvertdrop6 ] ) can be integrated to give .likewise , from ( [ eq : eqmotvertdrop6 ] ) and ( [ eq : eqmotdropint ] ) we can form and which integrate to and respectively .the results are [ eq : scalsoleqmot ] for completeness we mention that direct integration of ( [ eq : eqmotdropint ] ) gives for the other component functions , taking into account the initial conditions : the relation between and is inversion of ( [ eq : scalsoleqmot1 ] ) and ( [ eq : scalsoleqmot2 ] ) leads , respectively , to the proper time , , and coordinate time , , that it takes the body to drop from to : [ eq : scaldroptimes ] the approximations indicated by refer to the leading order contributions for small values of ( and any value of ) .the appearance of in ( [ eq : scaldropcoordtime ] ) signifies the quadratic dependence on the initial horizontal velocity : the greater the inertial horizontal velocity , the longer the span in inertial time for dropping from to .this seems to be einstein s point ( cf .quote[quote : einstein2 ] ) .in contrast , there is no such dependence in ( [ eq : scaldropeigentime ] ) , showing the independence of the span in _ _eigen__time from the initial horizontal velocity .the eigentime for dropping into the singularity at is .in particular , it is finite , so that a freely falling observer experiences the singularity of the gravitational field in finite proper time .we note that this singularity is also present in the static spherically symmetric vacuum solution to ( [ eq : fieldeq ] ) , for which exists only for , i.e. . the newtonian acceleration diverges as approaches this value from above , which means that stars of radius smaller than that critical value can not exist because no internal pressure can support the infinite inward pointing gravitational pull . knowing gr, this type of behaviour does not seem too surprising after all .note that we are here dealing with a non - liner theory , since the field equations ( [ eq : fieldeq ] ) become non - liner if expressed in terms of according to ( [ eq : particlemotion3 ] ) .let us for the moment return to the naive theory , given by ( [ eq : fieldeq ] ) and ( [ eq : naiveeqmot ] ) .its equations of motion in a static and homogeneous vertical field are obtained from ( [ eq : eqmotvertdrop ] ) by setting .insertion into ( [ eq : eqmotvertdrop6 ] ) leads to .the expressions and are best determined directly by integrating using ( [ eq : eqmotvertdrop6 ] ) and ( [ eq : eqmotvertdrop9 ] ) .one obtains [ eq : naivescalsoleqmot ] the proper time and coordinate time for dropping from to are therefore given by [ eq : naivescaldroptimes ] where gives again the leading order contributions for small .. 
] the general relation between and is obtained by inserting ( [ eq : naivescalsoleqmot1 ] ) into the expression ( [ eq : eqmotdropint ] ) for and integration : note that ( [ eq : naivescaldropeigentime ] ) is again independent of the initial horizontal velocity , whereas ( [ eq : naivescaldropcoordtime ] ) again is not .moreover , the really surprising feature of ( [ eq : naivescaldropeigentime ] ) is that stays finite for .in fact , .so even though the solution is globally regular , the solution to the equations of motion is in a certain sense not , since the freely falling particle reaches the ` end of spacetime ' in finite proper time .this is akin to ` timelike geodesic incompleteness ' , which indicates singular space - times in gr .note that it need not be associated with a singularity of the gravitational field itself , except perhaps for the fact that the very notion of an infinitely extended homogeneous field is itself regarded as unphysical . for comparisonit is instructive to look at the corresponding problem in a vector ( spin1 ) theory , which we here do not wish to discuss in detail .it is essentially given by maxwell s equations with appropriate sign changes to account for the attractivity of like ` charges ' ( here masses ) .this causes problems , like that of runaway solutions , due to the possibility to radiate away negative energy .but the problem of free fall in a homogeneous gravitoelectric field can be addressed , which is formally identical to that of free fall of a charge and mass in a static and homogeneous electric field .so let us first look at the electrodynamical problem .the equations of motion ( the lorentz force law ) are where and all other components vanish . hence ,writing we have [ eq : vecteqmot ] with the same initial conditions as in the scalar case we immediately have ( [ eq : vecteqmot1 ] ) and ( [ eq : vecteqmot4 ] ) are equivalent to which twice integrated lead to where , and are four constants of integration .they are determined by and , leading to and also [ eq : vectsoleqmot ] using ( [ eq : vectmotreltaut ] ) and ( [ eq : vectmotsol1 ] ) to eliminate in favour of or respectively in ( [ eq : vectsoleqmot1 ] ) gives inverting ( [ eq : vectsoleqmot1 ] ) and ( [ eq : vectsoleqmot2 ] ) gives the expressions for the spans of eigentime and inertial time , respectively , that it takes for the body to drop from to : [ eq : vectdroptimes ] this is the full solution to our problem in electrodynamics , of which we basically just used the lorentz force law .it is literally the same in a vector theory of gravity , we just have to keep in mind that the ` charge ' is now interpreted as gravitational mass , which is to be set equal to the inertial mass , so that . then becomes equal to the ` gravitoelectric ' field strength , which directly corresponds to the strength of the scalar gravitational field .having said this , we can directly compare ( [ eq : vectdroptimes ] ) with ( [ eq : scaldroptimes ] ) . 
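this comparison can also be made numerically with one and the same integration routine : the scalar case with the projected - gradient equation of motion assumed in the sketch above , and the vector case with the lorentz - force form stated in the text ( constant downward ` gravitoelectric ' field , c = 1 ) . only the qualitative outcome matters here : the proper drop time is insensitive to the initial horizontal velocity in the scalar case but not in the vector case .

```python
import numpy as np

g, h = 1.0, 0.3          # field strength and drop height, units with c = 1

def accel_scalar(u):
    # assumed scalar-theory form: projection of grad Phi orthogonal to u, with Phi = g*z
    return np.array([-u[0]*u[3]*g, -u[1]*u[3]*g, -u[2]*u[3]*g, -g*(1.0 + u[3]**2)])

def accel_vector(u):
    # lorentz-force form for a constant downward 'gravitoelectric' field:
    # du^0/dtau = -g u^z , du^z/dtau = -g u^0 , horizontal components unaffected
    return np.array([-g*u[3], 0.0, 0.0, -g*u[0]])

def proper_drop_time(accel, vx, dtau=1e-4):
    gamma = 1.0/np.sqrt(1.0 - vx**2)
    u = np.array([gamma, gamma*vx, 0.0, 0.0])
    z, tau = 0.0, 0.0
    while z > -h:                      # simple RK4 in (z, u)
        k1z, k1u = u[3], accel(u)
        k2z, k2u = (u + 0.5*dtau*k1u)[3], accel(u + 0.5*dtau*k1u)
        k3z, k3u = (u + 0.5*dtau*k2u)[3], accel(u + 0.5*dtau*k2u)
        k4z, k4u = (u + dtau*k3u)[3],     accel(u + dtau*k3u)
        z   += (dtau/6)*(k1z + 2*k2z + 2*k3z + k4z)
        u   += (dtau/6)*(k1u + 2*k2u + 2*k3u + k4u)
        tau += dtau
    return tau

for vx in (0.0, 0.9):
    print(f"v_hor = {vx:3.1f}: tau_scalar = {proper_drop_time(accel_scalar, vx):.4f}, "
          f"tau_vector = {proper_drop_time(accel_vector, vx):.4f}")
# scalar case: the same tau for both velocities; vector case: tau changes with the
# horizontal velocity, illustrating the dependence discussed in the text.
```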
for small field strength we see that in both cases is larger by a factor of than , which just reflects ordinary time dilation .however , unlike in the scalar case , the eigentime span also depends on in the vector case .the independence of on the initial horizontal velocity is therefore a special feature of the scalar theory .let us reconsider einstein s statements in quote[quote : einstein2 ] , in which he dismisses scalar gravity for predicting an unwanted dependence of the vertical acceleration on the initial horizontal velocity .as already noted , we do not know exactly in which formal context einstein derived this result ( i.e. what the `` von mir versuchten theorie '' mentioned in quote[quote : einstein2 ] actually was ) , but it seems most likely that he arrived at an equation like ( [ eq : scaltheqmotstat1 ] ) , which clearly displays the alleged behaviour . in any case , the diminishing effect of horizontal velocity on vertical acceleration is at most of _ quadratic _ order in .[ rem : whyeinsteinsconviction ] how could einstein be so convinced that such an effect did not exist ? certainly there were no experiments at the time to support this .and yet he asserted that such a prediction `` did not fit with the _ old experience _ [ my italics ] that all bodies experience the same acceleration in a gravitational field '' ( cf .quote[quote : einstein2 ] ) .what was it based on ?one way to rephrase / interpret einstein s requirement is this : the time it takes for a body in free fall to drop from a height to the ground should be independent of its initial horizontal velocity .more precisely , if you drop two otherwise identical bodies in a static homogeneous vertical gravitational field at the same time from the same location , one body with vanishing initial velocity , the other with purely horizontal initial velocity , they should hit the ground simultaneously .but that is clearly impossible to fulfil in _ any _ special - relativistic theory of gravity based on a scalar field .the reason is this : suppose is the gravitational field in one inertial frame .then it takes exactly the same form in any other inertial frame which differs from the first one by 1 ) spacetime translations , 2 ) rotations about the axis , 3 ) boosts in any direction within the -plane .so consider a situation where , with respect to an inertial frame , , body 1 and body 2 are simultaneously released at time from the origin , , with initial velocities and respectively .one is interested in whether the bodies hit the ground simultaneously .the ` ground ' is represented in spacetime by the hyperplane and ` hitting the ground ' is taken to mean that the world - line of the particle in question intersects this hyperplane .let another inertial frame , , move with respect to at speed along the axis .with respect to both bodies are likewise simultaneously released at time from the origin , , with initial velocities and respectively , according to the relativistic law of velocity addition . the field is still static , homogeneous , and vertical with respect to .this is a special feature of scalar theories .for example , in a vector theory , in which the field is static homogeneous with a vertical electric component and no magnetic component , we would have a static homogeneous and vertical electric component in , but also a static homogeneous _horizontal _ magnetic component in -direction .
] in the ` ground ' is defined by , which defines the _ same _ hyperplane in spacetime as .this is true since and merely differ by a boost in , so that the and coordinates coincide . hence ` hitting the ground 'has an invariant meaning in the class of inertial systems considered here .however , if ` hitting the ground ' are simultaneous events in they can not be simultaneous in and vice versa , since these events differ in their coordinates .this leads us to the following [ rem : nosimhitground ] due to the usual relativity of simultaneity , the requirement of ` hitting the ground simultaneously ' can not be fulfilled in any poincar invariant scalar theory of gravity .but there is an obvious reinterpretation of ` hitting the ground simultaneously ' , which makes perfect invariant sense in sr , namely the condition of ` hitting the ground after the same lapse of eigentime ' .as we have discussed in detail above , the scalar theory does indeed fulfil this requirement ( independence of ( [ eq : scaldropeigentime ] ) from ) whereas the vector theory does not ( dependence of ( [ eq : vectdropeigentime ] ) on ) .[ rem : scalardistinguished ] the scalar theory is distinguished by its property that the eigentime for free fall from a given altitude does _ not _ depend on the initial horizontal velocity . in general , with regard to this requirement ,the following should be mentioned : [ rem : einsteins reqnotimplied ] einstein s requirement is ( for good reasons ) not implied by any of the modern formulations of the ( weak ) equivalence principle , according to which the worldline of a freely falling test - body ( without higher mass - multipole - moments and without charge and spin ) is determined by its initial spacetime point and four velocity , i.e. independent of the further constitution of the test body . in contrast , einstein s requirement relates two motions with _different _ initial velocities .finally we comment on einstein s additional claim in quote[quote : einstein2 ] , that there is also a similar dependence on the vertical acceleration on the internal energy .this claim , too , does not survive closer scrutiny .indeed , one might think at first that ( [ eq : scaltheqmotstat1 ] ) also predicts that , for example , the gravitational acceleration of a box filled with a gas decreases as temperature increases , due to the increasing velocities of the gas molecules . 
but this argument incorrectly neglects the walls of the box , which gain in stress due to the rising gas pressure .according to ( [ eq : fieldeq ] ) more stress means more weight .in fact , a general argument due to laue shows that these effects precisely cancel .this has been lucidly discussed by norton and need not be repeated here .we already mentioned that the scalar theory does not predict any deflection of light in a gravitational field , in violation of experimental results .but in order to stay self - contained it is also of interest to see directly that the system given by the field equation ( [ eq : fieldeq ] ) and the equation of motion for a test particle ( [ eq : particlemotion ] ) violates experimental data .this is the case if applied to planetary motion , more precisely to the precession of the perihelion .recall that the newtonian laws of motion predict that the line of apsides remains fixed relative to absolute space for the motion of a body in a potential with .any deviation from the latter causes a rotation of the line of apsides within the orbital plane .this may also be referred to as precession of the periapsis , the orbital point of closest approach to the centre of force , which is called the perihelion if the central body happens to be the sun .again we compare the result of our scalar theory with that of the naive scalar theory and also with that of the vector theory .there exist comprehensive treatments of periapsis precession in various theories of gravity , like .but rather than trying to figure out which ( if any ) of these ( rather complicated ) calculations apply to our theory , at least in a leading order approximation , it turns out to be easier , more instructive , and mathematically more transparent to do these calculations from scratch .a convenient way to compute the periapsis precession in perturbed kepler orbits is provided by the following proposition , which establishes a powerful technique for calculating the periapsis precession in a large variety of cases .[ prop : ll - periadvformula ] consider the newtonian equations of motion for a test particle of mass in a perturbed newtonian potential where and is the perturbation .the potential is normalised so that it tends to zero at infinity , i.e. .let denote the increase of the polar angle between two successive occurrences of periapsis .hence represents the excess over a full turn , also called the ` periapsis shift per revolution ' .then the first - order contribution of to is given by here is the solution of the unperturbed problem ( kepler orbit ) with angular momentum and energy .( as we are interested in bound orbits , we have . )it is given by [ eq : keplerorbit ] where note that the expression in curly brackets on the right hand side of ( [ eq : ll - periadvformula ] ) is understood as a function of and , so that the partial differentiation is to be taken at constant . in the newtonian setting , the conserved quantities of energy and angular momentum for the motion in a plane coordinatised by polar coordinates , are given by [ eq : newtonianenergyangmom ] where a prime represents a -derivative .eliminating in ( [ eq : newtonianenergy1 ] ) via ( [ eq : newtonianangmom ] ) and also using ( [ eq : newtonianangmom ] ) to re - express -derivatives in terms of -derivatives , we get this can also be written in differential form , whose integral is just given by ( [ eq : keplerorbit ] ) .
now , the angular change between two successive occurrences of periapsis is twice the angular change between periapsis , , and apoapsis , : where the term in curly brackets is considered as a function of and and the partial derivative is for constant .formula ( [ eq : newtonianangularshift ] ) is exact .its sought - after approximation is obtained by writing and expanding the integrand to linear order in .taking into account that the zeroth order term just cancels the on the left hand side , we get : in the second step we converted the into an integration over the azimuthal angle .this we achieved by making use of the identity that one obtains from ( [ eq : newtonianenergyalt ] ) with and set equal to the keplerian solution curve for the given parameters and . accordingly , we replaced the integral limits and by the corresponding angles and respectively .since the integrand is already of order , we were allowed to replace the upper limit by , so that the integral limits now correspond to the angles for the minimal and maximal radius of the unperturbed kepler orbit given by ( [ eq : keplerorbit1 ] ) .let us apply this proposition to the general class of cases where with [ eq : potentialperturb ] in the present linear approximation the effects of both perturbations simply add , so that .the contributions and are very easy to calculate from ( [ eq : ll - periadvformula ] ) .the integrals are trivial and give and respectively . using ( [ eq : keplerorbit2 ] ) in the second case to express as a function of , then doing the -differentiation , and finally eliminating again in favour of using ( [ eq : keplerorbit2 ] ) , we get [ eq : periastronadvanceformula ] $$\delta_2\varphi = -\,2\pi\left[\frac{\delta_2/\alpha}{p}\right] = -\,2\pi\left[\frac{\delta_2/\alpha}{a(1-\varepsilon^2)}\right]\,,\qquad \delta_3\varphi = -\,6\pi\left[\frac{\delta_3/\alpha}{p^2}\right] = -\,6\pi\left[\frac{\delta_3/\alpha}{a^2(1-\varepsilon^2)^2}\right]\,,$$ where we also expressed in terms of the semi - major axis and the eccentricity via , as is usually done .clearly this method allows one to calculate in a straightforward manner the periapsis shifts for general perturbations .for example , the case is related to the contribution from the quadrupole moment of the central body .all this applies directly to the scalar theory if its equation of motion is written in the newtonian form ( [ eq : scaltheqmotstat2 ] ) .the static and rotationally symmetric solution to ( [ eq : fieldeq ] ) outside the point source is , so that in order to normalize the potential so that it assumes the value zero at spatial infinity we just need to drop the constant term .this leads to [ eq : scalthpointmassexpcoef ] so that the total shift equals $-\,\tfrac{1}{6}\,\delta_{\rm gr}\varphi$ , where $\delta_{\rm gr}\varphi$ is the value predicted by gr .hence scalar gravity leads to a _ retrograde _ periapsis precession . in the naive scalar theory we have in ( [ eq : scaltheqmotstat2 ] ) and therefore again we subtract the constant term to normalize the potential so as to assume the value zero at infinity .then we simply read off the coefficients , and : [ eq : naivescalthpointmassexpcoef ] hence we have [ eq : naivescalthperiastronshift ] $$\delta_2\varphi = -\,2\pi\left[\frac{gm/c^2}{a(1-\varepsilon^2)}\right]\,,\qquad \delta_3\varphi = +\,4\pi\left[\frac{gm/c^2}{a(1-\varepsilon^2)}\right]^2\,.$$ recall that ( [ eq : periastronadvanceformula ] ) neglects quadratic and higher order terms in .if we expand in powers of , as done in ( [ eq : naivescalthpointmasssol ] ) ,
it would be inconsistent to go further than to third order because starts with the quadratic term so that the neglected corrections of order start with fourth powers in .hence ( [ eq : naivescalthperiastronshift ] ) gives the optimal accuracy obtainable with ( [ eq : ll - periadvformula ] ) .for solar - system applications is of the order of so that the quadratic term ( [ eq : naivescalthperiastronshift3 ] ) can be safely neglected .comparison of ( [ eq : naivescalthperiastronshift2 ] ) with ( [ eq : scalthperiastronshift ] ) shows that the naive scalar theory gives a value twice as large as that of the consistent model - theory , that is , $-\tfrac{1}{3}$ times the correct value ( predicted by gr ) .we start from the following : the equations of motion ( [ eq : lorentzeqmotion ] ) for a purely ` electric ' field , where and all other components of vanish , are equivalent to where again the prime denotes , , and .we have , .now , so that ( [ eq : lorentzeqmotion ] ) is equivalent to [ eq : lorentzeqmotionsplit ] where and refer to the projections parallel and perpendicular to respectively . since ( [ eq : lorentzeqmotionsplit2 ] ) implies ( [ eq : lorentzeqmotionsplit1 ] ) , ( [ eq : lorentzeqmotion ] ) is equivalent to the former .we apply this to a spherically symmetric field , where with .this implies conservation of angular momentum , the modulus of which is now given by note the explicit appearance of , which , e.g. , is not present in the scalar case , as one immediately infers from ( [ eq : scaltheqmotstat1 ] ) .this fact makes proposition[prop : ll - periadvformula ] not immediately applicable .we proceed as follows : scalar multiplication of ( [ eq : eqmotvectelec ] ) with and leads to the following expression for the conserved energy : where .this we write in the form on the other hand , we have where we used ( [ eq : vectconsangmom ] ) to eliminate and convert into , which also led to a cancellation of the factors of .equating ( [ eq : vectconsgammasquared1 ] ) and ( [ eq : vectconsgammasquared2 ] ) , we get where [ eq : vectthnewtonianform2 ] equation ( [ eq : vectthnewtonianform1 ] ) is just of the form ( [ eq : newtonianenergy2 ] ) with and replacing and . in particular we have for : with [ eq : vectthpotentialcorr2 ] in leading approximation for small we have .the advance of the periapsis per revolution can now be simply read off ( [ eq : periastronadvanceformula1 ] ) : it equals $+\,\tfrac{1}{6}\,\delta_{\rm gr}\varphi$ .this is the same amount as in the scalar model - theory ( compare ( [ eq : scalthperiastronshift ] ) ) but of opposite sign , corresponding to a _ prograde _ periapsis precession of 1/6 the value predicted by gr .in this section we finally turn to einstein s argument of the entwurf paper concerning energy conservation .
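before taking this up , the perturbative periapsis formula of proposition [ prop : ll - periadvformula ] and the 1/6 statements just obtained can be illustrated numerically . the sketch below writes out the first - order shift formula explicitly ( our transcription , stated in the comments ) , checks it against the closed - form expression for a $1/r^3$ perturbation , and then evaluates the gr perihelion advance of mercury together with the $\mp\tfrac{1}{6}$ values quoted above for the scalar and vector model theories ; the orbital constants are standard values that we insert .

```python
import numpy as np

# first-order periapsis shift per revolution for a perturbation dU(r) of the kepler
# potential -alpha/r (our transcription of the proposition):
#   delta_phi = d/dL [ (2 m / L) * integral_0^pi dU(r(phi)) r(phi)^2 dphi ]   at fixed E,
# with r(phi) = p/(1 + eps*cos(phi)), p = L^2/(m alpha), eps = sqrt(1 + 2 E L^2/(m alpha^2)).
def shift(dU, m, alpha, E, L, dL=1e-6, n=4000):
    def bracket(Lv):
        p   = Lv**2 / (m * alpha)
        eps = np.sqrt(1.0 + 2.0 * E * Lv**2 / (m * alpha**2))
        phi = np.linspace(0.0, np.pi, n)
        r   = p / (1.0 + eps * np.cos(phi))
        f   = dU(r) * r**2
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(phi))   # trapezoid rule
        return (2.0 * m / Lv) * integral
    return (bracket(L + dL) - bracket(L - dL)) / (2.0 * dL)        # derivative at fixed E

m, alpha, E, L = 1.0, 1.0, -0.4, 0.9
p  = L**2 / (m * alpha)
d3 = 1e-3
num    = shift(lambda r: d3 / r**3, m, alpha, E, L)
closed = -6.0 * np.pi * d3 / (alpha * p**2)
print(f"delta_3 shift: numerical {num:.6e}  vs  closed form {closed:.6e}")

# mercury: gr value 6*pi*G*M_sun/(c^2 a (1 - e^2)) per orbit, and the -1/6 / +1/6 predictions.
GM_sun, c = 1.32712e20, 2.99792458e8             # SI units
a, ecc, T = 5.7909e10, 0.2056, 87.969 * 86400.0  # semi-major axis, eccentricity, period
per_orbit   = 6.0 * np.pi * GM_sun / (c**2 * a * (1.0 - ecc**2))
arcsec_cent = per_orbit * (100 * 365.25 * 86400.0 / T) * (180.0 / np.pi) * 3600.0
print(f"gr perihelion advance      : {arcsec_cent:6.1f} arcsec/century")
print(f"scalar model theory (-1/6) : {-arcsec_cent/6:6.1f} arcsec/century")
print(f"vector theory       (+1/6) : {+arcsec_cent/6:6.1f} arcsec/century")
```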
from a modern viewpoint ,einstein s claim of the violation of energy conservation seems to fly in the face of the very concept of poincar invariance .after all , time translations are among the symmetries of the poincar group , thus giving rise to a corresponding conserved noether charge .its conservation is a theorem and can not be questioned .the only thing that seems logically questionable is whether this quantity does indeed represent physical energy .so how could einstein arrive at his conclusion ?einstein first pointed out that the source for the gravitational field must be a scalar built from the matter quantities alone , and that the only such scalar is the trace of the energy - momentum tensor ( as pointed out to einstein by laue , as einstein acknowledges , calling the `` laue scalar '' ) . moreover , for _ closed stationary systems _ , the so - called laue - theorem for static systems ( later slightly generalised to stationary ones ) states that the space integral of must vanish , except for ; hence the space integral of equals that of , which means that the total ( active and passive ) gravitational mass of a closed stationary system equals its inertial mass .however , if the system is not closed , the weight depends on the stresses ( the spatial components ) .r0.3 ( -5,17) ( -57,58) strut ( -72,56)_b _ ( -30,98 ) 90shaft his argument proper is then as follows ( compare fig.[fig : slidingbox ] ) : consider a box , , filled with electromagnetic radiation of total energy .we idealise the walls of the box to be inwardly perfectly mirrored and of infinite stiffness , so as to be able to support normal stresses ( pressure ) without suffering any deformation .the box has an additional vertical strut in the middle , connecting top and bottom walls , which supports all the vertical material stresses that counterbalance the radiation pressure , so that the side walls merely sustain normal and no tangential stresses .the box can slide without friction along a vertical shaft whose cross section corresponds exactly to that of the box .the walls of the shaft are likewise idealised to be inwardly perfectly mirrored and of infinite stiffness .the whole system of shaft and box is finally placed in a homogeneous static gravitational field , , which points vertically downward .now we perform the following process .we start with the box being placed in the shaft in the upper position. 
then we slide it down to the lower position ; see fig.[fig : loweringbox ] .there we remove the side walls of the box without any radiation leaking out such that the sideways pointing pressures are now provided by the shaft walls .the strut in the middle is left in position to further support all the vertical stresses , as before .then the box together with the detached side walls are pulled up to their original positions ; see fig.[fig : raisingbox ] .finally the system is reassembled so that it assumes its initial state .einstein s claim is now that in a very general class of imaginable scalar theories the process of pulling up the parts needs less work than what is gained in energy in letting the box ( with side walls attached ) down .hence he concluded that such theories necessarily violate energy conservation .indeed , radiation - plus - box is a closed stationary system in laue s sense .hence the weight of the total system is proportional to its total energy , , which we may pretend to be given by the radiation energy alone since the contributions from the rest masses of the walls will cancel in the final energy balance , so that we may formally set them to zero at this point . lowering this box by an amount in a static homogeneous gravitational field of strength results in an energy gain of .so despite the fact that radiation has a traceless energy - momentum tensor , _ trapped _ radiation has a weight given by .this is due to the radiation pressure which puts the walls of the trapping box under tension . for each parallel pair of side - wallsthe tension is just the radiation pressure , which is one - third of the energy density .so each pair of side - walls contribute to the ( passive ) gravitational mass ( over and above their rest mass , which we set to zero ) in the lowering process when stressed , and zero in the raising process when unstressed .hence , einstein concluded , there is a net gain in energy of ( there are two pairs of side walls ) .but it seems that einstein neglects a crucial contribution to the energy balance .in contrast to the lowering process , the state of the shaft _ is _ changed during the lifting process , and it is this additional contribution which just renders einstein s argument inconclusive .indeed , when the side walls are first removed in the lower position , the walls of the shaft necessarily come under stress because they now need to provide the horizontal balancing pressures . in the raising process that stress distribution of the shaftis translated upwards .but that _ does _ cost energy in the theory discussed here , even though it is not associated with any proper transport of the material the shaft is made from .as already pointed out , stresses make their own contribution to weight , independent of the nature of the material that supports them .in particular , a redistribution of stresses in a material immersed in a gravitational field generally makes a non - vanishing contribution to the energy balance , even if the material does not move .this will be seen explicitly below .there seems to be only one paper which explicitly expresses some uneasiness with einstein s argument , due to the negligence of `` edge effects '' ( , p.37 ) , however without going into any details , letting alone establishing energy expressions and corresponding balance equations .there are 10 conserved currents corresponding to poincar - invariance .in particular , the total energy relative to an inertial system is conserved . 
for a particle coupled to gravityit is easily calculated and consists of three contributions corresponding to the gravitational field , the particle , and the interaction - energy shared by the particle and the field : [ eq : energies ] let us return to general matter models and let be the total stress - energy tensor of the gravity - matter - system .it is the sum of three contributions : where , it is given by , which here ( generally for scalar fields ) gives rise to a symmetric tensor , . ][ eq : em - tensors ] energy - momentum - conservation is expressed by where is the four - force of a possible _ external _ agent .the 0-component of it ( i.e. energy conservation ) can be rewritten in the form for any bounded spatial region .if the matter system is itself of finite spatial extent , meaning that outside some bounded spatial region , , vanishes identically , and if we further assume that no gravitational radiation escapes to infinity , the surface integral in ( [ em - conservation2 ] ) vanishes identically . integrating ( [ em - conservation2 ] ) over timewe then get with and where denotes the difference between the initial and final value of . if we apply this to a process that leaves the _ internal _ energies of the gravitational field and the matter system unchanged , for example a processes where the matter system , or at least the relevant parts of it , are _ rigidly _ moved in the gravitational field , like in einstein s gedankenexperiment of the ` radiation - shaft - system ' , we get now , my understanding of what a valid claim of energy non - conservation in the present context would be is to show that _ this _ equation can be violated . butthis is _ not _ what einstein did ( compare conclusions ) .if the matter system stretches out to infinity and conducts energy and momentum to infinity , then the surface term that was neglected above gives a non - zero contribution that must be included in ( [ em - conservation4 ] ). then a proof of violation of energy conservation must disprove this modified equation .( energy conduction to infinity as such is not in any disagreement with energy conservation ; you have to prove that they do not balance in the form predicted by the theory . ) for the discussion of einstein s gedankenexperiment the term ( [ interaction - energy ] ) is the relevant one .it accounts for the _ weight of stress_. 
pulling up a radiation - filled box inside a shaft also moves up the stresses in the shaft walls that must act sideways to balance the radiation pressure .this lifting of stresses to higher gravitational potential costs energy , according to the theory presented here .this energy was neglected by einstein , apparently because it is not associated with a transport of matter .he included it in the lowering phase , where the side - walls of the box are attached to the box and move with it , but neglected them in the raising phase , where the side walls are replaced by the shaft , which does not move .but as far as the ` weight of stresses ' is concerned , this difference is irrelevant .what ( [ interaction - energy ] ) tells us is that raising stresses in an ambient gravitational potential costs energy , irrespectively of whether it is associated with an actual transport of the stressed matter or not .this would be just the same for the transport of heat in a heat - conducting material .raising the heat distribution against the gravitational field costs energy , even if the material itself does not move .from the foregoing i conclude that , taken on face value , neither of einstein s reasonings that led him to dismiss scalar theories of gravity prior to being checked against experiments are convincing .first , energy as defined by noether s theorem_is _ conserved in our model - theory . note also that the energy of the free gravitational field is positive definite in this theory .second , the _ _eigen__time for free fall in a homogeneous static gravitational field _ is _ independent of the initial horizontal velocity .hence our model - theory serves as an example of an internally consistent theory which , however , is experimentally ruled out . as we have seen , it predicts times the right perihelion advance of mercury and also no light deflection ( not to mention shapiro time - delay , gravitational red - shift , as well as other accurately measured effects which are correctly described by gr ) .the situation is slightly different in a special - relativistic vector theory of gravity ( spin1 , mass0 ) . herethe energy is clearly still conserved ( as in any poincar invariant theory ) , but the energy of the radiation field is negative definite due to a sign change in maxwell s equations which is necessary to make like charges ( i.e. 
masses ) attract rather than repel each other .hence there exist runaway solutions in which a massive particle self - accelerates unboundedly by radiating negative gravitational radiation .also , the free - fall eigentime now does depend on the horizontal velocity , as we have seen .hence , concerning these theoretical aspects , scalar gravity is much better behaved .this leaves the question unanswered why einstein thought it necessary to give up the identification of minkowski geometry with the physical geometry , as directly measured with physical clocks and rods ( cf .the discussion at the end of section[sec : histbackground ] ) .einstein made it sound as if this was the only way to save energy conservation .this , as we have seen , is not true .but there may well be other reasons to contemplate more general geometries than that of minkowski space from considerations of scalar gravity as presented here , merely by looking at the gravitational interaction of models for ` clocks ' and ` rods ' .a simple such model would be given by an electromagnetically bound system , like an atom , where ( classically speaking ) an electron orbits a charged nucleus ( both modelled as point masses ) .place this system in a gravitational field that varies negligibly over the spatial extent of the atom and over the time of observation . the electromagnetic field produced by the chargeswill be unaffected by the gravitational field ( due to its traceless energy momentum tensor ) .however , ( [ eq : scalgrav4 ] ) tells us that the dynamics of the particle is influenced by the gravitational field .the effect can be conveniently summarised by saying that the masses of point particles scale by a factor of when placed in the potential .this carries over to quantum mechanics so that atomic length scales , like the bohr radius ( in mksa units ) and time scales , like the rydberg period ( inverse rydberg frequency ) change by a factor due to their inverse proportionality to the electron mass ( is planck s constant , the electron charge , and the vacuum permittivity ) .this means that , relative to the units on which the minkowski metric is based , atomic units of length and time vary in a way depending on the potential . transporting the atom to a spacetime position in which the gravitational potential differs by an amount results in a diminishment ( if ) or enlargement ( if ) of its size and period _ relative to minkowskian units_. this effect is universal for all atoms .the question then arises as to the physical significance of the minkowski metric .should we not rather _ define _ spacetime lengths by what is measured using atoms ?after all , as einstein repeatedly remarked , physical notions of spatial lengths and times should be based on physically constructed rods and clocks which are consistent with our dynamical equations .the minkowski metric would then merely turn into a redundant structure with no _ direct _ observational significance . 
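to get a feeling for the size of this universal rescaling , note that the bohr radius and the rydberg period are both inversely proportional to the electron mass , so an atom sitting at potential is larger and slower , relative to minkowskian units , by roughly $1-\phi/c^2$ to leading order ( taking the mass rescaling to be the exponential factor quoted above , which is our reading of it ) . the back - of - the - envelope sketch below evaluates this for two familiar potentials , with standard values of $gm / r$ inserted by us .

```python
import math

c = 2.99792458e8  # m/s

# fractional change of atomic length and time units, exp(-phi/c^2) - 1 ~ -phi/c^2,
# for two familiar potential differences (the GM/R values are standard numbers we insert).
cases = {
    "surface of the earth vs infinity": -3.986004e14 / 6.371e6,   # phi = -GM_earth/R_earth
    "surface of the sun   vs infinity": -1.32712e20 / 6.957e8,    # phi = -GM_sun/R_sun
}
for label, phi in cases.items():
    frac = math.expm1(-phi / c**2)    # relative enlargement of bohr radius / rydberg period
    print(f"{label}: relative change of atomic units ~ {frac:.2e}")
# roughly 7e-10 for the earth and 2e-6 for the sun: tiny, but universal for all atoms.
```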
from that perspective one may indeed criticise special - relativistic scalar gravity for making essential use of dispensable absolute structures , which eventually should be eliminated , just like in the ` flat - spacetime - approach ' to gr ; compare and sect.5.2 in .in view of quote[quote : einstein1 ] one might conjecture that this more sophisticated point was behind einstein s criticism .if so , it is well taken .but physically it should be clearly separated from the other explicit accusations which we discussed here .* acknowledgements : * i thank two anonymous referees for making various suggestions for improvements and john norton for asking a question that led to the remarks in the second part of section[sec : conclusion ] .i am also indebted to olivier darrigol for pointing out that the argument leading to remark[rem : nosimhitground ] in section[sec : discussion ] does not generalise to vector theories of gravity , as originally proposed in an earlier version of this paper ; cf .footnote[fnote : oliviersremark ] .john norton . einstein and nordström : some lesser - known thought experiments in gravitation . in john earman , michel janssen , and john norton , editors , _ the attraction of gravitation : new studies in history of general relativity _ , pages 3 - 29 , boston , ma , 1993 . birkhäuser verlag .
|
on his way to general relativity , einstein gave several arguments as to why a special - relativistic theory of gravity based on a massless scalar field could be ruled out merely on grounds of theoretical considerations . we re - investigate his two main arguments , which relate to energy conservation and some form of the principle of the universality of free fall . we find that such a theory - based _ a priori _ abandonment is not justified . rather , the theory seems formally perfectly viable , though in clear contradiction with ( later ) experiments .
|
in the study of quantum information theory it is often assumed that classical information is effectively noiseless , free and unlimited . in this contextmany problems become trivial . for example , consider a situation in which alice wants to ` teleport ' a quantum state , whose identity is known to her , to bob ( this has come to be known as ` remote state preparation ' ) . if classical information is considered to be free , then no teleportation - type procedure is actually needed .alice can simply call bob on the telephone and tell him what the state is .if they do nt care how long the call lasts then bob can construct a state arbitrarily close to alice s original .remote state preparation becomes non - trivial if we wish to restrict the amount of classical information that alice can send to bob .of course , if alice and bob share a perfect singlet ( or one ebit ) , then they can achieve perfect teleportation with the transmission of only two classical bits ( cbits ) .but in , it is shown that if alice and bob do nt mind using up a large amount of entanglement , then in the asymptotic limit as the number of states being teleported tends to infinity , they can get away with sending only one cbit per qubit and still retain arbitrarily good fidelity .` large amount ' of entanglement means that the amount needed increases exponentially with the number of qubits being sent .it is also shown that this exponential increase becomes a mere multiplying factor if we allow cbits to be sent from bob to alice .further , an upper bound is plotted for how many cbits must be sent if we wish to use less than one ebit per qubit transmitted . in , an optimal procedure is given for this less - than - one - ebit case .other issues will arise if we consider all quantum channels to be noisy and thus prevent the sharing of perfect singlets .we might consider , in the context of some given situation , how the transmission of extra cbits can offset this problem .of course , if imperfect singlets are shared then one option is always to try to distill better singlets .but note that ( i ) in general it is not possible to distill a perfect singlet from a finite number of mixed states , so the resulting states are still noisy , ( ii ) some states only admit distillation if collective operations are allowed , that is operations on more than one pair at once , ( for example this is true of werner states ) and this may be impractical in a given situation and most importantly ( iii ) distillation itself involves the sending of cbits and if this is expensive , distillation may not be the best option. 
it may rarely be the case that the sending of ( relatively noiseless ) cbits is expensive compared with the sending of ( potentially noisy ) qubits .even if so , by assuming always that classical information is effectively free and thereby not bothering to count it , we may miss out on interesting theoretical relations between quantum and classical information .with the above in mind , we consider the following problem .alice and bob are separated by a noisy quantum channel .alice sends into the channel some quantum state , drawn from an ensemble ( where the state is drawn with probability and alice and bob both know the ensemble ) .alice is given a classical description of which state went into the channel and can also send noiseless classical bits which encode part of this information .bob then performs some operation on the state which he receives in an effort to undo the effects of the noise .if bob s eventual state , given that was sent , is , then the average fidelity is given by .we wish to describe a scheme for alice and bob which will optimize this quantity .we note that one might consider an alternative scenario in which alice is trying to prepare states remotely and can generate any state she wants to be sent into the channel .the difference lies in the fact that alice may , in this alternative case , generate and send a state which is different from the one which she wants bob ultimately to end up with .here we do not investigate this possibility but concentrate only on the scenario in which the state entering the channel is always identical with the state alice wishes bob to prepare .this would be the case if alice has no control over what goes into the channel but is simply given a classical description . orif alice sends the state she wants bob to have into the channel expecting it to be noiseless and only finds out about the noise later , at which point she decides to send some additional classical information .we start by describing how the most general possible scheme will work .it is well known that the most general evolution which a quantum state can undergo ( assuming that if measurements are performed , their results are to be averaged over ) corresponds to a completely positive trace - preserving map . 
in the kraus representation , this can be written as : where and is the identity .we refer to such a map as a quantum operation .both the noise experienced by the state as it passes down the channel and bob s operation take this form .now consider that the experiment described above is repeated many times - each time , alice sends a quantum state down the noisy channel , and classical bits , and bob performs some quantum operation .focus attention on those runs of the experiment in which the classical bit - string has a certain value , say .we may as well regard bob as performing the same quantum operation on each of these runs .we use the fact that a probabilistic mixture of quantum operations is itself a quantum operation .so we can stipulate without loss of generality that which quantum operation bob applies depends deterministically on the values of the cbits .we can also stipulate without loss of generality that the values of the cbits sent by alice depend deterministically on which quantum state she is sending .suppose , to the contrary , that a particular quantum state determines only probabilistically the values of the cbits .then , instead of regarding alice and bob as using a probabilistic scheme , one might regard them as using one from several deterministic schemes , with certain probabilities .but then the average fidelity obtained will be the average over that obtained for each of the deterministic schemes and we would do better simply to use whichever of these is the best .it follows from the above that we lose no generality if we restrict ourselves to schemes which work as follows .the ensemble is divided up into sub - ensembles .alice uses the classical bits to tell bob which sub - ensemble the state she is sending lies in .bob has a choice of possible quantum operations to perform . which one he performs is determined by the values of the classical bits .the problem is to find the scheme which leads to the maximum value for .we can split this problem into two .the first part is to determine , for a general ensemble of quantum states , which undergo some noise process of the form , where is a quantum operation : what is the best operation to perform in order to undo this noise as well as possible ?in other words , we wish to find an operation such that is maximized .the second part is to determine the best way for alice to divide the initial ensemble into sub - ensembles , given that an answer to the first part will determine for bob an operation to perform on each sub - ensemble .unfortunately , even the first of these appears to be a difficult problem in itself .some progress is made by barnum and knill in , but they are concerned with maximizing entanglement fidelity and their results are only valid for ensembles of commuting density operators and so are not immediately useful for our problem . 
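to make the affine picture of a qubit channel concrete , the sketch below probes a depolarizing channel , written in a standard kraus form that we supply , and recovers its action on the bloch vector as a pure contraction with no translation ; it also evaluates the average fidelity obtained when bob does nothing , which serves as a baseline later on .

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = [X, Y, Z]

def depolarize(rho, p):
    """standard kraus form of the depolarizing channel: rho -> (1-p) rho + p I/2."""
    ks = [np.sqrt(1 - 3*p/4) * I2] + [np.sqrt(p/4) * s for s in PAULI]
    return sum(k @ rho @ k.conj().T for k in ks)

def bloch(rho):
    return np.real(np.array([np.trace(rho @ s) for s in PAULI]))

def affine_form(channel):
    """extract (M, v) with r_out = M r_in + v by probing the channel on a few states."""
    v = bloch(channel(I2 / 2))
    M = np.column_stack([bloch(channel((I2 + s) / 2)) - v for s in PAULI])
    return M, v

p = 0.4
M, v = affine_form(lambda rho: depolarize(rho, p))
print("M =\n", np.round(M, 6), "\nv =", np.round(v, 6))   # M = (1-p) * identity, v = 0

# average fidelity over uniformly distributed pure states if bob does nothing:
# for a bloch contraction eta = 1-p this equals (1+eta)/2, checked here by monte carlo.
rng = np.random.default_rng(0)
fids = []
for _ in range(20000):
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    rho = (I2 + n[0]*X + n[1]*Y + n[2]*Z) / 2
    fids.append(np.real(np.trace(rho @ depolarize(rho, p))))
print("monte carlo:", np.mean(fids), "  analytic (1+eta)/2:", (1 + (1 - p)) / 2)
```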
for the rest of this paper , we are less ambitious .we consider only a very simple instance of the problem in which alice sends just one cbit and the pure states she sends are qubit states drawn from a distribution which is uniform over the bloch sphere .the noisy quantum channel is a depolarizing channel , which acts as : where .we will see that the solution even to this seemingly trivial problem involves a surprising amount of structure - suggesting that relationships between classical and quantum information in general may well be very intricate .consider the scenario in which alice sends to bob a pure state drawn from a uniform distribution over the bloch sphere , which gets depolarized on the way , and a single noiseless classical bit . from the above ,we know that alice must divide the surface of the bloch sphere into two subsets , and , which correspond to the cbit taking the value ` 0 ' or ` 1 ' .we must then find , in each case , the optimal quantum operation for bob to perform , given that the depolarized qubit lies in that particular subset .we begin by assuming that alice divides up the bloch sphere in the following fashion : [ assumption ] for a general state , , we have that , where is some fixed basis state corresponding to the point , or the north pole , on the bloch sphere and .otherwise .we conjecture that this assumption leads to an optimal scheme ( it seems very likely , for example , that in the optimal scheme the sets and will be simply connected and unlikely that the optimal scheme will be less symmetric than the one presented ) . in the rest of this section, we derive the optimal quantum operations for bob to perform , in the cases that and .it is helpful to write quantum operations in a different way .suppose that a general qubit density matrix is written where is a real 3-vector and .then , from the fact that a quantum operation is linear , we can write it in the form : where is a real matrix and is a real 3-vector .we have also automatically included the conditions that a quantum operation must be trace - preserving and positive .the condition of complete positivity imposes further constraints on and . of course we must also have that , we can write in the form , where is orthogonal ( i.e. a rotation ) and is symmetric .so we can view a quantum operation as a deformation of the bloch sphere along principal axes determined by s , followed by a rotation , followed by a translation .suppose now that .bob performs an operation characterized by and . from the symmetry of the problem, it follows that the fidelity obtained ( averaged over all such that ) is unchanged if bob performs a different operation , characterized by and , where and and where is a rotation of angle about the z - axis .it follows from this that the fidelity is also unchanged if bob performs an operation characterized by and , where : and this means that without loss of generality , we can restrict bob to actions of the form , where , and is a fixed rotation about the z - axis . 
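the affine ( bloch - vector ) description of a qubit operation used above can be tested for complete positivity numerically : the map is completely positive iff its choi matrix is positive semidefinite . the sketch below builds the choi matrix from a given matrix m and translation c ; the two examples at the bottom ( a bare translation , which fails the test , and an amplitude - damping - like deformation , which passes ) are illustrative choices rather than operations taken from the text .

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def affine_channel(M, c):
    """Linear extension of the Bloch-vector action a -> M a + c to all 2x2 matrices:
    rho -> (tr(rho) I + (M a + tr(rho) c) . sigma) / 2, with a_k = tr(rho sigma_k)."""
    M = np.asarray(M, dtype=float)
    c = np.asarray(c, dtype=float)
    def phi(rho):
        t = np.trace(rho)
        a = np.array([np.trace(rho @ s) for s in PAULIS])
        b = M @ a + t * c
        return 0.5 * (t * I2 + sum(b[k] * PAULIS[k] for k in range(3)))
    return phi

def choi_matrix(phi):
    """J = sum_{ij} |i><j| (x) phi(|i><j|); phi is completely positive iff J >= 0."""
    J = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(E, phi(E))
    return J

def is_completely_positive(M, c, tol=1e-9):
    return np.linalg.eigvalsh(choi_matrix(affine_channel(M, c))).min() > -tol

# a pure translation of the Bloch sphere towards the north pole is not allowed ...
print(is_completely_positive(np.eye(3), [0.0, 0.0, 0.5]))                        # False
# ... but shrinking the sphere in a compatible, amplitude-damping-like way is
eta = 0.5
print(is_completely_positive(np.diag([np.sqrt(1 - eta), np.sqrt(1 - eta), 1 - eta]),
                             [0.0, 0.0, eta]))                                    # True
```

this makes precise the statement that a translation towards a pole must be accompanied by a sufficient contraction , which is exactly the trade - off exploited in the next paragraphs .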
from the condition that , we get .quantum operations are contractions on the bloch sphere .recall that the qubit which bob receives has been depolarized .we can write its density matrix in the form , where .ideally , bob would like an operation which takes , at least for those states belonging to , but this is not allowed ( such an operation is not a contraction ) .bob s operation will in fact consist of a translation in the z - direction and contractions parametrized by and .it is clear geometrically that in the optimum scheme , , where is the identity .our aim is now , for fixed , and , to find the optimum values of and , consistently with their describing a genuine quantum operation ( which , recall , must correspond to a completely positive map on the set of density matrices ) .in fact , one can show that complete positivity implies that : and these conditions are necessary but not sufficient .the actual derivation of these conditions is unenlightening , so we do not reproduce it here .it is easy to see now that bob s best operation will be characterized by setting and .this gives : and in fact , remarkably , this corresponds to an already well known quantum operation , usually described as an ` amplitude damping channel ' .amplitude damping is usually studied for its physical relevance - it corresponds to many natural physical processes .for example , it may describe an atom coupled to a single mode of electromagnetic radiation undergoing spontaneous emission , or a single photon mode from which a photon may be scattered by a beam splitter .this suggests that our scheme should be easily implementable experimentally .one can run through similar arguments for the case .again , it turns out that bob s optimal operation is essentially an amplitude damping operation , except that in this case , the vector will point in the opposite direction i.e. bob s operation will involve a translation of the bloch sphere downwards , towards the south pole , as well as some contraction . for the rest of this paper we calculate the optimum fidelity that bob can achieve for a given . for fixed , one can optimize over the value of separately for the cases and ( the optimum value of may sometimes be zero implying that bob s best operation is to do nothing ) .one can then optimize over .after the action of the depolarizing channel and bob s quantum operation , we have that : the average fidelity is given by : where is bob s quantum operation parameter in the case that and is defined so that and .optimizing over , and numerically leads to the graph shown in figure [ fidelitygraph ] which shows the achievable fidelity for a depolarizing channel parametrized by . also shown on the graphis the fidelity obtained in the case that alice sends no cbit and bob performs no quantum operation .we finish this section by noting some features of the graph . 1 .as we might expect , our scheme always yields an advantage when compared with doing nothing .2 . if alice can send to bob one cbit but can not use a quantum channel , then the best obtainable fidelity is ( alice tells bob ` upper ' or ` lower ' hemisphere and bob prepares a state which is spin up or spin down accordingly ) . with our schemewe have if .thus the quantum channel is some use for any .3 . there is a kink in the graph at .further numerical investigations reveal why this is the case .denote the optimum value of ( the angle which describes how alice is dividing up the bloch sphere ) by .below this value of , we have . 
at , suddenly jumps to and then decreases as increases . this is shown in figure [ anglegraph ] . 4 . in the region , where , we can calculate analytically yielding . 5 . as , bob s operation tends towards a simple ` swap ' operation which maps all points in the bloch sphere to one of the poles depending on which hemisphere the qubit lies in . if , then and alice and bob , if they can , would do better to use a protocol due to gisin in which alice sends two noiseless cbits and no quantum information . we have considered situations in which alice and bob wish to use noiseless classical information to offset quantum noise - a kind of error correction . an important feature is that alice possesses a classical description of the quantum states she wishes to send . after considering these situations in generality , we turned to a very specific scenario in which alice sends one qubit through a depolarizing channel , accompanied by a noiseless classical bit . we described a scheme , which we conjecture is optimal , in which alice divides up the bloch sphere as in assumption [ assumption ] above and bob performs ` amplitude damping ' operations . our results for this scheme were obtained by brute force . clearly , a more principled approach is desirable . one idea might be to regard the depolarization as actually coming about through the actions of an eavesdropper , eve . eve gains some information about the identity of the quantum state passing through , and this information gain necessarily disturbs the state . it follows that even after bob s recovery operation , some disturbance to the state is inevitable . in this way , one might be able to derive an upper bound on bob s achievable fidelity for more general scenarios than the one considered here . i am grateful to trinity college , cambridge for support , cern for hospitality and to the european grant equip for partial support . i would like to thank adrian kent and sandu popescu for useful discussions . c. h. bennett , g. brassard , c. crepeau , _ et al . _ , phys . rev . lett . * 70 * , 1895 ( 1993 ) . c. h. bennett , d. p. divincenzo , p. w. shor , _ et al . _ , phys . rev . lett . * 87 * , 077902 ( 2001 ) . a. k. pati , phys . rev . a * 63 * , 014302 ( 2001 ) . h .- k . lo , phys . rev . a * 62 * , 012313 ( 2000 ) . i. devetak and t. berger , phys . rev . lett . * 87 * , 197901 ( 2001 ) . c. h. bennett , h. j. bernstein , s. popescu , _ et al . _ , phys . rev . a * 53 * , 2046 ( 1996 ) . c. h. bennett , g. brassard , s. popescu , _ et al . _ , phys . rev . lett . * 76 * , 722 ( 1996 ) . a. kent , phys . rev . lett . * 81 * , 2839 ( 1998 ) . n. linden , s. massar and s. popescu , phys . rev . lett . * 81 * , 3279 ( 1998 ) . m. nielsen and i. chuang , `` quantum computation and quantum information '' ( cambridge university press , 2000 ) . h. barnum and e. knill , quant - ph/0004088 . n. gisin , phys . lett . a * 210 * , 157 ( 1996 ) . c. fuchs , fortschr . phys . * 46 * , 535 ( 1998 ) .
|
we consider situations in which i ) alice wishes to send quantum information to bob via a noisy quantum channel , ii ) alice has a classical description of the states she wishes to send and iii ) alice can make use of a finite amount of noiseless classical information . after setting up the problem in general , we focus attention on one specific scenario in which alice sends a known qubit down a depolarizing channel along with a noiseless cbit . we describe a protocol which we conjecture is optimal and calculate the average fidelity obtained . a surprising amount of structure is revealed even for this simple case which suggests that relationships between quantum and classical information could in general be very intricate . pacs number(s ) : 03.67.-a # 1| # 1 # 1#1 | 2
|
a probabilistic model over a discrete state space is classified as energy - based if it can be written in the form where the _ energy _ is a computationally tractable function of the system s configuration , is a set of parameters to be learned from data , and is a normalization constant also known as the _ partition function_. many methods for learning energy - based models exist ( e.g. ) which makes them useful in a wide variety of fields , most recently as data analysis tools in neuroscience . a popular way of parametrizing the energy function is by decomposing it into a sum of _ potentials _ representing interactions among different groups of variables , i.e. the resulting models , also termed gibbs random fields , are easy to interpret but , even for moderate , learning the potential functions from data is intractable unless we can a priori set most of them to zero , or we know how parameters of multiple potentials relate to each other . a common assumption is to consider single- and two - variable potentials only , but many tasks require an efficient parametrization of higher - order interactions .a powerful way of modeling higher - order dependencies is to assume that they are mediated through hidden variables coupled to the observed system .many hidden variable models are , however , notoriously hard to learn ( e.g. boltzmann machines ) , and their distributions over observed variables can not be represented with tractable energy functions . an important exception is the restricted boltzmann machine ( rbm ) which is simple to learn even when the dimension of the data is large , and which has proven effective in many applications .this paper considers a new alternative for modeling higher - order interactions .we generalize any model of the form to where is an arbitrary strictly increasing and twice differentiable function which needs to be learned from data together with the parameters .while this defines a new energy - based model with an energy function , we will keep referring to as the energy function .this terminology reflects our interpretation that should parametrize local interactions between small groups of variables , e.g. low - order terms in eq . , while the function globally couples the whole system .we will formalize this intuition in section [ why_it_works ] . since setting recovers, we will refer to simply as _the nonlinearity_. generalized energy - based models have been previously studied in the physics literature on nonextensive statistical mechanics but , to our knowledge , they have never been considered as data - driven generative models .if is a continuous rather than a discrete vector , then models are related to elliptically symmetric distributions .we wish to make as few prior assumptions about the shape of the nonlinearity as possible . we restrict ourselves to the class of strictly monotone twice differentiable functions for which is square - integrable .it is proved in that any such function can be represented in terms of a square - integrable function and two constants and as where is arbitrary and sets the constants to , .is a solution to the differential equation , and so is a measure of the local curvature of . 
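a small numerical sketch of the representation just described : given the curvature function ( called phi here ) and the two constants a and b , the nonlinearity f is recovered by integrating the exponential of the running integral of phi . the piecewise - constant phi , the energy range and the constants used below are placeholder choices for illustration ; the text's exact normalization conventions may differ .

```python
import numpy as np

def build_nonlinearity(betas, e_min, e_max, a=0.0, b=1.0, grid_size=2000):
    """f(E) = a + b * int_{e_min}^{E} exp( int_{e_min}^{e} phi(t) dt ) de,
    with phi piecewise constant (value betas[i] on the i-th equal-width bin).
    Because exp(...) > 0, f is strictly increasing by construction."""
    e = np.linspace(e_min, e_max, grid_size)
    delta = (e_max - e_min) / len(betas)
    idx = np.minimum(((e - e_min) / delta).astype(int), len(betas) - 1)
    phi = np.asarray(betas, dtype=float)[idx]
    # inner integral of phi, then outer integral of its exponential (trapezoid rule)
    inner = np.concatenate(([0.0], np.cumsum(0.5 * (phi[1:] + phi[:-1]) * np.diff(e))))
    g = np.exp(inner)
    outer = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(e))))
    return e, a + b * outer

# negative curvature (phi < 0) gives a concave, increasing f, so exp(-f(E))
# decays more slowly than exp(-E) at high energies
e, f = build_nonlinearity(betas=[-0.1] * 20, e_min=0.0, e_max=30.0)
print(f"f(0) = {f[0]:.2f}, f(15) ~ {np.interp(15.0, e, f):.2f}, f(30) ~ {f[-1]:.2f}")
```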
in particular, is a linear function on any interval on which .the advantage of writing the nonlinearity in the form is that we can parametrize it by expanding in an arbitrary basis without imposing any constraints on the coefficients of the basis vectors .this will allow us to use unconstrained optimization techniques during learning .we will use piecewise constant functions to parametrize .let ] into non - overlapping bins of the same width with indicator functions , i.e. if is in the bin , otherwise , and we set .the integrals in eq . can be carried out analytically for this choice of yielding an exact expression for as a function of and , as well as for its gradient with respect to these parameters ( see appendix a ). the range ] be an interval which contains the set , and which is divided into non - overlapping bins ] denotes an average over the list of states , and represents applications ( i.e. neurons are switched ) of the gibbs sampling transition operator . in the case of semiparametric pairwise models ,persistent contrastive divergence was used as a part of an alternating maximization algorithm in which we learned the nonlinearity by maximizing the approximate likelihood while keeping the couplings fixed , and then learned the couplings using persistent contrastive divergence with the nonlinearity fixed .details of the learning algorithms for all models are described in appendix b. the most interesting metaparameter is the number of bins necessary to model the nonlinearity .we settled on but we observed that decreasing it to would not significantly change the training likelihood .however , yielded more consistent nonlinearities over different subgroups . estimating likelihood of our data is easy because the state occurs with high probability ( ) , and all the inferred models retain this propertytherefore , for any energy - based model , we estimated by drawing samples using gibbs sampling , calculated the partition function as , and used it to calculate the likelihood .the models do not have any explicit regularization .we tried to add a smoothed version of l1 regularization on coupling matrices but we did not see any improvement in generalization using a cross - validation on one of the training datasets .certain amount of regularization is due to sampling noise in the estimates of likelihood gradients , helping us to avoid overfitting .perhaps surprisingly , the addition of a simple nonlinearity to the pairwise energy function significantly improves the fit to data .here we give heuristic arguments that this should be expected whenever the underlying system is globally coupled .let be a sequence of positive probabilistic models ( is of dimension ) , and suppose that can be ( asymptotically ) factorized into subsystems statistically independent of each other whose number is proportional to .then is an average of independent random variables , and we expect its standard deviation to vanish in the limit . 
alternatively ,if , then the system can not be decomposed into independent subsystems , and there must be some mechanism globally coupling the system together .it has been argued that many natural systems including luminance in natural images , amino acid sequences of proteins , and neural activity such as the one studied in sec .[ neurons ] belong to the class of models whose log - probabilities per dimension have large variance even though their dimensionality is big .therefore , models of such systems should reflect the prior expectation that there is a mechanism which couples the whole system together . in our case , this mechanism is the nonlinearity .recent work attributes the strong coupling observed in many systems to the presence of latent variables ( ) .we can rewrite the model in terms of a latent variable considered in if we assume that is a totally monotone function , i.e. that it is continuous for ( we assume , without loss of generality , that ) , infinitely differentiable for , and that for .theorem then asserts that we can rewrite the model as where is a probability density ( possibly containing delta functions ) .suppose that the energy function has the form. then we can interpret as a latent variable being coupled to every group of interacting variables , and hence inducing a coupling between the whole system whose strength depends on the size of the fluctuations of .while the class of positive , twice differentiable , and decreasing functions that we consider is more general than the class of totally monotone functions , we can find the maximum likelihood densities which correspond to the pairwise energy functions inferred in section [ neurons ] using the semiparametric pairwise model .we model as a histogram , and maximize the likelihood under the model by estimating using the approximation .the maximum likelihood densities are shown in figure [ fig2]b for one particular sequence of networks of increasing size .the units of the latent variables are arbitrary , and set by the scale of which we normalize so that .the bimodal structure of the latent variables is observed across all datasets .we do not observe a significant decrease in likelihood by replacing the nonlinearity with the integral form .therefore , at least for the data in sec .[ neurons ] , the nonlinearity can be interpreted as a latent variable globally coupling the system .suppose that the true system which we want to model with satisfies .then we also expect that energy functions which accurately model the system satisfy . in the limit , the approximation of the likelihood becomes exact when is differentiable ( has to be appropriately scaled with ) , andarguments which led to can be reformulated in this limit to yield where is the density of states , and is now the probability density of under the true model . in statistical mechanics , the first term termed the _ microcanonical entropy _ , and is expected to scale linearly with . on the other hand , because , we expect the second term to scale at most as .thus we make the prediction that if the underlying system can not be decomposed into independent subsystems , then the maximum likelihood nonlinearity satisfies up to an arbitrary constant . in figure[ fig3]a we show a scatter plot of the inferred nonlinearity for one sequence of subnetworks in sec .[ neurons ] vs the microcanonical entropy estimated using the wang and landau algorithm . 
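to make the latent - variable reading concrete , the sketch below computes the nonlinearity induced by a discrete latent density q ( t ) , i.e. exp ( - f ( e ) ) equal to the integral of q ( t ) exp ( - t e ) over t , evaluated here by a log - sum - exp over the discrete support of q . a bimodal q , qualitatively like the densities recovered from the retinal data , produces an f that is close to piecewise linear with two different slopes ; the particular weights and temperatures are illustrative only .

```python
import numpy as np

def nonlinearity_from_latent(q_weights, q_temps, energies):
    """f(E) = -log sum_T q(T) exp(-T E): the nonlinearity induced by coupling a
    discrete latent 'temperature' T (with density q) to the energy, so that
    exp(-f(E)) is a mixture of Boltzmann factors."""
    q = np.asarray(q_weights, dtype=float)
    q = q / q.sum()
    a = -np.outer(energies, np.asarray(q_temps, dtype=float))   # shape (n_E, n_T)
    m = a.max(axis=1, keepdims=True)                            # log-sum-exp shift
    return -(m[:, 0] + np.log(np.exp(a - m) @ q))

E = np.linspace(0.0, 40.0, 400)
f = nonlinearity_from_latent(q_weights=[0.7, 0.3], q_temps=[1.0, 0.3], energies=E)
slopes = np.gradient(f, E)
print(f"slope at low E ~ {slopes[5]:.2f}, slope at high E ~ {slopes[-5]:.2f}")
```

the low - energy slope is the mean of t under q , while the high - energy slope approaches the smallest t in the support of q , which is what produces the two nearly linear regimes .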
while the convergence is slow , figure [ fig3]a suggests that the inferred nonlinearity and the microcanonical entropy approach each other as the network size increases . to demonstrate this prediction on yet another system , we used the approach in to fit the semiparametric pairwise model with to patches of pixel log - luminances in natural scenes from the database . this model has an analytically tractable density of states which makes the inference simple . figure [ fig3]b shows the relationship between the inferred nonlinearity and the microcanonical entropy for collections of patches which increase in size , confirming our prediction that the nonlinearity should be given by the microcanonical entropy . we presented a tractable extension of any energy - based model which can be interpreted as augmenting the original model with a latent variable . as demonstrated on the retinal activity data , this extension can yield a substantially better fit to data even though the number of additional parameters is negligible compared to the number of parameters of the original model . in light of our results , we hypothesize that combining a nonlinearity with the energy function of a restricted boltzmann machine might yield a model of retinal activity which is not only accurate , but also simple as measured by the number of parameters . simplicity is an important factor in neuroscience because of experimental limitations on the number of samples . we plan to pursue this hypothesis in future work . our models are expected to be useful whenever the underlying system can not be decomposed into independent components . this phenomenon has been observed in many natural systems , and the origins of this global coupling , and especially its analogy to physical systems at critical points , have been hotly debated . our models effectively incorporate the prior expectation of a global coupling in a simple nonlinearity , making them superior to models based on gibbs random fields which might need a large number of parameters to capture the same dependency structure . we thank david schwab , elad schneidman , and william bialek for helpful discussions . this work was supported in part by hfsp program grant rgp0065/2012 and austrian science fund ( fwf ) grant p25651 . let $\phi(e ) = \sum_{i=1}^{k } \beta_i \chi_i(e)$ , where $\chi_i$ are the indicator functions defined in the main text , and let $g(e ) = \int_0^{e } \exp ( \int_0^{t } \phi(s ) \, ds ) \, dt$ , with energies measured relative to the left edge of the first bin , so that the nonlinearity is $f(e ) = a + b \, g(e)$ . for $e \le 0 $ , we have $g(e ) = e$ . for $e > 0 $ we have $$ g(e ) = \sum_{i=1}^{[e]-1 } \exp\left ( \delta \sum_{j=1}^{i-1 } \beta_j \right ) \frac{\exp(\delta \beta_i)-1}{\beta_i } + \exp\left ( \delta \sum_{j=1}^{[e]-1 } \beta_j \right ) \frac{\exp ( \beta_{[e ] } \delta_e ) - 1}{\beta_{[e ] } } , $$ where $[e]$ denotes the index of the bin containing $e$ , $\delta$ is the common bin width , and $\delta_e = e - ( [ e ] - 1 ) \delta$ is the distance of $e$ from the left edge of its bin . the gradient is $\partial g / \partial \beta_i = 0 $ if $i > [ e ]$ . if $i = [ e ]$ , then $$ \frac{\partial g}{\partial \beta_{[e ] } } = \exp\left ( \delta \sum_{j=1}^{[e]-1 } \beta_j \right ) \frac{\exp ( \beta_{[e ] } \delta_e ) \, \delta_e \beta_{[e ] } - \exp ( \beta_{[e ] } \delta_e ) + 1}{\beta_{[e]}^2 } . $$ if $i < [ e ]$ , then $$ \frac{\partial g}{\partial \beta_i } = \exp\left ( \delta \sum_{j=1}^{i-1 } \beta_j \right ) \frac{\exp ( \delta \beta_i ) \, \delta \beta_i - \exp ( \delta \beta_i ) + 1}{\beta_i^2 } + \delta \sum_{m = i+1}^{[e]-1 } \exp\left ( \delta \sum_{j=1}^{m-1 } \beta_j \right ) \frac{\exp ( \delta \beta_m ) - 1}{\beta_m } + \delta \exp\left ( \delta \sum_{j=1}^{[e]-1 } \beta_j \right ) \frac{\exp ( \beta_{[e ] } \delta_e ) - 1}{\beta_{[e ] } } . $$
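the closed - form expression above can be cross - checked against direct numerical integration ; the sketch below implements both for an arbitrary set of beta coefficients ( the values are placeholders ) and uses expm1 to handle bins where beta is close to zero .

```python
import numpy as np

def g_closed_form(e, betas, delta):
    """Closed-form g(e) = int_0^e exp( int_0^t phi(s) ds ) dt for piecewise-constant
    phi (value betas[i-1] on the i-th bin of width delta, first bin starting at 0)."""
    frac = lambda b, w: np.expm1(b * w) / b if abs(b) > 1e-12 else w   # (e^{bw}-1)/b
    m = max(min(int(np.ceil(e / delta)), len(betas)), 1)               # bin containing e
    cum, total = 0.0, 0.0                                              # cum = delta*sum_{j<i} beta_j
    for i in range(1, m):
        total += np.exp(cum) * frac(betas[i - 1], delta)
        cum += delta * betas[i - 1]
    return total + np.exp(cum) * frac(betas[m - 1], e - (m - 1) * delta)

def g_numeric(e, betas, delta, n=200_001):
    """Brute-force trapezoidal evaluation of the same double integral."""
    t = np.linspace(0.0, e, n)
    idx = np.minimum((t / delta).astype(int), len(betas) - 1)
    phi = np.asarray(betas, dtype=float)[idx]
    inner = np.concatenate(([0.0], np.cumsum(0.5 * (phi[1:] + phi[:-1]) * np.diff(t))))
    integrand = np.exp(inner)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

betas, delta = [-0.3, 0.2, -0.1, 0.05], 2.5
for e in (1.0, 3.7, 9.9):
    print(e, round(g_closed_form(e, betas, delta), 6), round(g_numeric(e, betas, delta), 6))
```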
pairwise models , k - pairwise models , and rbms were all trained using persistent contrastive divergence with , , and with initial parameters drawn from a normal distribution with mean and standard deviation . we iterated the algorithm three times , first with , then with , and finally with ( the last step had for k - pairwise models , and also for rbms when ) . we initialized the coupling matrix as the one learned using a pairwise model . the and metaparameters of were set to the minimum and maximum energy observed in the training set . we set , and we initialized the parameters and of the nonlinearity by maximizing the approximate likelihood with fixed . the metaparameters and for the approximate likelihood were set to the minimum , and twice the maximum of over the training set . was set between and . the density of states was estimated with a variation of the algorithm described in with accuracy . starting with these initial parameters , we ran two iterations of persistent contrastive divergence with , simultaneously learning the coupling matrix and the nonlinearity . the first iteration had , and the second one . in order for the learning to be stable , we had to choose different learning rates for the coupling matrix ( ) , and for and ( ) . for the last step , we adjusted the metaparameters of the nonlinearity so that and are the minimum and maximum observed energies with the current , and . we maximized the approximate likelihood to infer the nonlinearity with these new metaparameters . then we fixed , and ran a persistent contrastive divergence ( ) learning . finally we maximized the approximate likelihood with fixed to get the final nonlinearity .
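a minimal persistent contrastive divergence skeleton for the plain pairwise model , following the general recipe described above : persistent chains , a few gibbs sweeps per update , and a gradient equal to data statistics minus model statistics . the learning rate , the number of chains and the synthetic data are stand - ins for illustration , not the settings quoted in this appendix .

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gibbs_sweeps(X, J, h, n_sweeps):
    """Single-site Gibbs updates for p(x) ~ exp(0.5 x^T J x + h^T x), x in {0,1}^n,
    run in parallel on a batch of persistent chains X (J symmetric, zero diagonal)."""
    for _ in range(n_sweeps):
        for i in range(X.shape[1]):
            p = sigmoid(X @ J[:, i] + h[i])
            X[:, i] = (rng.random(len(X)) < p).astype(float)
    return X

def train_pcd(data, n_chains=100, n_iter=2000, k=1, lr=0.01):
    """Log-likelihood gradient: <x_i x_j>_data - <x_i x_j>_model (and the analogous
    difference of means for h), with model averages over the persistent chains."""
    n = data.shape[1]
    J, h = np.zeros((n, n)), np.zeros(n)
    X = (rng.random((n_chains, n)) < data.mean()).astype(float)
    d_mean, d_corr = data.mean(axis=0), data.T @ data / len(data)
    for _ in range(n_iter):
        X = gibbs_sweeps(X, J, h, k)
        J += lr * (d_corr - X.T @ X / n_chains)
        np.fill_diagonal(J, 0.0)
        h += lr * (d_mean - X.mean(axis=0))
    return J, h

# toy run on synthetic sparse "spike words"
data = (rng.random((5000, 20)) < 0.1).astype(float)
J, h = train_pcd(data)
samples = gibbs_sweeps((rng.random((200, 20)) < 0.5).astype(float), J, h, 50)
print("mean firing rate  data:", round(data.mean(), 3), " model:", round(samples.mean(), 3))
```

extending this skeleton to the semiparametric case requires sampling from exp ( - f ( e ) ) instead and alternating the coupling updates with the nonlinearity updates , as described above ; the gradient expressions change accordingly .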
|
probabilistic models can be defined by an energy function , where the probability of each state is proportional to the exponential of the state s negative energy . this paper considers a generalization of energy - based models in which the probability of a state is proportional to an arbitrary positive , strictly decreasing , and twice differentiable function of the state s energy . the precise shape of the nonlinear map from energies to unnormalized probabilities has to be learned from data together with the parameters of the energy function . as a case study we show that the above generalization of a fully visible boltzmann machine yields an accurate model of neural activity of retinal ganglion cells . we attribute this success to the model s ability to easily capture distributions whose probabilities span a large dynamic range , a possible consequence of latent variables that globally couple the system . similar features have recently been observed in many datasets , suggesting that our new method has wide applicability .
|