A long-standing open question in neuroscience is how human brain activity can be mapped to different cognitive tasks. As one of the main techniques in task-based functional magnetic resonance imaging (fMRI) analysis, multivariate pattern (MVP) analysis sits at the intersection of neuroscience and computer science: it extracts and decodes brain patterns by applying classification methods. It can predict the patterns of neural activity associated with different cognitive states and define decision surfaces that distinguish different stimuli, which helps decode the brain and understand how it works. Analyzing the patterns evoked by visual objects is one of the most interesting topics in MVP classification, since it can reveal how the brain stores and processes visual stimuli; such analysis may lead to novel treatments for mental disorders or even a new generation of user interfaces. Technically, MVP classification is a challenging problem. Firstly, most fMRI data sets are noisy and sparse, which degrades the performance of MVP methods. The next challenge is defining the regions of interest (ROIs). As mentioned before, fMRI techniques allow us to study what information is represented in different brain regions, so it is important to know how different stimuli affect these regions, especially in complex tasks (performing several simple tasks at the same time, such as watching photos while tapping keys). On the one hand, most previous studies selected the ROIs manually; on the other hand, choosing the wrong ROIs can significantly decrease the performance of MVP methods. Another challenge is the cost of brain studies. Combining different homogeneous fMRI data sets is one way to reduce this cost, but the data must first be normalized to a standard space, and the normalization procedure increases time and space complexity and reduces the robustness of MVP techniques, especially voxel-based methods. The last challenge is visualization.
As a machine learning technique, MVP reports its numerical results at the level of voxels, network connections, and so on, and it is sometimes hard for neuroscientists to relate these results to the underlying cognitive states. The contributions of this paper are fourfold. Firstly, the proposed method estimates and analyzes a snapshot of the brain image for each stimulus, based on the level of oxygen consumption in the brain, instead of analyzing the whole fMRI time series; employing these snapshots dramatically decreases sparsity. Secondly, our method automatically detects the active regions for each stimulus and dynamically defines the ROIs for each data set. Further, it develops a novel model of neural representation for analyzing and visualizing functional activity in the form of anatomical regions. This model provides a compact and informative representation that lets neuroscientists ask what the effect of a stimulus is on each automatically detected region, instead of merely studying the fluctuation of a group of voxels in manually selected ROIs. The next contribution is a new Gaussian smoothing method for removing voxel noise at the level of anatomical regions. Lastly, this paper employs L1-regularized support vector machines (SVM) to build binary classifiers at the ROI level and then combines these classifiers with the bagging algorithm to generate the final MVP model. As the most prevalent techniques for decoding the human brain, MVP methods predict patterns of neural activity. Since spatial resolution and within-area response patterns in fMRI can provide an informative representation of stimulus distinctions, most previous MVP studies for decoding the human brain focused on task-based fMRI data sets. These studies used such data sets to generate different forms of neural representation, usually voxels (volume elements in brain images), nodes on the cortical surface, the average signal for an area, a principal or independent component, or a measure of functional connectivity between a pair of locations. Previous studies demonstrated that MVP classification can also distinguish many other brain states, such as recognizing visual or auditory stimuli. Pioneering studies focused on special regions of the human brain, such as the fusiform face area (FFA) or the parahippocampal place area (PPA). Haxby et al. showed that different visual stimuli, i.e. human faces, animals, etc., evoke different responses in the brain. Hanson et al. developed combinatorial codes in the ventral temporal lobe for object recognition. Norman et al.
argued for using svm and gaussian naive bayes classifiers .anderson and oates studied the chance of applying non - linear artificial neural network ( ann ) on brain responses .there is great potential for employing sparse methods for brain decoding problems .carroll et al .employed the elastic net for prediction and interpretation of distributed neural activity with sparse models .richiardi et al .extracted the characteristic connectivity signatures of different brain states to perform classification .varoquaux et al .proposed a small - sample brain mapping by using sparse recovery on spatially correlated designs with randomization and clustering .their method is applied on small sets of brain patterns for distinguishing different categories based on a one - versus - one strategy .mcmenamin et al .studied subsystems underlie abstract - category ( ac ) recognition and priming of objects ( e.g. , cat , piano ) and specific - exemplar ( se ) recognition and priming of objects ( e.g. , a calico cat , a different calico cat , a grand piano , etc . ) .technically , they applied svm on manually selected rois in the human brain for generating the visual stimuli predictors .mohr et al . compared four different classification methods , i.e. l1/2 regularized svm , the elastic net , and the graph net , for predicting different responses in the human brain .they show that l1-regularization can improve classification performance while simultaneously providing highly specific and interpretable discriminative activation patterns .osher et al .proposed a network ( graph ) based approach by using anatomical regions of the human brain for representing and classifying the different visual stimuli responses ( faces , objects , bodies , scenes ) .the fmri techniques visualize the neural activities by measuring the level of oxygenation or deoxygenation in the human brain , which is called blood oxygen level dependent ( bold ) signals .technically , these signals can be represented as time series for each subject .most of the mvp techniques directly analyze these noisy and sparse time series for understanding which patterns are demonstrated for different stimuli .the main idea of our proposed method is so simple . instead of analyzing whole of the time series , the proposed method estimates and analyzes a snapshot of brain image for each stimulus when the level of using oxygen is maximized . as a result, this method can automatically decrease the sparsity of brain image .the proposed method is applied in three stages : firstly , snapshots of brain image are selected by finding local maximums in the smoothed version of the design matrix .then , features are generated in three steps , including normalizing to standard space , segmenting the snapshots in the form of automatically detected anatomical regions , and removing noise by gaussian smoothing in the level of rois .finally , decision surfaces are generated by utilizing the bagging method on binary classifiers , which are created by applying l1-regularized svm on each of neural activities in the level of rois .+ ( a ) design matrix in the block - design experiment + ( b ) design matrix in the event - related experiment -0.3 in fmri time series , time points ( onsets ) , + hrf signal , , gaussian parameter : + snapshots , the sets of correlations : + + 1 . generating the design matrix .+ 2 . defining .+ 3 . calculating by using ( [ betaeq ] ) .generating gaussian kernel by ( [ gaussiankernel ] ) .+ 5 . smoothing the design matrix by ( [ smoothingdm ] ) .+ 6 . 
finding locations of the snapshots by ( [ snapshotlocations ] ) .calculating snapshots by using ( [ snapshots ] ) .+ fmri time series collected from a subject can be denoted by , where is the number of time samples , and denotes the number of voxels .same as previous studies , can be formulated by a linear model as follows : where denotes the design matrix , is the noise ( error of estimation ) , denotes the sets of correlations ( estimated regressors ) between voxels .the design matrix can be denoted by , and the sets of correlations can be defined by . here , and are the column of design matrix and the set of correlations for category , respectively . is also the number of all categories in the experiment .in fact , each category ( independent tasks ) contains a set of homogeneous visual stimuli .in addition , the nonzero voxels in represents the location of all active voxels for the category . as an example , imagine during a unique session for recognizing visual stimuli , if a subject watches 4 photos of cats and 3 photos of houses , then the design matrix contains two columns ; and there are also two sets of correlations between voxels , i.e. one for watching cats and another for watching houses. indeed , the final goal of this section is extracting 7 snapshots of the brain image for the 7 stimuli in this example .the design matrix can be classically calculated by convolution of time samples ( or onsets : ) and as the hemodynamic response function ( hrf ) signal , .in addition , there is a wide range of solutions for estimating values .this paper uses the classical method generalized least squares ( gls ) for estimating the values where is the covariance matrix of the noise ( ) : each local maximum in represents a location where the level of using oxygen is so high .in other words , the stimulus happens in that location .since mostly contains small spikes ( especially for event - related experiments ) , it can not be directly used for finding these local maximums .therefore , this paper employs a gaussian kernel for smoothing the signal .now , the interval is defined as follows for generating the kernel : where denotes a positive real number ; is the ceiling function ; and denotes the set of integer numbers .gaussian kernel is also defined by normalizing as follows : where is the sum of all elements in the interval .this paper defines the smoothed version of the design matrix by applying the convolution of the gaussian kernel and each column of the design matrix ( ) as follows : where .since the level of smoothness in is related to the positive value in , is heuristically defined to generate the optimum level of smoothness in the design matrix .the general assumption here is the can create design matrix , which is sensitive to small spikes .further , can rapidly increase the level of smoothness , and remove some weak local maximums , especially in the event - related fmri data sets .figure [ smootheddm ] illustrates two examples of the smoothed columns in the design matrix .the local maximum points in the can be calculated as follows : where denotes the set of time points for all local maximums in .the sets of maximum points for all categories can be denoted as follows : as mentioned before , the fmri time series can be also denoted by , where is all voxels of fmri data set in the time point .now , the set of snapshots can be formulated as follows : where is the number of snapshots in the brain image , and denotes the snapshot for stimulus .these selected snapshots are employed in 
next section for extracting features of the neural activities .algorithm [ alg : snapshotselection ] illustrates the whole of procedure for generating the snapshots from the time series . in this paper ,the feature extraction is applied in three steps , i.e. normalizing snapshots to standard space , segmenting the snapshots in the form of automatically detected regions , and removing noise by gaussian smoothing in the level of rois . as mentioned before, normalizing brain image to the standard space can increase the time and space complexities and decrease the robustness of mvp techniques , especially in voxel - based methods . on the one hand ,most of the previous studies preferred to use original data sets instead of the standard version because of the mentioned problem . on the other hand, this mapping can provide a normalized view for combing homogeneous data sets . as a result, it can significantly reduce the cost of brain studies and rapidly increase the chance of understanding how the brain works . employing brain snapshots rather than analyzing whole of datacan solve the normalization problem .normalization can be formulated as a mapping problem .indeed , brain snapshots are mapped from space to the standard space by using a transformation matrix for each snapshot .there is also another trick for improving the performance of this procedure .since the set denotes the locations of all active voxels for the category , it represents the brain mask for that category and can be used for generating the transform matrix related to all snapshots belong to that category .for instance , in the example of the previous section , instead of calculating 7 transform matrices for 7 stimuli , we calculate 2 matrices , including one for the category of cats and the second one for the category of houses . this mapping can be denoted as follows : where denotes the transform matrix , is the set of correlations in the standard space for category .this paper utilizes the flirt algorithm for calculating the transform matrix , which minimizes the following objective function : where the function denotes the normalized mutual information between two images , and is the reference image in the standard space .this image must contain the structures of the human brain , i.e. white matter , gray matter , and csf .these structures can improve the performance of mapping between the brain mask in the selected snapshot and the general form of a standard brain .the performance of ( [ eq : imagereg ] ) will be analyzed in the supplementary materials .in addition , the sets of correlations for all of categories in the standard space is denoted by , and the sets of transform matrices is defined by .now , the function is denoted as follows to find suitable transform matrix for each snapshot : where and are the transform matrix and the set of correlations related to the snapshot , respectively .based on ( [ selectfunc ] ) , each normalized snapshot in the standard space is defined as follows : where is the snapshot in the standard space .further , all snapshots in the standard space can be defined by . as mentioned before, nonzero values in the correlation sets depict the location of the active voxels . based on ( [ selectfunc ] ) , this paper uses these correlation sets as weights for each snapshot as follows : where denotes hadamard product , and is the modified snapshot , where the values of deactivated voxels ( and also deactivated anatomical regions ) are zero in this snapshot . 
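In code, the weighting step just described reduces to an elementwise (Hadamard) product between each normalized snapshot and the correlation map of its category. The minimal NumPy sketch below is illustrative only (the paper's own implementation is in MATLAB); the array names, shapes, and the cut-off used to fake inactive voxels are assumptions, and the FLIRT registration to standard space is taken as already applied.

```python
import numpy as np

# Hypothetical shapes:
# snapshot_std : (V,) a snapshot already registered to the standard space
# beta_std     : (V,) the estimated regressors of the snapshot's category,
#                with zeros at voxels that are not active for that category
def mask_snapshot(snapshot_std, beta_std):
    """Hadamard (elementwise) product: deactivated voxels become zero."""
    return snapshot_std * beta_std

# toy example
rng = np.random.default_rng(0)
V = 12
snapshot = rng.normal(size=V)
beta = rng.normal(size=V)
beta[np.abs(beta) < 0.6] = 0.0   # fake "inactive" voxels for this demo only
masked = mask_snapshot(snapshot, beta)
print("active voxels kept:", int(np.count_nonzero(masked)), "of", V)
```

In the actual method the zeros come from the estimated regressors themselves (voxels outside the category's active set), not from an arbitrary threshold as in this toy example.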
as the final product of normalization procedure ,the set of snapshots can be denoted by .further , each snapshot can be defined in the voxel level as follows , where is the voxel of snapshot : \ ] ] snapshots , correlations , image , atlas : + smoothed snapshots : + + 1 . for each , calculate transform matrix by ( [ eq : imagereg ] ) .mapping to standard space by and ( [ eq : snapshotstandardspace ] ) .+ 3 . detecting active voxels for each snapshot by ( [ nonzerosnapshots ] ) .+ 4 . segmenting each snapshot by ( [ eq : snapshotsegmentation ] ) .+ 5 . finding active regions for each snapshot by ( [ automaticallydetectedactiveregions ] ) .generating gaussian kernel by ( [ gaussiankernelforregions ] ) .smoothing snapshots by ( [ eq : smoothedsnapshots ] ) .the next step is segmenting the snapshots in the form of automatically detected regions .now , consider anatomical atlas , where , , and is the number of all regions in the anatomical atlas .here , denotes the set of voxel locations in the snapshots for the anatomical region . a segmented snapshot based on the region can be denoted as follows : where is the subset of voxels in the snapshot , which these voxels are belonged to the the anatomical region .in addition , the sets of all anatomical regions in the snapshot can be defined by ] is the generated weights for predicting mvp model based on the active region .the classifier for region is also denoted by , where all of these classifiers can be defined by .the final step in the proposed method is combining all classifiers ( ) by bagging algorithm for generating the mvp final predictive model .indeed , bagging method uses the average of predicted results in ( [ eq : l1svm ] ) for generating the final result ( ) .algorithm [ alg : theproposedmethod ] shows the whole of procedure in the proposed method by using leave - one - out ( loo ) cross - validation in the subject level .+ -0.15 in [ tbl : binaryaccuracy ] [ cols="<,^,^,^,^,^,^",options="header " , ] -0.25 in fmri time series , onsets , hrf signal , gaussian parameter ( default ) : + mvp performance ( ) + + 1 .* foreach * subject + 2 .create train set .extract snapshots of by using algorithm [ alg : snapshotselection ] .generate features of by using algorithm [ alg : featureextraction ] .train binary classifiers by using and .generate final predictor ( ) by using bagging .consider as test set .extract snapshots for by using algorithm [ alg : snapshotselection ] .generate features for by using algorithm [ alg : featureextraction ] .apply test set on the final predictor ( ) .calculate performance of ( ) .end foreach * + 13 .accuracy : : .auc : .+ -0.25 in + ( a ) ( b ) + + ( c ) ( d ) + ( e ) ( f ) -0.3 inthis paper utilizes three data sets , shared by openfmri.org , for running empirical studies .as the first data set , ` visual object recognition ' ( ds105 ) includes subjects .it also contains categories of visual stimuli , i.e. gray - scale images of faces , houses , cats , bottles , scissors , shoes , chairs , and scrambles ( nonsense patterns ) .this data set is analyzed in high - level visual stimuli as the binary predictor , by considering all categories except nonsense photos ( scramble ) as objects .please see for more information . as the second dataset , ` word and object processing ' ( ds107 ) includes subjects .it contains categories of visual stimuli , i.e. words , objects , scrambles , consonants. please see for more information . 
as the last data set , ` multi - subject , multi - modal human neuroimaging dataset ' ( ds117 ) includes meg and fmri images for subjects .this paper just uses the fmri images of this data set .it also contains categories of visual stimuli , i.e. human faces , and scrambles. please see for more information .these data sets are separately preprocessed by spm 12 ( 6685 ) ( www.fil.ion.ucl.ac.uk/spm/ ) , i.e. slice timing , realignment , normalization , smoothing .this paper employs the montreal neurological institute ( mni ) 152 t1 1 mm as the reference image ( ) in for mapping the extracted snapshots to the standard space ( ) .the size of this image in 3d scale is .moreover , the _ talairach _ atlas ( including regions ) in the standard space is used in for extracting features .further , all of algorithms are implemented in the matlab r2016b ( 9.1 ) on a pc with certain specifications2.4 ghz ) , ram = 64 gb , os = elementary os 0.4 loki ] by authors in order to generate experimental results .figure [ fig : correlation ] a , c , and e respectively demonstrate correlation matrix at the voxel level for the data sets ds105 , ds107 , and ds117 .further , figure [ fig : correlation ] b , d , and f respectively illustrate the correlation matrix in the feature level for the data sets ds105 , ds107 , and ds117 .since neural activities are sparse , high - dimensional and noisy in voxel level , it is so hard to discriminate between different categories in figure [ fig : correlation ] a , c , and e. by contrast , figure [ fig : correlation ] b , d , and f provide distinctive and informative representation , when the proposed method used the extracted features .the performance of our proposed method is compared with state - of - the - art algorithms , which were proposed for decoding the visual stimuli in the human brain . as a pioneer algorithm , our methodis compared by svm method , which is used in for decoding the visual stimuli .the performance of graph net and elastic net are reported as the most popular methods for fmri analysis . moreover , the performance of l1-reg .svm is compared by the proposed method .the l1-reg .svm is recently employed by as the most effective approach for decoding visual stimuli . since this paper also applies l1-reg .svm for generating the predictive model in the level of rois , it can be considered as a baseline for comparing our feature space with the previous approaches .lastly , osher et al . 
proposed a graph - based approach for creating predictors .indeed , they employed the anatomical structure of the human brain for constructing graph networks .this paper compares the performance of the mentioned methods as well as the proposed method by using loo cross - validation at the subject level .further , the gaussian parameter for smoothing the design matrix is considered .the effect of different values of this parameter on the performance of the proposed method will be discussed in the supplementary materials .table [ tbl : binaryaccuracy ] and [ tbl : binaryauc ] respectively demonstrate the classification accuracy and area under the roc curve ( auc ) in percentage ( % ) for the binary predictors .these tables report the performance of binary predictors based on the category of the visual stimuli .all visual stimuli in the data set ds105 except nonsense photos ( scramble ) are considered as the object category for generating these experimental results .in addition , different categories of visual stimuli ( including words , consonants , objects , and scrambles ) in the ds107 are compared by using one - versus - all strategy .moreover , face recognition based on neural activities is trained by using ds117 data set .finally , all data sets are combined for generating predictive models for different categories of visual stimuli , i.e. faces , objects , and scrambles .as table [ tbl : binaryaccuracy ] and [ tbl : binaryauc ] demonstrate , the proposed algorithm has generated better performance in comparison with other methods because it provided a better representation of neural activities by exploiting the snapshots of the automatically detected active regions in the human brain .the last three rows in table [ tbl : binaryaccuracy ] and [ tbl : binaryauc ] illustrate the accuracy of the proposed method by combining all data sets . as depicted in these rows , the performances of other methodsare significantly decreased . as mentioned before , it is the normalization problem .in addition , our framework employs the extracted features from the automatically detected snapshots instead of using all or a group of voxels , which can decrease noise and sparsity and remove high - dimensionality .therefore , the proposed method can significantly decrease the time and space complexities and increase rapidly the performance and robustness of the predictive models .as a conjunction between neuroscience and computer science , multivariate pattern ( mvp ) is mostly used for analyzing task - based fmri data set . there is a wide range of challenges in the mvp techniques , i.e. decreasing noise and sparsity , defining effective regions of interest ( rois ) , visualizing results , and the cost of brain studies . 
in overcoming these challenges, this paper proposes multi - region neural representation as a novel feature space for decoding visual stimuli in the human brain .the proposed method is applied in three stages : firstly , snapshots of brain image ( each snapshot represents neural activities for a unique stimulus ) are selected by finding local maximums in the smoothed version of the design matrix .then , features are generated in three steps , including normalizing to standard space , segmenting the snapshots in the form of automatically detected anatomical regions , and removing noise by gaussian smoothing in the level of rois .experimental studies on 4 visual categories ( words , objects , consonants and nonsense photos ) clearly show the superiority of our proposed method in comparison with state - of - the - art methods .in addition , the time complexity of the proposed method is naturally lower than the previous methods because it employs a snapshot of brain image for each stimulus rather than using the whole of time series . in future, we plan to apply the proposed method to different brain tasks such as risk , emotion and etc .we thank the anonymous reviewers for comments . this work was supported in part by the national natural science foundation of china ( 61422204 and 61473149 ) ,jiangsu natural science foundation ( bk20130034 ) and nuaa fundamental research funds ( ne2013105 ) .99 m. l. anderson and t. oates , _ a critique of multi - voxel pattern analysis _ , proceedings of the 32nd annual meeting of the cognitive science society , 2010 , pp . 151116 .a. lorbert and p. j. ramadge , _ kernel hyperalignment _ , advances in neural information processing systems , 2012 , pp .. h. mohr , u. wolfensteller , s. frimmel and h. ruge , _ sparse regularization techniques provide novel insights into outcome integration processes _ , neuroimage , elsevier , 104 ( 2015 ) , pp .. b. w. mcmenamin , r. g. deason , v. r. steele , w. koutstaal and c. j. marsolek , _ separability of abstract - category and specific - exemplar visual object subsystems : evidence from fmri pattern analysis _ , brain and cognition , elsevier , 93 ( 2015 ) , pp .5463 . j. v. haxby , a. c. connolly , j. s. guntupalli , _ decoding neural representational spaces using multivariate pattern analysis _ , annual review of neuroscience , annual reviews , 37 ( 2014 ) , pp .s. j. hanson , t. matsuka and j. v. haxby , _ combinatorial codes in ventral temporal lobe for object recognition : haxby ( 2001 ) revisited : is there a ` face ' area ? _ , neuroimage , elsevier , 23 ( 2004 ) , pp. 156166 .p. h. c. chen , j. chen , y. yeshurun , u. hasson , j. v. haxby and p. j. ramadge , _ a reduced - dimension fmri shared response model _ , advances in neural information processing systems , 2015 , pp .s. bradley and o. l. mangasarian , _ feature selection via concave minimization and support vector machines _ , international conference on machine learning ( icml ) , 98 ( 1998 ) , pp .l. breiman , _ bagging predictors _ , machine learning , springer , 24 ( 1996 ) , pp .123140 . k. p. murphy , _ machine learning : a probabilistic perspective _ , mit press , 2012 .j. v. haxby , m. i. gobbini , m. l. furey , a. ishai , j. l. schouten and p. pietrini , _ distributed and overlapping representations of faces and objects in ventral temporal cortex _ , science , american association for the advancement of science , 293 ( 2001 ) , pp .d. e. osher , r. r. saxe , k. koldewyn , j. d. e. gabrieli , n. kanwisher , and z. m. 
saygin , zeynep m , _ structural connectivity fingerprints predict cortical selectivity for multiple visual categories across cortex _ , cerebral cortex , oxford university press , 2003 , pp .e. formisano , f. de martino , m. bonte and r. goebel,_`who ' is saying ` what ' ?brain - based decoding of human voice and speech _ , science , american association for the advancement of science , ( 322 ) 2008 , pp .. j. v. haxby , _ multivariate pattern analysis of fmri : the early beginnings _ ,neuroimage , elsevier , 62 ( 2012 ) , pp .k. a. norman , s. m. polyn , g. j. detre and j. v. haxby , _ beyond mind - reading : multi - voxel pattern analysis of fmri data _ , trends in cognitive sciences , elsevier , 10 ( 2006 ) , pp .. o. yamashita , m. a. sato , t. yoshioka , f. tong and y. kamitani , _ sparse estimation automatically selects voxels relevant for the decoding of fmri activity patterns _ , neuroimage , elsevier , 42 ( 2008 ) , pp .s. ryali , k. supekar , d. a. abrams and v. menon , _ sparse logistic regression for whole - brain classification of fmri data _ , neuroimage , elsevier , 51 ( 2010 ) , pp .. h. zou and t. hastie , _ regularization and variable selection via the elastic net _ , journal of the royal statistical society : series b ( statistical methodology ) , wiley online library , 67 ( 2005 ) , pp . 301320 .m. k. carroll , g. a. cecchi , i. rish , r. garg and a. r. ravishankar , _ prediction and interpretation of distributed neural activity with sparse models _ , neuroimage , elsevier , 44 ( 2009 ) , pp .j. richiardi , h. eryilmaz , s. schwartz , p. vuilleumier and d. van de ville , _ decoding brain states from fmri connectivity graphs _ , neuroimage , elsevier , 56 ( 2011 ) , pp .g. varoquaux , a. gramfort and b. thirion , _ small - sample brain mapping : sparse recovery on spatially correlated designs with randomization and clustering _ , international conference on machine learning ( icml ) , 2012 . c. cortes and v. vapnik , _ support - vector networks _ , machine learning , springer , 20 ( 1995 ) , pp. 273297 .l. grosenick , b. klingenberg , k. katovich , b. knutson and j. e. taylor , _ interpretable whole - brain prediction analysis with graphnet _ , neuroimage , elsevier , 72 ( 2013 ) , pp .. k. j. fristo , and j. o. h. n. ashburner and j. heather and others , _ statistical parametric mapping _ ,neuroscience databases : a practical guide , 2003 , pp .m. jenkinson , p. bannister , m. brady and s. smith , _ improved optimization for the robust and accurate linear registration and motion correction of brain images _ , neuroimage , elsevier , 17 ( 2002 ) , pp .a. c. connolly , j. s. guntupalli , j. gors , m. hanke , y. o. halchenko , y. c. wu , h. abdi and j. v. haxby , _ the representation of biological classes in the human brain _ , the journal of neuroscience , 32 ( 2012 ) , pp. 26082618 .k. j. duncan , c. pattamadilok , i. knierim and t. j. devlin , _ consistency and variability in functional localisers _ , neuroimage ,elsevier , 46 ( 2009 ) , pp .d. g. wakeman and r. n. henson , _ a multi - subject , multi - modal human neuroimaging dataset _, scientific data , nature publishing group , 2 ( 2015 ) .j. talairach and p. tournoux , _ co - planar stereotaxic atlas of the human brain .3-dimensional proportional system : an approach to cerebral imaging _ , thieme , 1988 .
Multivariate pattern (MVP) classification holds enormous potential for decoding visual stimuli in the human brain from task-based fMRI data sets. MVP techniques face a wide range of challenges, i.e. reducing noise and sparsity, defining effective regions of interest (ROIs), visualizing results, and the cost of brain studies. To overcome these challenges, this paper proposes a novel model of neural representation that automatically detects the active regions for each visual stimulus and then uses these anatomical regions to visualize and analyze functional activity. The model therefore lets neuroscientists ask what the effect of a stimulus is on each detected region, instead of merely studying the fluctuation of voxels in manually selected ROIs. Moreover, our method analyzes snapshots of the brain image to decrease sparsity rather than using the whole fMRI time series. Further, a new Gaussian smoothing method is proposed for removing voxel noise at the ROI level. The proposed method also enables us to combine different fMRI data sets, reducing the cost of brain studies. Experimental studies on four visual categories (words, consonants, objects, and nonsense photos) confirm that the proposed method achieves superior performance to state-of-the-art methods.
Many classical applications, such as radar and error-correcting codes, make use of over-complete spanning systems. Oftentimes, we may view an over-complete spanning system as a _frame_. Take $F=\{f_i\}_{i\in I}$ to be a collection of vectors in some separable Hilbert space $\mathcal{H}$. Then $F$ is a frame if there exist _frame bounds_ $A$ and $B$ with $0<A\leq B<\infty$ such that $A\|x\|^2\leq\sum_{i\in I}|\langle x,f_i\rangle|^2\leq B\|x\|^2$ for every $x\in\mathcal{H}$. When $A=B$, $F$ is called a _tight frame_. For finite-dimensional unit norm frames $F=\{f_i\}_{i=1}^N\subseteq\mathbb{C}^M$, where $N\geq M$, the _worst-case coherence_ is a useful parameter: $\mu_F:=\max_{i\neq j}|\langle f_i,f_j\rangle|$. Note that orthonormal bases are tight frames with $A=B=1$ and have zero worst-case coherence; in both ways, frames form a natural generalization of orthonormal bases. In this paper, we only consider finite-dimensional frames. Those not familiar with frame theory can simply view a finite-dimensional frame as an $M\times N$ matrix of rank $M$ whose columns are the frame elements. With this view, the tightness condition is equivalent to having the spectral norm be as small as possible; for an $M\times N$ unit norm frame, this equivalently means $\|F\|_2^2=\frac{N}{M}$. Throughout the literature, applications require finite-dimensional frames that are nearly tight and have small worst-case coherence. Among these, a foremost application is sparse signal processing, where frames of small spectral norm and/or small worst-case coherence are commonly used to analyze sparse signals. In general, sparse signal processing deals with measurements of the form $y=\Phi x+n$, where $\Phi$ is $M\times N$ with $N>M$, $x$ has at most $K$ nonzero entries, and $n$ is some sort of noise. When given measurements of $x$, one might be asked to reconstruct the original sparse vector, to find the locations of its nonzero entries, or to simply determine whether $x$ is nonzero; each of these is a sparse signal processing problem. In some applications, the signal is sparse in the identity basis, in which case $\Phi$ represents the measurement process. In other applications, the signal is sparse in an orthonormal basis or an overcomplete dictionary, and $\Phi$ is then the composition of the frame resulting from the measurement process and the sparsifying dictionary. We do not make a distinction between the two formulations in this paper, but our results are most readily interpretable in a physical setting for the former case. Recently, another notion of frame coherence called _average coherence_ was introduced: $\nu_F:=\frac{1}{N-1}\max_{i\in\{1,\ldots,N\}}\big|\sum_{j\neq i}\langle f_i,f_j\rangle\big|$. Note that, in addition to having zero worst-case coherence, orthonormal bases also have zero average coherence. Intuitively, worst-case coherence is a measure of dissimilarity between frame elements, whereas average coherence measures how well the frame elements are distributed on the unit hypersphere.
in sparse signal processing ,there are a number of performance guarantees that depend only on worst - case coherence .these guarantees at best allow for sparsity levels on the order of .compressed sensing has brought guarantees that depend on the restricted isometry property , which is much more difficult to check , but the guarantees allow for sparsity levels on the order of .recently , used worst - case and average coherence to produce _ probabilistic _ guarantees that also allow for sparsity levels on the order of ; these guarantees require that worst - case and average coherence together satisfy the following property : we say an unit norm frame satisfies the _ strong coherence property _ if where and are given by and , respectively .the reader should know that the constant is not particularly essential to the above definition ; it is used in to simplify some analysis and make certain performance guarantees explicit , but the constant is by no means optimal .this in mind , the requirement ( scp-1 ) can be interpreted more generally as . in the next section, we will use the strong coherence property to continue the work of .where provided guarantees for noiseless reconstruction , we will produce near - optimal guarantees for signal detection and reconstruction from _ noisy _ measurements of sparse signals .these guarantees are related to those in , and we will also elaborate on this relationship . the results given in and section 2 , as well as the applications discussed in demonstrate a pressing need for nearly tight frames with small worst - case and average coherence , especially in the area of sparse signal processing .this paper offers three additional contributions in this regard . in section 3, we provide a sizable catalog of frames that exhibit small spectral norm , worst - case coherence , and average coherence . with all three frame parameters provably small ,these frames are guaranteed to perform well in relevant applications .next , performance in many applications is dictated by worst - case coherence .it is therefore particularly important to understand which worst - case coherence values are achievable . to this end, the welch bound is commonly used in the literature . however , the welch bound is only tight when the number of frame elements is less than the square of the spatial dimension .another lower bound , given in , beats the welch bound when there are more frame elements , but it is known to be loose for real frames .given this context , section 4 gives a new lower bound on the worst - case coherence of real frames .our bound beats both the welch bound and the bound in when the number of frame elements far exceeds the spatial dimension .finally , since average coherence is so new , there is currently no intuition as to when ( scp-2 ) is satisfied . in section 5, we use ideas akin to the switching equivalence of graphs to transform a frame that satisfies ( scp-1 ) into another frame with the same spectral norm and worst - case coherence that additionally satisfies ( scp-2 ) . throughout the paper , we make use of certain notations that we address here .recall , with big - o notation , that if there exists positive and such that for all , . also , if , and if and . additionally , we use to denote the matrix whose columns are taken from the matrix according to the index set .similarly , we use to denote the column vector whose entries are taken from the column vector according to the index set . 
the column vector of the largest entries in column vector denoted by .we also use to denote the norm of a vector , while is the spectral norm of a matrix .lastly , we use a star ( ) to denote the matrix adjoint , a dagger ( ) to denote the matrix pseudoinverse , and to denote the identity matrix .frames with small spectral norm , worst - case coherence , and/or average coherence have found use in recent years with applications involving sparse signals .donoho et al .used the worst - case coherence in to provide uniform bounds on the signal and support recovery performance of combinatorial and convex optimization methods and greedy algorithms .later , tropp and cands and plan used both the spectral norm and worst - case coherence to provide tighter bounds on the signal and support recovery performance of convex optimization methods for most support sets under the additional assumption that the sparse signals have independent nonzero entries with zero median .recently , bajwa et al . made use of the spectral norm and both coherence parameters to report tighter bounds on the noisy model selection and noiseless signal recovery performance of an incredibly fast greedy algorithm called _ one - step thresholding ( ost ) _ for most support sets and _ arbitrary _ nonzero entries . in this section , we discuss further implications of the spectral norm and worst - case and average coherence of frames in applications involving sparse signals . a common task in signal processing applications is to test whether a collection of measurements corresponds to mere noise . for applications involving sparse signals, one can test measurements against the null hypothsis and alternative hypothesis , where the entries of the noise vector are independent , identical zero - mean complex - gaussian random variables and the signal is -sparse .the performance of such signal detection problems is directly proportional to the energy in .in particular , existing literature on the detection of sparse signals leverages the fact that when satisfies the restricted isometry property ( rip ) of order .in contrast , we now show that the strong coherence property also guarantees for most -sparse vectors .we start with a definition : [ def : wrip ] we say an frame satisfies the _ -weak restricted isometry property ( weak rip ) _ if for every -sparse vector , a random permutation of s entries satisfies with probability exceeding . at first glance , it may seem odd that we introduce a random permutation when we might as well define weak rip in terms of a -sparse vector whose support is drawn randomly from all possible choices .in fact , both versions would be equivalent in distribution , but we stress that in the present definition , the values of the nonzero entries of are _ not _ random ; rather , the only randomness we have is in the locations of the nonzero entries .we wish to distinguish our results from those in , which explicitly require randomness in the values of the nonzero entries .we also note the distinction between rip and weak rip weak rip requires that preserves the energy of _ most _ sparse vectors .moreover , the manner in which we quantify `` most '' is important . 
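To make the role of the random permutation concrete before elaborating further, the following sketch is an illustration only (not part of the paper's analysis): the frame, the sparsity level, the number of trials, and the distortion level used in the final check are all arbitrary assumptions. It fixes the nonzero values of one sparse vector once and then measures how often a random permutation of its entries has its energy approximately preserved.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 64, 256, 8

# a unit norm frame (normalized Gaussian columns), purely for illustration
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)

# fix ONE sparse vector x (its values are NOT re-randomized), then randomly
# permute its entries many times and record how well Phi preserves energy
x = np.zeros(N)
x[:K] = rng.standard_normal(K)          # arbitrary fixed nonzero values

distortions = []
for _ in range(2000):
    x_perm = rng.permutation(x)         # random permutation of the entries
    ratio = np.linalg.norm(Phi @ x_perm) ** 2 / np.linalg.norm(x_perm) ** 2
    distortions.append(abs(ratio - 1.0))

delta = 0.5                              # illustrative distortion level
print("fraction of permutations with distortion <= delta:",
      np.mean(np.array(distortions) <= delta))
```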
for each sparse vector , preserves the energy of most permutations of that vector , but for different sparse vectors , might not preserve the energy of permutations with the same support .that is , unlike rip , weak rip is _ not _ a statement about the singular values of submatrices of .certainly , matrices for which most submatrices are well - conditioned , such as those discussed in , will satisfy weak rip , but weak rip does not require this .that said , the following theorem shows , in part , the significance of the strong coherence property .[ thm.wrip ] any unit norm frame that satisfies the strong coherence property also satisfies the -weak restricted isometry property provided and .let be as in definition [ def : wrip ] .note that is equivalent to .defining , then the cauchy - schwarz inequality gives where the last inequality uses the fact that in .we now consider ( * ? ? ?* lemma 3 ) , which states that for any and , with probability exceeding provided .we claim that together with ( * ? ? ?* lemma 3 ) guarantee with probability exceeding . in order to establish this claim , we fix and .it is then easy to see that ( scp-1 ) gives , and also that ( scp-2 ) and give .therefore , since the assumption that together with implies , we obtain .the result now follows from the observation that implies .this theorem shows that having small worst - case and average coherence is enough to guarantee weak rip .this contrasts with related results by tropp that require to be nearly tight .in fact , the proof of theorem [ thm.wrip ] does not even use the full power of the strong coherence property ; instead of ( scp-1 ) , it suffices to have , part of what calls the coherence property .also , if has worst - case coherence and average coherence , then even if has large spectral norm , theorem [ thm.wrip ] states that preserves the energy of most -sparse vectors with , i.e. , the sparsity regime which is linear in the number of measurements .another common task in signal processing applications is to reconstruct a -sparse signal from a small collection of linear measurements .recently , tropp used both the worst - case coherence and spectral norm of frames to find bounds on the reconstruction performance of _ basis pursuit ( bp ) _ for most support sets under the assumption that the nonzero entries of are independent with zero median .in contrast , used the spectral norm and worst - case and average coherence of frames to find bounds on the reconstruction performance of ost for most support sets and _ arbitrary _ nonzero entries .however , both and limit themselves to recovering in the absence of noise , corresponding to , a rather ideal scenario .our goal in this section is to provide guarantees for the reconstruction of sparse signals from noisy measurements , where the entries of the noise vector are independent , identical complex - gaussian random variables with mean zero and variance . in particular , and in contrast with , our guarantees will hold for arbitrary unit norm frames without requiring the signal s sparsity level to satisfy . the reconstruction algorithm that we analyze here is the ost algorithm of , which is described in algorithm [ alg : ost_recon ] .the following theorem extends the analysis of and shows that the ost algorithm leads to near - optimal reconstruction error for certain important classes of sparse signals . 
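For readers who prefer pseudocode in executable form, the sketch below captures the two steps of OST described above: a matched-filter/thresholding step that estimates the support, followed by least squares on that support. It is only a schematic of Algorithm [alg:ost_recon]; the threshold is left as a free parameter (and set crudely in the toy usage), whereas the paper's analysis ties it to the noise level and the coherence parameters. All sizes and names in the usage example are assumptions.

```python
import numpy as np

def ost_reconstruct(Phi, y, lam):
    """One-step thresholding (schematic): estimate the support from the
    matched-filter output Phi* y, then solve least squares on it."""
    f = Phi.conj().T @ y
    support = np.flatnonzero(np.abs(f) > lam)
    x_hat = np.zeros(Phi.shape[1], dtype=f.dtype)
    if support.size:
        sol, _, _, _ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat[support] = sol
    return x_hat, support

# toy usage
rng = np.random.default_rng(0)
M, N, K, sigma = 256, 1024, 5, 0.05
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)            # unit norm frame
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = 1.0      # equal-magnitude nonzeros
y = Phi @ x + sigma * rng.standard_normal(M)
lam = 0.5                                     # crude threshold for this demo
x_hat, support = ost_reconstruct(Phi, y, lam)
print("estimated support size:", support.size,
      "relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```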
before proceeding further ,we first define some notation .we use ] , and suppose the magnitudes of nonzero entries of are some , while the magnitudes of the remaining nonzero entries are not necessarily same , but are smaller than and scale as .then , provided .let be the support of , and define .we wish to show that , since this implies . in order to prove , notice that and so combining this with the fact that gives therefore , provided , we have that . in words , lemma [ lem : ost_opt_cond ] implies that ost is near - optimal for those -sparse signals whose entries above the noise floor have roughly the same magnitude .this subsumes a very important class of signals that appears in applications such as multi - label prediction , in which all the nonzero entries take values . to the best of our knowledge ,theorem [ thm : rsp ] is the first result in the sparse signal processing literature that does not require rip and still provides near - optimal reconstruction guarantees for such signals from noisy measurements , while using either random or deterministic frames , even when .we note that our techniques can be extended to reconstruct noisy signals , that is , we may consider measurements of the form , where is also a noise vector of independent , identical zero - mean complex - gaussian random variables .in particular , if the frame is tight , then our measurements will not color the noise , and so noise in the signal may be viewed as noise in the measurements : ; if the frame is not tight , then the noise will become correlated in the measurements , and performance would be depend nontrivially on the frame s gram matrix . also , the authors have had some success with generalizing theorem [ thm : rsp ] to approximately sparse signals ; the analysis follows similiar lines , but is rather cumbersome , and it appears as though the end result is only strong enough in the case of very nearly sparse signals . as such , we omit this result .in this section , we consider a range of nearly tight frames with small worst - case and average coherence .we investigate various ways of selecting frames at random from different libraries , and we show that for each of these frames , the spectral norm , worst - case coherence , and average coherence are all small with high probability .later , we will consider deterministic constructions that use gabor and chirp systems , spherical designs , equiangular tight frames , and error - correcting codes .for the reader s convenience , all of these constructions are summarized in table [ table.constructions ] .before we go any further , recall the following lower bound on worst - case coherence : [ thm.welch bound ] every unit norm frame has worst - case coherence .we will use the welch bound in the proof of the following lemma , which gives three different sufficient conditions for a frame to satisfy ( scp-2 ) .these conditions will prove quite useful in this section and throughout the paper .[ lem.sufficient conditions ] for any unit norm frame , each of the following conditions implies : 1 . for every , 2 . and , 3 . and . for condition ( i ), we have the welch bound therefore gives .for condition ( ii ) , we have considering the welch bound , it suffices to show .rearranging equivalently gives when , the left - hand side of becomes , which is trivially nonnegative .otherwise , we have in this case , by the quadratic formula and the fact that the left - hand side of is concave up in , we have that is indeed satisfied . 
for condition ( iii ) , we use the triangle and cauchy - schwarz inequalities to get considering the welch bound , it suffices to show .taking and rearranging gives a polynomial : . by convexity and monotonicity of the polynomial in , it can be shown that the largest real root of this polynomial is always smaller than . also , considering it is concave up in , it suffices that , which we have since .construct a matrix with independent , gaussian - distributed entries that have zero mean and unit variance . by normalizing the columns ,we get a matrix called a _normalized gaussian frame_. this is perhaps the most widely studied type of frame in the signal processing and statistics literature . to be clear, the term `` normalized '' is intended to distinguish the results presented here from results reported in earlier works , such as , which only ensure that gaussian frame elements have unit norm in expectation . in other words ,normalized gaussian frame elements are independently and uniformly distributed on the unit hypersphere in .that said , the following theorem characterizes the spectral norm and the worst - case and average coherence of normalized gaussian frames .[ thm.normalized gaussian frames ] build a real frame by drawing entries independently at random from a gaussian distribution of zero mean and unit variance .next , construct a normalized gaussian frame by taking for every .provided , then the following inequalities simultaneously hold with probability exceeding : 1 . , 2 . , 3 . .theorem [ thm.normalized gaussian frames](i ) can be shown to hold with probability exceeding by using a bound on the norm of a gaussian random vector in ( * ? ? ?* lemma 1 ) and a bound on the magnitude of the inner product of two independent gaussian random vectors in ( * ? ? ?* lemma 6 ) .specifically , pick any two distinct indices , and define probability events , , and for and .then it follows from the union bound that one can verify that because of ( * ? ? ?* lemma 1 ) , and we further have because of ( * ? ? ?* lemma 6 ) and the fact that .thus , for any fixed and , with probability exceeding .it therefore follows by taking a union bound over all choices for and that theorem [ thm.normalized gaussian frames](i ) holds with probability exceeding .theorem [ thm.normalized gaussian frames](ii ) can be shown to hold with probability exceeding by appealing to the preceding analysis and hoeffding s inequality for a sum of independent , bounded random variables .specifically , fix any index , and define random variables .next , define the probability event using the analysis for the worst - case coherence of and taking a union bound over the possible s gives . furthermore, taking , then elementary probability analysis gives where denotes the unit hypersphere in , denotes the -dimensional hausdorff measure on , and denotes the probability density function for the random vector .the first thing to note here is that the random variables are bounded and jointly independent when conditioned on and .this assertion mainly follows from bayes rule and the fact that are jointly independent when conditioned on .the second thing to note is that = 0} ] , and so we have from ( * ? ? ?* theorem a.1.12 , theorem a.1.13 ) and that for any , taking , then a union bound gives provided .conditioning on , we have that theorem [ thm.random harmonic frames](i ) holds trivially , while theorem [ thm.random harmonic frames](ii ) follows from lemma [ lem.sufficient conditions ] . 
specifically , we have that guarantees because of the conditioning on , which in turn implies that satisfies either condition ( i ) or ( ii ) of lemma [ lem.sufficient conditions ] , depending on whether .this therefore establishes that theorem [ thm.random harmonic frames](i)-(ii ) simultaneously hold with probability exceeding .the only remaining claim is that with high probability . to this end , define , and pick any two distinct indices .note that where the last equality follows from the fact that has orthogonal columns .next , we write for some . then applying the union bound to and to the real and imaginary parts of gives where the last term follows from and the fact that .define random variables .note that the s have zero mean and are jointly independent .also , the s are bounded by almost surely since and .moreover , the variance of each is bounded : .therefore , we may use the bernstein inequality for a sum of independent , bounded random variables to bound the probability that deviates from : similarly , the probability that is also bounded above by .substituting these probability bounds into gives with probability at most provided .finally , we take a union bound over the possible choices for and to get that theorem [ thm.random harmonic frames](iii ) holds with probability exceeding .the result now follows by taking a final union bound over and .as stated earlier , random harmonic frames are not new to sparse signal processing .interestingly , for the application of compressed sensing , provides performance guarantees for both random harmonic and gaussian frames , but requires more rows in a random harmonic frame to accommodate the same level of sparsity .this suggests that random harmonic frames may be inferior to gaussian frames as compressed sensing matrices , but practice suggests otherwise . in a sense , theorem [ thm.random harmonic frames ] helps to resolve this gap in understanding ; there exist compressed sensing algorithms whose performance is dictated by worst - case coherence , and theorem [ thm.random harmonic frames ] states that random harmonic frames have near - optimal worst - case coherence , being on the order of the welch bound with an additional factor . to illustrate the bounds in theorem [ thm.random harmonic frames ] , we ran simulations in matlab . picking , we observed realizations of random harmonic frames for each .the distributions of , , and were rather tight , so we only report the ranges of values attained , along with the bounds given in theorem [ thm.random harmonic frames ] .notice that theorem [ thm.random harmonic frames ] gives a bound on in terms of both and . 
to simplify matters ,we show that , where the minimum and maximum are taken over all realizations in the sample : & \qquad\subseteq[500,1500 ] \\ & \qquad\nu_f & \in & [ 0.2000,0.8082]\times10^{-3 } & \qquad\leq0.0023\approx\tfrac{0.0746}{\sqrt{1052 } } \\ & \qquad\mu_f & \in & [ 0.0746,0.0890 ] & \qquad\leq0.8967 \\ \\ m=1250 : & \qquad|\mathcal{m}| & \in & [ 1207,1305 ] & \qquad\subseteq[625,1875 ] \\ & \qquad\nu_f & \in & [ 0.2000,0.6273]\times10^{-3 } & \qquad\leq0.0018\approx\tfrac{0.0623}{\sqrt{1305 } } \\ & \qquad\mu_f & \in & [ 0.0623,0.0774 ] & \qquad\leq0.7766 \\\\ m=1500 : & \qquad|\mathcal{m}| & \in & [ 1454,1590 ] & \qquad\subseteq[750,2250 ] \\ & \qquad\nu_f & \in & [ 0.2000,0.4841]\times10^{-3 } & \qquad\leq0.0015\approx\tfrac{0.0571}{\sqrt{1590 } } \\ & \qquad\mu_f & \in & [ 0.0571,0.0743 ] & \qquad\leq0.6849 \end{array}\ ] ] the reader may have noticed how consistently the average coherence value of was realized .this occurs precisely when the zeroth row of the dft is not selected , as the frame elements sum to zero in this case : these simulations seem to indicate that our bounds on , , and leave room for improvement .the only bound that lies within an order of magnitude of real - world behavior is our bound on .gabor frames constitute an important class of frames , as they appear in a variety of applications such as radar , speech processing , and quantum information theory . given a nonzero seed function , we produce all time- and frequency - shifted versions : , . viewing these shifted functions as vectors in an gabor frame .the following theorem characterizes the spectral norm and the worst - case and average coherence of gabor frames generated from either a deterministic alltop vector or a random steinhaus vector .[ thm.gabor ] take an alltop function defined by , .also , take a random steinhaus function defined by , , where the s are independent random variables distributed uniformly on the unit interval . then the gabor frames and generated by and , respectively , are unit norm and tight , that is , , and both frames have average coherence .furthermore , if is prime , then , while if , then with probability exceeding . the tightness claim follows from , in which it was shown that gabor frames generated by nonzero seed vectors are tight .the bound on average coherence is a consequence of ( * ? ? ?* theorem 7 ) concerning arbitrary gabor frames .the claim concerning follows directly from , while the claim concerning is a simple consequence of ( * ? ? ?* theorem 5.1 ) . instead of taking all translates and modulates of a seed function , constructs _ chirp frames _ by taking all powers and modulates of a chirp function . picking to be prime , we start with a chirp function defined by , .the frame elements are then defined entrywise by , .certainly , chirp frames are , at the very least , similar in spirit to gabor frames . as a matter of fact ,the chirp frame is in some sense equivalent to the gabor frame generated by the alltop function : it is easy to verify that , and when , the map is a permutation over . using terminology from definition [ def.flipping and wiggling ], we say the chirp frame is _ wiggling equivalent _ to a unitary rotation of permuted alltop gabor frame elements .as such , by lemma [ lem : geom_eqframes ] , the chirp frame has the same spectral norm and worst - case coherence as the alltop gabor frame , but the average coherence may be different . in this case , the average coherence still satisfies ( scp-2 ) . 
indeed , adding the frame elements gives and so .therefore , lemma [ lem.sufficient conditions](i ) gives the result : [ thm.chirp ] pick prime , and let be the frame of all powers and modulates of the chirp function .then is a unit norm tight frame with , and has worst case coherence and average coherence . to illustrate the bounds in theorems [ thm.gabor ] and [ thm.chirp ] , we consider the examples of an alltop gabor frame and a chirp frame , each with . in this case , the gabor frame has , while the chirp frame has .note the gabor and chirp frames have different average coherences despite being equivalent in some sense .for the random steinhaus gabor frame , we ran simulations in matlab and observed realizations for each .the distributions of and were rather tight , so we only report the ranges of values attained , along with the bounds given in theorem [ thm.gabor ] : \times10^{-2 } & \qquad\leq0.0164 \\ & \qquad\mu_g & \in & [ 0.3242,0.4216 ] & \qquad\leq0.9419 \\ \\ m=70 : & \qquad\nu_g & \in & [ 0.3151,0.4532]\times10^{-2 } & \qquad\leq0.0141 \\ & \qquad\mu_g & \in & [ 0.2989,0.3814 ] & \qquad\leq0.8883 \\\\ m=80 : & \qquad\nu_g & \in & [ 0.2413,0.3758]\times10^{-2 } & \qquad\leq0.0124 \\ & \qquad\mu_g & \in & [ 0.2711,0.3796 ] & \qquad\leq0.8439 \end{array}\ ] ] these simulations seem to indicate that bound on is conservative by an order of magnitude .lemma [ lem.sufficient conditions](ii ) leads one to consider frames of vectors that sum to zero . in , it is proved that real unit norm tight frames with this property make up another well - studied class of vector packings : spherical 2-designs . to be clear , a collection of unit - norm vectors called a spherical -design if , for every polynomial of degree at most , we have where is the unit hypersphere in and denotes the -dimensional hausdorff measure on . in words ,vectors that form a spherical -design serve as good representatives when calculating the average value of a degree- polynomial over the unit hypersphere . today , such designs find application in quantum state estimation . since real unit normtight frames always exist for , one might suspect that spherical 2-designs are equally common , but this intuition is faulty the sum - to - zero condition introduces certain issues .for example , there is no spherical 2-design when is odd and . in ,spherical 2-designs are explicitly characterized by construction .the following theorem gives a construction based on harmonic frames : [ thm.spherical 2-designs ] pick even and . take an harmonic frame by collecting rows from a discrete fourier transform matrix according to a set of nonzero indices and normalize the columns .let denote largest index in , and define a real frame by then is unit norm and tight , i.e. , , with worst - case coherence and average coherence .it is easy to verify that is a unit norm tight frame using the geometric sum formula . also , since the frame elements sum to zero and , the claim regarding average coherence follows from lemma [ lem.sufficient conditions](ii ) .it remains to prove . 
for each distinct pair of indices , we have and so .this gives the result .to illustrate the bounds in theorem [ thm.spherical 2-designs ] , we consider the spherical 2-design constructed from a harmonic equiangular tight frame .specifically , we take a dft matrix , choose nonzero row indices and normalize the columns to get a harmonic frame whose worst - case coherence achieves the welch bound : .following theorem [ thm.spherical 2-designs ] , we produce a spherical 2-design with and .we now consider a construction that dates back to seidel with , and was recently developed further in . here, a special type of block design is used to build an equiangular tight frame ( etf ) , that is , a tight frame in which the modulus of every inner product between frame elements achieves the welch bound .let s start with a definition : a -_steiner system _ is a -element set with a collection of -element subsets of , called _ blocks _ , with the property that any -element subset of is contained in exactly one block . the -_incidence matrix _ of a steiner system has entries , where if the block contains the element , and otherwise .one example of a steiner system is a set with all possible two - element blocks .this forms a -steiner system because every pair of elements is contained in exactly one block .the following theorem details how constructs etfs using steiner systems .[ thm.steiner etfs ] every -steiner system can be used to build a equiangular tight frame according the following procedure : 1 .let be the incidence matrix of a -steiner system .2 . let be the discrete fourier transform matrix .3 . for each , let be a matrix obtained from the column of by replacing each of the one - valued entries with a distinct row of , and every zero - valued entry with a row of zeros .concatenate and rescale the s to form ] defines a homomorphism on .since , the inverse images of under this homomorphism must form two cosets of equal size , and so }=0 ] .pick for which there exists a such that both and =1 ] .this is certainly possible whenever .exponentiation gives which has degree .thus , has at most solutions , and each such produces a summand in of size .next , we consider the s for which , =0 ] are parallel , and so . here , which has degree .thus , has at most solutions , and each such produces a summand in of size .we can now continue the bound from : . from here , isolating gives the claim .lastly , for the average coherence , pick some .then summing the entries in the row gives } = \tfrac{1}{\sqrt{2^{m}}}\bigg(\sum_{\alpha_0\in\mathbb{f}_{2^m}}(-1)^{\mathrm{tr}(\alpha_0x)}\bigg)\sum_{\alpha_1\in\mathbb{f}_{2^m}}\cdots\sum_{\alpha_t\in\mathbb{f}_{2^m}}(-1)^{\mathrm{tr}\big[\sum_{i=1}^t\alpha_ix^{2^i+1}\big ] } = \left\{\begin{array}{lc}2^{(t+1/2)m},&x=0\\0,&x\neq0\end{array}\right . .\ ] ] that is , the frame elements sum to a multiple of an identity basis element : .since every entry in row is , we have for every , and so by lemma [ lem.sufficient conditions](i ) , we are done . to illustrate the bounds in theorem [ thm.code-based coherence ] , we consider the example where and .this is a code - based frame with and .in many applications of frames , performance is dictated by worst - case coherence .it is therefore particularly important to understand which worst - case coherence values are achievable . to this end, the welch bound is commonly used in the literature . 
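before turning to fundamental limits, the steiner recipe of theorem [ thm.steiner etfs ] can be exercised on the all-pairs steiner system mentioned above. the sketch below is our reading of the three-step procedure (incidence matrix, dft, column-wise embedding); in particular, the ( r+1 ) x ( r+1 ) size of the dft block is inferred from how the columns are concatenated, and the choice of which r rows of the dft are embedded in each column is fixed arbitrarily here.

```python
import numpy as np
from itertools import combinations

def steiner_etf(v):
    # steiner system with all 2-element blocks of a v-element set, as in the example above
    blocks = list(combinations(range(v), 2))       # b = v(v-1)/2 blocks
    b, r = len(blocks), v - 1                      # every element lies in r blocks
    A = np.zeros((b, v))
    for i, blk in enumerate(blocks):
        A[i, list(blk)] = 1                        # incidence matrix (step 1)
    H = np.exp(-2j * np.pi * np.outer(np.arange(r + 1), np.arange(r + 1)) / (r + 1))  # step 2
    cols = []
    for j in range(v):                             # step 3: embed distinct dft rows
        Fj = np.zeros((b, r + 1), dtype=complex)
        Fj[A[:, j] == 1, :] = H[1:, :]             # which r rows to use is a free choice here
        cols.append(Fj)
    return np.hstack(cols) / np.sqrt(r)            # rescale to unit norm

F = steiner_etf(5)                                 # a 10 x 25 frame
M, N = F.shape
mu = np.abs(F.conj().T @ F - np.eye(N)).max()
welch = np.sqrt((N - M) / (M * (N - 1)))
print(round(mu, 4), round(welch, 4))               # equiangular: coherence meets the welch bound
print(np.allclose(F @ F.conj().T, (N / M) * np.eye(M)))   # and the frame is tight
```

the printed coherence coincides with the welch bound, so the frame is equiangular as well as tight, which is exactly the regime where the welch bound can be attained and which motivates the discussion of its limitations that follows.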
when worst - case coherence achieves the welch bound ,the frame is equiangular and tight ; one of the biggest open problems in frame theory concerns equiangular tight frames . however , equiangular tight frames can not have more vectors than the square of the spatial dimension , meaning the welch bound is not tight whenever . when the number of vectors is exceedingly large , the following theorem gives a better bound : [ thm.asymptotic bound ] every sufficiently large unit norm frame with and worst - case coherence satisfies for some constant . for a fixed worst - case coherence , thisbound indicates that the number of vectors can not exceed some exponential in the spatial dimension , that is , for some .however , since the constant is not established in this theorem , it is unclear which base is appropriate for each .the following theorem is a little more explicit in this regard : [ thm.complex bound ] every unit norm frame has worst - case coherence .furthermore , taking , this lower bound goes to as . for many applications, it does not make sense to use a complex frame , but the bound in theorem [ thm.complex bound ] is known to be loose for real frames .we therefore improve theorems [ thm.asymptotic bound ] and [ thm.complex bound ] for the case of real unit norm frames : [ thm.bound ] every real unit norm frame has worst - case coherence .\ ] ] furthermore , taking , this lower bound goes to as . before proving this theorem, we first consider the special case where the spatial dimension is : [ lem.3d points ] given points on the unit sphere , the smallest angle between points is .we first claim there exists a closed spherical cap in with area that contains two of the points .suppose otherwise , and take to be the angular radius of a spherical cap with area .that is , is the angle between the center of the cap and every point on the boundary . since the cap is closed , we must have that the smallest angle between any two of our points satisfies .let denote the closed spherical cap centered at of angular radius , and let denote our set of points .then we know for , the s are disjoint , , and , and so taking 2-dimensional hausdorff measures on the sphere gives a contradiction . since two of the pointsreside in a spherical cap of area , we know is no more than twice the radius of this cap .we use spherical coordinates to relate the cap s area to the radius : .therefore , when , we have , and so gives the result .[ thm.3d points ] every real unit norm frame has worst - case coherence . packing unit vectors in corresponds to packing antipodal points in , and so lemma [ lem.3d points ] gives . applying the double angle formula to ] . note that we do not lose generality by forcing , since this is guaranteed with .continuing gives using the formula for a hypersphere s hypersurface area , we can express the left - hand side of : isolating above and using and gives .the second part of the result comes from a simple application of stirling s approximation . in ,numerical results are given for , and we compare these results to theorems [ thm.complex bound ] and [ thm.bound ] in figure [ figure ] . considering this figure , we note that the bound in theorem [ thm.complex bound ] is inferior to the maximum of the welch bound and the bound in theorem [ thm.bound ] , at least when . this illustrates the degree to which theorem [ thm.bound ] improves the bound in theorem [ thm.complex bound ] for real frames . 
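the gap quantified by these theorems is easy to see experimentally, even without reproducing their constants (which we have not attempted here). the sketch below compares the welch bound with the best worst-case coherence found among a few hundred random real and complex unit-norm frames in a regime where the number of vectors far exceeds the square of the dimension; it is only an illustration of how loose the welch bound is in this regime, not a test of the bounds themselves.

```python
import numpy as np

def best_random_coherence(M, N, trials, complex_frame, rng):
    # smallest worst-case coherence found among random unit-norm frames;
    # an illustration only, not an optimized packing
    best = 1.0
    for _ in range(trials):
        F = rng.standard_normal((M, N))
        if complex_frame:
            F = F + 1j * rng.standard_normal((M, N))
        F = F / np.linalg.norm(F, axis=0)
        best = min(best, np.abs(F.conj().T @ F - np.eye(N)).max())
    return best

rng = np.random.default_rng(1)
M, N = 3, 64                                       # N well above M^2
welch = np.sqrt((N - M) / (M * (N - 1)))
print(round(welch, 3),
      round(best_random_coherence(M, N, 200, False, rng), 3),   # real frames
      round(best_random_coherence(M, N, 200, True, rng), 3))    # complex frames
```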
in fact , since for all , the bound for real frames in theorem [ thm.bound ] is asymptotically better than the bound for complex frames in theorem [ thm.complex bound ] .moreover , for , theorem [ thm.bound ] says , and proved this bound to be tight for every .lastly , figure [ figure ] illustrates that theorem [ thm.3d points ] improves the bound in theorem [ thm.bound ] for the case . in many applications ,large dictionaries are built to obtain sparse reconstruction , but the known guarantees on sparse reconstruction place certain requirements on worst - case coherence .asymptotically , the bounds in theorems [ thm.complex bound ] and [ thm.bound ] indicate that certain exponentially large dictionaries will not satisfy these requirements .for example , if , then by theorem [ thm.complex bound ] , and if the frame is real , we have by theorem [ thm.bound ] .such a dictionary will only work for sparse reconstruction if the sparsity level is sufficiently small ; deterministic guarantees require , while probabilistic guarantees require , and so in this example , the dictionary can , at best , only accommodate sparsity levels that are smaller than 10 . unfortunately , in real - world applications , we can expect the sparsity level to scale with the signal dimension . this in mind , theorems [ thm.complex bound ] and [ thm.bound ] tell us that dictionaries can only be used for sparse reconstruction if for some sufficiently small . to summarize , the welch bound is known to be tight only if , and theorems [ thm.complex bound ] and [ thm.bound ] give bounds which are asympotically better than the welch bound whenever .when is between and , the best bound to date is the ( loose ) welch bound , and so more work needs to be done to bound worst - case coherence in this parameter region .in , average coherence is used to derive a number of guarantees on sparse signal processing . sinceaverage coherence is so new to the frame theory literature , this section will investigate how average coherence relates to worst - case coherence and the spectral norm .we start with a definition : [ def.flipping and wiggling ] we say the frames and are _ wiggling equivalent _ if there exists a diagonal matrix of unimodular entries such that .furthermore , they are _ flipping equivalent _ if is real , having only s on the diagonal .the terms `` wiggling '' and `` flipping '' are inspired by the fact that individual frame elements of such equivalent frames are related by simple unitary operations .note that every frame with nonzero frame elements belongs to a flipping equivalence class of size , while being wiggling equivalent to uncountably many frames .the importance of this type of frame equivalence is , in part , due to the following lemma , which characterizes the shared geometry of wiggling equivalent frames : [ lem : geom_eqframes ] wiggling equivalence preserves the norms of frame elements , the worst - case coherence , and the spectral norm .take two frames and such that .the first claim is immediate .next , the gram matrices are related by . since corresponding off - diagonal entries are equal in modulus , we know the worst - case coherences are equalfinally , , and so we are done .wiggling and flipping equivalence are not entirely new to frame theory . for a real equiangular tight frame , the gram matrix completely determined by the sign pattern of the off - diagonal entries , which can in turn be interpreted as the seidel adjacency matrix of a graph . 
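as a quick sanity check of lemma [ lem : geom_eqframes ] stated above, the following snippet wiggles a random complex frame by a random unimodular diagonal and confirms that element norms, worst-case coherence, and spectral norm are untouched; the frame itself is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 6, 20
F = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
F = F / np.linalg.norm(F, axis=0)
D = np.diag(np.exp(2j * np.pi * rng.random(N)))     # unimodular diagonal: a "wiggle"
G = F @ D

def worst_case(X):
    return np.abs(X.conj().T @ X - np.eye(X.shape[1])).max()

print(np.allclose(np.linalg.norm(F, axis=0), np.linalg.norm(G, axis=0)),  # element norms
      np.isclose(worst_case(F), worst_case(G)),                           # worst-case coherence
      np.isclose(np.linalg.norm(F, 2), np.linalg.norm(G, 2)))             # spectral norm
```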
as such , flipping a frame element has the effect of negating the corresponding row and column in the gram matrix , which further corresponds to _ switching _ the adjacency rule for that vertex in the graph vertices are adjacent to after switching precisely when they were not adjacent before switching .graphs are called _ switching equivalent _ if there is a sequence of switching operations that produces one graph from the other ; this equivalence was introduced in and was later extensively studied by seidel in . since flipping equivalent real equiangular tight frames correspond to switching equivalent graphs , the terms have become interchangeable .for example , uses switching ( i.e. , wiggling and flipping ) equivalence to make progress on an important problem in frame theory called the _ paulsen problem _ , which asks how close a nearly unit norm , nearly tight frame must be to a unit norm tight frame . now that we understand wiggling and flipping equivalence , we are ready for the main idea behind this section .suppose we are given a unit norm frame with acceptable spectral norm and worst - case coherence , but we also want the average coherence to satisfy ( scp-2 ) . then by lemma [ lem : geom_eqframes ] , all of the wiggling equivalent frames will also have acceptable spectral norm and worst - case coherence , and so it is reasonable to check these frames for good average coherence .in fact , the following theorem guarantees that at least one of the flipping equivalent frames will have good average coherence , with only modest requirements on the original frame s redundancy .[ thm : avc_rand ] let be an unit norm frame with .then there exists a frame that is flipping equivalent to and satisfies .take to be a rademacher sequence that independently takes values , each with probability .we use this sequence to randomly flip ; define . note that if , we are done . fix some .then we can view as a sum of independent zero - mean complex random variables that are bounded by .we can therefore use a complex version of hoeffding s inequality ( see , e.g. , ( * ? ? ? * lemma 3.8 ) ) to bound the probability expression in as . from here , a union bound over all choices for gives , and so implies , as desired .while theorem [ thm : avc_rand ] guarantees the existence of a flipping equivalent frame with good average coherence , the result does not describe how to find it .certainly , one could check all frames in the flipping equivalence class , but such a procedure is computationally slow . as an alternative , we propose a linear - time flipping algorithm ( algorithm [ alg : flipping ] ) .the following theorem guarantees that linear - time flipping will produce a frame with good average coherence , but it requires the original frame s redundancy to be higher than what suffices in theorem [ thm : avc_rand ] .* input : * an unit norm frame + * output : * an unit norm frame that is flipping equivalent to [ thm.alg ] suppose . then algorithm [ alg : flipping ] outputs an frame that is flipping equivalent to and satisfies .considering lemma [ lem.sufficient conditions](iii ) , it suffices to have .we will use induction to show for .clearly , .now assume . then by our choice for in algorithm [ alg : flipping ] , we know that . expanding both sides of this inequality gives and so .therefore , where the last inequality uses the inductive hypothesis . as an example of how linear - time flipping reduces average coherence ,consider the following matrix : .\ ] ] here , . 
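the statement of algorithm [ alg : flipping ] is not reproduced in full above, so the sketch below implements one plausible reading that is consistent with the proof of theorem [ thm.alg ]: process the frame elements in order and flip the sign of each one whenever doing so shortens the running sum of the elements chosen so far. the demonstration frame is deliberately biased (all entries of one sign) so that flipping has something to do.

```python
import numpy as np

def linear_time_flipping(F):
    # one plausible reading of algorithm [alg:flipping]: keep the running sum of
    # (sign-flipped) frame elements as short as possible, one greedy choice per element
    eps = np.ones(F.shape[1])
    s = F[:, 0].copy()
    for n in range(1, F.shape[1]):
        if np.linalg.norm(s + F[:, n]) <= np.linalg.norm(s - F[:, n]):
            s = s + F[:, n]
        else:
            eps[n] = -1.0
            s = s - F[:, n]
    return F * eps

def coherences(F):
    N = F.shape[1]
    off = F.conj().T @ F - np.eye(N)
    return np.abs(off).max(), np.abs(off.sum(axis=1)).max() / (N - 1)

rng = np.random.default_rng(3)
F = rng.random((8, 64))                      # one-signed entries, so the unflipped sum is long
F = F / np.linalg.norm(F, axis=0)
print(coherences(F))                         # (worst-case, average) before flipping
print(coherences(linear_time_flipping(F)))   # worst-case unchanged, average much smaller
```

the worst-case coherence is reported as well to confirm that it is unaffected, as guaranteed by lemma [ lem : geom_eqframes ].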
even though the hypothesis of theorem [ thm.alg ] is not satisfied for this frame, we run linear-time flipping to obtain a flipping pattern, and the flipped frame nevertheless ends up with small average coherence. this example illustrates that the condition in theorem [ thm.alg ] is sufficient but not necessary. the authors thank the anonymous referees for their helpful suggestions, matthew fickus for his insightful comments on chirp frames, and samuel feng and michael a. schwemmer for their help with using the computer clusters in princeton's mathematics department. this work was supported by the office of naval research under grant n00014-08-1-1110, by the air force office of scientific research under grants fa9550-09-1-0551 and fa9550-09-1-0643, and by nsf under grant dms-0914892. mixon was supported by the a. b. krongard fellowship. the views expressed in this article are those of the authors and do not reflect the official policy or position of the united states air force, department of defense, or the u.s. government.
this paper investigates two parameters that measure the coherence of a frame: worst-case and average coherence. we first use worst-case and average coherence to derive near-optimal probabilistic guarantees on both sparse signal detection and reconstruction in the presence of noise. next, we provide a catalog of nearly tight frames with small worst-case and average coherence. later, we find a new lower bound on worst-case coherence; we compare it to the welch bound and use it to interpret recently reported signal reconstruction results. finally, we give an algorithm that transforms frames in a way that decreases average coherence without changing the spectral norm or worst-case coherence.
keywords: frames, worst-case coherence, average coherence, welch bound, sparse signal processing
determining unknown values of parameters from noisy measurements is a ubiquitous problem in physics and engineering . in quantum mechanics ,the single - parameter problem is posed as determining a coupling parameter that controls the evolution of a probe quantum system via a hamiltonian of the form . traditionally , an estimation procedure proceeds by ( i ) preparing an ensemble of probe systems , either independently or jointly ; ( ii ) evolving the ensemble under ; ( iii ) measuring an appropriate observable in order to infer .the quantum cramr - rao bound gives the optimal sensitivity for _ any _ possible estimator and much research has focused on achieving this bound in practice , using entangled probe states and nonlinear probe hamiltonians . yet, it is often technically difficult to prepare the exotic states and hamiltonians needed for improved sensitivity . instead, an experiment is usually repeated many times to build up sufficient statistics for the estimator .in contrast , the burgeoning field of continuous quantum measurement provides an opportunity for on - line _ single - shot _parameter estimation , in which an estimate is provided in near real - time using a measurement trajectory from a single probe system .parameter estimation via continuous measurement has been previously studied in the context of force estimation and magnetometry . although verstraete et .al develop a general framework for quantum parameter estimation , both of focus on the readily tractable case when the dynamical equations are linear and the quantum states have gaussian statistics . in this case , the optimal estimator is the quantum analog of the classical kalman filter . in this paper, we develop on - line estimators for continuous measurement when the dynamics and states are not restricted .rather than focusing on fundamental quantum limits , we instead consider the more basic problem of developing an actual parameter filter for use with continuous quantum measurements . by embedding parameter estimation in the standard quantum filtering formalism , we construct the optimal bayesian estimator for parameters drawn from a finite dimensional set .the resulting filter is a generalized form of one derived by jacobs for binary state discrimination . using recent stability results of van handel , we give a simple check for whether the estimator can successfully track to the true parameter value in an asymptotic time limit . for caseswhen the parameter is continuous valued , we develop _ quantum particle filters _ as a practical computational method for quantum parameter estimation .these are analogous to , and inspired by , particle filtering methods that have had much success in classical filtering theory .although the quantum particle filter is necessarily sub - optimal , we present numerical simulations which suggest they perform well in practice . 
throughout, we demonstrate our techniques using a single qubit magnetometer .the remainder of the paper is organized as follows .section [ sec : quantum_filtering ] reviews quantum filtering theory .section [ sec : finite ] develops the estimator and stability results for a parameter from a finite - dimensional set .section [ sec : infinite ] presents the quantum particle filtering algorithm , which is appropriate for estimation of a continuous valued parameters .section [ sec : conclude ] concludes .in this section , we review the notation and features of quantum filtering and quantum stochastic calculus , predominantly summarizing the presentation in , which provides a more complete introduction . in the general quantum filtering problem , we consider a continuous - stream of probe quantum systems interacting with a target quantum system .the probes are subsequently measured and provide a continuous stream of measurement outcomes .the task of quantum filtering is to provide an estimate of the state of the target system given these indirect measurements . in the quantum optics setting ,the target system is usually a collection of atomic systems , with hilbert space and associated space of operators .the probe is taken to be a single mode of the quantum electromagnetic field , from which vacuum fluctuations give rise to white noise statistics . in the limit of weak atom - field coupling , the joint atom - field evolutionis described by the following quantum stochastic differential equation ( qsde ) where is an atomic operator that describes the atom - field interaction and is the atomic hamiltonian .the interaction - picture field operators are quantum white noise processes with a single non - zero it product . for any atomic observable , the heisenberg evolution or quantum flow is defined as .application of the it rules gives the time evolution as )dt + j_t([l^{\dag},x_a])da_t + j_t([x_a , l])da_t^{\dag}\ ] ] with lindblad generator = i[h , x_a ] + l^{\dag}x_al -\frac{1}{2}l^{\dag}lx_a - \frac{1}{2}x_al^{\dag}l .\ ] ] similarly , the observation process , which we take to be homodyne detection of the scattered field , is given by .the it rules give the corresponding time evolution together , and are the system - observation pair which define the filtering problem .the quantum flow describes our knowledge of how atomic observables evolve exactly under the joint propagator in , but it is inaccessible since the system is not directly observed . nonetheless , the scattered fields as measured in carry information about the atomic system , providing a continuous measurement of the observable , albeit corrupted by quantum noise .the quantum filtering problem is to find = \mathbbm{e}(j_t(x_a)|m_{[0,t]}) ] for all .the state is often called the conditional density matrix .the sde or stochastic master equation ( sme ) for is then + ( l\rho_tl^{\dag } - \frac{1}{2}l^{\dag}l\rho_t - \frac{1}{2}\rho_tl^{\dag}l)dt\\ + ( l\rho_t + \rho_tl^{\dag } - { \operatorname{tr}\bigl[(l+l^{\dag})\rho_t\bigr]}\rho_t)dw_t\end{gathered}\ ] ] where the _ innovations process _ ,}dt ] and it rule .measurement , , ( top ) filtered values of ] for simulated trajectory ] consider the setup depicted in figure [ fig : schematic ] .a qubit , initially in the pure state , precesses about a magnetic field while undergoing a continuous measurement along . 
in terms of the general framework , and , where is the continuous measurement strength in the weak coupling limit .we will not dwell on the underlying physical mechanism which gives rise to the measurement , though continuous polarimetry measurements could suffice . plugging into, the quantum filter for the bloch vector ,\pi_t[\sigma_y],\pi_t[\sigma_z]) ] .it is not difficult to verify that the quantum filter maintains pure states and that the initial state remains on the bloch circle in the - plane .letting be the angle from the positive -axis such that /\pi_t[\sigma_x] ] .more precisely , extend the atomic hilbert space and the operator space , where is the set of diagonal operators on .assuming takes on possible values , .introduce the diagonal operator so that with .this allows one to generalize as any remaining atomic operators act as the identity on the auxiliary space , i.e. .given these definitions , the derivation of the quantum filtering equation remains essentially unchanged , so that the filter in either the operator form of or the adjoint form of is simply updated with the extended forms of operators given in the last paragraph . since is a classical parameter, we require that the reduced conditional density matrix be diagonal in the basis of .thus we can write where } \equiv \pi_t[{{\lvert \xi_i \rangle}{\langle \xi_i \rvert}}\otimes i]\\ \equiv \mathbbm{e}[{{\lvert\xi_i \rangle}{\langle \xi_i \rvert } } \otimes i | m_{[0,t ] } ] \equiv p(\xi = \xi_i | m_{[0,t ] } ) .\end{gathered}\ ] ] then is precisely the conditional probability for to have the value and the set gives the discrete conditional distribution of the random variable represented by . similarly , by requiring operators to be diagonal in ,we ensure that they correspond to classical random variables . in short ,we have simply embedded filtering of a truly classical random variable in the quantum formalism .the fact that both states and operators are diagonal in the auxiliary space suggests using an ensemble form for filtering . as such ,consider an ensemble consisting of a weighted set of conditional atomic states , each state evolved under a different .later , in section [ sec : infinite ] , we will call each ensemble member a _quantum particle_. for now , we explicitly write the conditional quantum state as where is a density matrix on .the reduced state , , is clearly diagonal in the basis of . 
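before the parameter is made uncertain, it may help to see the known-parameter magnetometer filter in code. the sketch below integrates the stochastic master equation quoted earlier with a plain euler-maruyama scheme, generating the homodyne record and the conditional state together; the concrete choices h = (b/2) sigma_y, l = sqrt(k) sigma_z, the initial +x state, and all numerical values are assumptions made for illustration, since the corresponding symbols are fixed earlier in the text.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sme_step(rho, H, L, dt, dW):
    # one euler-maruyama step of the stochastic master equation quoted above
    drift = (-1j * (H @ rho - rho @ H)
             + L @ rho @ L.conj().T
             - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    meas = L @ rho + rho @ L.conj().T - np.trace((L + L.conj().T) @ rho).real * rho
    rho = rho + drift * dt + meas * dW
    rho = 0.5 * (rho + rho.conj().T)        # guard against roundoff
    return rho / np.trace(rho).real

rng = np.random.default_rng(4)
b, k_meas, dt, steps = 5.0, 1.0, 1e-3, 4000       # assumed field, strength, step size
H, L = 0.5 * b * sy, np.sqrt(k_meas) * sz
rho, record = 0.5 * (np.eye(2) + sx), []          # start along +x
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    record.append(np.trace((L + L.conj().T) @ rho).real * dt + dW)  # homodyne increment
    rho = sme_step(rho, H, L, dt, dW)
print(np.trace(sx @ rho).real, np.trace(sz @ rho).real)
```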
using the extended version of the adjoint quantum filter in, one can derive the _ ensemble quantum filtering equations _[ eq : finitedim : ensemblefilter ] + ( l\rho_t^{(i)}l^{\dag } - \frac{1}{2}l^{\dag}l\rho_t^{(i ) } - \frac{1}{2}\rho_t^{(i)}l^{\dag}l)dt \nonumber \\ & + \left(l\rho_t^{(i ) } + \rho_t^{(i)}l^{\dag } - { \operatorname{tr}\bigl[(l+l^{\dag})\rho_t^{(i)}\bigr]}\rho_t^{(i)}\right)dw_t \label{eq : finitedim : ensemblefilter : rho}\\ dp_t^{(i ) } & = \left({\operatorname{tr}\bigl[(l+l^{\dag})\rho_t^{(i)}\bigr ] } - { \operatorname{tr}\bigl[i\otimes(l+ l^{\dag})\rho_t^{e}\bigr]}\right)p_t^{(i)}dw_t \label{eq : finitedim : ensemblefilter : prob}\\ dw_t & = dm_t - { \operatorname{tr}\bigl[i\otimes ( l + l^{\dag})\rho_t^{e}\bigr]}dt \label{eq : finitedim : ensemblefilter : innov } \end{aligned}\ ] ] we see that each in the ensemble evolves under a quantum filter with and is coupled to other ensemble members through the innovation factor , which depends on the ensemble expectation of the measurement observable .note that one can incorporate any prior knowledge of in the weights of the initial distribution .the reader should not be surprised that a similar approach would work for estimating more than one parameter at a time , such as three cartesian components of an applied magnetic field .one would introduce an auxiliary space for each parameter and extend the operators in the obvious way .the ensemble filter would then be for a joint distribution over the multi - dimensional parameter space .similarly , one could use this formalism to distinguish initial states , rather than parameters which couple via the hamiltonian .for example , in the case of state discrimination , one would introduce an auxiliary space which labels the possible input states , but does not play any role in the dynamics .the filtered weights would then be the probabilities to have been given a particular initial state .in fact , using a slightly different derivation , jacobs derived equations similar to for the case of binary state discrimination .yanagisawa recently studied the general problem of retrodiction or `` smoothing '' of quantum states . in light of his work and results in the following section ,the retrodictive capabilities of quantum filtering are very limited without significant prior knowledge or feedback .although introducing the auxiliary parameter space does not change the derivation of the quantum filter , it is not clear how the initial uncertainty in the parameter will impact the filter s ability to ultimately track to the correct value .indeed , outside of anecdotal numerical evidence ( which we will presently add to ) , there has been little formal consideration of the sensitivity of the quantum filter to the initial state estimate .recently , van handel presented a set of conditions which determine whether the quantum filter will asymptotically track to the correct state independently of the assumed initial state .since we have embedded parameter estimation in the state estimation framework , such stability then determines whether the quantum filter can asymptotically track to the true parameter , i.e. whether when . in this section ,we present van handel s results in the context of our parameter estimation formalism and present a simple check of asymptotic convergence of the parameter estimate .we begin by reviewing the notions of absolute continuity and observability . 
in the general stability problem ,let be the true underlying state and be the initial filter estimate .we say that is _ absolutely continuous _ with respect to , written , if and only if . in the context of parameter estimation, we assume that we know the initial atomic state exactly , so that as long as the reduced states satisfy .since these reduced states are simply discrete probability distributions , and , this is just the standard definition of absolute continuity in classical probability theory . in our case, the true state has if the parameter has value . thus , as long as our estimate has non - zero weight on the -th component , .this is trivially satisfied if for all .the other condition for asymptotic convergence is that of observability .a system is _ observable _ if one can determine the exact initial atomic state given the entire measurement record over the infinite time interval .observability is then akin to the ability to distinguish any pair of initial states on the basis of the measurement statistics alone .recall the definition of the lindblad generator in and further define the operator = l^{\dag}x_a + x_al ] .given these definitions , one has the following theorem for filter convergence and corollary for parameter estimation .( theorem 2.5 in ) let be the evolved filter estimate , initialized under state . if the system is observable and , the quantum filter is asymptotically stable in the sense that } } \stackrel{t \rightarrow\infty } { \longrightarrow } 0 \quad \forall x_a \in \mathcal{a}\ ] ] where the convergence is under the observations generated by . one could use this theorem to directly check the stability of the quantum filter for parameter estimation , using the extended forms of operators in and and being careful that the observability condition is now .however , the following corollary relates the observability of the parameter filter to the observability of the related filter for a known parameter . combined with the discussion of extending the absolute continuity condition ,this then gives a simple check for the stability of the parameter filter .consider a parameter which takes on one of distinct positive real values .if the quantum filter with known parameter is observable , then the corresponding extended filter for estimation of is observable . 
in order to satisfy the observability condition ,we require , where we have set and used the fact that .given that the filter for a known parameter is observable , its observable space coincides with and has an orthogonal operator basis , where we take .similarly , consider the -dimensional operator space .if are distinct , any set of the form is linearly independent , since the corresponding generalized vandermonde matrix has linearly independent columns .following the iterative procedure , we construct the observable space for the parameter estimation filter starting with , which is the identity in the extended space .we then iteratively apply and until we have an invariant linear span of operators .the only non - trivial operator on the auxiliary space comes from the hamiltonian part of the lindblad generator , which introduces higher and higher powers of the diagonal matrix .since is finite , this procedure must terminate .the resulting observable space can be decomposed into subspaces where is some increasing sequence of non - negative integers which correspond to the powers of that are introduced via the hamiltonian .note that the specific values of depend on the commutator algebra of and the atomic - space operator basis .regardless , since the hamiltonian in can always add more powers of , the procedure will not terminate until is composed of a largest linearly independent set of powers of .this set has at most distinct powers of , since it can not exceed the dimension of the auxiliary space .given that any collection of powers of is linearly independent , this means once we reach a set of powers , the procedure terminates and .since has subspaces , each of dimension , as desired and the observability condition is satisfied .although these conditions provide a simple check , we would like to stress that they do not determine how quickly the convergence occurs , which will depend on the specifics of the problem at hand .additionally , as posed , the question of observability is a binary one. one might expect that some unobservable systems are nonetheless `` more observable '' than others or simply that unobservable systems might still be useful for parameter estimation .given the corollary above , one can see that this may occur if a single parameter . then has a row of all zeros , so that the maximal dimension of a set of linearly independent powers of is .similarly , if one allows both positive and negative real - valued parameters , the properties of are not as obvious , though in many circumstances , having both and renders the system unobservable .we explore these nuances in numerical simulations presented in the following section .consider using the single qubit magnetometer of section [ subsec : single_qubit ] as a probe for the magnetic field .since the initial state is restricted to the - plane , the component of the bloch vector is always zero and thus is not a relevant part of the atomic observable space , which is spanned by . in some sense ,the filter with known is trivially observable , since we assume the initial state is known precisely .when is unknown , the ensemble parameter filter is given by [ eq : qubit_ensemble ] where and .we simulated this filter by numerically integrating the quantum filter in using a value for uniformly chosen from the given ensemble of potential values .this generates a measurement current , which is then fed into the ensemble filter of . 
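a direct implementation of that simulation might look like the following sketch: the true trajectory is integrated with one field value to produce the record, and the record then drives the ensemble filter of eqs. [ eq : finitedim : ensemblefilter ], one conditional state and one weight per candidate value. the euler discretization, the clipping and renormalization of the weights, and the specific field values are numerical choices of ours, not taken from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lindblad_drift(H, L, r):
    return (-1j * (H @ r - r @ H) + L @ r @ L.conj().T
            - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L))

def ensemble_filter(record, b_values, k_meas, dt):
    # one conditional state and one weight per candidate field value, all driven
    # by the same measurement record through the ensemble innovation
    K = len(b_values)
    L = np.sqrt(k_meas) * sz
    LpL = L + L.conj().T
    rhos = [0.5 * (np.eye(2) + sx) for _ in range(K)]
    p = np.full(K, 1.0 / K)
    for dY in record:
        exp_i = np.array([np.trace(LpL @ r).real for r in rhos])
        exp_bar = float(p @ exp_i)
        dW = dY - exp_bar * dt                      # innovation
        for i, bi in enumerate(b_values):
            r = rhos[i]
            r = r + lindblad_drift(0.5 * bi * sy, L, r) * dt \
                  + (L @ r + r @ L.conj().T - exp_i[i] * r) * dW
            r = 0.5 * (r + r.conj().T)
            rhos[i] = r / np.trace(r).real
        p = np.clip(p * (1.0 + (exp_i - exp_bar) * dW), 0.0, None)  # numerical safeguard
        p = p / p.sum()
    return p

# generate a record with true b = 5.0 (same assumed model as the previous sketch)
rng = np.random.default_rng(5)
dt, k_meas, b_true = 1e-3, 1.0, 5.0
H, L = 0.5 * b_true * sy, np.sqrt(k_meas) * sz
rho, record = 0.5 * (np.eye(2) + sx), []
for _ in range(4000):
    dW = np.sqrt(dt) * rng.standard_normal()
    record.append(np.trace((L + L.conj().T) @ rho).real * dt + dW)
    rho = rho + lindblad_drift(H, L, rho) * dt \
              + (L @ rho + rho @ L.conj().T - np.trace((L + L.conj().T) @ rho).real * rho) * dW
    rho = 0.5 * (rho + rho.conj().T)
    rho = rho / np.trace(rho).real

print(ensemble_filter(record, [2.0, 4.0, 5.0, 6.0, 8.0], k_meas, dt).round(3))
```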
for all simulations , we set and used a simple it - euler integrator with a step - size .figure [ fig : discrete : combinedruns](a ) shows a simulation of a filter for the case .the filter was initialized with a uniform distribution , . for the particular trajectory shown ,the true value of was and we see that the filter successfully tracks to the correct value .this is not surprising , given that the potential values of are positive and distinct , thus satisfying the convergence corollary .it is also interesting to note that the filter quickly discounts the probabilities for , which are far from the true value .conversely , the filter initially favors the incorrect value before honing in on the correct parameter value . in figure[ fig : discrete : combinedruns](b ) , we show a simulation for the case of , which does not satisfy the convergence corollary .in fact , using the iterative procedure , one finds the observable space is spanned by .but since , so that only 3 of the 6 operators are linearly independent .although the filter does not converge to the true underlying value of , it does reach a steady - state that weights the true value of more heavily .simulating 100 different trajectories for the filter , we observed 81 trials for which the final probabilities were weighted more heavily towards the true value of .this confirms our intuition that the binary question of observability does not entirely characterize the performance of the parameter filter .figure [ fig : discrete : convergence ] shows the rate of convergence of filters meant to distinguish different sets of .the rate of convergence is defined as the ensemble average of the random variable although any individual run might fluctuate before converging to the underlying value , the average of over many runs should give some sense of the rate at which these fluctuations die down . for the simulation shown ,we set and averaged over 1000 runs for two different cases either all possible values are greater than or all are less than .as shown in the plot , the former case shows faster convergence since the field drives the dynamics more strongly than the measurement process , which in turn makes the trajectories of different ensemble members more distinct .of course , one can not make the measurement strength too weak since we need to learn about the system evolution . ) , averaged over 1000 trajectories .the filters are for cases when possible values are either all larger or all smaller than the measurement strength . ]abstractly , developing a parameter estimator in the continuous case is not very different than in the finite dimensional case .one can still introduce an auxiliary space , which is now infinite dimensional . in this space , we embed the operator version of as where and .again , by extending operators appropriately , the filters in and become optimal parameter estimation filters .we generalize the conditional ensemble state of to where }) ] for particle filter set with , , and resampling threshold of .the true magnetic field was . ]figure [ fig : densityplot ] shows a typical run of the quantum particle filter for particles .the true value was and the prior distribution over was taken to be uniform over the interval ] , which we plotted as with . as is seen in the figure , after some initial multi - modal distributions over parameter space , the filter hones in on the true value of . 
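the particle-filter run just described can be sketched along the same lines. each particle carries a field value drawn from the prior, a conditional qubit state, and a weight; weights are updated through the common innovation, and the particles are resampled (multinomially here, with no roughening of the parameter values) once the effective sample size 1/sum(w^2) drops below a threshold. the resampling scheme, the prior interval, and all numerical values are assumptions for illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def drift(H, L, r):
    return (-1j * (H @ r - r @ H) + L @ r @ L.conj().T
            - 0.5 * (L.conj().T @ L @ r + r @ L.conj().T @ L))

def quantum_particle_filter(record, dt, k_meas, n_part, prior, n_thresh, rng):
    # each particle: a sampled field value, a conditional qubit state, and a weight;
    # multinomial resampling once the effective sample size falls below n_thresh
    L = np.sqrt(k_meas) * sz
    LpL = L + L.conj().T
    bs = rng.uniform(prior[0], prior[1], size=n_part)
    rhos = np.array([0.5 * (np.eye(2) + sx)] * n_part)
    w = np.full(n_part, 1.0 / n_part)
    for dY in record:
        exp_i = np.einsum('ij,nji->n', LpL, rhos).real
        exp_bar = float(w @ exp_i)
        dW = dY - exp_bar * dt
        for i in range(n_part):
            r = rhos[i]
            r = r + drift(0.5 * bs[i] * sy, L, r) * dt \
                  + (L @ r + r @ L.conj().T - exp_i[i] * r) * dW
            r = 0.5 * (r + r.conj().T)
            rhos[i] = r / np.trace(r).real
        w = np.clip(w * (1.0 + (exp_i - exp_bar) * dW), 0.0, None)
        w = w / w.sum()
        if 1.0 / np.sum(w**2) < n_thresh:        # effective sample size
            idx = rng.choice(n_part, size=n_part, p=w)
            bs, rhos, w = bs[idx], rhos[idx], np.full(n_part, 1.0 / n_part)
    return bs, w

# truth: b = 5.0, same assumed magnetometer model as in the earlier sketches
rng = np.random.default_rng(6)
dt, k_meas, b_true = 1e-3, 1.0, 5.0
H, L = 0.5 * b_true * sy, np.sqrt(k_meas) * sz
rho, record = 0.5 * (np.eye(2) + sx), []
for _ in range(3000):
    dWt = np.sqrt(dt) * rng.standard_normal()
    record.append(np.trace((L + L.conj().T) @ rho).real * dt + dWt)
    rho = rho + drift(H, L, rho) * dt \
              + (L @ rho + rho @ L.conj().T - np.trace((L + L.conj().T) @ rho).real * rho) * dWt
    rho = 0.5 * (rho + rho.conj().T)
    rho = rho / np.trace(rho).real

bs, w = quantum_particle_filter(record, dt, k_meas, 100, (0.0, 10.0), 50, rng)
mean = float(w @ bs)
print(mean, np.sqrt(max(float(w @ bs**2) - mean**2, 0.0)))   # posterior mean and spread
```

in practice some jitter is often added to the resampled parameter values to avoid depleting the support of the posterior; that refinement is omitted here for brevity.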
for the simulation shown ,the final estimate was with uncertainty .the filter resampled 7 times over the course of integration .we have presented practical methods for single - shot parameter estimation via continuous quantum measurement . by embedding the parameter estimation problem in the standard quantum filtering problem ,the optimal parameter filter is given by an extended form of the standard quantum filtering equation .for parameters taking values in a finite set , we gave conditions for determining whether the parameter filter will asymptotically converge to the correct value .for parameters taking values from an infinite set , we introduced the quantum particle filter as a computational tool for suboptimal estimation . throughout, we presented numerical simulations of our methods using a single qubit magnetometer .our techniques should generalize straightforwardly for estimating time - dependent parameters and to a lesser extent , estimating initial state parameters .the binary state discrimination problem studied by jacobs is one such example and his approach is essentially a special case of our ensemble parameter filter .we caution that the utility of initial state or parameter estimation depends heavily on the observability and absolute continuity of the problem at hand .future extensions of our work include exploring alternate resampling techniques for the quantum particle filter and developing feedback strategies for improving the parameter estimate .more broadly , we believe there is much to be learned from classical control and parameter estimation theories .
we present filtering equations for single-shot parameter estimation using continuous quantum measurement. by embedding parameter estimation in the standard quantum filtering formalism, we derive the optimal bayesian filter for cases when the parameter takes on a finite set of values. leveraging recent convergence results [ van handel, arxiv:0709.2216 (2008) ], we give a condition which determines whether the estimator converges asymptotically to the true parameter value. for cases when the parameter is continuous valued, we develop _ quantum particle filters _ as a practical computational method for quantum parameter estimation.
let us suppose we are given a medium containing sensitive information which , for some reason , we want to dispose of in a secure way , i. e. in such a way that neither we nor anyone else can have access to or be deemed to possess that information anymore . in many everyday situations , particular precautions have to be taken in advance in order to counter unwanted _ data remanence _, i. e. the persistence of data that were nominally erased or removed .when dealing with macroscopic objects , the irreversibility of dissipative processes is generally enough to provide a _ practically secure _ erasure of data ( think of e. g. shredding the medium ) .however , such a conclusion is _ in - principle _ completely inadequate .let us suppose in fact that the medium carrying the information is represented by the state of a microscopic object obeying the laws of quantum mechanics . since quantum evolutions are globally reversible , also information has to be globally preserved , and any sort of _ true _ information erasure is thus forbidden .even so , information could be , if not erased , _ hidden _ or encoded in such a way to achieve what a secure disposal of information is meant to achieve .our aim here is to introduce and analyze a model - independent paradigm of secure information disposal , suitable to describe both classical and quantum information disposal , and naturally encompassing the ideal situation in which the only limitations to data processing are those imposed by the laws of quantum mechanics . in order to do so ,let us consider the following protocol : let denote the initial bipartite state shared between two parties the active player ( or _ receiver _ ) and the passive and inaccessible reference system ( or _ remote sender _ ) . according to a common understanding , the amount of correlations existing in between subsystems and can be interpreted as the amount of information that carries or possesses _ about _ .since the content of the message is assumed to be private ( otherwise there is no reason to require security in its disposal ) , we suppose that the state is decoupled from other accessible quantum systems , in particular from the local environment i .e. the ` trash ' system of .the goal for is to securely dispose of the information she has about . by identifying information with correlations ,this means that she has to reduce her correlations with by applying local operations on her share only ( as is not accessible ) , _ and _ without transferring any of these correlations into her local environment ( as this is assumed to be accessible to adversaries ) .we name this protocol _( local ) private quantum decoupling _ ( pqd ) .pqd constitutes a novel instance of the general task of producing , under various constraints , an uncorrelated ( or , equivalently , ` factorized ' ) state out of a correlated one .the importance of studying decoupling protocols lies in the fact that , with appropriate constraints , it is possible to quantitatively characterize diverse properties of quantum correlations with respect to their robustness _ against _ decoupling .such an approach has been introduced independently in refs . 
and , and contributed in recognizing the central role that decoupling plays in quantum information processing : for example , the primitives of state merging , state redistribution , and the ` mother ' protocol , are all based on decoupling arguments , and decoupling procedures form the building blocks of many recently constructed coding theorems achieving quantum capacity .hence , pqd offers a new point of view on the study of quantum correlations , with possible implications in entanglement theory and quantum cryptography .the structure of the paper is as follows : in section [ sec:2 ] , previous approaches to quantum decoupling are described and the new paradigm of pqd is motivated through simple examples .the rigorous definition of pqd is given in section [ sec:3 ] by defining eliminable and ineliminable correlations , and the concept of private local randomness as a resource is introduced . in section [ sec:4 ]we prove general bounds on pqd when acting on an arbitrary mixed state , and show that ineliminable correlations are in fact monogomous correlations , in the sense that they can not be freely shared .this is a feature common to many distinctively quantum correlations , like e. g. entanglement ( when suitably measured ) .section [ sec:5 ] deals with the asymptotic limit where infinitely many identical and independent copies of a given state are available : in this case , the optimal rate for pqd is explicitly calculated as being expressed by the coherent information . in section [ sec:6 ]we discuss some examples for which pqd assumes a particularly simple form .a connection between pqd and random - unitary channels is exhibited , together with an open question concerning the latter .finally , section [ sec:7 ] describes the relations existing between ineliminable correlations , quantum entanglement , and other ` quantum ' correlations present in an arbitrary bipartite quantum state .here we show that ineliminable correlations represent an entanglement parameter , in the sense given by .section [ sec:8 ] concludes the paper with a brief summary of the results obtained and possible directions to investigate in future .all quantum systems considered in the following are finite dimensional , in the sense that their attached hilbert spaces are finite dimensional .we use greek letters like for pure quantum states , while letters like are reserved for mixed states .the usual ket - bra notation denoting the rank - one projector onto the state is generally abbreviated simply as .roman letters label the systems sharing a quantum state : for example , is a mixed state defined on the composite system , carrying the hilbert space .where no confusion arises ( or it is not differently specified ) , omission of a letter in the label indicates a partial trace , namely , ] , provides a sound and operationally meaningful measure of the total amount of correlations present in .it is known that if and only if , while if and only if is purified by . 
moreover , when systems and are classical , qmi coincides with its classical counterpart .it makes sense then to say that the system carries a total of ( quantum ) bits of information about .notice that the information content , as we defined it , turns out to be positive and symmetric , that is , the amount of information carries about equals the amount of information carries about , since .when the state is clear from the context , we will simply write to indicate , and , in case the state is multipartite like , for example , , we will denote ) ] , for all states on .such an isometric extension is unique up to local unitary transformations on , which represents the environment interacting with the system during the open evolution described by . before proceeding weshall notice that since everything here is finite - dimensional , minima and maxima appearing in the following are all achievable . in practise, applies the isometry on her share , keeps the part and discards .accordingly , -pqd is mathematically characterized by the following quantity : where , and is the set of isometries such that and , without loss of generality since the roles of output subsystems and in the definition ( [ eq : definition ] ) can be exchanged , in formula , the non - negative quantity , which we call -_ineliminable information _ ( or , equivalently , -_ineliminable correlations _ ) , measures the amount of correlations with the referee that alice can not eliminate , without discharging into the environment more than bits of them .we will refer to the parameter as the _ privacy level parameter _ : the two extreme cases , that is , and , correspond to _ perfect _ pqd and to _ advantage preserving _pqd , respectively . in the following , for sake of clarity, we will denote simply as .notice that if and only if . already from the definition ( [ eq : definition ] ), we can see that is invariant under local unitary operations , that is moreover , for all , since all the values of that are achievable with the constraint are also achievable with the looser constraint , but not viceversa , while the upper bound is trivially achieved when alice does nothing at all , in such a way that .implicitly , by giving alice the possibility of performing local isometric embeddings , we are providing her with free access to local pure states : indeed , an isometric embedding is nothing but a unitary interaction of the system with some pure ancillary state . on the contrary ,let us think for a while to the opposite situation , like the one considered in ref . in the context of local purity distillation , where alice is granted unlimited access to local randomness , that is , she can freely create maximally mixed states . in particular ,let us consider such a local randomness as being _ private _, i. e. factorized from all other parties taking part either actively or passively into the protocol ( including the adversary eve ) . within this alternative scenario ,suppose that alice and the referee initially share some bipartite two - qubits state .since we allow local private randomness for free , we can actually consider the state where belongs to alice , namely , alice happens to be provided with two extra - bits of private randomness .the idea is now simple : alice can use these two extra - bits in order to securely decouple from .the decoupling isometry alice has to perform is given by where the correspond to the pauli s matrices .written , it is easy to check that that implies , for all . 
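that check is easy to carry out numerically. the sketch below prepares a generic two-qubit state shared between alice and the referee, adjoins two bits of private randomness, applies the controlled-pauli isometry described above, and computes quantum mutual informations: the part alice keeps and the part she would discard are each decoupled from the referee, while jointly they retain all of the original correlations. the partial-trace and entropy helpers are generic utilities written for this sketch, not taken from the paper.

```python
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def ptrace(rho, dims, keep):
    # partial trace keeping the subsystems listed in `keep`
    keep = sorted(keep)
    n = len(dims)
    r = rho.reshape(dims + dims)
    for k, ax in enumerate([i for i in range(n) if i not in keep]):
        a = ax - k
        r = np.trace(r, axis1=a, axis2=a + n - k)
    d = int(np.prod([dims[i] for i in keep]))
    return r.reshape(d, d)

def qmi(rho_ab, da, db):
    return (vn_entropy(ptrace(rho_ab, [da, db], [0]))
            + vn_entropy(ptrace(rho_ab, [da, db], [1]))
            - vn_entropy(rho_ab))

paulis = [np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(7)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho_ab = X @ X.conj().T
rho_ab = rho_ab / np.trace(rho_ab).real            # a generic two-qubit state shared by a and b

rho0 = np.kron(np.eye(4) / 4, rho_ab)              # order: a' (2 bits of randomness), a, b
U = sum(np.kron(np.outer(np.eye(4)[i], np.eye(4)[i]), np.kron(paulis[i], np.eye(2)))
        for i in range(4))                         # controlled-pauli "twirl" on a
rho1 = U @ rho0 @ U.conj().T

print(round(qmi(rho_ab, 2, 2), 6))                                  # correlations before
print(round(qmi(ptrace(rho1, [4, 2, 2], [1, 2]), 2, 2), 6))          # kept system vs b: ~0
print(round(qmi(ptrace(rho1, [4, 2, 2], [0, 2]), 4, 2), 6))          # used-up randomness vs b: ~0
print(round(qmi(rho1, 8, 2), 6))                                     # jointly, nothing was erased
```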
then , two extra - bits of private randomness in suffice to securely decouple any two - qubit state shared between and , no matter how correlated is the state .notice that this is in agreement with ref .it is important at this point to stress that , in order to eliminate correlations , the randomness in has to loose its privacy , since in general has to get correlated with fact , .this means that , even if the reduced state of still looks maximally mixed after the decoupling process , it does not represent anymore a fresh source of _ private _ randomness : in other words , there is no _ catalysis _ occurring here. we can conclude saying that , if private local randomness is provided for free , correlations can always be perfectly eliminated : in this sense , the framework of pqd implicitly assumes that local private randomness has to be considered as a _resource_. one final remark : from the preceding example , one should not jump to the conclusion that , in order to perfectly eliminate bits of total correlations , at least bits of extra - randomness are _ always _ needed .it is indeed possible to securely decouple a maximally pure state of two qubits , hence carrying _ two bits _ of total correlations , using only _one bit _ of extra randomness .see subsection [ sec : economy ] for details .we now face the general problem of quantifying the amount of ineliminable correlations present in a given state which is neither pure nor simply classically correlated .the following proposition exhibits a useful lower bound on ineliminable correlations : [ prop:1 ] for any given state , it holds that and * proof .* we introduce a purification of .then , any isometry produces a four - partite pure state .since both and are purifications of the same state , it is easy to check ( by direct inspection ) that : notice that the notation is not ambiguous since . plugging into ( [ eq : equaz ] ) the chain rule , we obtain .a second application of the chain rule leads to the estimate , hence to along exactly the same lines , we also obtain the case of -pqd requires and .the first condition , plugged into ( [ eq : equaz3 ] ) , gives , which in turns , plugged into ( [ eq : equaz2 ] ) , gives .the second condition , together with eqs .( [ eq : equaz2 ] ) and ( [ eq : equaz3 ] ) , gives .this proves eq .( [ eq : statement1 ] ) .the case of advantage preserving pqd , on the other hand , only requires the second condition , which proves eq .( [ eq : statement2 ] ) . the following proposition refines the upper bound ( [ eq : simple - upper - bound ] ) : [ prop:2 ] for any given state , it holds that * proof .* let us consider isometries of the form , where the ( in general neither normalized nor orthogonal ) vectors form a rank - one povm ( positive operator valued measure ) , i. e. , while the vectors are orthonormal .isometries of such a form give by construction .this means that the application of an isometry like automatically constitutes a suitable candidate for advantage preserving pqd .this means that , where the minimum is taken over all rank - one povm . to conclude the proof, we just notice that , by its very definition , the quantity turns out to be equal to the so - called _ unlocalizable entanglement _ defined in ref . , where it is also proved to satisfy . due to proposition [ prop:2 ] , we discover the following : according to definition ( [ eq : definition ] ) , it is unnecessary to consider values for the privacy parameter . 
another interesting consequence of proposition [ prop:2 ] is the following : [ coro : monog ] for any given tripartite pure state , for ] . as shown in ref . , the monogamy formula holds , where is the so - called _ rate of entanglement of assistance _ , which has been proved in ref . to be equal to .since , we have , and , due to eq .( [ eq : becomes - strict ] ) , we obtain the statement of the proposition . our analysis hence led us to a situation like the one depicted in figure [ default ] , where the achievable rates region for a given initial state is sketched ., that is , the set of allowed pairs , for any given initial state .the symmetry about the bisector reflects the possibility of exchanging the roles of subsystems and in eq .( [ eq : definition ] ) .the dark - grey shaded area around the origin corresponds to rates which are forbidden due to proposition [ prop:1 ] .the light - grey shaded area , instead , corresponds to rates achievable via the result in proposition [ prop : ap ] and time - sharing. the white areas in between are not characterized yet , as well as the boundary of the achievable rates region , which could well be given by some curve similar to the dotted one . when the privacy rate is set to some ,the corresponding achievable rates region is further constrained to lie below the line .finally , note that , for pure states , , namely , the achievable rates region collapses onto the most external dashed line , according to the fact that correlations carried by bipartite pure states constitute a conserved quantity . on the other hand , for separable states , ,namely , the achievable rates region fills the octant : in this case , perfect pqd , i. e. the origin , is achievable ( see corollary [ coro : sep]).,width=302 ] as we noticed before , the condition is equivalent to saying that can be privately decoupled from , in the limit of infinitely many copies provided . on the other hand , it is known that , for any separable state , , .this provides an intriguing connection between pqd and entanglement theory , stated in the following corollary of proposition [ prop : ap ] : [ coro : sep ] for any separable state , .in other words , the presence of asymptotically ineliminable information is a signature of quantum entanglement . following examples are presented in order to show that the notion of ineliminable correlations can not be straightforwardly explained in terms of entanglement only , as soon as one leaves the pure state case .it is however very hard to find explicit counterexamples , as the minimization in eq .( [ eq : definition ] ) is difficult to be explicitly solved in general .as we already noticed , propositions [ prop:1]-[prop : ap ] imply that , for any pure bipartite state , it holds namely , ineliminable correlations equal the entropy of entanglement . when possesses purely classical information about , i. e. when the shared state is classically correlated , that is for orthonormal s , we already saw at the end of section [ sec:2 ] that that is , classical correlations can be perfectly shredded .let us consider a quantum system , whose state is initially described be the density matrix , undergoing a channel .let be the purification of , where the system plays the role of a reference that does not change in time .moreover , let be the stinespring s isometry purifying the channel , that is ,\qquad\forall\sigma.\ ] ] let us denote as the tripartite pure state finally shared among the output , the environment , and the reference . 
if the channel is a closed evolution , namely , if it is described by one isometry only , then the reference is completely decoupled from the environment .let us suppose now that the only error occurring in the whole process is due to a classical shuffling , resulting in a classical randomization of different possible isometries : then , the resulting evolution will not be described by one particular isometry , as in the closed evolution case , but rather by a mixture of such isometries .this kind of noisy evolutions are called _ random - unitary channels _ and act like where is a probability distribution and are isometries .the following questions arise naturally : which kind of correlations between the reference and the environment cause ( or , depending on the point of view , are caused by ) such a ` classical ' error ?which properties do these correlations satisfy ?can we ascribe a ` classical character ' to these correlations ?it is known that a channel admits ( on the support of the input state ) a random - unitary kraus representation as in eq .( [ eq : randunit ] ) if and only if there exists a rank - one povm , , on the environment such that where ] , originating from the purification of a random - unitary channel , can in principle be different from that of a classically correlated state as in eq .( [ eq : abcc ] ) . in other words ,random - unitary channels induce a class of states for which perfect pqd is possible that is in principle _ larger _ than the class of classically correlated states . *open problem .* since the implication in ( [ eq : implic1 ] ) is in one direction only , it would be interesting to characterize the class of channels inducing only perfectly eliminable correlations between the reference and the environment : is such a class strictly larger than the class of random - unitary channels ?if so , does it admit an easier characterization ?we already saw , in subsection [ sec:4a ] , that there exist entangled mixed states that can be securely decoupled for all .there , we needed two extra - bits of private randomness in order to securely decouple whatever two - qubits state , in agreement with the fact that a two - qubits state contains at most two bits of total correlations . however , before rushing to the conclusion that we _ always _ need at least extra - bits of randomness to perfectly eliminate bits of total correlations , we should consider the following example where _ just one _ extra - bit of randomness is required to securely decouple a maximally entangled pure state of two qubits ( hence carrying two bits of total correlations ) .let us consider indeed the state acting on defined as where . for this state , even if carries only _one _ bit of extra - private randomness , one can show that .the proof is easy , as the null value is achieved by the isometry , with , defined as where is coherently performing the measurement needed to teleport the maximally mixed state from to .hence , for every outcome , the reduced state on is equal to , so that , written , we get and , which yields , for all .through pqd , we found a non - trivial division of total correlations , , into ineliminable ones , measured by , and eliminable ones , representing the rest , that is , . 
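The random-unitary discussion above can be probed on a minimal example. In the sketch below (an illustration only; the phase-flip channel, the flip probability, and the maximally entangled reference-system input are all assumptions), the Stinespring dilation of N(s) = (1-q) s + q Z s Z is built explicitly, the reference-environment state is formed, and measuring the environment in the basis that records which unitary acted is shown to steer the reference to states identical to its unconditioned marginal: the environment learns only the classical shuffling, not anything about the reference.

```python
import numpy as np

q = 0.3                                    # phase-flip probability (illustrative assumption)
Z = np.diag([1.0, -1.0])
e = np.eye(2)                              # environment basis vectors |0>, |1>

# maximally entangled |Phi+>_{RA}, ordering R (x) A
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

# global pure state |Psi>_{RAE} from the Stinespring isometry of the phase-flip channel:
# V|psi> = sqrt(1-q) |psi>|0>_E + sqrt(q) Z|psi>|1>_E, applied to subsystem A
psi = (np.sqrt(1 - q) * np.kron(phi, e[0])
       + np.sqrt(q) * np.kron(np.kron(np.eye(2), Z) @ phi, e[1]))
T = psi.reshape(2, 2, 2)                   # amplitudes indexed by (r, a, e)

# reference-environment state rho_RE = Tr_A |Psi><Psi|
rho_re = np.einsum('rae,saf->resf', T, T.conj()).reshape(4, 4)
print("rho_RE:\n", np.round(rho_re, 3))

rho_r = np.einsum('rae,sae->rs', T, T.conj())        # unconditioned reference marginal
for k in range(2):                                   # measure E in the which-unitary basis
    M = T[:, :, k]                                   # unnormalized conditional RA amplitudes
    p_k = float(np.trace(M @ M.conj().T).real)
    print(f"outcome {k}: probability {p_k:.2f}, conditional rho_R =\n",
          np.round(M @ M.conj().T / p_k, 3))
print("unconditioned rho_R =\n", np.round(rho_r, 3))  # all three coincide with I/2
```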
at this point , it is tempting to speculate a bit about hypothetical relations between the division of correlations into ineliminable and eliminable ones , versus the division into quantum and classical correlations .we already saw how random - unitary noise , that is classical noise in the sense explained in subsection [ subsec : randu ] , only induces perfectly eliminable correlations between the reference and the environment .moreover , ineliminable correlations satisfy the two axioms required in ref . for a measure of quantumness : they are zero for classically correlated states ( [ eq : abcc ] ) and they are invariant under local unitary transformations ( [ eq : uni - inv ] ) . also , being upper bounded by one half of the quantum mutual information and being equal to the entropy of entanglement for pure states , ineliminable correlations fall under the hypotheses of theorem 2 in ref . , which proves that , for an arbitrary bipartite state with , it holds where is a fannes - type function , that is a positive , concave , continuous , monotically increasing function which depends on the dimension of the underlying hilbert space only logarithmically and satisfies .( [ eq : reverse ] ) shows that , whenever the amount of ineliminable correlations in a bipartite state is sufficiently large , then such a state is necessarily entangled , since also coherent information has to be correspondingly close to the upper bound . in this sense, ineliminable correlations can be considered ` genuinely quantum ' correlations , since eq .( [ eq : reverse ] ) tells us that they constitute an entanglement _ parameter _ , in the sense explained in , namely , the more ineliminable correlations are present , the more the state is entangled ( coherent information is the paramount example of an entanglement parameter ) , even though ineliminable correlations do not satisfy many natural requirements to be a proper entanglement _measure_. in reinforcing this interpretation , it stands the fact , expressed by corollary [ coro : monog ] , that ineliminable correlations indeed satisfy a monogamy constraint , which is another strongly distinctive feature of quantum correlations versus classical ones .another interesting feature of ineliminable correlations is that , for every state , they always represent ( already at the level of a single copy ) _ at most one half _ of the total amount of correlations , that is , always .this fact is to be compared , once again , with what happens for different measures of ` quantum vs classical ' correlations : according to different definitions , there exist quantum states exhibiting quantum correlations _ without _ classical correlations , hence representing a strikingly counterintuitive situation . 
On the contrary, assuming for a while the definition of quantum correlations as the ineliminable ones, every quantum state would turn out to be always more classically than quantumly correlated, hence reinforcing the common-sense intuition about correlations. However, in spite of this encouraging list of properties, the existence of entangled (even maximally entangled) states with perfectly eliminable correlations only (recall the examples analyzed in subsections [sec:4a] and [sec:economy]) seems to stand as an insurmountable argument against the perhaps naive statement "what is ineliminable is quantum". We nevertheless think that the dichotomy proposed here can contribute to the program of understanding the structure of total correlations, as coming from the _operational_ paradigm of distant laboratories, versus the notion of entanglement, which is the _formal_ property of not being separable.

We introduced the operational task of private quantum decoupling (PQD), which naturally arises as a model-independent description of the secure disposal of information. Partial results suggest that there may be a deep connection between the theory of PQD and the theory of quantum entanglement; further research in this direction would be in order. Moreover, it could be useful to generalize the asymptotic results presented here to the one-shot scenario by exploiting some recently introduced tools. Also, PQD could turn out to be useful in designing quantum cryptographic protocols where the possibility of securely erasing old data and keys is required.

Stimulating discussions with N. Datta, G. Gour, M. Horodecki, and J. Oppenheim are gratefully acknowledged. This research was supported by the Program for Improvement of Research Environment for Young Researchers from Special Coordination Funds for Promoting Science and Technology (SCF) commissioned by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.

A. K. Pati and S. L. Braunstein, Nature 404, 164 (2000); M. Horodecki, R. Horodecki, A. Sen(De), and U. Sen, Found. Phys. 35, 2041 (2005); R. Jozsa, IBM J. Res. & Dev. 48, 79 (2004); S. L. Braunstein and A. K. Pati, Phys. Rev. Lett. 98, 080502 (2007); P. Hayden and J. Preskill, J. High Energy Phys. 9, 120 (2007).

A. Ambainis, M. Mosca, A. Tapp, and R. de Wolf, in Proc. 41st FOCS, 2000, p. 547 (unpublished), e-print arXiv:quant-ph/0003101v2; P. O. Boykin and V. Roychowdhury, Phys. Rev. A 67, 042317 (2003); P. Hayden, D. Leung, P. W. Shor, and A. Winter, Commun. Math. Phys. 250, 371 (2004).
Given a bipartite system, correlations between its subsystems can be understood as information that each one carries about the other. In order to give a model-independent description of secure information disposal, we propose the paradigm of _private quantum decoupling_, corresponding to locally reducing correlations in a given bipartite quantum state without transferring them to the environment. In this framework, the concept of _private local randomness_ naturally arises as a resource, and total correlations get divided into eliminable and ineliminable ones. We prove upper and lower bounds on the amount of ineliminable correlations present in an arbitrary bipartite state, and we show that, in tripartite pure states, ineliminable correlations satisfy a monogamy constraint, making their quantum nature apparent. A relation with entanglement theory is provided by showing that ineliminable correlations constitute an entanglement parameter. In the limit of infinitely many copies of the initial state, we compute the regularized ineliminable correlations, which turn out to be measured by the coherent information; the latter is thus equipped with a new operational interpretation. In particular, our results imply that two subsystems can be privately decoupled if their joint state is separable.
the actin cytoskeleton is a dynamic filament system used by cells to achieve mechanical strength and to generate forces . in response to biochemical or mechanical signals, it switches rapidly between different morphologies , including isotropic networks and contractile filament bundles .the isotropic state of crosslinked passive actin networks has been studied experimentally in great detail , for example with microrheology .similar approaches have been applied to actively contracting actin networks and live cells . however, less attention has been paid to the mechanical response of the other prominent morphology of the actin cytoskeleton , namely the contractile actin bundles , which in mature adhesion appear as so - called stress fibers . during recent years , it has become clear that stress fibers play a crucial role not only for cell mechanics , but also for the way adherent tissue cells sense the mechanical properties of their environment .thus it is important to understand how passive viscoelasticity and active contractility conspire in stress fibers .stress fibers are often mechanically anchored to sites of cell - matrix adhesion , are contracted by non - muscle myosin ii motors and have a sarcomeric structure similar to muscle , as shown schematically in . however, their detailed molecular structure is much less ordered than in muscle . in particular ,stress fibers in live cells continuously grow out of the focal adhesions and tend to tear themselves apart under the self - generated stress . up to now, the mechanical response of stress fibers has been measured mainly isolated from cells .recently , pulsed lasers have been employed to disrupt single stress fibers in living cells . by using the intrinsic sarcomeric pattern or an artificial pattern bleached into the fluorescently labeled stress fibers ,the contraction dynamics of dissected actin stress fibers has been resolved with high spatial and temporal resolution along their whole length .these experiments showed that dissected stress fibers contract non - uniformly and that the total contraction length saturates for long fibers , suggesting that stress fibers in adherent cells are not only attached at their endpoints , but also along their whole length . in the same study ,cyclic forces have been applied to stress fibers by an afm cantilever , mimicking physiological conditions like in heart , vasculature or gut .shows schematically how laser cutting and micromanipulation are applied to an adherent cell. early theory work on stress fibers focused on the dynamics of self - assembly leading to a stable contractile state .later more detailed mechanical models have been developed and parametrized by experimental data .here we investigate a generic continuum model for the mechanics of contractile filament bundles and show that it can be solved analytically for the boundary conditions corresponding to stress fiber laser nanosurgery and cyclic pulling experiments .our analytical results can be easily used for analyzing experimental data . for relaxation dynamics after laser cutting ,our model predicts unexpected oscillations .we reevaluate data obtained earlier from laser cutting experiments and indeed find evidence for the predicted oscillations .this paper is organized as follows . inwe introduce our continuum model , including the central stress fiber equation , .the stress fiber equation is a partial differential equation with mixed spatial and temporal derivatives . 
in order to solve it analytically ,in we discretize this equation in space .this results in a system of ordinary differential equations , which can be solved in closed form by an eigenvalue analysis . in ,we take the continuum limit of this solution , thus arriving at the general solution of the continuum model .this general solution is given in , with the corresponding spectrum of retardation times given in . in and , we specify and discuss the general solution for the boundary conditions appropriate for laser cutting and cyclic loading , respectively . in , we close with a discussion .we model the effectively one - dimensional stress fiber as a viscoelastic material which is subject to active myosin contraction forces and which interacts viscoelasticly with its surrounding . in the framework of continuum mechanics ,the fiber internal viscoelastic stress is given by the viscoelastic constitutive equation : where and denote the relevant components of the stress and strain tensors , respectively . denotes the displacement along the fiber and is the internal stress relaxation function .in addition to the viscoelastic stress , the fiber is subject to myosin contractile stress , which we characterize by a linear stress - strain rate relation . denotes the strain rate of an unloaded fiber and is the maximal stress that the molecular motors generate under stalling conditions .in addition to the fiber internal stresses , viscoelastic interactions with the surrounding lead to body forces , , that act over a characteristic length along the fiber and resist the fiber movement : here , is the external stress relaxation function . in the followingwe assume that both internal and external stress relaxation functions have the characteristics of a kelvin - voigt material : where and are elastic and viscous parameters , respectively , and and denote the heaviside step and dirac delta function , respectively .the chosen kelvin - voigt model is the simplest model for a viscoelastic solid that can carry load at constant deformation over a long time .note that represents the elastic foundation of the stress fibers revealed by the laser cutting experiments , while represents dissipative interactions between the moving fiber and the cytoplasm . our central equation ( the _ stress fiber equation _) follows from mechanical equilibrium , , which results in the following partial differential equation : this equation has been written in non - dimensional form using the typical length scale , the time scale , the force scale , the non - dimensional ratio of viscosities and the non - dimensional ratio of stiffnesses .( [ eq_model_nondim ] ) has been derived before via a different route , namely as the continuum limit of a discrete model representing the force balance in each sarcomeric element of a discrete model .however , the pure continuum viewpoint taken here seems at least equally valid , because stress fibers are more disordered than muscle and because the interactions with the environment represented by and are expected to be continuous along the stress fiber . in general , the stress fiber equation ( [ eq_model_nondim ] ) can be solved numerically with finite element techniques . in this paper , we show that it also can be solved analytically . 
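Before turning to the analytical treatment, a numerical solution is easy to set up. The sketch below is not the authors' finite element code; it discretizes in space the non-dimensional equation in the form reconstructed from the boundary conditions and continuum-limit expressions given further below, namely d^2(u_t + u)/dx^2 = gamma*u_t + kappa*u with u(0,t) = 0, d(u_t + u)/dx = -f(t) at x = L, and u(x,0) = 0. The parameter values, grid, and the constant boundary force are illustrative assumptions.

```python
import numpy as np

L, kappa, gamma = 5.0, 1.0, 0.1          # non-dimensional length and coupling ratios (assumed)
M, dt, T_end = 100, 2e-3, 20.0           # grid intervals, time step, simulated time (assumed)
h = L / M

def f(t):
    return 1.0                            # laser cut: boundary force equals the stall force

# matrix acting on the nodal velocities v_1..v_M (the clamped node u_0 = v_0 = 0 is eliminated)
A = np.zeros((M, M))
for i in range(M - 1):                    # rows for interior nodes 1..M-1
    if i > 0:
        A[i, i - 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - gamma
    A[i, i + 1] = 1.0 / h**2
A[M - 1, M - 2] = -1.0 / h                # boundary row: one-sided d/dx at x = L
A[M - 1, M - 1] = 1.0 / h

u = np.zeros(M)                           # displacements u_1..u_M, u(x,0) = 0
for step in range(int(T_end / dt)):
    t = step * dt
    up = np.concatenate(([0.0], u))       # prepend the clamped node
    b = np.empty(M)
    for i in range(1, M):                 # right-hand side from the elastic terms
        b[i - 1] = -(up[i + 1] - 2.0 * up[i] + up[i - 1]) / h**2 + kappa * up[i]
    b[M - 1] = -f(t) - (u[M - 1] - u[M - 2]) / h
    v = np.linalg.solve(A, b)             # velocities follow from the mixed-derivative term
    u += dt * v                           # explicit Euler step

print("tip displacement u(L, T_end) =", round(u[-1], 4))
print("continuum static limit -f*tanh(sqrt(kappa)*L)/sqrt(kappa) =",
      round(-1.0 * np.tanh(np.sqrt(kappa) * L) / np.sqrt(kappa), 4))
```

The final printout compares the relaxed tip displacement with the static elastic solution of the reconstructed boundary value problem, which serves as a simple consistency check of the scheme.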
in order to solve the stress fiber equation , we have to impose boundary and initial conditions .we impose the boundary conditions that the fiber is firmly attached at its left end at , and is pulled with a certain boundary force at its right end at : the boundary condition for describes the balance of forces at the right end of the fiber where is the difference between the myosin stall force and the externally applied boundary force . as initial condition, we simply use , that is vanishing displacement .before we derive the general model solution , we briefly discuss the special case . this case can be easily solved and gives first insight into the solution for the displacement field . with the definition , the partial differential equation becomes a homogeneous linear ordinary differential equation : with the boundary conditions and .it can be solved by an exponential ansatz and thus leads to a inhomogeneous linear ordinary differential equation for , with the initial condition .the final solution reads laser cutting experiments correspond to the situation where the externally applied boundary forces vanish , that is .then the integral in is trivial and thus the special case leads to a retardation process with a single retardation time .the largest , always negative displacement given by occurs at , where the fiber was released by the laser cut .the magnitude of the displacement decreases exponentially with increasing distance from this point and the typical length scale of this decay is given by .in order to find a closed analytical solution for the general stress fiber equation , , we discretize our model in space . in order to implement the correct boundary conditions at , it is convenient to symmetrize the system .thus we consider a doubled model with units and nodes as shown in . like in the continuum model , internal and external stress relaxationare modeled as kelvin - voigt - like , that is ( , ) and ( , ) are the spring stiffness and the viscosity of the internal and external kelvin - voigt elements , respectively .each internal kelvin - voigt body is also subject to the contractile actomyosin force modeled by a linearized force - velocity relationship : is the force exerted by the -th motor moving with velocity . is the zero - load or maximum motor velocity and is the stall force of the motor . in the final relationwe have used that the contraction velocity of the -th motor , , can be related to the rate of elongation of the n - th sarcomeric unit as . , a dashpot of viscosity and a linear extension .actomyosin contractility is described by a contractile element with contraction force added to each kelvin - voigt body in parallel .viscoelastic interactions between fiber elements and their surrounding are described by an additional set of external kelvin - voigt bodies with stiffness and viscosity .the total fiber length is . denotes the displacement of the -th node .( b ) schematic drawing of the solution for the node displacements assuming that both ends are pulled by an external force . since both terminating nodes are pulled outward , the solution for the displacements is antisymmetric with respect to the center node at , which therefore does not move. 
thus we obtain the boundary conditions of interest , clamped at and pulled by at .( c ) the index starts counting at the center node .the index starts counting at the node which terminates the fiber at the left.,scaledwidth=90.0% ] our model resembles the kargin - slonimsky - rouse ( ksr ) model for viscoelastic polymers , although it is more complicated due to the presence of active stresses and the elastic coupling to the environment .the main course of our derivation of the solution for the discrete model follows a similar treatment given before for the ksr - model .the force balance at each node of the fiber as shown in reads we non - dimensionalized time using the time scale , introduced the non - dimensional parameters ( , ) and combine all inhomogeneous boundary terms in the function : it is important to note that is not made non - dimensional in regard to space ; this will be done later when the continuum limit is performed . by taking the difference of subsequent equations in and by introducingthe relative coordinates , we can write with the matrix : the matrix has the same form as , except that replaces .in addition we have defined the -dimensional vectors : we first solve the homogeneous equation .let be an eigenvalue and let be the associated eigenvector that solves the eigenvalue problem : then the general solution of the homogeneous equation is given by : with the eigenvalues and eigenvectors \sin ( \frac{\pi 2l}{2n+1 } ) \\[0.4 cm ] \sin ( \frac{\pi 3l}{2n+1 } ) \\\vdots\\ \sin ( \frac{\pi 2nl}{2n+1 } ) \\\end{array } \right)\ .\ ] ] it is straight forward to check that is indeed the solution to the eigenvalue problem defined by , see the appendix .there we also prove that the eigenvalues are distinct , positive and non - zero , and that the eigenvectors are orthogonal and their length is given by .these results validate the form of the homogeneous solution given in . in order to determine the solution of the inhomogeneous equation , , we use variation of the coefficients : inserting this ansatz into the inhomogeneous and using the homogeneous solution yields conditions defining the coefficients : evaluation of the product , rewriting the equations by components , and applying appropriate addition theorems yields here the first sinus term is simply the -th component of the -th eigenvector . 
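The structural facts used in this eigenvalue analysis are easy to verify numerically. The sketch below is an independent check, not part of the paper: it confirms that the vectors with components sin(pi*j*l/(2N+1)), j = 1, ..., 2N, diagonalize the 2N x 2N second-difference matrix with eigenvalues 4 sin^2(pi*l/(2(2N+1))), the combination appearing in the coefficients above, and that, scaled by sqrt(2/(2N+1)), they form an orthonormal basis (the appendix statement). The full model matrix additionally carries the viscoelastic constants and the motor terms, which are not reproduced here.

```python
import numpy as np

N = 6                                                   # number of sarcomeric units (illustrative)
n = 2 * N                                               # size of the symmetrized system
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # plain second-difference matrix

j = np.arange(1, n + 1)
U = np.sqrt(2.0 / (2 * N + 1)) * np.array(
    [np.sin(np.pi * j * l / (2 * N + 1)) for l in range(1, n + 1)]).T   # column l-1 = eigenvector l

lam = 4 * np.sin(np.pi * np.arange(1, n + 1) / (2 * (2 * N + 1)))**2    # 4 sin^2(pi l / (2(2N+1)))

print("max |T u_l - lambda_l u_l| :", np.max(np.abs(T @ U - U * lam)))
print("max |U^T U - 1|            :", np.max(np.abs(U.T @ U - np.eye(n))))
```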
we define a new matrix and a new -dimensional vector with these definitions , can be rewritten as : because is built up by the normalized and orthogonal eigenvectors , .moreover it is symmetric , thus .therefore the only non - zero components of are .therefore the solution for is given by : & \displaystyle = & \displaystyle -f(t)\sqrt{\frac{2}{2n+1 } } \left(1+(-1)^{l+1}\right)\sin ( \frac{\pi l}{2n+1})\ .\end{array}\ ] ] we conclude that all even - numbered components of vanish .the coefficients are obtained by using and integrating : \displaystyle-\frac{4}{2n+1}\,\frac{\sin ( \frac{\pi l}{2n+1 } ) } { \gamma+4\sin^2 ( \frac{\pi l}{2(2n+1 ) } ) } \int_0^t f(t')e^{\lambda_l t'}dt'&\,\,\,\,\,\,\,\,\text{if}\,\,\,l\,\,\text{odd } \end{array } \right.\ ] ] the solution for the relative coordinates follows from : the actual displacements are recovered from the relative coordinates by evaluating the telescoping sum : & = & \displaystyle\sum_{j=1}^{2n}y_j\ .\end{array}\ ] ] since the solution has to be antisymmetric with respect to the center node at , compare , it must hold true that and more generally , such that for , the displacements are given by : & \displaystyle = & \displaystyle \frac{1}{2}\sum_{j=1}^{2n - k}y_j-\frac{1}{2}\sum_{j=1}^{k}y_j\\[0.7 cm ] & \displaystyle = & \displaystyle \frac{1}{2}\sum_{l=1,3,5,\ldots}^{2n}c_l(t)e^{-\lambda_l t}\left(\sum_{j=1}^{2n - k}\sin ( \frac{\pi j l}{2n+1 } ) - \sum_{j=1}^{k}\sin ( \frac{\pi j l}{2n+1 } ) \right)\ .\\[0.7 cm ] \end{array}\ ] ] in the last step , we used the solution for the relative coordinates given by and have subsequently reversed the order of summation in both terms .the two sums in parenthesis can be further simplified by using the identity rewriting the result to the index , see , we obtain the desired solution of the discrete model : with the retardation times : note that gives the correct result for the left boundary . for this reason , we can extend the range of validity of to .we also note that the retardation times depend on the number of units because the solution describes the movement of a fiber with units which is attached at its left end and is pulled at its right end with boundary force .it is straight forward to confirm the validity of the derived discrete solution , and , by inserting it into the discrete model equation , .the discrete stress fiber model can be transformed to a continuum equation by considering the limit while the length of the fiber is kept constant . in this process, the stress fiber length is subdivided into incremental smaller pieces of length .thereby it has to be ensured that the effective viscoelastic properties of the whole fiber are conserved .this is accomplished by re - scaling all viscoelastic constants in each iteration step with the appropriate scaling factor according to : k_{n , ext } & = & { \displaystyle \phi_n^{-1}k_{ext } } & \,\,\,\,\,\,\text{and}\,\,\,\,\,\ , & \gamma_{n , ext } & = & { \displaystyle\phi_n^{-1}\gamma_{ext } } \end{array}\ ] ] to further clarify this procedure consider a single harmonic spring of resting length and stiffness .this spring is equivalent to two springs of length and stiffness that are connected in series . here , the scaling factor is .thus , the stiffness in represents the stiffness of a fiber fragment of length and increases linearly with the number of partitions , whereas is the reference stiffness of a fiber fragment of length . 
a typical value for the length scale would be , the typical length of sarcomeric units in stress fibers .while increases linearly with the number of partitions , decreases according to .similarly it follows that the viscous parameter and scale as and , respectively .the non - dimensional parameters and the boundary force scale like : we begin the limiting procedure by introducing the continuous spatial variable , denoting the position of the -th node within the discrete chain with units . then yields ( also compare ): \textrm{for~}n=1,\ldots , n\textrm{:}\hspace{0.5 cm } \\\dot{u}(x+a_n)-2\dot{u}(x)+\dot{u}(x - a_n)-\gamma_n\dot{u}(x)+u(x+a_n)-2u(x)+u(x - a_n)-\kappa_n u=0\\[0.3 cm ] \textrm{for~}n = n\textrm { : } \\ \dot{u}(l)-\dot{u}(l - a_n)+\gamma_n\dot{u}(l)+u(l)-u(l - a_n)+\kappa_nu(l)+f_n(t)=0\\ \end{array}\ ] ] using the scaling relations for the viscoelastic parameters given in and conducting the limit yields for : \displaystyle a^2\lim_{n\rightarrow\infty}\left(\frac{u(x+a_n)-2u(x)+u(x - a_n)}{{a_n}^2}\right)-\kappa u(x)&\displaystyle = & \displaystyle 0 \end{array}\ ] ] since is a sequence which converges to zero , the limits define the second derivative of with respect to .the continuum limit of the upper equation results in a partial differential equation for the displacement .the highest order term will contain mixed derivatives in and , namely , .similarly , the limiting process can be performed for the boundary condition at the right end .note that at this point the spatial variable evaluates to : \displaystyle a\lim_{n\rightarrow\infty}\left(\frac{u(l)-u(l - a_n)}{a_n}\right)+\kappa\lim_{n\rightarrow\infty}\frac{a_n}{a}u(l ) + f(t)=&\displaystyle 0 \end{array}\ ] ] in each line of the equation , the first limit gives the first derivative of with respect to evaluated at and the second limit in each line vanishes as converges to zero .consequently , in the continuum representation , the stresses which originate from shearing the environment can not contribute to the boundary condition .our continuum model for stress fibers defined by and is recovered after non - dimensionalizing using the typical length scale . to obtain a closed solution for the continuous model ,we apply the continuum limit to the discrete model solution .the limiting procedure is first performed on the retardation times of the discrete model given by : performing the limit yields the retardation times of the continuum model : since , the upper relation defines infinitely many discrete retardation times , non - dimensionalized by . fromwe deduce that the retardation times are bounded by the extreme values and according to : the first relation holds if , whereas the second holds if . in the special case the finite range of possible values collapses to the single retardation time which leads to a very simple form of the analytical solution as shown above with .the limiting procedure applied in can be carried out similarly on the remaining -dependent terms of the discrete solution given by .this yields our central result , i.e. the solution for the continuous boundary value problem defined by and : in combination with is the general solution of our continuum model .we successfully checked the validity of our analytical solution by comparision with a numerical solution of .one big advantage of the analytical solution is that it can be easily used to evaluate experimental data . 
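To make the bounds on the retardation times concrete, the following sketch evaluates a candidate spectrum of the form tau_m = (gamma + q_m)/(kappa + q_m) with q_m = ((2m-1)*pi/(2L))^2. This expression is reconstructed from the separation-of-variables structure of the equation used in the solver above and from the 4*kappa*L^2 + pi^2*(2m-1)^2 fragments that reappear in the complex-modulus series below, so it should be treated as an assumption of the sketch rather than the paper's formula. It illustrates the statements in the text: infinitely many discrete retardation times, all confined between the extreme values gamma/kappa and 1, and collapsing to a single value in the special case gamma = kappa.

```python
import numpy as np

L = 5.0                                                # non-dimensional fiber length (assumed)
for kappa, gamma in [(1.0, 0.1), (1.0, 1.0), (0.2, 1.0)]:
    m = np.arange(1, 8)
    q = ((2 * m - 1) * np.pi / (2 * L))**2             # spatial mode eigenvalues (assumed form)
    tau = (gamma + q) / (kappa + q)                    # candidate retardation times tau_m
    print(f"kappa={kappa}, gamma={gamma}:  tau_m = {np.round(tau, 3)}"
          f"  (extreme values gamma/kappa = {gamma / kappa:.2f} and 1)")
```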
in the following, we will discuss its consequences for the two special cases of laser cutting and cyclic loading ., the displacements of inner fiber segments exhibit damped oscillations around their final steady state . herewe show a log - log - plot of the time course of the absolute difference calculated from for the position , which is close to the position with maximal amplitude and corresponds to the band in .the inset gives the amplitudes of the first and second oscillation along the fiber , with maxima and , respectively .the position used here is highlighted as dashed line .( b ) the difference at the position is shown on a linear scale .numbering of the extremal values are included for comparison with ( a ) .parameters for ( a ) and ( b ) are as in : and , . ,scaledwidth=80.0% ] if a fiber is cut by a train of laser pulses , then there are no external forces acting anymore on the free fiber end and vanishes . then can be written as : the solution for the displacementcan be understood as a retardation process with infinitely many discrete retardation times given by and associated , spatially dependent amplitudes .the amplitudes are given by : the solution for the displacement at , the position of the cut , is particularly simple .at this special position , the amplitudes have a linear relation to the corresponding retardation times : since the range of possible retardation times is bounded according to , it follows that the spectrum at has only negative amplitudes and the resulting solution for the displacement at is always a monotonically decreasing function . however , this is not true for arbitrary .inspection of yields that negative as well as positive amplitudes appear simultaneously .since evaluates the numerator in at its maximum , the resulting spectrum constitutes a lower bound for the negative amplitudes of the spectra with .similarly , the absolute value of gives an upper bound for all positive amplitudes .thus , the retardation spectra with oscillate around zero within an envelope for the amplitudes that decays linearly toward zero .this can lead to damped oscillations in the displacement of inner fiber bands about their stationary value .an representative time course is shown in .the emergence of these oscillations is particularly interesting since the stress fiber is modeled in the overdamped limit , that is , inertia terms are neglected .we find that these damped oscillations in this inertia - free system occur only for , but then for all positions .the amplitude of these oscillations reach their maximum at distinct positions along the fiber , as shown by the inset to .the location of the maxima moves further away from the cut ( toward smaller -values ) with increasing order of the oscillation . 
shows the time course of the displacement at which is close to the position where the first oscillation reaches its maximum .using the same parameters as in , we show in the time course of the displacement at where the second oscillation reaches its maximum .since the oscillations are strongly damped , the maximum amplitude of the oscillations also decreases with the order .while the maximal amplitude of the first oscillation can reach hundreds of nanometers ( at , see inset to ) , the maximal amplitude of the second oscillation is already much smaller and only of the order of tens of nanometers ( at , see ) .thus , in order to detect the oscillations in experiments , it is essential to measure close to where the oscillations reach their respective maximum amplitude .we demonstrate this by showing predicted time courses of the difference at the two positions in ( b ) and ( b ) , respectively .while the first oscillation is most prominent in ( b ) , the second oscillations is not detectable at this position .in contrast , the amplitude of the first oscillation in ( b ) is reduced compared to ( b ) but the amplitude of the second oscillation is much larger and becomes detectable . on a log - log - scale at the position , where the amplitude of the second oscillation attains its maximum .the inset again shows the maximum amplitude of the first and second oscillation along the fiber , but now the position is highlighted as dashed line .( b ) the difference at the position is shown on a linear scale .numbering of the extremal values are included for comparison with ( a ) .parameters used for ( a ) and ( b ) are the same as in and .,scaledwidth=80.0% ] for the continuum model ( solid ) and for the discrete model ( dashed ) .the continuum solution is calculated for a fiber of length at position .the discrete solution is calculated for a fiber with subunits at node .both solutions were calculated for the same parameters as extracted from experimental data .the two solutions agree well with each other and both models predict oscillations.,scaledwidth=60.0% ] to show that the oscillations are not an artifact introduced by the continuum limit , we have compared solutions of corresponding continuous and discrete models .results are shown in where we have used the same parameters as for and . to facilitate comparison between continuum and discrete model ,solutions are calculated for integer fiber lengths and integer positions .we find that continuum and discrete solution agree very well and , most importantly , both predict oscillations when . .( b ) fiber bleached with stripe pattern .( c - e ) stress fiber , and after laser cutting .( f ) time - space kymograph reconstructed from fluorescence intensity profiles along the stress fiber .band positions are extracted by edge - detection ( solid lines ) , scale bars : and . ( g )model fit to the displacement data of shown bands ( n increasing from bottom to top ) with initial positions yields with .note that and that the experimental data provide evidence for the predicted oscillations , because the curves for n=5 and 7 show dips at 8s and 5s after cutting , respectively ., scaledwidth=90.0% ] because of the analytical solution , we can easily apply our model to evaluate experimental data for stress fiber contraction dynamics induced by laser nano - surgery . 
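Whether such an overshoot appears for a given parameter set can also be checked by brute force, directly at the level of the discrete chain. The sketch below integrates the node equations quoted before the continuum limit (second differences of u and of its time derivative minus the foundation terms at interior nodes, and a one-sided difference plus foundation terms plus the boundary force at the last node) after a cut that loads the free end with the non-dimensional stall force. The chain length, per-unit parameters, and time step are illustrative assumptions. For each node the script reports how far the displacement dips below its long-time value, so the presence or absence of damped oscillations can be compared between a case with vanishing frictional coupling and the single-retardation-time case gamma_u = kappa_u.

```python
import numpy as np

def simulate(M=15, kappa_u=0.3, gamma_u=0.0, f=1.0, dt=5e-3, T=30.0):
    """Integrate the discrete chain; node 0 is clamped, node M carries the boundary force."""
    D2 = -2 * np.eye(M) + np.eye(M, k=1) + np.eye(M, k=-1)   # second difference, nodes 1..M
    B = D2 - gamma_u * np.eye(M)                             # operator acting on the velocities
    B[M - 1, :] = 0.0                                        # boundary row: one-sided difference
    B[M - 1, M - 2], B[M - 1, M - 1] = -1.0, 1.0 + gamma_u
    u = np.zeros(M)
    u_min = np.zeros(M)                                      # most negative value seen per node
    for _ in range(int(T / dt)):
        rhs = -(D2 @ u - kappa_u * u)                        # interior elastic + foundation terms
        rhs[M - 1] = -((u[M - 1] - u[M - 2]) + kappa_u * u[M - 1] + f)
        u = u + dt * np.linalg.solve(B, rhs)                 # explicit Euler step
        u_min = np.minimum(u_min, u)
    return u, u_min

for gamma_u in (0.0, 0.3):                                   # no friction vs gamma_u = kappa_u
    u_final, u_min = simulate(gamma_u=gamma_u)
    overshoot = np.maximum(0.0, u_final - u_min)             # dip below the long-time value
    print(f"gamma_u={gamma_u}:  max overshoot {overshoot.max():.4f} at node "
          f"{int(overshoot.argmax()) + 1},  tip displacement {u_final[-1]:.3f}")
```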
briefly, ptk-2 cells were tranfected with gfp - actin and a stripe pattern was bleached into their stress fibers .10 s later the stress fibers were cut with a laser and their retraction was recorded over several minutes .kymographs were constructed and for each band , the retraction trace was extracted by edge detection .least - square fitting of the theoretical predictions to four selected bands simultaneously was used to estimate the four model parameters ( ) .an representative example for the outcome of this procedure is shown in ( more examples and the details of our experiments are provided in the supplementary material ) .we find that and ( mean std , ) .this means that in our experiments the second relation of applies and that the oscillations demonstrated by are predicted for this experimental system .for the positions corresponding to bands and our model predicts minima at and , respectively .indeed these minima appear as dips in the experimental data shown in ( g ) .the response of stress fibers to cyclic loading is characterized by the complex modulus which we derive from the general solution by assuming a cyclic boundary force , with a constant offset compensating the stall force of the molecular motors .evaluation of the resulting integral in yields : inspection of the time - dependent terms yields that the solution for the displacements approaches a harmonic oscillation .the deviations decay exponentially in time , according to . as a consequence , in the limit for large times, the fiber displacements also oscillate with the same frequency as the force input , but the stationary phase shift between displacements and might vary spatially along the fiber . in the following ,we are only interested in the response of the fiber as a whole , i.e. we focus on the displacement at . with the above arguments , we find in the limit for large times : & \displaystyle = 1/\mathcal{g}^*(\omega ) & \\\end{array}\ ] ] the complex modulus , non - dimensionalized by , can be deduced from by noting that the cyclic force input and the creep response of the fiber are connected by the inverse of the complex modulus .the expression for the complex modulus can be separated into its real and imaginary part , the storage and the loss modulus , respectively : & \displaystyle = \mathcal{g}'(\omega ) & \displaystyle = \mathcal{g}''(\omega)\\ \end{array}\ ] ] with q(\omega)=&\displaystyle 8l\sum_{m=1}^\infty\frac{1}{4\kappa l^2+\pi^2(2m-1)^2}\,\frac{\omega \tau_m}{\omega^2\tau^2_m+1}\ .\\ \end{array}\ ] ] and ., scaledwidth=66.0% ] an alternative , more concise expression for the complex modulus can be derived by solving the laplace - transformed model equation for the situation of a sudden force application , , where is the unit step function .solution of this laplace - transformed boundary value problem for , with , directly yields the laplace - transformed creep compliance , .it is connected to the complex modulus by : or are equivalent expressions for the complex modulus of the stress fiber model .to further study its frequency dependence we use . in the special case , it simplifies to .the storage modulus becomes a constant , and the loss modulus is linearly dependent on the frequency .these are the characteristics of a kelvin - voigt body .the more the ratio differs from unity , the larger are the deviations from these simple characteristics . to study the general case ,consider the limits and . 
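The series can be summed numerically. The sketch below evaluates a complex creep compliance of the form J*(w) = sum_m [8L/(4*kappa*L^2 + pi^2*(2m-1)^2)] / (1 + i*w*tau_m) and inverts it to obtain storage and loss moduli; the weights are read off from the fragments visible above and the tau_m are taken from the earlier sketch, so both should be regarded as reconstructions rather than the paper's exact expressions. As consistency checks, the printout compares the high-frequency loss slope with sqrt(gamma)*coth(L*sqrt(gamma)), the expression quoted for that limit in the following paragraph, and the low-frequency storage plateau with sqrt(kappa)*coth(L*sqrt(kappa)), which is inferred from the static force balance.

```python
import numpy as np

L, kappa, gamma = 5.0, 1.0, 0.1                       # illustrative non-dimensional parameters
m = np.arange(1, 20001)
q = ((2 * m - 1) * np.pi / (2 * L))**2
tau = (gamma + q) / (kappa + q)                       # assumed retardation spectrum (see above)
weight = 8 * L / (4 * kappa * L**2 + np.pi**2 * (2 * m - 1)**2)

def modulus(omega):
    J = np.sum(weight / (1 + 1j * omega * tau))       # reconstructed complex creep compliance
    G = 1 / J                                         # complex modulus G*(w) = 1 / J*(w)
    return G.real, G.imag                             # storage and loss moduli

for omega in (1e-3, 1e-1, 1e1, 1e3):
    Gp, Gpp = modulus(omega)
    print(f"omega={omega:8.0e}   G'={Gp:10.4f}   G''={Gpp:12.4f}")

print("low-frequency storage plateau, sqrt(kappa)*coth(L*sqrt(kappa)) :",
      round(np.sqrt(kappa) / np.tanh(L * np.sqrt(kappa)), 4))
print("high-frequency loss slope,     sqrt(gamma)*coth(L*sqrt(gamma)) :",
      round(np.sqrt(gamma) / np.tanh(L * np.sqrt(gamma)), 4))
```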
in both limits, the stress fiber model again exhibits the characteristics of a kelvin - voigt body .the explicit values for the limit are : \displaystyle\mathcal{g}_0''(\omega)=&\displaystyle\frac{1}{4\kappa}\operatorname{csch}^2(l\sqrt{\kappa})\left(2l\kappa(\kappa-\gamma)+\sqrt{\kappa}(\kappa+\gamma)\sinh(2l\sqrt{\kappa})\right)\omega.\\ \end{array}\ ] ] similarly , in the limit , we find : \displaystyle\mathcal{g}_{\infty}''(\omega)=&\displaystyle\sqrt{\gamma}\coth(l\sqrt{\gamma})\omega.\\ \end{array}\ ] ] it holds that , with equality for .a similar relation holds for the slope of the loss modulus at high and low frequencies . shows the predicted frequency dependences of and we have presented a complete analytical solution of a generic continuum model for the viscoelastic properties of actively contracting filament bundles .our model contains the most important basic features which are known to be involved in the function of stress fibers , namely internal viscoelasticity , active contractility by molecular motors , and viscous and elastic coupling to the environment .the resulting stress fiber equation , , can be solved with numerical methods for partial differential equations . in this paper , we have shown that a general solution can be derived by first discretizing the equation in space . in order to implement the correct boundary conditions ,the system is symmetrized by doubling its size .the resulting system of ordinary differential equations leads to an eigenvalue problem which can be solved exactly , leading to .a continuum limit needs to take care of the appropriate rescaling of the viscoelastic parameters and finally leads to the general solution for the stress fiber equation .the validity of our analytical solution has been successfully checked by comparing it with both the discrete and numerical solutions . due to their analytical nature, our results can be easily used to evaluate experimental data .here we have demonstrated this for the case of laser cutting of stress fibers . in an earlier experimental study , we focused on the movement of the first three bands ( ) of the stress fiber .these bands are within less than from the fiber tip and thus we did not report the oscillatory feature of bands farther away from the cut .after prediction of these oscillations by our analytical results , we evaluated the experimental data in this respect and indeed found evidence for their occurance ( and supplementary material ) .this was possible with conventional light microscopy because the amplitude of the first oscillation can reach hundreds of nanometers .the amplitude of the second oscillation , however , is predicted to be typically on the order of tens of nanometers , which is below our resolution limit . in the future , super - resolution microscopy or single particle tracking might allow a nanometer - precise validation of our theoretical predictions .the extracted parameter values suggest that the frictional coupling between stress fiber and cytoplasm , quantified by , is not relevant in our experiments , and that the retraction dynamics is dominated by the elastic foundation quantified by .however , the elastic coupling to the environment might depend on cell type and substrate coating .in fact our findings differ from the results of an earlier study , which neglected elastic , but predicted high frictional coupling . 
in our model ,high frictional coupling corresponds to and thus no oscillations are expected in this case .it would be interesting to cut stress fibers in cells grown on micro - patterned surfaces that prevent substrate attachment along the fiber .we then would expect not only the transition from elastic to viscous coupling , but also the disappearance of the oscillations . as a second application of our theoretical results, we suggest to measure the viscoelastic response function of single stress fibers .this could be done with afm or similar setups either on live cells or on single stress fibers extracted from cells . in this case, our model could provide a valuable basis for evaluating changes in the viscoelastic properties of stress fibers induced by changes in motor regulation , e.g. by calcium concentration or pharmacological compounds . in summary , our analytical results of a generic model open up the perspective of quantitatively evaluating the physical properties for any kind of contractile filament bundle .in order to apply this approach to more complicated cellular or biomimetic systems , it would be interesting to go beyond the one - dimensional geometry of bundles and to also consider higher dimensional arrangements of contractile elements , which could be modeled for example by appropriately modified two- and three - dimensional networks .ehks and uss are members of the heidelberg cluster of excellence cellnetworks .uss was supported by the karlsruhe cluster of excellence center for functional nanostructures ( cfn ) and by the mechanosys - grant from the federal ministry of education and research ( bmbf ) of germany .ab was supported by the nih grant r01 gm071868 and by the german research foundation ( dfg ) through fellowship be4547/1 - 1 .in the main text we have used the eigenvalues and eigenvectors given by without proving that this system indeed solves the eigenvalue problem defined by . here, we verify the solution to the eigenvalue problem and prove the following properties of the eigenvalues and eigenvectors : in order to verify the given eigenvalues and eigenvectors , we first rewrite the matrix as : where and . by using one can express in terms of : substitution of this relation into and the application of the addition theorem yields : to prove it has to be shown that the product vanishes for all .the -th component of the vector which results from this product is given below .it simplifies to zero after application of the addition theorem : thus we have shown that the system of eigenvalues and eigenvector indeed solves the eigenvalue problem . nextwe show that the eigenvalues are distinct , positive and non - zero .the fact that the eigenvalues are positive and non - zero follows directly by inspection of and by noting that all viscoelastic constants are positive .it remains to be shown that there are no multiple eigenvalues .this can be seen after reformulating the expression for the eigenvalues as : since , it holds for the argument of the -function that . in this interval ,the -function increases monotonically and is single - valued .for this reason , the eigenvalues , , are also single - valued .the eigenvalues increase monotonically with if and decrease monotonically for increasing if the opposite inequality holds .next we show that the eigenvectors are orthogonal and their length is given by .consider the matrix of normalized eigenvectors , _ * u * _ , defined in the main text . 
by means of this matrix ,the statement to be shown can be recapitulated as .the second relation follows since is obviously symmetric . in the followingwe will evaluate the square of the matrix by components : & \displaystyle = \frac{2}{2n+1}\sum_{j=1}^{2n}\sin\frac{\pi k j}{2n+1}\sin\frac{\pi j m}{2n+1}\\[0.5 cm ] & \displaystyle = \displaystyle\frac{1}{2n+1}\sum_{j=1}^{2n}\left(\cos\frac{\pi j(k - m)}{2n+1}-\cos\frac{\pi j ( k+m)}{2n+1}\right)\\[0.5 cm ] \end{array}\ ] ] the finite sums over the -functions can be evaluated by expressing it in terms of exponential functions .the used identity is : \end{array}\ ] ] application of in order to simplify finally yields : & \displaystyle\left.\hspace{0.3cm}-\sin((k+m)\pi)\cot\frac{(k+m)\pi}{2(2n+1)}\right.\\[0.5 cm ] & \displaystyle\left.\hspace{0.3cm}+\cos((k+m)\pi)-\cos((k - m)\pi)\right ) \end{array}\ ] ] there are two cases , namely and , that have to be considered .first assume that .in this case , the last two terms in just cancel out each other .the -function in the second term evaluates to zero while the cotangent gives a finite value : since , the singularities are just spared .thus , also this term vanishes .it is only the first term that gives a contribution . using lhpital s rule, it evaluates to : & \displaystyle = \lim_{m\rightarrow k}\frac{\cos\left((k - m ) \pi\right ) } { \cos\frac{(k - m)\pi}{2(2n+1)}}=1 \end{array}\ ] ] this result ensures that all diagonal components of are unity .next assume that . in this case the first as well as the second term in vanish since the -function evaluates to zero while the co - tangent yields finite values .the last two terms further simplify to : & \displaystyle=&\displaystyle\frac{1}{2(2n+1)}(-1)^{k - m}\left((-1)^{2m}-1)\right)=0 \end{array}\ ] ] this result ensures that all off - diagonal components of vanish .the combination of and yields which was to be demonstrated .thus we have shown that all eigenvectors are of length and form a complete orthogonal basis .
The actin cytoskeleton of adherent tissue cells often condenses into filament bundles contracted by myosin motors, so-called stress fibers, which play a crucial role in the mechanical interaction of cells with their environment. Stress fibers are usually attached to their environment at their endpoints, but possibly also along their whole length. We introduce a theoretical model for such contractile filament bundles which combines passive viscoelasticity with active contractility. The model equations are solved analytically for two different types of boundary conditions. A free boundary corresponds to stress fiber contraction dynamics after laser surgery and results in good agreement with experimental data. Imposing cyclically varying boundary forces allows us to calculate the complex modulus of a single stress fiber.
in online computation , we face the challenge of designing algorithms that work in environments where parts of the input are not known while parts of the output ( that may heavily depend on the yet unknown input pieces ) are already needed . the standard way of evaluating the quality of online algorithms is by means of _ competitive analysis _ , where one compares the outcome of an online algorithm to the optimal solution constructed by a hypothetical optimal offline algorithm .since deterministic strategies are often proven to fail for the most prominent problems , randomization is used as a powerful tool to construct high - quality algorithms that outperform their deterministic counterparts .these algorithms base their computations on the outcome of a random source ; for a detailed introduction to online problems we refer the reader to the literature . the most common way to measure the performance of randomized algorithms is to analyze the worst - case expected outcome and to compare it to the optimal solution . with offline algorithms , a statement aboutthe expected outcome is also a statement about the _ outcome with high probability _ due to markov s inequality and the fact that the algorithm may be executed many times to amplify the probability of success .however , this amplification is not possible in online settings . as online algorithms only have one attempt to compute a reasonably good result , a statement with respect to the expected value of their competitive ratio may be rather unsatisfying . as a matter of fact , for a fixed input, it might be the case that such an algorithm produces results of a very high quality in very few cases ( i.e. , for a rather small number of random choices ) , but is unacceptably bad for the majority of random computations ; still , the expected competitive ratio might suggest a better performance . thus ,if we want to have a certain guarantee that some randomized online algorithm obtains a particular quality , we must have a closer look at its analysis .in such a setting , we would like to state that the algorithm does not only perform well on average , but `` almost always . ''besides a theoretical formalization of the above statement , the main contribution of this paper is to show that , for a broad class of problems , the existence of a randomized online algorithm that performs well in expectation immediately implies the existence of a randomized online algorithm that is virtually as good with high probability .our investigations , however , need to be detailed in order to face the particularities of the framework .first , we show that it is not possible to measure the probability of success with respect to the input size , which might be considered the straightforward approach .many of the known randomized online algorithms are naturally divided into some kind of _ phases _ ( e.g. , the algorithm for metrical task systems from borodin et al . , the marking algorithm for paging from fiat et al . , etc . ) where each phase is processed and analyzed separately . since the phases are independent , a high probability result ( i.e. , with a probability converging to with an increasing number of phases ) can be obtained . however , the definition of these phases is specific to each problem and algorithm . also , there are other algorithms ( e.g. , the optimal paging algorithm from achlioptas et al . and many workfunction - based algorithms ) that use other constructions and that are not divided into phases . 
as we want to establish results with high probability that are independent of the concrete algorithms , we thus have to measure this probability with respect to another parameter ; we show that the cost of an optimal solution is a very reasonable quantity for this purpose. then again it turns out that , if we consider general online problems , the notions of the expected outcome and an outcome with high probability are still not related in any way , i.e. , we define problems for which these two measures are incomparable . hence , we carefully examine both to which parameter the probability should relate and which properties we need the studied problem to fulfill to again allow a division into independent phases ; finally , this allows us to construct randomized online algorithms that perform well with a probability tending to with a growing size of the optimal cost .we show that this technique is applicable for a wide range of online problems .classically , results concerning randomized online algorithms commonly analyze their expected behavior ; there are , however , a few exceptions , e.g. , leonardi et al . analyze the tail distribution of algorithms for call control problems , and maggs et al . deal with online distributed data management strategies that minimize the congestion in certain network topologies . in section [ sec : prelim ], we define the class of symmetric online minimization problems and present the main result ( theorem [ thm : exptohp ] ) .the theorem states that , for any symmetric problem which fulfills certain natural conditions , it is possible to transform an algorithm with constant expected competitive ratio to an algorithm having a competitive ratio of with high probability ( with respect to the cost of an optimal solution ) .section [ sec : mainthm ] is devoted to proving theorem [ thm : exptohp ] .we partition the run of the algorithm into phases such that the loss incurred by the phase changes can be amortized ; however , to control the variance within one phase , we need to further subdivide the phases .modelling the cost of single phases as dependent random variables , we obtain a supermartingale that enables us to apply the azuma - hoeffding inequalityand thus to obtain the result .these investigations are followed by applications of the theorem in section [ sec : applications ] where we show that our result is applicable for task systems and that for the -server problem on unbounded metric spaces , no comparable result can be obtained .we further elaborate on the tightness of our result in section [ sec : discussion ] .we use the following definitions of online algorithms that deal with online minimization problems .[ dfn : online - alg ] consider an initial configuration and an input sequence .an _ online algorithm _ the output sequence , where for some function .the _ cost _ of the solution is denoted by . for the ease of presentation, we refer to the tuple that consists of the initial configuration and the input sequence , i.e. , , as the input of the problem .even though the initial configuration is not explicitly introduced in the definition in , it is often very natural , and it is used in the definitions of some well - known online problems ( e.g. , the -server problem ) . 
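The introductory point above, that the success probability has to be measured against the cost of an optimal solution, can be made concrete with a toy calculation (not a construction from the paper): a randomized strategy that pays 1 or 2 per request, independently with probability 1/2 each, against an optimal solution paying 1 per request, is 1.5-competitive in expectation, and the probability of exceeding (1.5 + eps) times the optimal cost vanishes as that cost grows. The main theorem establishes this kind of behavior for a far more general class of problems, where per-phase costs are neither independent nor bounded a priori.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 0.1, 20000
for n in (10, 100, 1000, 10000):                           # n = cost of the optimal solution
    alg_cost = n + rng.binomial(n, 0.5, size=trials)       # each request costs 1 or 2 w.p. 1/2
    ratio = alg_cost / n
    print(f"OPT={n:6d}  mean ratio={ratio.mean():.3f}  "
          f"Pr[ratio > {1.5 + eps:.1f}] = {np.mean(ratio > 1.5 + eps):.4f}")
```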
as we see later , the notion of an initial configuration plays an important role in the relationship between different variants of the competitive ratio .since , for the majority of online problems , deterministic strategies are often doomed to fail in terms of their output quality , randomization is used in the design of online algorithms .formally , randomized online algorithms can be defined as follows . a _ randomized online algorithm _ computes the output sequence such that is computed from , where is the content of the random tape , i.e. , an infinite binary sequence where every bit is chosen uniformly at random and independent of the others . by denote the random variable ( over the probability space defined by ) expressing the cost of the solution . the efficiency of an online algorithm is usually measured in terms of the competitive ratio as introduced by sleator and tarjan .an online algorithm is -_competitive _, for some , if there exists a constant such that , for every initial configuration and each input sequence , , where denotes the value of the optimal solution for the given instance ; an online algorithm is _ optimal _ if it is -competitive with .when dealing with randomized online algorithms we compare the expected outcome to the one of an optimal algorithm .a randomized online algorithm is -competitive _ in expectation _ if there exists a constant such that , for every initial configuration and input sequence , \le r\cdot { { \ifthenelse{\equal{i , x}{\empty } } { \ensuremath{{\ensuremath{\mathrm{cost}}\xspace}}({\ensuremath{\texttt{\textsc{opt}}}\xspace})\xspace } { \ensuremath{{\ensuremath{\mathrm{cost}}\xspace}}_{i , x}({\ensuremath{\texttt{\textsc{opt}}}\xspace})\xspace } } } + \alpha ] denote the set . 1 .on the one hand , there are problems for which the competitive ratio w.h.p . is better than the expected one .consider , e.g. , the following problem .there is a unique initial configuration and the input sequence consists of bits .an online algorithm has to produce one - bit answers .if , for every ] . the proof is done by induction on . for statement holds by definition .let denote the index of the first request after subphases , with , and if there are less than subphases .in order to have at least subphases , the algorithm must enter some suffix of phase at position and incur a cost of more than ( see fig .[ fig : subphases ] ) .hence , = { } & { \ensuremath{\mathrm{pr}}\mathopen{}}[{\ensuremath{\overline{n}}}_{\delta-1}<n_{i+1}-1\mid s_{n_i}=s]\\ & \cdot { \ensuremath{\mathrm{pr}}\mathopen { } } [ w({\ensuremath{\overline{n}}}_{\delta-1},n_{i+1}-1)>d \mid { \ensuremath{\overline{n}}}_{\delta-1}<n_{i+1}-1\wedge s_{n_i}=s].\nonumber \end{aligned}\ ] ] subphases.,width=377 ] the fact that means that there are at least subphases , i.e. 
, = { \ensuremath{\mathrm{pr}}\mathopen{}}[x_{i}\ge\delta-1\mid s_{n_i}=s ] \le p^{\delta-2 } \end{aligned}\ ] ] by the induction hypothesis .further , we can decompose \\ & = \sum_{i',s'\atop n_i\le i'<n_{i+1}-1 } { \ensuremath{\mathrm{pr}}\mathopen { } } [ w({\ensuremath{\overline{n}}}_{\delta-1 } , n_{i+1}-1)>d \mid { \ensuremath{\overline{n}}}_{\delta-1}=i'\wedge s_{i'}=s'\wedge s_{n_i}=s]\nonumber\\ & \quad\cdot { \ensuremath{\mathrm{pr}}\mathopen{}}[{\ensuremath{\overline{n}}}_{\delta-1}=i'\wedge s_{i'}=s ' \mid { \ensuremath{\overline{n}}}_{\delta-1}<n_{i+1}-1\wedge s_{n_i}=s].\nonumber \end{aligned}\ ] ] now let us argue about the probability .\ ] ] the algorithm performed a reset just before reading , so it starts simulating state . however , in the optimal solution , there is some state associated with position such that the cost of the remainder of the phase is at most . due to the assumption of the theorem ,the optimal cost on the input starting from state is at most , and the expected cost incurred by at most . using markov s inequality ,we get \le \frac{r(c + f + b)}{d}=p.\ ] ] plugging into , and then together with into yields the result .now we can argue about the expected cost of a phase .[ lm : wexp ] for any and it holds that \le\mu ] . from the definition of the it follows that .consider any elementary event from the probability space , and let , for be the values of the corresponding random variables .we have (\xi ) = { \ensuremath{\mathbbm{e}}\mathopen { } } [ z_{i+1}\mid z_0=z_0,\dots , z_i = z_i]\\ & = { \ensuremath{\mathbbm{e}}\mathopen { } } [ z_i+{\overline{w}}_{i+1}-\mu\mid z_0=z_0,\dots , z_i = z_i ] = z_i-\mu+{\ensuremath{\mathbbm{e}}\mathopen{}}[{\overline{w}}_{i+1}\mid z_0=z_0,\dots , z_i = z_i]\\ & \textstyle = z_i-\mu+\sum_s{\ensuremath{\mathbbm{e}}\mathopen{}}[{\overline{w}}_{i+1}\mid z_0=z_0,\dots , z_i = z_i , s_{n_{i+1}}=s]\\ & \quad\cdot{\ensuremath{\mathrm{pr}}\mathopen{}}[s_{n_{i+1}}=s \mid z_0=z_0,\dots , z_i = z_i]\\ & \textstyle \le z_i-\mu+\sum_s{\ensuremath{\mathbbm{e}}\mathopen{}}[w(n_{i+1},n_{i+2}-1)\mid s_{n_{i+1}}=s]\cdot{\ensuremath{\mathrm{pr}}\mathopen{}}[s_{n_{i+1}}=s \mid z_0=z_0,\dots , z_i = z_i]\\ & \textstyle \le z_i-\mu+\mu\sum_s{\ensuremath{\mathrm{pr}}\mathopen{}}[s_{n_{i+1}}=s \mid z_0=z_0,\dots , z_i = z_i]=z_i = z_i(\xi ) , \end{aligned}\ ] ] where the last inequality is a consequence of lemma [ lm : wexp ] .now we can use the following special case of the azuma - hoeffding inequality .[ lm : azuma ] let be a supermartingale , such that .then for any positive real , \le\exp\mathopen{}\left(-\frac{t^2}{2k\gamma^2}\right).\ ] ] in order to apply lemma [ lm : azuma ] , we need the following bound .[ clm : martbound ] let be such that .for any it holds that .we are now ready to prove the subsequent lemma .[ lm : mainbound ]let be such that .there is a constants ( depending on , , , ) such that \le\exp\mathopen{}\left(-\frac{k\left((1+\varepsilon)rc-\mu\right)^2}{2c^2\log^2k}\right).\ ] ] applying lemma [ lm : azuma ] for any positive , we get \le\exp\mathopen{}\left(-\frac{t^2}{2kc^2\log^2k}\right).\ ] ] noting that , and choosing the statement follows .the only remaining task is to verify that , i.e. , that there is a constant such that let us choose such that . then , and it is possible to choose such that both as required , and thus , we have and therefore and the claim follows . 
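the tail bound driving this part of the argument is the azuma-hoeffding inequality for supermartingales with bounded differences; the small simulation below only checks the stated bound exp(-t^2 / (2 k gamma^2)) numerically on a toy martingale with bounded increments, and does not reproduce the phase construction of the proof.

```python
# empirical check of the azuma-hoeffding tail bound used above: for a
# (super)martingale Z_0, ..., Z_k with |Z_{i+1} - Z_i| <= gamma,
# Pr[Z_k - Z_0 >= t] <= exp(-t^2 / (2 * k * gamma**2)).
import math
import random

def simulate_tail(k=200, gamma=1.0, t=20.0, trials=20000, seed=0):
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        z = 0.0
        for _ in range(k):
            # mean-zero increment bounded by gamma, so Z is a martingale
            z += rng.uniform(-gamma, gamma)
        if z >= t:
            exceed += 1
    empirical = exceed / trials
    bound = math.exp(-t * t / (2 * k * gamma * gamma))
    return empirical, bound

if __name__ == "__main__":
    emp, bnd = simulate_tail()
    print(f"empirical tail probability: {emp:.4f}")
    print(f"azuma-hoeffding bound     : {bnd:.4f}")
```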
to get to the statement of the main theorem , we show the following technical bound .[ lm : expprob ] for any , and there is a such that for any note that the left - hand side is of the form for some positive constant .clearly , for any and large enough , it holds that . combining lemmata [ lm : mainbound ] and [ lm : expprob ], we get the following result .[ crlr : probzk ] there is a constant ( depending on , , , ) such that for any there is a such that for any we have \le\frac{1}{2(2+kc)^\beta}.\ ] ] in order to finish the proof of the main theorem we show that w.h.p . , is actually the cost of the algorithm .[ lm : zcost ] for any there is a and a such that for any \le\frac{1}{2(2+kc)^\beta}.\ ] ] since the event that happens exactly when there exists some such that .consider any fixed .since the cost of a subphase is at most , it holds that . from lemma [ lm : xprob ]it follows that for any , \le{\ensuremath{\mathrm{pr}}\mathopen{}}\left[x_j\ge\left\lceil\frac{c\log k}{f+d}\right\rceil\right ] \le p^{\frac{c\log k}{f+d}-1}.\ ] ] consider the function it is decreasing , and .hence , it is possible to find a constant , and a such that for any it holds that from that it follows that and i.e. , thus , for this choice of and , it holds that \le p^{\frac{c\log k}{f+d}-1}\le\frac{1}{2k(2+kc)^\beta}.\ ] ] using the union bound , we conclude that the probability that the cost of any phase exceeds is at most .using the union bound , combining lemma [ lm : zcost ] and corollary [ crlr : probzk ] , and noting that the cost of the optimum is at most , we get the following statement .there is a constant such that for any there a such that for any it holds \le\frac{1}{(2+kc)^\beta}.\ ] ] to conclude the proof by showing that for any there is some such that \le\frac{1}{(2+kc)^\beta}\ ] ] holds for all , we have to choose large enough to cover the cases of .for these cases , , and hence the expected cost of at most , and due to lemma [ lm : wexp ] , the expected cost of is constant .the right - hand side is decreasing in , so it is at least , which is again a constant .from markov s inequality it follows that there exists a constant such that <\frac{1}{(2+k_2c)^\beta}\ ] ] finishing the proof of the restricted setting .all that is left to do is to show how to handle problems that are not request - bounded .the main idea is to apply the restricted theorem [ thm : exptohp ] to a modified request - bounded version of the given problem .we then have to show that there is a modified version of the algorithm such that the computed solution has an expected competitive ratio close to the original one for the modified problem . by ensuring that _ any _ solution to the modified problem translates to a solution of the original problem with at most the same competitive ratio, it is enough to apply our theorem to the modified problem to obtain an analogous result for the original problem .let be an opt - bounded symmetric problem ; then is described entirely by the feasible request - answer pairs ( depending on the states ) , by its set of states , and by costs of all request - answer pairs for all states .note that an expected -competitive online algorithm has to have an expected competitive ratio of for every request - answer pair .let denote the cost to give as answer on request when in state of the problem .let be the set of all possible answers .then we define the _-truncated _ version of as follows .let be a state and be a request ; we set i.e. , the minimal cost to answer when in state . 
in assign the cost , if and otherwise .we define all request - answer pairs of such that to have a cost of .both and have the same remaining feasible request - answer pairs for each state .note that any algorithm that gives an answer of cost with nonzero probability can not be competitive and that due to the modifications of the cost function , some distinct states of may become a single state of .we will abuse notation and ignore this fact because it does not change the proof .thus we assume that both problems have the same set of states .we continue with some insights that help us to choose useful values for and .[ claim : bounds ] given an expected -competitive algorithm , for any there is a -competitive online algorithm such that the cost for any provided by at most . furthermore, if , ignore the destination state and give a minimum cost answer greedily . let be the state selected after by an optimal solution and let be the state when giving a greedy answer of cost .let , , and be the costs of the respective optimal solutions when starting from , , or .we first note that the optimal answer that leads from to can have a cost of at most as otherwise , by the opt - boundedness , choosing greedily and moving to would be a better solution .the sum of probabilities of select an answer of cost at least is at most where the parameter is due to the definition of the competitive ratio .otherwise the expected value would be too high if the adversary chooses to only send a single request .we set to satisfy the -closeness to the expected competitiveness .we now show how to handle large values of . to be -competitive, we can afford a cost of if we choose the first answer greedily and apply all remaining requests , the expected cost of the solution is at most therefore , if , the modified solution is -competitive .the claim suggests to set and , where we chose . from now on is the -truncated version of with these values of and . as before , let an online algorithm for that computes a solution with expected competitive ratio at most .we design an algorithm as follows .suppose in state of , the adversary requests . then in state on within . if and the answer has a cost smaller than , the answer of is .otherwise the answer of answers greedily while ignoring the destination state , and performing a reset subsequently .it is clear that all answers of feasible for .we first show that the expected competitive ratio of is at most . for each round with , the claim follows directly from claim [ claim : bounds ] using that any answer in with cost higher than neither affects an optimal answer nor the algorithm s answer due to the claim . otherwise , if , the competitive ratio of the greedy answer is at most , using the same argumentation as in the proof of the second part of claim [ claim : bounds ] . to summarize, is a symmetric , opt - bounded , and request - bounded problem and an expected -competitive algorithm for .therefore , we can apply the restricted theorem [ thm : exptohp ] as proven in the last section with an error of and with show that there is an algorithm that is -competitive for w.h.p .finally we show that the competitive ratio in for any sequence of answers on any request string can not be larger than the competitive ratio of the same sequence in .observe that a string of answers is optimal for if and only if it is optimal for . 
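the construction of the modified algorithm can be pictured as a thin wrapper around the given expected-competitive algorithm; since several thresholds in the description above are elided, the cutoff value and the reset hook used in the sketch below are assumptions.

```python
# sketch of the wrapper described above: simulate the given algorithm b on the
# truncated problem; if its proposed answer is affordable, keep it, otherwise
# answer greedily (ignoring the destination state) and reset b.
class TruncatingWrapper:
    def __init__(self, base_algorithm, cost_fn, answers, cutoff):
        self.base = base_algorithm   # the expected r-competitive algorithm b
        self.cost_fn = cost_fn       # cost_fn(state, request, answer)
        self.answers = answers       # finite set of feasible answers (assumption)
        self.cutoff = cutoff         # cost threshold; the exact value is elided above

    def greedy_answer(self, state, request):
        # minimum-cost answer, ignoring which state it leads to
        return min(self.answers, key=lambda a: self.cost_fn(state, request, a))

    def answer(self, state, request):
        proposed = self.base.answer(request)
        if self.cost_fn(state, request, proposed) < self.cutoff:
            return proposed
        self.base.reset()            # assumed hook that restarts the simulation of b
        return self.greedy_answer(state, request)
```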
due to the opt - boundedness, an optimal solution can not have any answer on request from state that has a cost larger than in or larger than in .therefore the parameter does not influence any optimal solution in and it can not be an advantage to give an answer in that is set to a cost of in . in each time step ,the difference of the cost of any answer in and given any state and request is fixed to exactly as long as the answer has finite cost .thus , any improvement of the answer sequence in one of the problems translates to an improvement in the other one .let be an optimal sequence of answers and be the corresponding sequence of states .then it is sufficient to show that for each , the competitive ratio of for is at most as high as the competitive ratio for . for any ,let us fix a state and a request .let be the answer given by .then the competitive ratio in is . if , the cost of both the optimal answer and the algorithmic answer , and therefore also the ratio , is identical in and .otherwise , the ratio in is where the last inequality uses that any competitive ratio is at least one .we now discuss the impact of theorem [ thm : exptohp ] on task systems , the -server problem , and paging . despite being related , these problems have different flavors when analyzing them in the context of high probability results .finally , we show that there are also problems that do not directly fit into our framework but nevertheless allow for high probability results for specific algorithms .the properties of online problems needed for theorem [ thm : exptohp ] are related to the definition of task systems .there are , however , some important differences . to analyze the relation ,let us recall the definition of task systems as introduced by borodin et al .we are given a finite state space and a function that specifies the ( finite ) cost to move from one state to another .the requests given as input to a task system are a sequence of -vectors that specify , for each state , the cost to process the current task if the system resides in that state .an online algorithm for task systems aims to find a schedule such that the overall cost for transitions and processing is minimized . from now on we will call states in _ system states _ to distinguish them from the states of definition [ dfn : state ] .the main difference between states of definition [ dfn : state ] and system states is that states and the distances between states depend on the requests provided as input and on the answers given by the online algorithm ; this way there may be infinitely many states .states are also more general than system states in that we may forbid specific state transitions .[ thm : tasksys ] let a randomized online algorithm with expected competitive ratio for task systems .then , for any , there is a randomized online algorithm for task systems with competitive ratio w.h.p .( with respect to the optimal cost ) . in a task system ,the system states are exactly the states according to our definition , because the optimal future cost only depends on the current system state and a future request has the freedom to assign individual costs to each of the system states . in other words , an equivalence class from definition [ dfn : state ]( i.e. , one state ) consists of exactly one unique system state . 
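as an aside, the optimal offline cost that these competitive ratios refer to can, for a finite task system, be computed by a textbook dynamic program over the system states; the sketch below assumes the transition costs are given as a matrix and each request as a vector of processing costs, as in the definition recalled above.

```python
# offline optimum for a task system: states 0..m-1, dist[i][j] is the cost of
# moving from state i to state j, and each request is a vector holding the
# processing cost in every state.  opt[s] is the cheapest cost of serving the
# requests seen so far while ending in state s.
def task_system_opt(dist, requests, start_state=0):
    m = len(dist)
    INF = float("inf")
    opt = [INF] * m
    opt[start_state] = 0.0
    for req in requests:
        opt = [min(opt[t] + dist[t][s] for t in range(m)) + req[s] for s in range(m)]
    return min(opt)

if __name__ == "__main__":
    dist = [[0, 1], [1, 0]]                 # two states, unit movement cost
    requests = [[0, 5], [5, 0], [0, 5]]     # each request penalises one of the states
    print(task_system_opt(dist, requests))  # 2.0: move to state 1 for the middle request
```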
to apply theorem [ thm : exptohp ] , we choose the constant of the theorem to be .this way , the problem is opt - bounded as one transition of cost at most is sufficient to move to any system state used by an optimal computation .the problem is clearly partitionable according to definition [ dfn : partitionable ] as each round is associated with a non - negative cost .the adversary may also stop after an arbitrary request .the remaining condition of theorem [ thm : exptohp ] that every state is initial formally conflicts with the definition of task systems , because usually there is a unique initial configuration that corresponds to a state .this problem is easy to circumvent by relabeling the states before each run ( reset ) of the algorithm , i.e. , we construct an algorithm that is used instead of .when starting the computation , determines the mapping and simulates the run of on the mapped instance .thus we are able to use theorem [ thm : exptohp ] on and the claim follows . the -server problem , introduced by manasse et al . , is concerned with the movement of servers in a metric space .each request is a location and the algorithm has to move one of the servers to that location .if the metric space is finite , this problem is well known to be a special metrical task system .the states are all combinations of locations in the metric space and the distance between two states is the corresponding minimum cost to move servers such that the new locations are reached . each request is a vector where all states but those containing the correct destination have a processing time and the states containing the destination have processing time zero . using theorem [ thm : tasksys ]this directly implies that all algorithms with a constant expected competitive ratio for the -server problem in a finite metric space can be transformed into algorithms that have almost the same competitive ratio w.h.p .if the metric space is infinite , an analogous result is still valid except that we have to bound the maximum transition cost by a constant .this is the case , because the proof of theorem [ thm : tasksys ] uses the finiteness of the state space only to ensure bounded transition costs . without the restriction to bounded distances , in general we can not obtain a competitive ratio much better than the deterministic one w.h.p .[ thm : server ] let be a metric space with constant , be the initial position of all servers , a constant and let be the infimum over the competitive ratios of all deterministic online algorithms for the -server problem in for instances with at most requests .for every , there is a metric space where for any randomized online algorithm the -server problem there is an oblivious adversary against which the solution of a competitive ratio of at least with constant probability .we obtain as follows .the set is composed of copies of .let , for each , denote the copy of in together with the point ( i.e. , is in each of the sets ) .this way . for any pair of points with copies in ,we set ; we call the _ scaling factor _ of . for any ,the distance between points in distinct copies of is .this way is a metric and we can choose freely a scaling factor for the cost function .we now describe an adversary uses oblivious adversaries for deterministic online algorithms as black boxes and has two parameters and that specify lower bounds on the number of requests and the cost of the optimal offline solution . with requests of the point in ( i.e. 
, the optimal cost after the first requests is zero ) .note that we can not assume to be a constant .afterwards the adversary starts a second phase where it simulates a deterministic adversary in a suitably scaled copy of .we assume without loss of generality that any considered algorithm is _ lazy _ , i.e. , it answers requests by only moving at most one server ( see manasse et al .we choose as scaling factor . all subsequent requests in . due to the laziness assumption , after the first requests there are at most different possibilities to answer the subsequent requests ( we can view an answer simply as the index of one of the servers ) .adding also all shorter request sequences , by the geometric series there are at most possible answer sequences .analogously , there are less than possible request sequences of length at most in .thus , the total number of algorithms behaving differently within at most requests is less than and therefore constant . choose one of at most deterministic algorithms to play against .he analyzes the probability distribution of s strategies after the first requests .then he selects one of the algorithms that corresponds to the strategy run by maximal probability . with s choice of the algorithm, the competitive ratio of at least with constant probability at least and the choice of ensures that the optimal cost is at least .if we allow the metric to be infinite , then there is no -competitive online algorithm w.h.p .for the -server problem for any constant .we simply use that the lower bound of manesse et al . satisfies the properties of theorem [ thm : server ] . in the paging problemthere is a cache that can accommodate memory pages and the input consists of a sequence of requests to memory pages . if the requested page is in the cache , it can be served immediately , otherwise some page must be evicted from the cache , and be replaced by the requested page ; this process is called a _ page fault_. the aim of a paging algorithm is to generate as few page faults as possible .each request generates either cost ( no page fault ) or ( page fault ) , and the overall cost is the sum of the costs of the requests .paging can be seen as a -server problem restricted to uniform metrics where all distances are exactly one . in particular , the transition costs in that metric are bounded .hence , the assumptions discussed in the previous subsection are fulfilled , meaning that for any paging algorithm with expected competitive ratio there is an algorithm with competitive ratio w.h.p .note that the marking algorithm is analyzed based on phases that correspond to distinct requests , and hence the analysis of the expected competitive ratio immediately gives the competitive ratio also w.h.p . however , e.g. , the optimal algorithm with competitive ratio due to achlioptas et al . is a distribution - based algorithm where the high probability analysis is not immediate ; theorem [ thm : exptohp ] gives an algorithm with competitive ratio w.h.p .also in this case . in section [ sec : discussion ] we will show that none of the conditions of theorem [ thm : exptohp ] can be omitted .however , there are problems that do not fit the assumptions of the theorem , and still can be solved almost optimally by specific randomized online algorithms with high probability .we use , however , a weaker notion of high probability than in the previous sections . 
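before turning to that example, the marking algorithm mentioned in the paging discussion above is easy to state in code; the sketch below follows the standard randomized marking algorithm (evict a uniformly random unmarked page on a fault, start a new phase once every cached page is marked) and is not the distribution-based algorithm of achlioptas et al.

```python
# randomized marking algorithm for paging with a cache of size k: cached pages
# carry a mark; a fault evicts a uniformly random unmarked page, and a fault
# that arrives while all cached pages are marked starts a new phase, clearing
# all marks first.
import random

def randomized_marking(requests, k, seed=0):
    rng = random.Random(seed)
    cache, marked = set(), set()
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) >= k:
                unmarked = list(cache - marked)
                if not unmarked:          # all pages marked: new phase
                    marked.clear()
                    unmarked = list(cache)
                cache.remove(rng.choice(unmarked))
            cache.add(page)
        marked.add(page)
    return faults

if __name__ == "__main__":
    reqs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(randomized_marking(reqs, k=3))
```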
consider the problem _ job shop scheduling with unit length tasks _ ( jssfor short ) defined as follows : we are given a constant number of _ jobs _ to that consist of _ tasks _ each .each such task needs to be processed on a unique one of _ machines _ which are identified by their indices , and we want to find a schedule with the following properties . processing one task takes exactly time unit and , since all jobs need every machine exactly once , we may represent them as permutations of the machine indices , where for every and .all arrive in an online fashion , that is , the task of is not known before the task is processed .obviously , as long as all jobs request different machines , the work can be parallelized .if , however , at one time step , some of them ask for the same machine , all but one of them have to be delayed .the cost of a solution is given by the total time needed for all jobs to finish all tasks ; the goal is to minimize this time ( i.e. , the overall makespan ) . in the following ,we use a graphical representation that was introduced by brucker .let us first consider only two jobs and . consider an -grid where we label the -axis with and the -axis with .the cell models that , in the corresponding time step , processes a task on machine while processes a task on . a feasible schedule for the induced instance of jssis a path that starts at the upper - left vertex of the grid and leads to the bottom right vertex . and two strategies .obstacles are marked by filled cells.,width=226 ] it may use diagonal edges whenever .however , if , both and ask for the same machine at the same time and therefore , one of them has to be delayed . in this case, we say that and collide and call the corresponding cells in the grid _ obstacles _( see fig .[ fig : jssexample ] for an example with ) .if an algorithm has to delay a job , we say that it _ hits an obstacle _ and may therefore not make a diagonal move , but either a horizontal or a vertical one . in the first case , gets delayed , in the second case , gets delayed .note that , since and are permutations , there is exactly one obstacle per row and exactly one obstacle per column for every instance , therefore , obstacles overall for any instance .the graphical representation generalizes naturally to the -dimensional case .the problem has been studied previously , for instance in .hromkovi et al . showed the existence of a randomized online algorithm achieves an expected competitive ratio of , for , assuming that it knows . on diagonals in the grid ; intuitively ( in two or three dimensions ) , a diagonal in the grid is the sequence of integer points on a line that is parallel to the line from the coordinate to .more precisely , let be the convex hull of the grid .then a diagonal is a sequence of integer points such that is in the facet of that contains the origin , is in the facet containing the destination , none of the two points is in a smaller - dimensional face , and we obtain from by increasing each coordinate by exactly one . as shown by hromkovi et al . , the number of diagonals that start at points with all coordinates at most is exactly .a diagonal template with respect to and is a sequence of consecutive points in the grid that starts from , moves to , visits each point of and finally moves to the destination . 
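the two-job grid can be generated directly from the two permutations; the sketch below builds the obstacle set (cells where both jobs ask for the same machine), checks that there are exactly n obstacles, and counts how many of them fall on each diagonal, which is the quantity a randomly chosen diagonal strategy has to pay for.

```python
# two-job instance in brucker's grid representation: jobs are permutations of
# the machine indices 0..n-1, and cell (i, j) is an obstacle iff job one's
# (i+1)-st task and job two's (j+1)-st task need the same machine.
import random
from collections import Counter

def obstacles(job1, job2):
    n = len(job1)
    return [(i, j) for i in range(n) for j in range(n) if job1[i] == job2[j]]

def obstacles_per_diagonal(job1, job2):
    # index a diagonal by the offset j - i; each obstacle lies on exactly one of
    # the 2n - 1 diagonals, so a uniformly chosen diagonal meets few of them
    return Counter(j - i for i, j in obstacles(job1, job2))

if __name__ == "__main__":
    n = 8
    rng = random.Random(1)
    job1 = list(range(n)); rng.shuffle(job1)
    job2 = list(range(n)); rng.shuffle(job2)
    obs = obstacles(job1, job2)
    assert len(obs) == n            # one obstacle per row and one per column
    print(sorted(obstacles_per_diagonal(job1, job2).items()))
```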
to reach , delays each job by time units in the begining and delays each job by time units upon reaching the last point of the diagonal .thus , a schedule that follows a diagonal template without delays has a length of exactly . a diagonal strategy with respect to a diagonal template is a minimum - length schedule that visits each point of .note that an online algorithm has all necessary information to run a diagonal strategy , because when reaching an obstacle , all possible ways to the subsequent point are available ; an example of a diagonal strategy is depicted in fig .[ fig : jssexample ] .the randomized algorithm the value and chooses uniformly at random a diagonal with ; then it follows the corresponding diagonal strategy . for any is an online algorithm for jssthat is -competitive with probability , for any .we already mentioned that one of diagonals .it is also known that the total number of delays in all diagonal strategies caused by obstacles is at most .clearly , any schedule has a length of at least .thus , in order to be -competitive , we need a diagonal strategy such that , where is the number of delays due to obstacles .let be the number of diagonals considered by the algorithm such that the corresponding diagonal strategies have more than delays caused by obstacles . then, to show our claim , we have to ensure that .the value of is maximized if we assume that any diagonal has either no obstacles or the delay is exactly .therefore , since the dimension is a constant , the claim follows from mentioned above , our result holds with large generality as many well - studied online problems meet the requirements we imposed .however , the assumptions of theorem [ thm : exptohp ] require that the problem at hand 1 . is partitionable , 2. every state is equivalent to some initial state , and [ enum : init ] 3 . [ enum : optb ] as stated before , partitionability is not restrictive ; every problem can be presented as a partitionable one .we now show that removing any of the conditions [ enum : init ] and [ enum : optb ] allows for a counterexample to the theorem . for the purpose of this discussion ,let and in condition [ enum : optb ] range over all _ initial _ states to have it defined also for non - symmetric problems .first , let us consider the following online problem where condition [ enum : init ] is violated , i.e. , where not every state is equivalent to some initial state .there are requests .the request is a dummy request .the request is a test : if the test is _ passed _, otherwise the test is _ failed _ ; the cost of and is always zero . for the remaining requests we have cost of , for , is if the test has been passed , or if .otherwise , the cost of is .the cost of is zero .the problem is clearly partitionable .there are six states : the initial state , then two possible states to guess the test , then one state for processing all requests with the test passed , and two states for processing requests with the test failed , based on the value of the previous answer . from any state , however , the optimal value of the remaining sequence of requests is between and . a randomized online algorithm that guesses each time independently has probability to pass the test incurring a cost of , and probability to fail , in which case , for any subsequent request, it pays 1 with probability , and with probability . putting everything together, the expected cost is , so . 
on the other hand , for any randomized algorithm, there is an input for which it has probability at least of failing the test , and then on each request probability at least of a wrong guess . from symmetry arguments we conclude that , once the test is failed , the probability that the algorithm makes at least wrong guesses is at least .hence , with probability at least the cost of the algorithm is at least , so it can not be -competitive w.h.p . for any .next , let us remove condition [ enum : optb ] .we have seen a hint to the necessity in theorem [ thm : server ] , but currently no randomized online algorithm for the -server problem is known to have a competitive ratio better than independent of the size of the metric space . therefore we give a second unconditional argument .let us consider the following problem : the states are pairs where , , and any state can be an initial one .processing the request in state produces the answer ; the cost of is if , and if .after processing the request , the new state is .it is easy to verify that the problem is partitionable and that the states are in accord with definition [ dfn : state ] . also , it is easy to check that the worst - case expected ratio of the algorithm that produces random answers is . on the other hand ,consider inputs that start from state with .the optimal cost is , however , any randomized algorithm has probability at least of incurring cost ( by failing the two last requests ) .our result opens several new questions .for instance , our results , so far , are only shown for minimization problems . also note that our analysis does not hold for the notion of _ strict _ competitiveness ( i.e. , ) for arbitrary input sizes .furthermore , the assumption that all input strings are feasible for all states ( implied by the opt - boundedness ) may allow for relaxations . until now , we only focused on upper bounds on the competitive ratio . our results , however , also open a potential lower bound technique : if a problem satisfies our requirements , a lower bound w.h.p .implies a lower bound of almost the same quality in expectation . in this contextit is natural to ask for the requirements of problems for a complementary result .how can we determine the class of problems such that each algorithm that is -competitive w.h.p . can be transformed into an algorithm that is almost -competitive in expectation ? finally , we would like to suggest the terminology to call a randomized online algorithm _ _ -competitive if , for any positive constant , -competitive in expectation and we may use theorem [ thm : exptohp ] to construct an online algorithm that is -competitive w.h.p .analogously , an online problem is totally -competitive if it admits a totally -competitive algorithm .10 k. azuma .weighted sums of certain dependent random variables ._ thoku mathematical journal _ , 19(3):357367 , 1967 .d. achlioptas , m. chrobak , and j. noga .competitive analysis of randomized paging algorithms ._ theoretical computer science _ , 234(1 - 2):203218 , 2000 .bckenhauer , d. komm , r. krlovi , r. krlovi , and t. mmke . on the advice complexity of online problems . in _ proc . of the 20th international symposium on algorithms and computation ( isaac 2009 ) _ , _ lncs _ 5878 , pp. 331340 .springer - verlag , 2009 .a. borodin and r. el - yaniv ._ online computation and competitive analysis_. cambridge university press , 1998 .a. borodin , n. linial , and m. e. 
saks .an optimal on - line algorithm for metrical task system ._ journal of the acm _, 39(4):745763 , 1992 .p. brucker .an efficient algorithm for the job - shop problem with two jobs . _ computing _ , 40(4):353359 , 1988 .a. fiat , r. m. karp , m. luby , l. a. mcgeoch , d. d. sleator , and n. e. young .competitive paging algorithms . _journal of algorithms _, 12(4):685699 , 1991 . w. hoeffding .probability inequalities for sums of bounded random variables ._ journal of the american statistical association _, 58(301):1330 , 1963 . j. hromkovi . _ design and analysis of randomized algorithms_. springer - verlag , berlin , 2005. j. hromkovi , t. mmke , k. steinhfel , and p. widmayer .job shop scheduling with unit length tasks : bounds and algorithms . _ algorithmic operations research _, 2(1):114 , 2007 . s. irani and a. r. karlin . on online computation .approximation algorithms for -hard problems , chapter 13 _ , pp . 521564 .pws publishing company , 1997 .d. komm and r. krlovi .advice complexity and barely random algorithms . in _ proc . of the 37th international conference on current trends in theory and practice of computer science ( sofsem 2011 ) _ , _ lncs _ 6543 , pp .springer - verlag , 2011 .e. koutsoupias .the -server problem ._ computer science review _ , 3(2):105118 , 2009 . s. leonardi , a. marchetti - spaccamela , a. presciutti , and a. rosn . on - linerandomized call control revisited ._ siam journal on computing _, 31(1):86112 , 2001 . b. m. maggs , f. meyer auf der heide , b. voecking , and m. westermann . exploiting locality for networks of limited bandwidth . in _ proc . of the 38th ieee symposium on foundations of computer science ( focs 1997 ) _ , pp284293 , 1997 . m. s. manasse , l. a. mcgeoch , and d. d. sleator .competitive algorithms for on - line problems ._ journal of algorithms _, 11(2):208230 , 1990 .d. d. sleator and r. e. tarjan . amortized efficiency of list update and paging rules ._ communications of the acm _ , 28(2):202208 , 1985 .
we study the relationship between the competitive ratio and the tail distribution of randomized online minimization problems . to this end , we define a broad class of online problems that includes some of the well - studied problems like paging , -server and metrical task systems on finite metrics , and show that for these problems it is possible to obtain , given an algorithm with constant expected competitive ratio , another algorithm that achieves the same solution quality up to an arbitrarily small constant error a with high probability ; the `` high probability '' statement is in terms of the optimal cost . furthermore , we show that our assumptions are tight in the sense that removing any of them allows for a counterexample to the theorem . in addition , there are examples of other problems not covered by our definition , where similar high probability results can be obtained .
the problem that motivated our study is the analysis of benchtop and computer experiments that produce dynamical data associated with the structural fluctuations of a protein in water .frequently , the physical laws that govern these dynamics are time - reversible .therefore , a stochastic model for the experiment should also be reversible .reversible markov models in particular have become widespread in the field of molecular dynamics .modeling with reversible markov chains is also natural in a number of other disciplines .we consider the setting in which a scientist has a sequence of states sampled from a reversible markov chain .we propose a bayesian model for a reversible markov chain driven by an unknown transition kernel .problems one can deal with using our model include ( i ) predicting how soon the process will return to a specific state of interest and ( ii ) predicting the number of states not yet explored by that appear in the next transitions .more generally , the model can be used to predict any characteristic of the future trajectory of the process .problems ( i ) and ( ii ) are of great interest in the analysis of computer experiments on protein dynamics .diaconis and rolles introduced a conjugate prior for bayesian analysis of reversible markov chains .this prior is defined via de finetti s theorem for markov chains .the predictive distribution is that of a linearly edge - reinforced random walk ( errw ) on an undirected graph .much is known about the asymptotic properties of this process , its uniqueness and its recurrence on infinite graphs ( , and references therein ) .fortini , petrone and bacallado recently discussed other examples of markov chain priors constructed through representation theorems . our construction can be viewed as an extension of the errw defined on an infinite space .the prediction for the next state visited by the process is not solely a function of the number of transitions observed in and out of the last state . in effect, transition probabilities out of different states share statistical strength .this will become relevant in applications where many states are observed , especially for those states that occur rarely .a major goal in our application is the prediction of the number of states that the markov chain has not yet visited that will appear in the next transitions .more generally , scientists are interested in predicting aspects of the protein dynamics that may be strongly correlated with the rate of discovery of unobserved states , for instance , the variability of the time needed to reach a conformation of interest , starting from a specific state .predictive distributions for such attributes are useful in deciding whether one should continue a costly experiment to obtain substantial additional information on a kinetic property of interest . estimating the probability of discovering new species is a long - standing problem in statistics .most contributions in the literature assume that observations , for example , species of fish captured in a lake , can be modeled as independent and identically distributed random variables with an unknown discrete distribution . in thissetting , several bayesian nonparametric models have been studied .here we assume that species , in our case protein conformational states , are sampled from a reversible markov chain .to the best of our knowledge , this is the first bayesian analysis of species sampling in this setting .scheme and special cases . 
] we can now outline the article .section [ tabschemesection ] introduces the species sampling model , which we call the scheme .the process specializes to the errw , a markov exchangeable scheme , and to the two - parameter hoppe urn , a classical exchangeable scheme which gives rise to the pitman yor process and the two - parameter poisson dirichlet distribution .as illustrated in figure [ diagram ] , the parameter smoothly tunes the model between these two special cases .section [ representationssection ] shows that the scheme can be represented as a mixture of reversible markov chains .this allows us to use its de finetti measure as a prior for bayesian analysis .section [ largesupportsection ] shows that our scheme is a projection of a conjugate prior for a random walk on a multigraph .this representation is then used to prove that our model has full weak support .section [ sufficientnesssection ] provides a sufficientness characterization of the proposed scheme .this result is strictly related to the characterizations of the errw and the two - parameter hoppe urn discussed in and , respectively . in section [ lawsection ] , an expression for the law of the schemeis derived , and this result is used in section [ bayesianinferencesection ] to define algorithms for posterior simulation .section [ applications ] applies our model to the analysis of two molecular dynamics datasets .we evaluate the predictive performance of the model by splitting the data into training and validating datasets .section [ discussionsection ] concludes with a discussion of remaining challenges .the scheme is a stochastic process on a polish measurable space equipped with a diffuse ( i.e. , without point masses ) probability measure .we construct the law of the process using an auxiliary random walk with reinforcement on the extended space .the auxiliary process classifies each transition into three categories listed in figure [ mechanism ] and defines latent variables , taking values in , that capture each transitions category . in this sectionwe first provide a formal definition of the scheme and then briefly describe the latent process .the blue arrow represents the transition between two states in , while the red arrows represent the path of an auxiliary random walk with reinforcement .the edges that have positive weight before the transition are drawn in black , and in each case , we mark the reinforcements of produced by the transition .self - transitions follow a slightly different reinforcement scheme formalized in definition [ tabscheme ] . ]the law of the scheme is specified by a weighted undirected graph with vertices in .this graph can be formalized as a symmetric function , where is the weight of an undirected edge with vertices and .we require that the set is countable , and that the set of edges is a finite subset of .the graph will be sequentially reinforced after each transition of the scheme . 
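since the scheme is driven by a symmetric weight function with finitely many initially positive edges, a minimal implementation can store the weights in a dictionary keyed by unordered vertex pairs; the reinforcement amount used below is a placeholder, because the actual updates of the definition depend on the transition category and on parameters not reproduced here.

```python
# minimal container for the sequentially reinforced weighted graph: a symmetric
# weight function with finitely many positive edges, plus the total weight
# attached to each vertex (useful for normalising transition probabilities).
# the amount added by reinforce() is a placeholder.
from collections import defaultdict

class ReinforcedGraph:
    def __init__(self):
        self.edge_weight = defaultdict(float)    # unordered pair -> weight
        self.vertex_weight = defaultdict(float)  # sum of incident edge weights

    @staticmethod
    def _key(u, v):
        return (u, v) if u <= v else (v, u)      # store each edge once (symmetry)

    def weight(self, u, v):
        return self.edge_weight[self._key(u, v)]

    def reinforce(self, u, v, amount=1.0):
        self.edge_weight[self._key(u, v)] += amount
        self.vertex_weight[u] += amount
        if u != v:
            self.vertex_weight[v] += amount

    def neighbours(self, u):
        for (a, b), w in self.edge_weight.items():
            if w > 0 and (a == u or b == u):
                yield (b if a == u else a), w
```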
in the following definition , we assume the initial state is deterministic and contained in .[ tabscheme ] the _ scheme _ , , has parameters , and ] to the numerator of the probability of the latent path .we can write , where is the total probability of all latent paths consistent with .taking into account that the factors listed in the previous paragraph are common to all latent paths with a given , we obtain where we use pitman s notation for factorial powers the function is a sum with as many terms as the possible latent paths consistent with .the term corresponding to a specific latent path is the product of those factors that appear in the numerator of the latent path probability and correspond to errw - like transitions . for every pair of states , there are factors , but their sequential reinforcement depends on the order in which errw - like and mediated transitions appear in a specific latent path . summing these factors over all possible orders , one pair of states at a time , we can factorize , where and [ recursion ] the function satisfies the following recursion for all , \\[-8pt ] & & { } + f_{e,\beta}(n-1,k-1 ) \bigl[e-1+\beta k+(1-\beta)n\bigr],\nonumber\end{aligned}\ ] ] where we set , for all , the recursive representation allows one to compute quickly . in order to obtain the values of for every , where is an arbitrarily selected integer and , it is sufficient to solve ( [ soleq ] ) fewer than times . in the next proposition, we provide a closed - form solution for in terms of the generalized lah numbers , a well - known triangular array .[ generalizedlah ] let be the generalized factorial of of order and increments , namely with .the _ generalized lah numbers _ ( sometimes referred to as generalized stirling numbers ) , are defined by where .[ recursionsolution ] for any and the function coincides with where and this section we introduce a gibbs algorithm for performing bayesian inference with the scheme given the trajectory of a reversible markov chain . on the basis of the almost conjugate structure of the prior model described in the previous sections we only need to sample the latent variables conditionally on the data .recall that the latent variables express what fraction of the transitions in are errw - like transitions ; cf . figure [ mechanism ] .we want to sample from or equivalently from recall that and are functions of and that . for simplicity , and without loss of generality , we consider the case where initially is infinitesimal , , and otherwise .the count is a function of and therefore if and then we can write $ ] , where in other words , if we consider the joint distribution of three variables , , and , where and are the distributions of and , then the marginal law of coincides with .we note that sampling from is simple , because the variables are conditionally independent , and that sampling from is straightforward . the random variables and conditionally on are independent with dirichlet and gamma distributions .finally , we use these conditional distributions to construct a gibbs sampler for . in any markov chain monte carlo algorithm, it is important to ensure mixing . 
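the sampler just described alternates between the latent transition categories, which are conditionally independent given the auxiliary dirichlet and gamma variables, and those auxiliary variables themselves; the skeleton below shows only this control flow, with the two full conditionals injected as callables and replaced by dummies in the demo (the exact densities of the paper are not reproduced), and it adds a crude lag-one autocorrelation check as a minimal mixing diagnostic.

```python
# structural skeleton of the data-augmentation gibbs sampler sketched above:
# alternate between (i) drawing the per-transition latent categories given the
# auxiliary dirichlet/gamma variables and (ii) drawing those auxiliary
# variables given the categories.  the full conditionals are passed in as
# callables because the model-specific densities are not reproduced here.
import numpy as np

def gibbs(data, sample_latents, sample_aux, n_iter, rng):
    latents = sample_latents(data, None, rng)     # initialisation
    trace = []
    for _ in range(n_iter):
        aux = sample_aux(data, latents, rng)      # e.g. dirichlet / gamma draws
        latents = sample_latents(data, aux, rng)  # independent across transitions given aux
        trace.append(float(np.sum(latents)))      # scalar summary for diagnostics
    return np.array(trace)

def lag1_autocorrelation(trace):
    x = trace - trace.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.integers(0, 5, size=100)           # placeholder "observed chain"
    # dummy conditionals, only to exercise the control flow of the sampler
    sample_aux = lambda d, z, r: (r.dirichlet(np.ones(3)), r.gamma(1.0))
    sample_latents = lambda d, aux, r: r.integers(0, 3, size=len(d))
    trace = gibbs(data, sample_latents, sample_aux, n_iter=500, rng=rng)
    print("lag-1 autocorrelation of the trace:", round(lag1_autocorrelation(trace), 3))
```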
in appendix d, we derive an exact sampler for which uses a coupling of the gibbs markov chain just defined .the method is related to coupling from the past .we performed simulations with the exact sampler to check the convergence of the proposed gibbs algorithm .species sampling problems have a long history in ecological and biological studies .the aim is to determine the species composition of a population containing an unknown number of species when only a sample drawn from it is available .a common statistical issue is how to estimate species richness , which can be quantified in different ways .for example , given an initial sample of size , species richness might be quantified by the number of new species we expect to observe in an additional sample of size . it can be alternatively evaluated in terms of the probability of discovering at the draw a new species that does not appear across the previous observations ; this yields the discovery rate as a function of the size of an hypothetical additional sample .these estimates allow one to infer the coverage of a sample of size , in other words , the relative abundance of distinct species observed in a sample of size . a review of the literature on this problem can be found in bunge and fitzpatrick .lijoi et al .proposed a bayesian nonparametric approach for evaluating species richness , considering a large class of exchangeable models , which include as special case the two - parameter hoppe urn .see also lijoi et al . and favaro et al . for a practitioner - oriented illustration using expressed sequence tag ( est ) data obtained by sequencing cdna libraries .we illustrate the use of the scheme in species sampling problems .in particular , we evaluate species richness in molecular dynamics simulations. the data we analyze come from a series of recent studies applying markov models to protein molecular dynamics simulations ( , and references therein ) .these computer experiments produce time series of protein structures .the space of structures is discretized , such that two structures in a given state are geometrically similar ; this yields a sequence of species which correspond to conformational states that the molecule adopts in water .we apply the scheme to perform predictive inference of this discrete time series .we analyze two datasets .the first is a simulation of the alanine dipeptide , a very simple molecule .the dataset consists of 25,000 transitions , sampled every 2 picoseconds , in which 104 distinct states are observed . in this casethe 50 most frequently observed states constitute 85% of the chain and each of the 104 observed states appears at least 12 times .the second dataset is a simulation of a more complex protein , the ww domain , performed in the supercomputer anton .this example illustrates the complexity of the technology and the large amount of resources required for simulating protein dynamics _ in silico_. it also motivates the need for suitable statistical tools for the design and analysis of these experiments . in this dataset1410 distinct states are observed in 10,000 transitions , sampled every 20 nanoseconds .many of the states are observed only a few times ; in particular we have 991 states that have been observed fewer than 4 times and 547 states that appear only once . 
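the quantities involved in these species sampling summaries — the number of distinct states seen so far, the discovery curve, and how many states are observed only once or a handful of times — are simple functions of the observed sequence; the sketch below computes them for an arbitrary state sequence and is the kind of summary quoted above for the two datasets.

```python
# empirical species-sampling summaries of an observed state sequence: the
# accumulation curve (distinct states after n transitions) and abundance
# counts (states seen once, or fewer than a given number of times).
from collections import Counter

def accumulation_curve(sequence):
    seen, curve = set(), []
    for state in sequence:
        seen.add(state)
        curve.append(len(seen))
    return curve

def abundance_summary(sequence, rare_threshold=4):
    counts = Counter(sequence)
    return {"distinct states": len(counts),
            "seen exactly once": sum(1 for c in counts.values() if c == 1),
            "seen fewer than %d times" % rare_threshold:
                sum(1 for c in counts.values() if c < rare_threshold)}

if __name__ == "__main__":
    import random
    rng = random.Random(3)
    # toy surrogate; the real input would be the discretised trajectory of states
    seq = [rng.randint(0, 200) for _ in range(2000)]
    curve = accumulation_curve(seq)
    print(abundance_summary(seq))
    print("states discovered in the second half:", curve[-1] - curve[len(seq) // 2 - 1])
```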
to apply the scheme it is necessary to tune the three parameters .we consider the initial weights everywhere null except for and infinitesimal .the parameters and affect the probability of finding a novel state when the latent process reaches , while the parameter tunes the degree of dependence between the random transition probabilities .we recall that in the extreme case of the sequence is exchangeable and the random transition probabilities out of the observed states become identical .we proceed by approximating the marginal likelihood of the data for the set of parameters , where , and .we iteratively drew samples , under specific values , from the conditional distribution using the gibbs algorithm defined in the previous section .note that where the last equality is obtained counting the possible values of .we compare the models on the basis of approximations of the marginal probabilities across prior parameterizations .the samples we drew from are used to compute monte carlo estimates of . recall that the probability can be computed using the analytic expressions derived in section [ lawsection ] . using compute the estimates and obtain standard errors by bootstrapping . in table[ modelcomparisons ] , we report the logarithm of these estimates for each model , shifted by a constant such that the largest entry for each dataset is 0 .we only show , due to limits of space , these results for the values associated with the maxima of across the considered parameterizations .the difference between two entries corresponds to a logarithmic bayes factor between two models .the values in table [ modelcomparisons ] indicate that in each dataset there is one model for which there is strong evidence against all others .this also holds when several values of are considered . for each dataset , we have highlighted the optimal parameters .the degenerate cases and were also included in the comparisons but are not shown in table [ modelcomparisons ] . the difference in the marginal log - likelihood between models with and is negligible . on the other hand , shifting the parameter from 0.97 to 1 in the optimal model for dataset 2 decreased the log - likelihood by 7565 , as this model is exchangeable and does not capture the markovian nature of the data .these observations suggest that a fully bayesian treatment with a hyper - prior over a grid of possible combinations would produce similar results .@ & + & + & & & & & + + [ 4pt ] 0.03 & & & & & + 0.2 & & & & & + 0.5 & & & & & + 0.8 & & & & & + 0.97 & & & & & + [ 4pt ] + [ 4pt ] 0.03 & & & & & + 0.2 & & & & & + 0.5 & & & & & + 0.8 & & & & & + 0.97 & & & & & + summarizing , the use of a three - dimensional grid and the computation of monte carlo estimates allows one to effectively obtain a parsimonious approximation of the likelihood function that , in our case , supported selection of single parameterizations .the main results of our analysis are summarized in figure [ posteriorplots ] .conditional on each sample , generated under the selected parametrization , we simulated 20,000 future transitions using our predictive scheme .once is , given the data .bottom : box plot of the fraction of time spent at each state in these simulations .only the twenty most populated states are shown ; below the dashed line , we show the fraction of time spent at states not observed in the dataset . ]conditionally sampled , the predictive simulations become straightforward with the reinforcement scheme . 
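the parameter selection described above amounts to a grid search in which, for every combination, the marginal likelihood is approximated by monte carlo and combinations are compared through logarithmic bayes factors; the skeleton below shows that workflow with the per-sample estimator injected as a callable, parameter names chosen as placeholders, and bootstrap standard errors as in the text.

```python
# grid search over parameter combinations with monte carlo estimates of the
# log marginal likelihood and bootstrap standard errors; the per-sample
# estimator is passed in as a callable because the model-specific terms are
# not reproduced here.
import itertools
import numpy as np

def compare_on_grid(data, grid, log_marginal_sample, n_samples, n_boot, rng):
    results = {}
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        draws = np.array([log_marginal_sample(data, params, rng)
                          for _ in range(n_samples)])
        # average in probability space via log-sum-exp for numerical stability
        est = np.logaddexp.reduce(draws) - np.log(n_samples)
        boot = [np.logaddexp.reduce(rng.choice(draws, size=n_samples)) - np.log(n_samples)
                for _ in range(n_boot)]
        results[tuple(sorted(params.items()))] = (float(est), float(np.std(boot)))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    grid = {"beta": [0.03, 0.2, 0.5, 0.8, 0.97], "theta": [1.0, 10.0]}
    # dummy estimator standing in for log p(data, latent draw | params)
    dummy = lambda data, p, r: -100.0 + 5.0 * p["beta"] + r.normal(scale=2.0)
    table = compare_on_grid(None, grid, dummy, n_samples=200, n_boot=100, rng=rng)
    best = max(table, key=lambda k: table[k][0])
    print("best parameters:", best, "log estimate and bootstrap s.e.:", table[best])
```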
to provide a measure of species richness and the associated uncertainty ,we histogram the number of new states discovered in our simulations in figure [ posteriorplots ] .only a few states are predicted to be found for dataset 1 , while a large number of new states are predicted for dataset 2 .this result is not surprising because the alanine dipeptide dataset has a limited number of rarely observed states , while in the ww domain data a significant number of states are observed once .this result also seems consistent with the selected values of and in these two experiments . as previously mentionedthe scheme is a bayesian tool for predicting any characteristic of the future trajectories .the bottom panels in figure [ posteriorplots ] show confidence bands for the predicted fractions of time that will be spent at the most frequently observed states in the next 20,000 transitions .each box in the plots refers to a single state and shows the quartiles and the 10th and 90th percentiles of the predictive distribution ; states are ordered according to their mean observed frequency .we only show these occupancies for the 20 most populated states , and below the dashed line , we show the total occupancy for states that do not appear in the original data . in the ww domain example , the simulation is expected to spend between 2.5% and 5% of the time at new states . , where is the length of the validation set and is the training set .the blue line shows the mean of these samples .the red line shows the actual number of new species found in the validation set .note that in the right panel , the lines overlap due to the small separation between them . ] to assess the predictive performance of the model we split each dataset into a training set and a validation set .the rationale of this procedure is identical to routinely performed cross validations for i.i.d. data . in our setting ,the training and validation sets are independent portions of a homogeneous markov chain .the first part of the procedure , which uses only the training set , includes selection of the parameters and posterior computations .then , we contrast bayesian predictions to statistics of the validation set .overall , this approach suggests that our model generates reliable predictions .figure [ posteriorcrossvalidation ] shows histograms for the number of new species found in predictive simulations of equal length as the validation set . in each panel ,the blue line is the bayes estimate and the red line corresponds to the number of species that was actually discovered in the validation set .this approach also supports the inference reported with box plots in figure [ posteriorplots ] .we repeated the computations for deriving the results in figure [ posteriorplots ] using only the training data , and considering a future trajectory equal in length to the validation data . in this case , 37 out of 42 of the true state occupancies in the validation set were contained in the 90% posterior confidence bands .we introduced a reinforced random walk with a simple predictive structure that can be represented as a mixture of reversible markov chains .the model generalizes exchangeable and partially exchangeable sequences that have been extensively studied in the literature .our nonparametric prior , the de finetti measure of the scheme , can be viewed as a distribution over weighted graphs with a countable number of vertices in a possibly uncountable space . 
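returning to the train/validation comparison reported above, the same check can be reproduced for any mechanism that can simulate future transitions; in the sketch below the scheme is replaced, purely for illustration, by a smoothed empirical markov chain, and we check whether the true validation occupancies of the most frequent states fall inside the simulated 10th-90th percentile bands.

```python
# simulation-based predictive check in the spirit of the validation above: fit
# on the first part of the chain, simulate many trajectories of the length of
# the held-out part, and check whether the true occupancies of the most
# frequent states fall inside the simulated 10th-90th percentile bands.  the
# smoothed empirical chain is only a stand-in for the predictive scheme.
import random
from collections import Counter, defaultdict

def fit_empirical_chain(train, smoothing=0.5):
    states = sorted(set(train))
    counts = defaultdict(Counter)
    for a, b in zip(train, train[1:]):
        counts[a][b] += 1
    return {s: (states, [counts[s][t] + smoothing for t in states]) for s in states}

def simulate(chain, start, length, rng):
    state, path = start, []
    for _ in range(length):
        targets, weights = chain[state]
        state = rng.choices(targets, weights=weights)[0]
        path.append(state)
    return path

def occupancy(path, state):
    return path.count(state) / len(path)

if __name__ == "__main__":
    rng = random.Random(5)
    data = [rng.randint(0, 40) for _ in range(3000)]   # toy stand-in for the real chain
    train, valid = data[:2000], data[2000:]
    chain = fit_empirical_chain(train)
    top = [s for s, _ in Counter(train).most_common(10)]
    sims = [simulate(chain, train[-1], len(valid), rng) for _ in range(200)]
    covered = 0
    for s in top:
        band = sorted(occupancy(p, s) for p in sims)
        covered += int(band[20] <= occupancy(valid, s) <= band[179])
    print(f"{covered} of {len(top)} true occupancies inside the 10th-90th percentile band")
```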
as is the case for other well - known bayesian nonparametric models such as the dirichlet process , the hierarchical dirichlet process and the infinite hidden markov model ,it is possible to represent our model as a function of two independent components , a species sampling sequence and a process which determines the species locations .this property is fundamental in applications including dirichlet process mixture models and the infinite hidden markov model .a natural extension of our model , not tackled here , is the definition of hidden reversible markov models .a simple construction would consist of convolving our vertices with suitable density functions .we hope reversibility can be an advantageous assumption in relevant applications ; in particular we think reversibility can be explored as a tool for the analysis of genomic data and time series from single - molecule biophysics experiments .proof of proposition [ recurrence ] consider the latent process on .the transition probability from to with is of the form . between successive visits to ,the denominator is increased by at most , and the numerator may only increase .assume that almost surely , the process visits infinitely often .there exist , such that if is the event that we do not traverse between the and visits to , ,\ ] ] which goes to 0 as .therefore the edge is a.s .traversed infinitely often .thus , if is a.s .visited infinitely often , by induction the process a.s .returns infinitely often to all visited states .suppose a state in is visited infinitely often a.s ., then the process visits infinitely often by the previous argument .otherwise , the process must visit an infinite number of states in , and since the set of pairs with a positive initial weight is a finite subset of , we must go through an infinite number of times .we conclude that is visited infinitely often a.s .and therefore the process returns to every state visited infinitely often .if , then the edge is crossed infinitely often , and we see an infinite number of distinct states .proof of proposition [ partialexchangeability ] for there is a latent variable that determines in which of the three ways outlined in figure [ mechanism ] the transition proceeded .the probability of is the sum of its joint probability with every latent sequence .we will show that there is a one - to - one map of the latent sequences such that , letting , one has \\[-8pt ] & & \qquad = p\bigl(z_{1}=x'_1,\ldots , z_{n}=x'_n;u_{1}=u_{1}',\ldots , u_{n-1}=u_{n-1}'\bigr ) .\nonumber\end{aligned}\ ] ] the proposition follows from this claim .let denote the transposition such that .define .the map is defined so that for any and , satisfying for some and , if then . note that we can define the joint probability of and through a reinforcement scheme identical the one defined in section [ tabschemesection ] . 
precisely , the probability of each transition and associated category is of the form where .the factors , which appear in the denominator when , are reinforced by between successive visits to .therefore , their product only depends on the number of mediated transitions , which is invariant under .similarly , factors increase by between successive occurrences ; their product is identical when we compute the two sides of ( [ ad ] ) because the number of mediated transitions with discovery remains identical .also , the factors in the denominators increase by between successive occurrences of the same state ; their product is identical when we compute the two sides of ( [ ad ] ) because the number of transitions out of any state ( or toward ) remains identical .finally , we need to prove the identity between and the identity between ( [ ads1 ] ) and ( [ ads2 ] ) follows by combining the definitions of and with the reinforcement mechanism .specifically , the factors and are increased by between successive occurrences . since appears as many times in the left - hand side of ( [ ad ] ) as does in the right - hand side of ( [ ad ] ) , the product of these factors is identical in each case .the remaining factors may increase by different amounts between successive occurrences .their product is a function of the subsequence of with indices . by the definition of ,this subsequence is the same in the left and right - hand sides of ( [ ad ] ) , which completes the proof of our claim .proof of proposition [ representationtheorem ] let be a scheme .the process returns to infinitely often a.s .let be the -block .define and .proposi - tion [ partialexchangeability ] implies .let be the last element of a vector .define this limit exists a.s . because is recurrent in ; therefore , the sequence of blocks that form is conditionally i.i.d . from a distribution which a.s .assigns positive probability to blocks containing , which implies visits after a finite time a.s ., at which point the limit settles . in lemma [ markovexchangeabilityofw ]we show that is markov exchangeable and recurrent .therefore , by de finetti s theorem for markov chains ( [ definettistheoremformarkovchains ] ) , it is a mixture of markov chains . finally , by lemma [ iidorderdoesntmatter ], we obtain the representation claimed in the proposition .[ markovexchangeabilityofw ] without loss of generality , let .the process is markov exchangeable and returns to every state in infinitely often a.s .the recurrence of , which is a consequence of proposition [ recurrence ] , implies the recurrence of .thus , we have left to show markov exchangeability .the sequence can be mapped through to , which is a species sampling sequence for the scheme .take any sequence and let .we have latexmath:[\[\begin{aligned } \label{probw } & & p(w_1=w_1,\ldots , w_n = w_n)\nonumber\\ & & \qquad= p(z_1=z_1,\ldots , z_n = z_n ) \\ & & \qquad\quad{}\times p(w_1=w_1,\ldots , w_n = w_n consider any pair of sequences and related by a transposition of two blocks with identical initial and final states .proposition [ partialexchangeability ] implies we have left to show that the second factor on the right - hand side of ( [ probw ] ) is identical for and .the identity of the conditional distribution of given equal to or equal to proves the lemma .[ iidorderdoesntmatter ] the process has the same distribution as . 
By definition. Note that is an i.i.d. sequence, independent from and. These facts imply that.

Proof of Proposition [reversibility]. Let be the -blocks of the scheme. Consider a map on the -blocks space; if, then. We can observe, following the same arguments used for proving Proposition [partialexchangeability], that for any -block and any integer. Let be a random measure distributed according to the de Finetti measure of the -blocks. The above expression and the equality, where is a generic measurable set, imply that a.s. the distance in total variation between and is null.

We are grateful to an Associate Editor and three referees for their constructive comments and suggestions. We would like to thank D. E. Shaw Research for providing the molecular dynamics simulations of the WW domain. The Markov models analyzed in Section [applications] were generated by Kyle Beauchamp, using the methodology described in. We would also like to thank Persi Diaconis and Vijay Pande for helpful suggestions.
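The reinforcement mechanism that drives these proofs can be illustrated with a short, self-contained simulation of the classical linearly edge-reinforced random walk on a finite graph. This is a sketch of the one-parameter baseline only: the paper's three-parameter scheme and its species-sampling component are not reproduced, and the triangle graph, unit initial weights and reinforcement increment of 1 are illustrative choices.

```python
import random
from collections import defaultdict

def edge_reinforced_walk(edges, start, n_steps, seed=0):
    """Simulate a linearly edge-reinforced random walk.

    edges   : iterable of (u, v, initial_weight) for an undirected graph
    start   : starting vertex
    n_steps : number of transitions to simulate
    Returns the visited path and the final edge weights.
    """
    rng = random.Random(seed)
    weight = defaultdict(float)           # weight keyed by frozenset({u, v})
    neighbors = defaultdict(set)
    for u, v, w in edges:
        weight[frozenset((u, v))] = w
        neighbors[u].add(v)
        neighbors[v].add(u)

    path = [start]
    x = start
    for _ in range(n_steps):
        nbrs = sorted(neighbors[x])
        ws = [weight[frozenset((x, y))] for y in nbrs]
        # transition probability proportional to the current (reinforced) edge weight
        y = rng.choices(nbrs, weights=ws, k=1)[0]
        weight[frozenset((x, y))] += 1.0  # reinforce the traversed edge by 1
        path.append(y)
        x = y
    return path, dict(weight)

if __name__ == "__main__":
    # Small triangle graph with unit initial weights (illustrative choice).
    edges = [("a", "b", 1.0), ("b", "c", 1.0), ("a", "c", 1.0)]
    path, final_w = edge_reinforced_walk(edges, "a", 2000)
    visits = {v: path.count(v) for v in "abc"}
    print("visit counts:", visits)
    print("final edge weights:",
          {tuple(sorted(k)): round(w, 1) for k, w in final_w.items()})
```

Running the sketch shows every vertex being revisited indefinitely while traversed edges accumulate weight, which is the qualitative behaviour the recurrence proposition formalizes.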
We introduce a three-parameter random walk with reinforcement, called the scheme, which generalizes the linearly edge-reinforced random walk to uncountable spaces. The parameter smoothly tunes the scheme between this edge-reinforced random walk and the classical exchangeable two-parameter Hoppe urn scheme, while the parameters and modulate how many states are typically visited. Resorting to de Finetti's theorem for Markov chains, we use the scheme to define a nonparametric prior for Bayesian analysis of reversible Markov chains. The prior is applied in Bayesian nonparametric inference for species sampling problems with data generated from a reversible Markov chain with an unknown transition kernel. As a real example, we analyze data from molecular dynamics simulations of protein folding.
one of the radical advances that optical astronomy has seen in recent years is the advent of wide - field ccd - based surveys .key ingredients for these surveys are the availability of instruments with sufficiently large arrays of high quality ccd s , as well as information systems with sufficient computing and data storage capabilities to process the huge data flows .the scientific importance of such surveys , particularly when freely available to researchers , is clearly demonstrated by the impact that surveys such as 2mass and the sloan digital sky survey ( sdss ) , have had in several fields in astronomy . due to their telescopes being located in the northern hemisphere , both sdss and the currently on - going panstarrs surveymainly survey the northern sky .so far , a similar large scale survey has not been performed from the south and no dedicated optical survey telescope were operational until recently . for european astronomy ,however , the southern hemisphere is especially important , due to the presence of eso s very large telescope ( vlt ) and its large array of instruments .this is now remedied with the arrival of eso s own two dedicated survey telescopes : vista in the ( near-)infrared and the vlt survey telescope ( vst ) in the optical .both have become operational during the past two years .the lion s share of the observing time on both survey telescopes will be invested in a set of ` public surveys ' . in terms of observing time , the largest of the optical surveys is the kilo - degree survey ( kids ) , which is imaging 1500 square degrees in four filters ( ,,, ) over a period of 34 years . combined with one of the vista surveys , viking , which is observing the same area in zyjhk , this will provide a sensitive , 9-band multi - colour survey . specifically for the handling of surveys from the vst the astro - wise system has been designed .it allows processing , quality control and public archiving of surveys using a distributed architecture .the kids survey team , that is spread over different countries , performs these survey operations in astro - wise as a single virtual team making intensive use of web - based collaborative interfaces .this paper will discuss both the observational set - up of the kids survey and its primary scientific goals , as well as how the astro - wise system will be used to achieve these goals .the vst is located at paranal observatory in chile and operated by eso .regular observations with the system commenced on october 15th 2011 . with a primary mirror of 2.6-m diameterit is currently the largest telescope in the world specifically designed for optical wide - field surveys .the sole instrument of the vst is omegacam , a 268 megapixel wide - field camera that provides a 1 field - of - view .the focal plane array is built up from 32 2048 pixel ccds , resulting in 16k pixels with a pixel scale of 0.214 arcseconds / pixel .the optics of the telescope and camera were designed to produce a very uniform point - spread - function over the full field - of - view , both in terms of shape and size . 
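As a quick arithmetic check on the quoted instrument numbers (a rough sketch that ignores the gaps between CCDs and any distortion), the roughly one-square-degree field of view follows directly from the mosaic size and the pixel scale:

```python
# Back-of-envelope field-of-view check for a 16k x 16k mosaic at 0.214"/pixel.
# CCD gaps and optical distortion are ignored, so this is only an approximation.
n_pix = 16384                 # pixels per side (32 CCDs of 2048 x 4096 pixels)
pixel_scale = 0.214           # arcsec per pixel
side_deg = n_pix * pixel_scale / 3600.0
print(f"mosaic side  : {side_deg:.3f} deg")
print(f"field of view: {side_deg**2:.3f} deg^2")   # ~0.95 deg^2, i.e. about one square degree
```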
for a more detailed descriptionwe refer the reader to the omegacam paper in this issue .l|lll field & ra range & dec range & area + kids - s & 22 3 & & 720 sq.deg + kids - n & 10 15 & & 712 sq.deg + & 15 15 & & + kids - n - w2 & 8 9 & & 68 sq.deg + kids - n - d2 & 9 10 & & 2 sq.deg + kids will cover 1500 square degrees , some 7% of the extragalactic sky .it consists of two patches , ensuring that observations can take place year - round .the northern patch lies on the celestial equator , while the southern patch straddles the south galactic pole ; see fig .[ fig : areas ] and table [ tab : fields ] for the detailed lay - out .together the two patches cover a range of galactic latitudes from 40 to 90 degrees , and the 10 degree width of the strips ensures that the full 3d structure of the universe is sampled well .these specific areas were chosen because they have been the target of massive spectroscopic galaxy surveys already : the 2df redshift survey covers almost the same area , and kids - n overlaps with the sdss spectroscopic and imaging survey as well .this means that several 100,000 galaxy spectra and redshifts are already known in these fields , and hence that the cosmological foreground mass distribution in these fields is well mapped out .extinction in the fields is low .the exposure times for kids and viking have been chosen to yield a median galaxy redshift of 0.8 , so that the evolution of the galaxy population and matter distribution over the last half of the age of the universe can be studied .they are also well - matched to the natural exposure times for efficient vst and vista operations , and balanced over the astro - climate conditions on paranal ( seeing and moon phase ) so that all bands can be observed at the same average rate .this strategy makes optimal use of the fact that all observations are queue - scheduled , making it possible to use the best seeing time for deep -band exposures , for example , and the worst seeing for . all exposure times and observing constraints are listed in table [ tab : exptimes ] , where seeing refers to the full - width - half - maximum ( fwhm ) of the point - spread - function ( psf ) measured on the images . lcccccc filter & exposure time & mag limit & psf fwhm & moon & adc & airmass + & ( seconds ) & ( ab 5 2 ) & ( arcsec ) & phase & used & + & 900 & 24.8 & 0.91.1 & dark & no & 1.2 + & 900 & 25.4 & 0.70.9 & dark & no & 1.6 + & 1800 & 25.2 & .7 & dark & no & 1.3 + & 1080 & 24.2 & .1 & any & no & 2.0 + since the omegacam ccd mosaic consists of 32 individual ccds , it is not contiguous but contains gaps . to avoid holes in the kids images, observations will use 5 dithered observations per field in , and and 4 in .the dithers form a staircase pattern with dither steps of 25 in x and 85 in y. these offsets bridge the inter - ccd gaps of omegacam .the survey tiling is derived using a tiling strategy that can tile the full sky efficiently for the omegacam instrument .neighboring tiles have an overlap in ra of 5% and in dec of 10% .this will allow us to derive the photometric and astrometric accuracies required for the most stringent science cases : internal astrometric error rms and 1% photometric errors .the atmospheric dispersion corrector ( adc ) of omegacam could be used for all kids bands except .however , kids does not make use of the adc to avoid the small losses in sensitivity . instead, the dispersion is limited by constraining the maximum airmass ( see table [ tab : exptimes ] ) . 
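The staircase dither pattern described earlier in this section can be written down explicitly. In the sketch below the step sizes of 25 and 85 are assumed to be in arcseconds (the units are not stated in this excerpt), and the 5/4 split of dithers per filter follows the text.

```python
def staircase_dithers(n_dither, dx=25.0, dy=85.0):
    """Return staircase dither offsets (assumed arcsec) relative to the first pointing.

    The pattern steps diagonally: the k-th exposure is offset by (k*dx, k*dy),
    which is intended to bridge the inter-CCD gaps of the mosaic.
    """
    return [(k * dx, k * dy) for k in range(n_dither)]

if __name__ == "__main__":
    for band, n in [("u", 5), ("g", 5), ("r", 5), ("i", 4)]:
        print(band, staircase_dithers(n))
```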
particularly for is important since this band will be used for weak lensing analyses and therefore requires a well - behaved psf . by constraining the maximum airmass in to 1.3, the spectral dispersion will be .2 .after completion of the main survey of 1500 square degrees , the whole survey area will be imaged once more in -band .the observational set - up of this repeat pass is the same as for the main survey observations in , with the additional requirement that it should provide at least a 2 year baseline over the whole survey area . with the average -band seeing of 0.8 this will allow for proper motion measurements with accuracies of 40 mas yr and better for sources detected at signal - to - noise of 10 . the central science case for kids and viking is mapping the matter distribution in the universe through weak gravitational lensing and photometric redshift measurements .however , the enormous data set that kids will deliver , will have many more possible applications .the main research topics that the kids team members will explore are outlined below .dark energy manifests itself in the expansion history of the universe , as a repulsive term that appears to behave like einstein s cosmological constant .understanding its properties more accurately is one of the central quests of cosmology of recent years ( e.g. ) .with kids we intend to push this question as far as possible , while recognising that the limiting factor may well be systematic effects rather than raw statistical power .measurements done with kids will therefore also serve as a learning curve for future ( space - based ) experiments .the use of weak gravitational lensing as a cosmological probe is nicely summarized in and .essentially , its power relies on two facts : gravitational lensing is a very geometric phenomenon , and it is sensitive to mass inhomogeneities along the lines of sight.this makes it a good probe of the growth of structure with time ( redshift ) , as well as being a purely geometric distance measure .as it happens , the distance - redshift relation and the speed with which overdensities grow with cosmic time are the two most fundamental measures of the energy content of the universe : both depend directly on the rate at which the universe expands . making such a measurement , for which weaklensing is an excellent method , is therefore of great interest .these lensing measurements are not easy , as they require systematics better than 1% accuracy , and photometric redshifts unbiased at a similar level .however , with kids we have put ourselves in the optimal position to attempt this , by ensuring the best image quality in our instrument , by choosing a survey depth and area appropriately , and having a wide wavelength coverage that will make the photometric redshifts as free of error as is possible with wide - band photometry ( see fig .[ fig : cosmo_parameters ] ) .as noted above , the expansion history can be deduced from lensing tomography in several ways , and requiring consistency is a powerful check as well as , further down the line , an interesting test of einstein gravity theory . 
and the dark energy equation of state ) , based on simulated photometry for each of the surveys .the + represents the input truth .the coloured contours assume perfect redshift information , while the dashed contours show the effect of redshift errors .flat geometry was assumed here , but otherwise no external information were included .once external information is folded in , the constraints tighten and systematic effects become even more significant , demonstrating the greater robustness of the kids survey to this type of systematic error . ] an independent way to study the expansion history of the universe is by measuring the baryon acoustic oscillations ( bao ) .bao is the clustering of baryons at a fixed co - moving length scale , set by the sound horizon at the time that the universe recombined and photons decoupled from baryonic matter .this scale length , which has been measured accurately in the cosmic microwave background , is therefore a standard ruler , whose angular size on the sky provides a direct measurement of the angular diameter - redshift relation and hence of the expansion history . using photometric redshifts from kids we can make an independent measurement of the bao scale .comparison of the results with recent and ongoing spectroscopic bao surveys such as wigglez and boss provides a potent test of systematics .simulations of photometric redshift measurements using full 9-band coverage , ugrizyjhk as will be provided by kids and viking , have shown that the accuracy needed for detection of the bao signal ( rms photometric redshift error ) can be reached .tests of the detectability of the bao with particle and monte - carlo simulations , provided by peter schuecker , have shown that imaging surveys of the size and sensitivity of kids can yield values of with % accuracy .simulations of structure formation provide detailed information about the shape of dark matter halos on large scales .however , at small scales such as the inner parts of galaxy halos , complex physics that these simulations can not represent realistically ( e.g. star formation , cooling , feedback etc . ) starts to play an important role ( e.g. ) .the relation between light ( baryons ) and mass ( dark matter ) is crucial for our understanding of the influence of the dark matter on galaxy formation , and vice - versa .galaxy - galaxy lensing ( ggl ) provides a unique way to study this relation between galaxies and their dark halos .the gravitational lensing effect of foreground galaxies on the images of background galaxies is very weak , and can only be measured statistically .this is done by stacking large numbers of foreground galaxies and measuring the net image distortion of the background galaxies . on small scales ( 330 arcsec )the signal is dominated by the profile of the foreground galaxies inner dark matter halos , at radii of 10 to 100s of kpc . 
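The stacking measurement behind galaxy-galaxy lensing, averaging the tangential component of background-galaxy ellipticities in radial bins around many foreground lenses, can be sketched in a few lines. The flat-sky geometry, the made-up catalogue and the absence of any weighting or redshift cuts are simplifications; this is not the KiDS lensing pipeline.

```python
import numpy as np

def tangential_shear_profile(lens_xy, src_xy, src_e1, src_e2, r_bins):
    """Stack the tangential ellipticity of background sources around foreground lenses.

    lens_xy, src_xy : (N,2) positions (flat-sky approximation, same units as r_bins)
    src_e1, src_e2  : source ellipticity components
    r_bins          : radial bin edges
    Returns the mean tangential shear estimate per radial bin.
    """
    gt_sum = np.zeros(len(r_bins) - 1)
    counts = np.zeros(len(r_bins) - 1)
    for lx, ly in lens_xy:
        dx = src_xy[:, 0] - lx
        dy = src_xy[:, 1] - ly
        r = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        # tangential component of the ellipticity with respect to this lens
        e_t = -(src_e1 * np.cos(2 * phi) + src_e2 * np.sin(2 * phi))
        idx = np.digitize(r, r_bins) - 1
        for b in range(len(r_bins) - 1):
            sel = idx == b
            gt_sum[b] += e_t[sel].sum()
            counts[b] += sel.sum()
    return gt_sum / np.maximum(counts, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lenses = rng.uniform(0, 100, size=(20, 2))
    sources = rng.uniform(0, 100, size=(5000, 2))
    e1, e2 = rng.normal(0, 0.3, 5000), rng.normal(0, 0.3, 5000)   # pure shape noise
    print(tangential_shear_profile(lenses, sources, e1, e2, np.linspace(1, 30, 7)))
```

With pure shape noise the stacked profile scatters around zero; a real lens population would imprint a positive tangential signal that decreases with radius.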
at scales of several arcminutesggl probes the galaxy mass correlation and the bias parameter , while at even larger scales the distribution of the foreground galaxies in their parent group halos dominates .ggl can therefore be used to probe halos over a large range of scales and help to test the universality of the dark halo profile .the strength of kids for ggl is again twofold .the shear size and thus enormous numbers of available galaxies , makes it possible to split the foreground galaxies in bins and study different galaxy types separately .the accurate photometric redshifts also allow splitting up the samples in redshift bins , thus enabling the redshift dependence to be analyzed .furthermore , the fact that kids targets areas where wide - field redshift surveys have already been carried out , means that the foreground large scale structure is known , enabling the measurement of the galaxy mass correlation for galaxy groups , clusters , and even filaments . compared to earlier ggl studies with , for example , sdss or cfhtls , the image quality and sensitivity of kids will provide many more foreground background pairs , more accurate shape measurements , and the ability to probe the galaxy population at higher redshifts . within the current cosmological paradigm of cold dark matter ( cdm ) , structure formation is hierarchical and the profiles of cdm halos are universal , i.e. the same at all scales .several of the ramifications of this picture have so far eluded rigorous observational testing .for example , various observational constraints on the influence of galaxy mergers on the evolution of the galaxy population at redshifts higher than .5 differ up to an order of magnitude ( see e.g. ) .the observational studies targeted small numbers of galaxies ( ) at high spatial resolution ( e.g. , ) or small areas ( square degree , e.g. ) . also , galaxy clusters are probes of the highest mass peaks in the universe , but at redshifts of the number of known galaxy clusters is yet too small to constrain cosmological models .kids can play a major role in this field .the sensitivity of the kids photometry will result in the detection of an estimated galaxies .this galaxy sample will have a median redshift of , with % having . based on this sample the evolution of the galaxy luminosity function ,the build - up of stellar mass and the assembly of early - type stellar systems can be traced back to unprecedented look - back times .cluster finding will be possible directly from the multi - colour kids catalogues .in total we expect kids to provide clusters , and with the red sequence detectable out to approximately 5% of these will be located at redshifts beyond 1 .this will be a very important sample to further constrain cosmological parameters , provided that the relation between cluster richness and cluster mass can be calibrated .this calibration is possible since the weak lensing measurements that will be done as part of kids will probe the cluster mass distribution , demonstrating the pivotal advantage of combining high image quality with uniform multi - band photometry .a different perspective of galaxy evolution will be provided by virtue of the fact that that kids - s overlaps two nearby superclusters ( pisces - cetus and fornax - eridanus ) .thus , the relation between galaxy properties ( e.g. 
star formation rate ) and environment , can be studied all the way from cluster cores to the infall regions , and to the filaments that connect clusters in the cosmic web .detailed studies of the stellar halo of the milky way require photometry of faint stars over large areas of sky .the sdss , although primarily aimed at cosmology and high - redshift science , has proved a milestone in milky way science as well , unveiling many stellar streams and unknown faint dwarf spheroidal galaxies . while kids will image a smaller area than sdss , it is deeper and thus will provide a view on more distant parts of the halo . but more importantly , sdss only covered the northern sky , leaving the southern hemisphere as uncharted territory . particularly in the kids - s area ,new discoveries are bound to be made in the direct vicinity of our own galaxy .proper motions with accuracies of will be available in the kids area , owing to the planned g - band repeat pass that will provide a 2-year baseline .several applications are possible , among others the detection and study of high proper - motion white dwarfs .`` ultracool '' white dwarfs ( k ) , relics from the earliest epochs of star formation , are among the oldest objects in the galaxy and can be used to trace the very early star formation history of our galaxy . due to its multicolour photometry combined with proper motion information , kids will be able to increase the sample of known , ultracool white dwarfs significantly .being a public survey , all kids data will be made publicly available .the kids catalogue will contain some 100,000 sources per square degree ( 150 million sources over the full survey area ) , and for each square degree there will be 10 gb of final image data , 15 tb for the whole survey .these data will be of no use if they are not uniformly and carefully calibrated and made available in an easily accessible archive .the astrometric calibration is done per tile ( stack of dithers ) using the gnomonic tangential projection and using the 2mass point source catalog ( 2mass psc , ) as survey astrometric reference .the first step is a local astrometric solution .an astrometric model for a single chip is constrained by the star positions from a single exposure .the local solution is the input for global astrometry .the global solution uses a global model of the focal plane that allows for variations over the dither positions that make up the tile ( e.g. , due to telescope flexure and field rotation tracking residuals ) .the observational constraint comes from the internal positional residuals from dither overlaps and external residuals with the 2mass psc .first results for local astrometry indicate rms in relative astrometry for kids .global astrometry is expected to yield rms .the photomeric calibration will be done over the survey as a whole in sloan photometric system using ab magnitude system .the calibration plan of the survey telescope includes nightly zeropoints on sa standard fields in u , g , r and i. 
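The local astrometric step described above can be illustrated with a minimal least-squares sketch: detected star positions on one CCD are matched to reference positions (e.g. from the 2MASS PSC) that have already been projected onto a tangent plane, and an affine transform is fitted. The gnomonic projection itself and the global focal-plane model are omitted, so this shows only the linear core of the fit, with invented test numbers.

```python
import numpy as np

def fit_affine_astrometry(pix_xy, tan_xy):
    """Least-squares affine fit  tan ~ A @ [x, y, 1]  for one CCD.

    pix_xy : (N,2) pixel coordinates of matched stars
    tan_xy : (N,2) reference positions on the tangent plane (e.g. arcsec)
    Returns the 2x3 affine matrix and the RMS residual.
    """
    design = np.column_stack([pix_xy, np.ones(len(pix_xy))])   # (N,3)
    coeffs, *_ = np.linalg.lstsq(design, tan_xy, rcond=None)   # (3,2)
    residuals = tan_xy - design @ coeffs
    rms = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
    return coeffs.T, rms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pix = rng.uniform(0, 2048, size=(200, 2))
    true_A = np.array([[0.214, 0.001, 5.0], [-0.001, 0.214, -3.0]])  # ~0.214"/pix
    tan = (np.column_stack([pix, np.ones(200)]) @ true_A.T
           + rng.normal(0, 0.02, size=(200, 2)))                     # ~20 mas noise
    A_fit, rms = fit_affine_astrometry(pix, tan)
    print("fitted matrix:\n", np.round(A_fit, 4))
    print("RMS residual (arcsec):", round(rms, 3))
```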
a dedicated omegacam calibrational observing program is obtaining secondary standards in the sa fields covering the full omegacam fov .the program is expected to run until the last quarter of 2012 .each night observations are taken of a fixed polar standard field near the southern equatorial pole for atmospheric monitoring using a composite filter with u , g , r and i quadrants .daily domeflats are taken in all bands to measure the system throughput ( telescope plus camera ) .commissioning results indicate that the on - sky observations yield photometric scales to accuracy and the in dome observations an accuracy .the overlaps between kids science observations constrain the relative photometric calibration over the full survey .for the absolute photometric calibration the standard field observations are used .the fellow atlas survey using omegacam on the vst has full overlap with the kids - s area and overlap with the kids - n area in u , g , r and i. tying the kids survey to these atlas areas plus the standard fields shall prevent calibration `` creep '' from the tile - to - tile photometric calibration . a detailed description of the astro - wise calibration pipeline .the omegacam and vst calibration is discussed in detail in .they also discuss the instrument photometric characterization , including illumination variations .basic data products will be made public , both through eso and through the astro - wise database , within a year after any part of the survey area has been observed in all filters .this set of basic data products includes the following : * astrometrically and photometrically calibrated coadded and regridded images with weight maps ; * calibration images : twilight flats , dome flats , biases , fringe maps , etc . ; * single - band source catalogues ; * multi - color ( i.e. combined single - band ) catalogues .in addition , and on a longer time - scale , we intend to provide more refined and advanced data products . in the context of the lensing project for kids several innovative image processing techniqueshave been developed , and to the extent possible these will be used to generate high - level data products in the kids database .many of the parameters developed for the sdss survey will be provided .furthermore we will look at including : * images with gaussianized psf .versions of all images convolved with kernels chosen to result in a homogenized , round and gaussian psf , to ease comparison with images taken at different times or with different filters or instruments . * aperture - matched colour catalogues . catalogues with colours measured only from the high s / n inner regions of sources , for applications that only require flux ratios , rather than total fluxes .* unsharp masked images .a wealth of underlying galaxy structure can be obtained from images in which the low frequencies have been removed ( dust structures , disks , etc . ) .we plan to provide images filtered in various ways . * morphological parameters .the popular galaxy profile fitting programmes galfit and galphot have been implemented in astrowise , and will be run on the sources and published in the kids database .although the large - scale data processing for the viking survey will not be done by the kids team , certain viking data products , and combined kids - viking data products will be made available through astro - wise . 
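A minimal sketch of how a nightly zeropoint could be derived from the standard-field observations mentioned above, assuming the simple relation m_ref = m_instr + ZP - k*airmass with an assumed extinction coefficient. The real OmegaCAM calibration additionally handles per-CCD terms and illumination variation.

```python
import numpy as np

def nightly_zeropoint(counts, exptime, airmass, m_ref, k_ext):
    """Estimate a photometric zeropoint from standard stars.

    Model:  m_ref = -2.5*log10(counts/exptime) + ZP - k_ext*airmass
    Returns the median ZP and a robust estimate of its scatter.
    """
    m_instr = -2.5 * np.log10(np.asarray(counts) / np.asarray(exptime))
    zp = np.asarray(m_ref) - m_instr + k_ext * np.asarray(airmass)
    return np.median(zp), 1.4826 * np.median(np.abs(zp - np.median(zp)))

if __name__ == "__main__":
    # Toy numbers: three r-band standards on one night (values are illustrative only).
    counts  = [1.2e6, 8.5e5, 2.3e6]
    exptime = [30.0, 30.0, 30.0]
    airmass = [1.10, 1.35, 1.05]
    m_ref   = [16.20, 16.55, 15.48]
    k_ext   = 0.10                     # assumed r-band extinction (mag/airmass)
    zp, scatter = nightly_zeropoint(counts, exptime, airmass, m_ref, k_ext)
    print(f"ZP = {zp:.3f} mag, scatter = {scatter:.3f} mag")
```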
for a more extensive discussion on how viking data products will be ingested and processed ( where needed ) within astro - wise , see the data zoo paper in this issue .the kids survey team is an international collaboration with team members at institutes spread around europe and beyond .the team uses astro - wise for kids survey handling : data processing ( image calibration , stacking , cataloging ) , data quality control and data management ( on - line archiving and publishing in the virtual observatory ) . by logging on to a single system , astro - wise, team members can make use of a distributed pool of storage , compute and database resources spread over europe , and do their survey work , irrespective of where this person is .day - to - day survey handling is done via webservices .thus , a web browser with internet connection is all that is required to start doing kids survey work .this whoever , wherever approach is possible because all aspects of survey handling in astro - wise are implemented from a data - centric viewpoint .for example , calibration scientists add information on the time - validity of a calibration item ( e.g. a master flatfield image ) to this item : the information _ becomes part of _ the data item itself .quality control is handled similarly , since the verdicts of both automatic and human quality assessors become part of the data items themselves .the same is true for data management , because rather than users knowing which data they can reach , in astro - wise each data item knows ( contains information on ) which users it can reach .processing is implemented as data reaching compute clusters , not as users reaching compute clusters , and all survey data can access all hardware that is pooled by the kids team .moreover , a survey data item also knows how it was made , and whether it could be made better using new , improved calibrations .this is possible because all processing operations are implemented as actions by data objects acting upon themselves and/or other data objects .each type of survey product , from raw to final , is represented by a class of data objects , and each survey product is a data object : an informational entity consisting of pixel and/or metadata , where metadata is defined as _ all _ non pixel data .final survey products also carry the information on how they can be created out of intermediate survey objects .this backward chaining procedure is recursively implemented up to the raw data ( see figure [ f : targetdiagram ] and ) .thus , a request by a kids team member for a survey product , a target , triggers a backward information flow , in the direction of the raw data .the net effect is a forward work flow description , towards the target , that is then executed .the backward information flow is implemented as queries to database initiated by the requested target itself .the database is queried for survey objects on which the target depends with the right characteristics including validity and quality .either they exist and are returned or the query is backwarded to the next level of survey objects closer to the raw data . 
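The backward information flow just described can be sketched as a tiny recursive routine: a requested target first looks for valid dependencies and, when they do not exist, requests them in turn before building itself, so that the recursion unwinds into a forward processing flow. The class and method names below are invented for illustration and are not the Astro-WISE API.

```python
class Target:
    """Toy illustration of backward-chaining target processing (not the real Astro-WISE classes)."""

    registry = {}                     # stands in for the database of existing, valid products

    def __init__(self, name, dependencies=()):
        self.name = name
        self.dependencies = dependencies

    def exists(self):
        return self.name in Target.registry

    def make(self, depth=0):
        indent = "  " * depth
        if self.exists():
            print(f"{indent}{self.name}: found valid product, reusing")
            return Target.registry[self.name]
        # Backward flow: ensure every dependency exists (recursing toward the raw data) ...
        inputs = [dep.make(depth + 1) for dep in self.dependencies]
        # ... which yields a forward processing flow once the recursion unwinds.
        print(f"{indent}{self.name}: building from {inputs or 'raw data'}")
        Target.registry[self.name] = self.name
        return self.name

if __name__ == "__main__":
    raw = Target("raw_frame")
    bias = Target("master_bias", [raw])
    flat = Target("master_flat", [raw])
    science = Target("calibrated_frame", [raw, bias, flat])
    coadd = Target("coadded_image", [science])
    coadd.make()
```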
in conclusion , astro - wise takes a data - centric approach to survey handling and control .attributes of data objects solely determine which calibration data are applied to which science data , which survey products have been qualified and which products should be considered experimental or baseline .data processing is realized by backward information flows that results in a forward processing flow .see the wise paper in this special issue for more information on astro - wise itself .handling of kids survey data ( and any dataset in astro - wise ) is based on a number of parameters and data attributes that regulate which users can interact with which data objects , and contain information on the quality and validity of data objects .now follows a detailed description of these paramaters , summarized in table [ t : attributesforsurveycontrol ] .lp4.5cmp4 cm attribute & content description & possible values + _ creator & astro - wise user that created the data object & any astro - wise user name + _ project & project data object belongs to & any astro - wise project name + _ privileges & operational level at which data object resides & 1,2,3,4,5 + is_valid & validity indicator set by user & 0(bad),1(no verdict),2(good ) + quality_flags & quality flag set by system & any integer ( bitwise ) , with 0 + timestamp_start & start and end of validiy range in time for a calibration object + timestamp_end & + creation_date & time of creation of data object + 1 .* creator . * each data object is associated with a single astro - wise user , its creator , the user that created / ingested the data object .an astro - wise user is a person with an astro - wise database account consisting of an i d number and name .each person has only one account and therefore a single identity within the astro - wise system .once created , the creator of a data object can not be changed .* project .* a project in astro - wise is a group of astro - wise users that share a set of data objects .a project has a project i d , a name , a description , project members and optionally an instrument .one or more users can be member of a project , and a user can be member of more than one project .each data object belongs to one and only one project , which is chosen upon the creation / ingestion of the data entity and can not be changed after that .some projects have all astro - wise users as members ( public projects ) , while other projects have a subset of astro - wise users as members ( private projects ) .the kids project is a private project and contains all data objects resulting from processing of kids survey data .* privileges .* survey data management is facilitated by having pools of data at five different levels named privileges levels .the term privileges stems from the data - centric viewpoint of astro - wise .each data object has a _ privileges attribute that defines its privileges level .an object has increasing privileges to access users with numerical increase of its privileges level .table [ tab : privileges ] lists the five levels and which users a data object can reach at each level .the initial privileges of a data object are set by the creator upon the creation / ingestion of the data entity .this can be changed later by creator and project managers , a process called publishing .4 . 
* is_valid .* this data attribute is the validity indicator as set by users .it stores the quality assessment performed by a survey team member .its default value upon creation of a data item is is_valid , meaning no user assessment has taken place .the team member can change this to is_valid , meaning bad quality , or to is_valid , meaning data is qualified as good .* quality_flags . *this data attribute collects the quality flags as set by the system .automatically the quality of objects ( i.e. , survey products ) is verified upon creation . if the quality is compromised the quality_flags are set to a value .it is a bitwise flag .each type of object ( raw science frames , calibrated science frames , astrometric solutions ) has its own definition of what each bit means .. * timestamp_start , timestamp_end . * these attributes of a calibration data item define the range in time for which it is applicable .calibration data get default timestamp ranges upon creation .these can be modified by survey team members .for example , it might be decided that a zeropoint might apply to one or more nights , or just a few hours instead .* creation_date . * upon creation every object that results from processing has an attribute that stores the moment of its creation .this information is relevant as `` newer is better '' is a general rule for objects to determine which calibration data item should be applied to them from the pool of applicable calibration data items .handling of kids survey data starts by filtering the pool of available data on which the survey handling should act .this is called setting a context in astro - wise , and is done using the parameters project and privileges described above . after logging into astro - wise , the kids team member selects the project kids and a minimum privileges level of 1 or 2 .all results the member produces will be part of the kids project , and the member is able to see all data available within kids. the baseline kids survey products are all part of the project kids and reside at privileges level 2 ( named project ) .these data can be accessed only by kids survey team members .each kids team member can experiment in her / his own privileges level 1 ( mydb ) to create improved versions of these baseline products . only the single team member can access survey data at mydb level and promote it to the project level .a kids project manager can promote baseline survey data from privileges level 2 to all higher levels .survey data at privileges level 3 ( astro - wise ) can be accessed by all astro - wise users . at privilegeslevel 4 ( world ) the data become publicly accessible , that is to users without an astro - wise account ( anonymous users ) . finally at priveleges level 5 ( vo ) , the data are accessible also from the virtual observatory . thus , it is the combination of context parameters user , project and privileges level that determine what data is filtered to be accessible for survey handling .data handling itself is done either by using a command - line interface ( cli ) or via webservices . 
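To make the attribute-based context selection concrete, the sketch below models a data object carrying the control attributes listed above and filters a pool of objects by project, minimum privileges level and validity. The exact visibility semantics (in particular that mydb-level items are visible only to their creator) are my reading of the text, expressed as plain Python rather than the actual database machinery.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DataObject:
    """Simplified survey data item carrying the control attributes from the text."""
    name: str
    creator: str
    project: str
    privileges: int          # 1=mydb, 2=project, 3=astro-wise, 4=world, 5=vo
    is_valid: int = 1        # 0=bad, 1=no verdict, 2=good
    quality_flags: int = 0   # bitwise; 0 means no automatic flags raised
    creation_date: datetime = field(default_factory=datetime.now)

def set_context(objects, user, project, min_privileges):
    """Objects visible in the given context: project match, privileges at or above
    the chosen minimum, not invalidated, and mydb-level items only for their creator."""
    return [o for o in objects
            if o.project == project
            and o.privileges >= min_privileges
            and o.is_valid != 0
            and (o.privileges >= 2 or o.creator == user)]

if __name__ == "__main__":
    pool = [
        DataObject("coadd_tile_042_r", "alice", "KIDS", privileges=2, is_valid=2),
        DataObject("coadd_tile_042_r_test", "bob", "KIDS", privileges=1),
        DataObject("sdss_frame", "carol", "PUBLIC", privileges=3),
        DataObject("bad_flat", "alice", "KIDS", privileges=2, is_valid=0),
    ]
    for obj in set_context(pool, user="alice", project="KIDS", min_privileges=1):
        print(obj.name, obj.privileges, obj.is_valid)
```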
how - to s and tutorials for using the cli can be found online ; and for a description of available webservices we refer to the paper on astro - wise interfaces in this issue .ll privileges level & data is shared with + 1 : mydb & only the creator + 2 : project & every member of the project to which the data object belongs + 3 : astro - wise & all astro - wise users + 4 : world & the world : astro - wise users and persons + & without an astro - wise account ( latter via webservice dbviewer ) + 5 : vo & the whole world and data also available through + & the virtual observatory webservices + three types of quality assessment are facilitated in astro - wise : 1 .* `` verify '' * : automatic verification by the system upon creation of a data object 2 .* `` compare '' * : comparison of a processing result to an earlier similar result 3 .* `` inspect '' * : manual inspection by a human user of a data object although this approach is generic , and all classes of data objects resulting from processing ( i.e. , `` processing targets '' ) have the three types of methods implemented , the actual content of the method is specific for the type of data . the result of the application of a methodis stored as a data attribute , namely either the quality_flags or the is_valid attribute discussed in the previous section .additional quality control information can be stored in a comment object as a free string that links to the process target .the quality of an end product depends on the pipeline configuration and the quality of ( calibration ) data at all intermediate processing stages .this generic approach to quality assessment together with the data lineage in astro - wise ( for details see the paper on data lineage in this issue ) facilitates tracing back these dependencies .the qualitywise webservice provides an overview of the quality assessment information ( verdicts , inspection figures , numbers ) for human inspection , including links to the qualitywise pages of data items on which it depends .figure [ f : qualitywise ] gives an impression of this service and for an in - depth discussion we refer to the qualitywise paper in this issue . to zoom in on quality issues , the user can use the webservice for database querying and viewing ( dbviewer ) which provides links to the qualitywise page of returned data items . at the command - linethe user can customize the quality assessment to particular needs with maximum freedom .for example , batch scripts can be used for mass quality control , with switches to an interactive mode when needed . for quality controlit is important that baseline data products readily distinghuished from experimental versions of data products .the privileges levels discussed earlier serve to keep such versions apart . a kids team member can experiment to improve data quality at the mydb level . at this level ,project data at all privileges levels is available , but the resulting products are only visible to the team member .bad outcomes are discarded by invalidating the data .promising outcomes can be shared with the team by publishing to the project level ( see figure [ f : contextgraphwithqc ] ) .fellow team members can then inspect the data and provide feedback ( e.g. , using comments objects in astro - wise ) .the final verdict is set in the is_valid attribute ( 0=bad , 2=good ) . 
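A short sketch of how the bitwise quality_flags and the verify/compare/inspect split could look in code. The individual bit names and thresholds are invented for illustration, since each Astro-WISE product class defines its own bit meanings; only the three-step control flow and the is_valid convention are taken from the text.

```python
# Illustrative bit definitions -- each product class defines its own meanings.
FLAG_HIGH_BACKGROUND = 1 << 0
FLAG_BAD_ASTROMETRY  = 1 << 1
FLAG_PSF_OUTLIER     = 1 << 2

def verify(product):
    """Automatic check on creation: set bits when a measured value is out of bounds."""
    flags = 0
    if product["background"] > 2000:
        flags |= FLAG_HIGH_BACKGROUND
    if product["astrom_rms"] > 0.1:           # arcsec, illustrative threshold
        flags |= FLAG_BAD_ASTROMETRY
    product["quality_flags"] = flags
    return flags

def compare(product, previous):
    """Compare against an earlier, similar result; flag strong deviations."""
    if previous and abs(product["psf_fwhm"] - previous["psf_fwhm"]) > 0.3:
        product["quality_flags"] |= FLAG_PSF_OUTLIER

def inspect(product, verdict):
    """Manual inspection by a team member: 0=bad, 1=no verdict, 2=good."""
    product["is_valid"] = verdict

if __name__ == "__main__":
    prod = {"background": 2400, "astrom_rms": 0.04, "psf_fwhm": 0.9,
            "quality_flags": 0, "is_valid": 1}
    verify(prod)
    compare(prod, previous={"psf_fwhm": 0.7})
    inspect(prod, verdict=2)                  # human still accepts it after looking
    print(f"quality_flags = {prod['quality_flags']:#05b}, is_valid = {prod['is_valid']}")
```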
upon team acceptancethe survey object becomes baseline and can be published higher up , eventually for delivery to the outside world .compromises in data quality identified only after publishing can be handled adequately .for example , owing to the data lineage provided by astro - wise , database queries with few lines of code can isolate all data derived using a specific calibration file .these can then easily be invalidated at all privileges levels . like science data ,calibration data are represented as objects in astro - wise .in addition to quality parameters , these objects carry a creation date and editable timestamps that mark their validity period .a request for a processing target generates a database query that returns all good - quality calibration objects with a validity period that covers the observation date . the newest good - quality calibration object is then selected , following the survey handling rule `` newer is better '' .the calibration scientists in the kids team , who are spread over europe , collaborate using the calibration control webservice calts to manipulate this eclipsing of older calibrations by new ones ( see figure [ f : calts ] and paper on user interfaces in this issue for details ) . as for science data ,the context parameters are used to limit the survey calibration operations to the appropriate subset of calibration data available in the system .as the survey progresses the calibration scientists will build up a set of calibration objects with a continuous time coverage .this build up of calibration data will make subtle trends as a function of observational state ( instrument configuration , telescope position , atmospheric state ) statistically significant .investigation of such trends can be done using the astro - wise cli in combination with scripts that can be as short as a few lines of code .the resulting deeper physical understanding of the omegacam instrument and the paranal atmosphere will lead to better calibrations which eclipse the older ones .the final result is that the instrument plus atmosphere become continuously calibrated rather than establishing calibrations on a per dataset basis .this continuous calibration coverage can be pooled with all astro - wise users working on omegacam data by publishing the data to privileges 3 or higher .kids survey operations have started officially on 15 october 2011 . the kids survey fields are currently being calibrated with initial calibration data .quality control on these calibration data and the processed science data will lead to improvements of both the calibration data objects as well as of the pipeline configuration and methods .this in turn will allow the production of improved versions of the initial survey data products .owing to the direct access to data lineage in astro - wise it is straightforward to re - process only the data objects that are affected by these changes .the resulting high quality , calibrated survey data can then be used for the production of advanced products , such as photometric redshifts , galaxy morphometry , source variability analysis . 
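The calibration-selection rule described earlier in this section, a good-quality item whose validity window covers the observation date, with the newest creation date eclipsing older ones, fits in a few lines of plain Python. This is an illustration of the rule, not the CalTS service or the real database query.

```python
from datetime import datetime

def select_calibration(calibrations, obs_date):
    """Pick the applicable calibration item for an observation.

    A candidate must be flagged good, carry no automatic quality flags, and have a
    validity window covering obs_date; among candidates the newest one eclipses
    the older ones ("newer is better").
    """
    candidates = [c for c in calibrations
                  if c["is_valid"] == 2
                  and c["quality_flags"] == 0
                  and c["timestamp_start"] <= obs_date <= c["timestamp_end"]]
    if not candidates:
        raise LookupError("no valid calibration covers this observation date")
    return max(candidates, key=lambda c: c["creation_date"])

if __name__ == "__main__":
    cals = [
        {"name": "flat_v1", "is_valid": 2, "quality_flags": 0,
         "timestamp_start": datetime(2011, 10, 1), "timestamp_end": datetime(2011, 12, 31),
         "creation_date": datetime(2011, 10, 2)},
        {"name": "flat_v2_improved", "is_valid": 2, "quality_flags": 0,
         "timestamp_start": datetime(2011, 10, 1), "timestamp_end": datetime(2011, 12, 31),
         "creation_date": datetime(2011, 11, 15)},
    ]
    print(select_calibration(cals, datetime(2011, 11, 20))["name"])   # -> flat_v2_improved
```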
following this approachthe kids team will move from a quick - look version of the first survey products towards publishing a complete , high quality , value - added kids public survey data set .all the time , the team will benefit from astro - wise as its live archive , that captures the accumulation of knowledge about omegacam , vst and the kids survey data over the years .this work is financially supported by the netherlands research school for astronomy ( nova ) and target .target is supported by samenwerkingsverband noord nederland , european fund for regional development , dutch ministry of economic affairs , pieken in de delta , provinces of groningen and drenthe .target operates under the auspices of sensor universe .the authors thank the referee for the constructive comments that helped to improve the paper .abazajian , k. n. , et al . : the seventh data release of the sloan digital sky survey , apjs , 182 , 543 ( 2009 ) begeman , k. g. , belikov , a. n. , boxhoorn , d. & valentijn , e. : wise technologies , experimental astronomy special issue : astro - wise , in preparation ( 2012 ) belikov , a. n. , vriend , w .- j . &sikkema , g. : astro - wise interfaces - scientific information system brought to the user , experimental astronomy special issue : astro - wise , in preparation ( 2012 ) belokurov , v. , et al . : the field of streams : sagittarius and its siblings , apj , 642 , l137 ( 2006 ) belokurov , v. , et al . :cats and dogs , hair and a hero : a quintet of new milky way companions , apj , 654 , 897 ( 2007 ) cole , s. , et al . :the 2df galaxy redshift survey : power - spectrum analysis of the final data set and cosmological implications , mnras , 362 , 505 ( 2005 ) colless et al . : the 2df galaxy redshift survey : spectra and redshifts , mnras , 328 , 1039 ( 2001 ) cooper , m. c. , griffith , r. l. , newman , j. a. , et al .the deep3 galaxy redshift survey : the impact of environment on the size evolution of massive early - type galaxies at intermediate redshift , mnras , 419 , 3018 ( 2012 ) drinkwater , m. j. , jurek , r. j. , blake , c. , et al . :the wigglez dark energy survey : survey design and first data release , mnras , 401 , 1429 ( 2010 ) eisenstein , d. et al . : detection of the baryon acoustic peak in the large - scale correlation function of sdss luminous red galaxies , apj , 633 , 560 ( 2005 ) franx , m. , illingworth , g. & heckman , t. : multicolor surface photometry of 17 ellipticals , aj , 98 , 538 ( 1989 ) harris , h. c. , gates , e. , gyuk , g. , et al . : additional ultracool white dwarfs found in the sloan digital sky survey , apj , 679 , 697 ( 2008 ) jain , b. & zhang , p. : observational tests of modified gravity , phrvd , 78 , 063503 ( 2008 ) kilic , m. , munn , j. a. , williams , k. a. , et al . : visitors from the halo : 11 gyr old white dwarfs in the solar neighborhood , apj , 715 , l21 ( 2010 ) kuijken , k. & rich , r. m. : hubble space telescope wfpc2 proper motions in two bulge fields : kinematics and stellar population of the galactic bulge , aj , 124 , 2054 ( 2002 ) kuijken , k. : omegacam : eso s newest imager , eso messenger , 146 , 8 ( 2011 ) lotz , j. m. , jonsson , p. , cox , t. j. , et al . :the major and minor galaxy merger rates at z 1.5 , apj , 742 , 103 ( 2011 ) man , a. w. s. , toft , s. , zirm , a. w. , wuyts , s. , & van der wel , a. : the pair fraction of massive galaxies at 0 3 , apj , 744 , 85 ( 2012 ) mandelbaum , r. et al . 
: density profiles of galaxy groups and clusters from sdss galaxy - galaxy weak lensing , mnras , 372 , 758 ( 2006 ) mcfarland , j. p. , helmich , e. m. & valentijn , e. a. : the astro - wise approach to quality control for astronomical data , experimental astronomy , 129 , submitted , arxiv:1203.4208 ( 2012 ) mcfarland , j. p. , verdoes kleijn , g. , sikkema , g. , et al . :the astro - wise optical image pipeline : development and implementation , experimental astronomy , 129 , accepted , arxiv:1110.2509 ( 2012 ) mellier , y. : probing the universe with weak lensing , ara&a , 37 , 127 ( 1999 ) mwebaze , j. , boxhoorn , d. & valentijn , e. a. : data lineage in scientific data processing , experimental astronomy special issue : astro - wise , in preparation ( 2012 ) parker , l. c. , hoekstra , h. , hudson , m. j. , van waerbeke , l. & mellier , y. : the masses and shapes of dark matter halos from galaxy - galaxy lensing in the cfht legacy survey , apj , 669 , 21 ( 2007 ) peacock , j. a. , et al .: esa - eso working group on fundamental cosmology , eso / esa working group 3 report ( 2006 ) peng , c. y. , ho , l. , impey , c. d. & rix , h .- w . : detailed structural decomposition of galaxy images , aj , 124 , 266 ( 2002 ) schlegel , d. , white , m. & eisenstein , d. : astro2010 : the astronomy and astrophysics decadal survey , science white papers , no .314 ( 2009 ) skrutskie , m. f. et al . ,cutri , r. m. , stiening , r. , et al . : the two micron all sky survey ( 2mass ) , aj , 131 , 1163 ( 2006 ) spergel , d. n. , et al . :three - year wilkinson microwave anisotropy probe ( wmap ) observations : implications for cosmology , apjs , 170 , 377 ( 2007 ) szomoru , d. , hildebrandt , h. , and hoekstra , h. , private comm .( 2012 ) valentijn , e. a. , et al .: astro - wise : chaining to the universe , aspc , 376 , 491 ( 2007 ) van daalen , m. p. , schaye , j. , booth , c. m. & dalla vecchia , c. : the effects of galaxy formation on the matter power spectrum : a challenge for precision cosmology , mnras , 415 , 3649 ( 2011 ) verdoes kleijn , g. , valentijn , e. , kuijken , k. , et al .: omegacam plus astro - wise : an infoscope , experimental astronomy , 129 , in preparation ( 2012 ) verdoes kleijn , g. , belikov , a. n. , heraudeau , p. , et al . : the data zoo in astro - wise , experimental astronomy special issue : astro - wise , submitted ( 2012 ) zhao , g .- b .& zhang , x. : probing dark energy dynamics from current and future cosmological observations , phrvd , 81 , 043518 ( 2010 )
The Kilo-Degree Survey (KiDS) is a 1500 square degree optical imaging survey with the recently commissioned OmegaCAM wide-field imager on the VLT Survey Telescope (VST). A suite of data products will be delivered to the European Southern Observatory (ESO) and the community by the KiDS survey team. Spread over Europe, the KiDS team uses Astro-WISE to collaborate efficiently and pool hardware resources. In Astro-WISE the team shares, calibrates and archives all survey data. Its data-centric architectural design realizes a dynamic live archive in which new KiDS survey products of improved quality can be shared with the team, and eventually with the full astronomical community, in a flexible and controllable manner.
we consider the 1d scalar conservation law associated to the conserved variable , where is a non - linear flux function .the numerical approximation for the solution of ( [ nonlin ] ) is done by the discretization of the spatial and temporal space into equispaced cells ,\ ; i=0,1,\dots n ] of length respectively .let and denote the cell center of cell and the time level respectively then a conservative numerical approximation for ( [ nonlin ] ) can be defined by where and is the numerical flux function defined at the cell interface at time level .the characteristics speed associated with ( [ nonlin ] ) can be approximated as , where . in generaldue to non - linearity of ( [ nonlin ] ) , beyond a small finite time , even for a smooth initial data the evolution of discontinuities in the solution is inevitable .therefore , it is required to have a conservative approximation of the solution with high accuracy and crisp resolution of such discontinuities with out numerical oscillations .contrary to this need , most classical high order schemes despite of being linearly von - neumann stable give oscillatory approximation for discontinuities even for the trivial case of transport equation i.e. , . such oscillatory approximation can not be considered as admissible solution since it violets the following global maximum principle satisfied by the physically correct solution of ( [ nonlin ] ) i.e. , in order to overcome these undesired numerical instabilities , various notion of non - linear stability are developed in the light of maximum principle ( [ mp ] ) .examples of maximum principle satisfying schemes are monotone schemes , total variation diminishing ( tvd ) schemes .some uniformly high order maximum - principle satisfying and positivity preserving schemes are .there are other non - oscillatory schemes which do not strictly follow maximum principle but practically give excellent numerical results e.g. , essentially non - oscillatory ( eno ) and weighted eno schemes see and references therein .it is known that among global maximum principle satisfying schemes , the monotone and total variation diminishing ( tvd ) schemes experience difficulties at data extrema . on the one hand ,such high order schemes locally degenerate to first order accuracy at non - sonic data extrema and on the other hand , even such a uniformly first order accurate schemes may exhibit induced local oscillations at data extrema . in this workthe focus is on the construction of improved tvd schemes at smooth data extrema .the above global maximum principle ( [ mp ] ) satisfying monotone and tvd schemes have been of great interest mainly due to excellent convergence proofs for entropy solution and respectively .the key idea is , any maximum principle satisfying scheme produce a bounded solution sequence and convergence follows due to compactness of solution sequence space .it can be shown that monotone stable scheme tvd scheme monotonicity preserving scheme ( or local extremum diminishing ( led ) ) scheme .unfortunately , monotone as well tvd schemes experience difficulty at data extrema .the monotone stability relies on monotone data and therefore a monotone scheme preserves the monotonicity of a data set by mapping it to a new monotone data set but fails to preserve the non - monotone solution region i.e. 
, at data extrema .these monotone schemes are criticized mainly due to barrier theorem which state that a linear three point monotone scheme can be at most first order accurate .later , second order non - linear conservative monotone schemes are constructed using limiters but again by compromising on second order accuracy at extrema , e.g. .the tvd stability mimics the maximum principle as it relies on the condition that global extremum values of solution must remain be bounded by global extremum values of initial solution . in , harten gave the concept of total variation diminishing scheme by measuring the variation of the grid values as follows [ def1 ] conservative scheme ( [ s2eq2 ] ) is called total variation diminishing if where . notethat that the definition [ def1 ] is global as it is defined on the whole computational domain and ensures that global maxima or minima of initial solution will not increase or decrease respectively .such conservative tvd schemes are heavily criticized because , even if they are higher order accurate in most solution region , they give up second order of accuracy at non - sonic critical values of the solution ._ we emphasize that these depressing results on degeneracy of accuracy of tvd method are given for * conservative * schemes and in the above * global sense*_. more precisely the global nature of tvd definition ( [ tvddef ] ) allows shift in indices technique in sign and is extensively used in different terms of the infinite sums in the tvd proofs of various schemes and results in the literature including the following one due to harten .[ lem1 ] a conservative scheme in incremental form ( i - form ) is tvd iff . in , proof for degeneracy to first order accuracy at non - sonic critical points of solution i.e. , points s.t . is mainly based on modified equation analysis and a _ conservative _ semi - discrete version of lemma [ lem1 ] . in , using a trade off between second order accuracy and tvd requirement along with shift in indices technique , it is shown that second order accuracy must be given up by a _ conservative _ tvd scheme at non sonic critical values which corresponds to extreme values i.e. , .[u(x_{i},t)-u(x_{i}-\delta\,x , t)]<0 \neq f^{'}(u_{i})$ ] .it is also worthy to note that problem of degenerate accuracy by modern high resolution tvd schemes is also due to their construction procedure .for example , the numerical flux function of flux limiters or slope limiters based tvd schemes is essentially design in such a way that it reduces to first order accuracy at extrema and high gradient region by forcing limiter function to be zero see and references therein .this makes it impossible for a limiter based tvd schemes to achieve higher than first order accuracy at solution extrema as well at steep gradient region .thus every high order tvd ( in global sense ( [ tvddef ] ) ) scheme suffers from clipping error and cause flatten approximation for smooth extrema though they sharply capture discontinuities .apart from compromise in uniform high accuracy , it is notable that global maximum principle satisfying monotone and tvd schemes do not necessarily ensure preservation of non - monotone data set i.e. 
, for a data set with extrema as demonstrated in figure [ f1a](b ) .in particular first order monotone and tvd schemes with * large coefficient of numerical viscosity * can allow the occurrence of induced oscillations at data extrema and formation of new local extremum values as shown in figure [ f1b](a ) .this phenomena of generation of local oscillations at extrema is reported and analyzed for well known monotone and tvd three point _ lax - friedrichs _ scheme in similar to figure [ f1b](a ) . [cols="^,^ " , ] * comments *in this work lmp / tvd bounds are obtained for uniformly second order accurate schemes in non - conservative form .these bound show that higher than second order tvd accuracy can be achieved at extrema and steep gradient region in limiting sense i.e. , when . based on the lmp / tvd bounds hybrid local maximum principle satisfying schemesare constructed and applied on various benchmark test problems .numerical results show improvement in tvd approximation of solution region with extreme points , smooth rarefaction as well contact discontinuity compared to existing higher order tvd method . for a separate work ,the focus is on tvd bounds for multi - step methods and efficient use of a shock detector . as , the algorithm [ algo4 ] recovered the shock at right location for scalar case , it would be interesting to devise a hybrid scheme for system by modifying the wave speed choice in section [ algo4sys ] , with out a shock detector . ritesh kumar dubey .total variation stability and second - order accuracy at extrema . in goddard jerome and zhu jianping , editors ,_ ninth msu - uab conference on differential equations and computational simulations , electron ._ , pages 5363 , sept 2012 .kadalbajoo and ritesh kumar .a high resolution total variation diminishing scheme for hyperbolic conservation law and related problems ., 175(2):1556 1573 , 2006 .ritesh kumar dubey . a hybrid semi - primitive shock capturing scheme for conservation laws .eighth mississippi state - uab conference on differential equations and computational simulations .conference 19 ( 2010 ) , pp .65 - 73 .chi - wang shu .high order eno and weno schemes for computational fluid dynamics . in timothyj . barth and herman deconinck ,editors , _ high - order methods for computational physics _ , volume 9 of _ lecture notes in computational science and engineering _ ,pages 439582 .springer berlin heidelberg , 1999 .
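To make the total-variation behaviour discussed in this paper concrete, the sketch below implements a standard conservative flux-limited (minmod) scheme for linear advection on a periodic domain and records the total variation at every step. It illustrates the classical globally TVD baseline that this work improves upon; it is not an implementation of the hybrid LMP schemes proposed here, and the initial data and CFL number are arbitrary choices.

```python
import numpy as np

def minmod_limiter(r):
    """Minmod flux limiter: phi(r) = max(0, min(1, r)); zero wherever r <= 0."""
    return np.maximum(0.0, np.minimum(1.0, r))

def tvd_advection(u0, nu, n_steps):
    """Flux-limited (minmod) scheme for u_t + a u_x = 0, a > 0, periodic domain.

    nu is the CFL number a*dt/dx (0 < nu <= 1 for the TVD property).
    Returns the final solution and the total variation recorded at every step.
    """
    u = u0.copy()
    tv_history = [np.abs(np.diff(u, append=u[:1])).sum()]
    for _ in range(n_steps):
        du = np.roll(u, -1) - u                        # u_{i+1} - u_i
        du_m = np.roll(du, 1)                          # u_i - u_{i-1}
        with np.errstate(divide="ignore", invalid="ignore"):
            r = np.where(np.abs(du) > 1e-14, du_m / du, 0.0)
        phi = minmod_limiter(r)
        # upwind flux plus limited anti-diffusive correction (classical flux-limited form)
        flux = u + 0.5 * (1.0 - nu) * phi * du         # numerical flux / a at i+1/2
        u = u - nu * (flux - np.roll(flux, 1))
        tv_history.append(np.abs(np.diff(u, append=u[:1])).sum())
    return u, tv_history

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.1) & (x < 0.4), 1.0, 0.0) + np.exp(-300 * (x - 0.7) ** 2)
    u, tv = tvd_advection(u0, nu=0.8, n_steps=250)
    print("TV start : %.4f" % tv[0])
    print("TV end   : %.4f" % tv[-1])
    print("TV never increases:", all(b <= a + 1e-12 for a, b in zip(tv, tv[1:])))
```

Note that the minmod limiter returns zero whenever the gradient ratio is non-positive, i.e. at every local extremum, which is exactly the clipping behaviour at smooth extrema criticized in the text above.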
The main contribution of this work is to construct higher than second-order accurate total variation diminishing (TVD) schemes which preserve high accuracy at non-sonic extrema without induced local oscillations. This is done in the framework of the local maximum principle (LMP) and a non-conservative formulation. Representative uniformly second-order accurate schemes are converted into their non-conservative form using the ratio of consecutive gradients. The resulting schemes are analyzed for their non-linear LMP/TVD stability bounds using the local maximum principle. Based on these bounds, second-order accurate hybrid numerical schemes are constructed using a shock detector. Numerical results are presented to show that such hybrid schemes yield TVD approximations with second- or higher-order convergence rates for smooth solutions with extrema. *Keywords*: hyperbolic conservation laws; smoothness parameter; non-sonic critical point; total variation stability; finite difference schemes. *AMS classification*: 65M12, 35L65, 35L67, 35L50, 65M06, 65M15, 65M22.
next generation sequencing ( ngs ) technologies paved the way for heterogeneous data extraction from different biological samples .genomic datasets are of fundamental importance as dna molecules are responsible to encode genetic instructions that are necessary for all the organisms to function .these datasets are used to recover the exact nucleotide composition of a set of dna molecules , the so called dna sequencing .sequencing a set of dna molecules can be categorized into two extreme cases . in the first case ,the dna molecules are evolutionary diverged enough such that it is safe to assume the problem is equivalent to sequencing a single dna molecule with the total length of all the molecules .an evident example of such scenarios is the presence of numerous chromosomes within a cell which considerably differ from each other in terms of nucleotide content . in , and limits of sequencing of a single dna molecule are obtained . in the second case ,dna molecules are sampled from a population of closely related individuals where the molecules mainly differ by single nucleotide polymorphisms ( snps ) .this paper aims to obtain fundamental limits on the number and length of dna reads where exact reconstruction of all molecules is possible and reliable .emergence of long dna read technologies in the recent years has provided dna reads with lengths up to almost kilo base - pair ( kbp ) , . using long dna reads makes it possible for researchers to go beyond conventional limitations in a variety of computational methods in bioinformatics , such as haplotype phasing and dna read alignment . throughout this paper , sequencing of closely related dna moleculesis called pooled - dna sequencing . in pooled - dna sequencing ,the individual molecules may have been pooled by nature or by design .the former case can be approached by capturing each individual molecule for separate sequencing which is not a cost effective strategy .even if the molecules are available separately , in the latter case , the molecules are pooled together to reduce the overall cost of sequencing .this can be done by reduction in overhead expenses through library preparation of a number of individuals simultaneously , .pooled - dna sequencing has diverse applications in modern biology and medicine . in cancer genomics , exact isolation of healthy tissues from cancerous cellscan not be guaranteed during sample extraction . in sequencing of bacteria populations ,different colonies can not be easily separated , the so called metagenomics .more importantly , in diploid individuals such as humans , somatic cells contain two homologous autosomal chromosomes that need to be sequenced , the so called haplotype phasing .computational approaches to the problem of pooled - dna sequencing has several advantages over experimental approaches , . on the computational side , a number of similar issues have been addressed in earlier works . here , we consider the ones that are more relevant to this paper .haplotype phasing is addressed in many papers , mostly through algorithmic approaches ; see for a review . in , authors have shown that the optimal algorithm for haplotype assembly can be modeled as a dynamic programming problem . in , authors have introduced shapeit2 , a statistical tool which adopts a markov model to define the space of haplotypes which are consistent with given genotypes in a whole chromosome . 
a coverage bound based on the problem from the perspective of decoding convolutional codes is proposed in .particlehap is also an algorithm proposed in to address the assembly problem with joint inference of genotypes generating the most likely haplotypes .recently , another tool for the haplotype phasing problem using semi - definite programming is introduced in , which can be applied to polyploids as well .many haplotype phasing methods proposed so far considered cohorts with nominally unrelated individuals , while in a general framework for phasing of cohorts that contain some levels of relatedness is proposed .the aim of this paper is to show that it is feasible to reconstruct all the individual molecules if certain conditions hold . in particular, we will show that if the number and length of dna reads are above some specific thresholds , which are functions of dna sequence statistics , then reconstruction is possible .this paper is organized as follows . in section [ sec : mainmodel ] , the mathematical model underlying the pooled - dna sequencing problem from dna reads is introduced .we have presented a summary of our results and main contributions in section [ sec : results ] . in section [ sec : noiseless ] , we have derived necessary and sufficient conditions for unique and correct assembly of all molecules in the noiseless regime . in this regard , analytic upper and lower bounds on the error probability of the reliable assembly are obtained and asymptotic behaviors of the derived bounds are provided .section [ sec : noisy ] is devoted to analyze the problem of correct assembly from noisy dna reads .accordingly , we have formulated two upper bounds on the assembly error probability in the noisy regime via exploiting two novel denoising methods , i.e. maximum likelihood denoising and graph - based techniques motivated by community detection algorithms .section [ sec : conclusion ] concludes the paper .in this paper , we are interested in collective sequencing of a population of individuals from a given specie .it is desirable to identify the genome of each individual within the population using a single run of a sequencing machine , through the so - called pooled - dna sequencing . in this section ,we first provide a formal definition of the problem in addition to statistical models employed for analytic formulations .we consider a population of distinct individuals where a reference genome of length is already sequenced and available .genetic variations among individuals and the reference genome are assumed to be originated solely from single nucleotide polymorphisms ( snps ) .snps are a group of randomly spread nucleotides across the genome that differ in independent individuals with a high probability ( ) .it should be noted that the effect of linkage disequilibrium ( ld ) is ignored in this study and snps are assumed to be generated independently with respect to each other and also other individuals of the species .the frequency vector usually consists of only two non - zero elements , i.e. major and minor allele frequencies . for the purpose of library preparation , distinct dna fragmentsare extracted randomly and uniformly from each individual . in the current study, we simply assume dna reads are not tagged and hence no information regarding the membership of fragments is available .all the fragments are pooled together and fed to a sequencing machine to produce single - end reads with length . 
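the generative model just described ( snp locations forming a poisson process along the reference , independent allele draws per individual , and per - individual poisson read arrivals of a fixed length pooled without tags ) can be sketched in a few lines of python . the snippet below is only a toy simulator under these assumptions ; the parameter names and numerical values ( genome length , snp rate , minor allele frequency , read density , read length ) are ours and purely illustrative .

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pool(G=1_000_000, M=3, p_snp=1e-3, maf=0.3, lam=2e-4, L=10_000):
    """Toy generator for tagless pooled-DNA sequencing (illustrative parameters).

    G: genome length, M: number of pooled individuals, p_snp: SNP rate per base,
    maf: minor allele frequency, lam: read arrival rate per base per individual,
    L: read length.  Returns SNP positions, the M binary SNP vectors and the
    pooled, untagged read list (start position, SNP alleles covered)."""
    snp_pos = np.cumsum(rng.exponential(1.0 / p_snp, size=int(2 * G * p_snp)))
    snp_pos = snp_pos[snp_pos < G]                    # poisson process on [0, G)
    genotypes = (rng.random((M, snp_pos.size)) < maf).astype(int)  # biallelic snps

    reads = []
    for j in range(M):                                # reads of individual j
        starts = np.sort(rng.uniform(0.0, G, size=rng.poisson(lam * G)))
        for s in starts:
            covered = np.where((snp_pos >= s) & (snp_pos < s + L))[0]
            # the pool is tagless: the individual index j is *not* stored
            reads.append((s, genotypes[j, covered]))
    reads.sort(key=lambda r: r[0])
    return snp_pos, genotypes, reads

snps, geno, reads = simulate_pool()
print(len(snps), "snps,", len(reads), "pooled reads")
```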
in section [ sec : noiseless ] , we assume sequencing machine does not produce any mismatch or indel errors while producing the reads. however , section [ sec : noisy ] assumes that dna reads are altered with an error probability of .recovering the genomes of all pooled individuals from the reference genome and reads generated by the sequencing machine is the assembly problem .we would like to determine sufficient conditions for and such that for a given set of parameters , , and dna sequence statistics the assembly of genomes belonging to each of individuals can be carried out uniquely and correctly . in this paper , we assume that all reads are mapped uniquely and correctly to the reference genome . although this assumption is far from reality for short reads , it is quite valid for long and very long reads which are the main focus of this paper .we exploit a biologically plausible probabilistic model to mathematically formulate our problem and assumptions .the latent independent snp set of the individual is denoted as for all , where denotes the total number of snps .also , for all and .moreover , we suppose that the loci of snps are known and randomly spread across the reference genome according to a _process with rate .consequently , we have .the reason behind the _ poisson _ assumption lies in the fact that snp - causing mutations have happened uniformly and independently across the whole genome during the course of evolution .snp values are assumed to be drawn from multinomial distributions with frequency vector , such that indicates the frequency of base for the snp . for the sake of simplicity , different snps in a single individual , and also all snps across different individualsare assumed to be statistically independent which indicates that linkage disequilibrium and mutual ancestral properties are not taken into account .the set of pooled dna fragments are denoted by , where represents the sequenced dna read . for all , the read is assumed to be correctly aligned to the reference genome . in this regard , dna read arrivals for each individualwill be known and randomly spread across the genome according to a _ poisson _process with rate .arrivals for different individuals are assumed to be independent from each other . regarding the dna reads, we will consider two regimes in this paper , i.e. noiseless and noisy regimes . in the former case, it has been assumed that dna reads do not contain any sequencing error ( mismatch or indel ) . in the noisy case, it has been assumed that sequencing system alters the nucleotide content of dna reads . for simplicity ,we assume that sequencing error does not produce any new alleles and simply transforms a minor allele into a major one and vice versa , both with probability of , . in cases where an error event produces a new allele in a read, the allele will be randomly mapped to either minor or major alleles .the ultimate goal of this study is to investigate the conditions under which inference of the hidden snp sets for all individuals can be carried out correctly and without ambiguity .also , we present a number of procedures for multiple genome assembly in such cases . moreover , we have shown that there are conditions under which unique genome assembly is almost impossible regardless of the algorithms being employed .0.49 0.49 individuals .sequencing depth is fixed regardless of read length and equals . 
as can be seen ,a sharp phase transition in assembly error rate is evident as the read length is being increased.,scaledwidth=80.0% ] in the noiseless case , our main contribution includes necessary and sufficient conditions for unique and correct genome assembly of all individuals , in addition to respective lower and upper bounds on the assembly error probability .let us denote as the error event in genome assembly in the noiseless regime .then , in section [ sec : noiseless ] it is shown that when dna read density is above some threshold and dna read length is sufficiently large , the following inequalities hold : where and is an attribute of genome statistics .exact lower and upper bounds for the non - asymptotic scenario can be found in section [ sec : noiseless ] as well . as genome length is increased while the number of individuals is fixed , the lower and upper bounds become tight and almost coincide with each other .however , for a practical configuration of parameters such as the ones given in fig .[ fig : assemblyregion ] , the theoretical gap between lower and upper bounds becomes significant . in fig .[ fig : assemblyregion ] we have demonstrated lower and upper bounds of the assembly region in - plain , where the bounds correspond to assembly error rate of .the number of individuals is assumed to be in fig .[ fig : a : assemblyregion ] , and in fig .[ fig : b : assemblyregion ] , respectively . in this simulation , we have assumed an snp rate of and an effective minor allele frequency of , i.e. , where this numerical configuration corresponds to human dna statistics .it is clear that when , dna read length must be above some critical length in order to ensure that the assembly error rate remains sufficiently small , i.e. .evidently , for many other species of interest such as bacteria populations , the minimum required read length is significantly smaller due to their higher snp rate and smaller genome length .[ fig : phasetransition ] shows the phase transition of the lower and upper bounds of the assembly error rate when sequencing depth is fixed and read length is increased .the upper and lower bounds correspond to the case with individuals , and the sequencing depth is chosen as which is a common value in real - world applications . to summarize , the key observation is existence of a rather sharp phase transition as is increased in the case of both upper and lower bounds . in the noisy regime, we have proposed a set of sufficient conditions for unique and reliable assembly of all the genomes .in addition , two denoising algorithms are proposed for block - wise inference of true genomic contents from noisy reads , denoted by maximum - likelihood ( ml ) and spectral denoising . in this regard , the following upper bound on the error probability of reliable genome assembly via ml denoisingis derived for the case of individuals : & ~\text{subject to}~\quad0<d < d\leql , \label{eq : sornoisy}\end{aligned}\ ] ] where denotes the error event in assembly of all genomes in the noisy case .exact non - asymptotic upper bound in the noisy case , in addition to an approximate analytical solution of the minimization problem in and also the generalization of for any can be found in section [ sec : noisy ] .moreover , a similar upper bound and an approximate mathematical analysis is presented for the case of spectral denoising .our proposed spectral method has a similar performance to that of ml method when is small , however , it is much more computationally efficient . 
in fig .[ fig : mlasemblyregion ] , upper bounds on the assembly region in - plain are shown for the assembly error rate of , when ml denoising is employed .results are shown for six different sequencing error rates .other configurations such as , and are identical to those used for fig .[ fig : assemblyregion ] which resemble the human genetic settings . as can be seen , regardless of the sequencing error rate , by increasing upper bounds converge to a specified dna read length which is close to that of the noiseless case ( sufficient condition for the greedy algorithm ) .[ fig : mlnoiseperf ] demonstrates the upper bounds on assembly error probability as a function of sequencing error rate for three different dna read densities .again , ml technique is employed for denoising of reads .dna read length is fixed to for all values of .it can be seen that upper bounds undergo a drastic phase transition as sequencing error is increased .however , larger values of correspond to shifting the phase transition into larger sequencing error rates . it should be noted that for both fig .[ fig : mlasemblyregion ] and fig .[ fig : mlnoiseperf ] the number of individuals is assumed to be .[ fig : sdassemblyregion ] shows the upper bounds of assembly region in - plain for different sequencing error rates , when spectral denoising is employed .upper bounds indicate maximum assembly error rate of .other parameters such as , and are assumed to be the same as those presumed in fig .[ fig : assemblyregion ] for the human genetic settings .upper bounds corresponding to are very close to each other and also to the upper bound associated with the noiseless regime ( greedy algorithm ) . as the sequencing erroris increased , assembly region is shifted toward larger values of and . moreover , when the upper bound for the maximum assembly error rate of can not be satisfied by .[ fig : sdnoiseperf ] demonstrates the upper bounds on assembly error probability via spectral denoising as a function of sequencing error rate for three different dna read lengths .dna read density is assumed to be fixed and equal to in all graphs . as can be seen ,similar to ml denoising , drastic phase transitions can be observed as one increases the sequencing error which can be shifted toward larger values of provided that is chosen sufficiently large .in this section , pooled - dna sequencing of individuals for the simple case of tagless and noiseless dna reads is studied . for any assembling strategy , one can define the error event as the event of failure in reconstructing the genomes of all individuals _ uniquely _ and _ correctly_. this error event is denoted by .we aim to analytically compute lower and upper bounds for based on the statistical models given in section [ sec : model ] . to this end, we first obtain necessary and sufficient conditions of unique and correct assembly of all individuals . the conditions , then, are used to derive tight bounds on . , and in addition to all inter - snp segments are bridged .consequently , one can uniquely and correctly assemble the two genomes . 
] in theorem [ thm : maintheorem ] , we show that two conditions are enough for necessity and sufficiency of correct assembly , namely _ coverage _ and _ bridging _ conditions .the coverage condition is met whenever every single snp in each individual is covered by at least one dna read from that individual .the bridging condition is met whenever every single _ identical region _ between each pair of individuals is bridged by at least one dna read from either of the individuals .an _ identical region _ between two particular individuals refers to any segment of genome in which they possess completely identical genomic content .if a dna read starts before an identical region and ends after it , that region is said to be bridged . fig .[ fig : bridge ] demonstrates the concept of bridging and coverage in a pooled - dna sequencing of two individuals .blue and red dna reads are associated with the first and second individuals , respectively .bold reads have bridged the three long identical regions in addition to the identical regions corresponding to inter - snp segments .the following theorem mathematically formulates the necessary and sufficient conditions for unique and correct genome assembly : [ thm : maintheorem ] in a pooled - dna sequencing scenario with tagless and noiseless dna reads , the following conditions are necessary and sufficient for unique and correct genome assembly : * snp coverage condition ( sc ) : every snp must be covered by at least one dna read from each individual , * bridging condition ( b ) : every identical region between any two individuals must be bridged by at least one dna read from either of the individuals .we will prove that if any of the conditions mentioned in theorem [ thm : maintheorem ] does not hold , then unique and correct assembly becomes impossible .first , assume at least one snp is not covered by any dna read for at least one individual .then , due to lack of information regarding the value of that allele for at least one individual and the final assembly is not unique .second , assume an identical region between two particular individuals is not bridged . even if it is possible to correctly assemble both sides of the unbridged identical region , absence of any bridging read inhibits the flow of membership information from one side of the region to the other . as a result ,at least two completely distinct and yet legitimate genomes can be assembled which contradicts the required uniqueness of assembly .it is desirable to design an algorithm that assembles the genomes efficiently and correctly .we will prove that a simple greedy algorithm can assemble the genomes , if the conditions provided in theorem [ thm : maintheorem ] are met . to this end, we first sort the list of reads according to their starting positions .this ordering is possible because it is assumed that reads are mapped correctly and uniquely to the reference genome .let us denote the sorted read set by .the proposed algorithm is detailed in algorithm [ alg : greedy ] . *inputs * * output * * initialization * * greedy merging * we next show that the greedy algorithm assembles all the genomes correctly given the conditions in theorem [ thm : maintheorem ] are met .the proof is by induction .assume that the algorithm is correct before merging read , i.e. , the partially constructed genomes , referred to as contigs , correspond to the true genomes and all reads prior to are merged correctly to their corresponding contigs . 
without loss of generality , we assume is a read from the first genome .the algorithm fails if there exists with with overlap size greater than or equal to that of .we will show that this event does not happen if the conditions in theorem [ thm : maintheorem ] are met .let denote the overlap size between and for all .we need to show that for all .assume .this case corresponds to the existence of an identical region between first and individuals . from the third condition of the theorem , this identical regionis bridged by at least one read from either individuals .the bridging read starts earlier than and based on our assumption is correctly merged to either or .if the read comes from the first individual , then the overlap size between and is always less that that of and which contradicts the assumption . on the other hand ,if the read comes from the individual , can not be in as it is not consistent with .this completes the proof .an implication of theorem [ thm : maintheorem ] is that the error event is the union of two events , denoted by and , which are due to snp coverage and bridging conditions , respectively .consequently , one can obtain lower and upper bounds on as in the following subsections , we first analyze the probability of and in terms of and .in particular , we obtain the exponents of decay of each of these probabilities when one increases and .these exponents are then used to obtain asymptotic bounds on . the first condition in theorem [ thm : maintheorem ]states that each snp should be contained in at least one read from each of individuals .equivalently , for each individual , the starting point of at least one read should fall within the distance of each snp . as dna fragments are randomly spread along genome with the density of , the distance of two consecutive fragments has an exponential distribution of the form .let denote the event that there exists at least on snp in the individual which is not covered by any read .for any , the probability of occurring conditioned on the number of reads can be exactly written as : where has a _ poisson _ distribution with average .therefore , this implies that if one wishes the snp coverage condition to hold for a single individual with probability ( for and ) , then read length should be chosen close to : instead of obtaining exact formulation for the case of individuals , we derive tight upper and lower bounds on .the upper bound can be simply attained via the union bound as follows : for the lower bound , we have derived two asymptotically tight formulations , where one formulation is appropriate for large and the other is tight for large .when the number of individuals , , is not very large , one can bound from below simply by considering the fact that .therefore , we have : this formulation is not tight when the number of individuals is very large. therefore , we offer another lower bound which linearly grows with .let us divide the whole genome into non - overlapping segments of length .if one individual does not have a read starting within the segment , then there exists an interval with length which is not covered by all individuals .moreover , if there exists one snp which arrives within this interval , then the coverage condition does not hold . in this way, can be lower bounded as : where denotes the total number of intervals not covered by all individuals . 
in addition, represents the probability of arriving dna reads belonging to less than individuals in a single segment of length , and can be formulated as : it can be shown that the optimal value of which maximizes the lower bound does not have a closed form analytic formulation . however , asymptotic analysis of for the case of results in a simpler mathematical formulation whose maximal point is analytically tractable and provides a good approximation for the optimal value of , denoted by . for large values of ,the lower bound can be simplified as : and it is easy to show that the maximizer of the above inequality can be closely approximated by substitution of this approximate maximizer in results in the following lower and upper bounds for in the asymptotic regime , where : the results for the non - asymptotic case can be easily computed by substituting into , since any value of implies a lower bound . from the asymptotic analysis it is evident that both lower and upper bounds have the same exponent of decay with respect to .moreover , the duality in the formulation of lower bound yields that for small values of , the error probability of snp coverage in a single individual is an acceptable approximation for . on the other hand , when is large , the alternate lower bound is more favorable .more precisely , when , the transition between the two lower bounds occur when .in this subsection , we first obtain lower and upper bounds on for the case of two individuals ( ) .interestingly , it is shown that exponents of decay of both lower and upper bounds are the same with respect to read length and read density .we also provide an exact analysis of the error probability in appendix [ app : exactbridging ] which can be used for numerical analyses . generalization to more than two individuals is carried out at the end of this subsection .let us denote a snp which has different alleles in the two individuals as a discriminating snp .identical regions between the two genomes correspond to segments between any two consecutive discriminating snps .the bridging condition for the two individual case implies that all of these identical regions must be bridged by at least one read from one of the two individuals . in the following ,we first show that under the assumptions of section [ sec : model ] the arrival of discriminating snps corresponds to a_ poisson _ process .we subsequently derive lower and upper bounds on based on this property . per definition ,the difference between genomes of two particular individuals is solely due to discriminating snps .the snp , where , has identical alleles in the two individuals with a probability of : it has been further assumed that each allele frequency vector is sampled independently from a fixed and known multi - dimensional distribution . for a given segment of length , if is the total number of snps falling within this segment , then it has a _ poisson _ distribution of the form : similarly , if , , is the total number of discriminating snps in this segment , then its distribution can be given by : where represents the set of all possible subsets of with cardinality .since the sequence of is i.i.d . and , we will have : where .thus , the set of discriminating snps can also be modeled by a _process with rate .we first focus on the error probability of bridging a single identical region .this event is denoted .one can show that : \left(1+lp\left(1-\eta\right)\right)e^{-p\left(1-\eta\right)l } & , ~ p\left(\frac{1-\eta}{2}\right)= \lambda . 
\end{array } \right.\end{aligned}\ ] ] for sufficiently large values of , the parameter appears in our lower and upper bounds explicitly as : we will prove these inequalities in the sequel , where we rigorously derive lower and upper bounds for all ranges of parameters . [ [ upper - bound ] ] upper bound + + + + + + + + + + + to obtain an upper bound , we simply employ the union bound , i.e. with indicating the number of discriminating snps . the obtained upper bound can be formulated as follows : [ [ lower - bound ] ] lower bound + + + + + + + + + + + to obtain a lower bound on , we obtain an upper bound on .let represent the event that there are less than two discriminating snps in the whole genome .clearly , bridging condition is satisfied under . moreover , let represent the event that for any fragment of length between the first and the last discriminating snps , there exists one read starting within the fragment and covering at least one discriminating snp . if does not happen , then bridging condition fails as the flow of information does not go through discriminating snps and the assembly stops at the fragment violating the condition .therefore , it can be easily verified that to obtain an upper bound on , we note that if the distance between the first and the last discriminating snps , denoted by , is partitioned into non - overlapping segments of length , then it is necessary that for all segments , at least one read starts and then covers at least one discriminating snp .clearly , focusing on such non - overlapping segments provide an upper bound on .moreover , since the segments are non - overlapping , the corresponding events are independent .let denote the event corresponding to the segment . then , we further obtain an upper bound on using the following arguments .if there are total arrivals within this interval and of them belong to discriminating snps and the rest belong to dna reads from one of the two individuals , then the event does not happen if all snps arrive before all dna reads .first , note that the probability of observing discriminating snps and reads within the segment can be obtained as averaging over all and yields a simple calculation reveals that substituting into yields the distribution of is needed to compute the expectation .it is easy to show that the final formulation for upper bound can be written as follows : in the asymptotic regime where and ( which is the case in all practical situations ) , such lower bound can be approximated by the following formulation : clearly , both lower and upper bounds for bridging error probability have the same exponent of decay with respect to .this property indicates a sharp phase transition for as is increased .the upper and lower bounds obtained in the two individual case can be readily generalized to individuals with a tight asymptotic performance .this result is presented in the following theorem .[ thm : mtheorem ] the probability of error event in bridging condition for individuals , , can be bounded both from above and below as : where is defined as follows : \left(1+p\left(1-\eta\right)l\right)e^{-p\left(1-\eta\right)l } & , ~ \lambda = p\left(\frac{1-\eta}{m}\right ) \end{array } \right .\label{eq : deltadef}\ ] ] and is defined as in .the upper bound has been directly derived from union bound in which the factor of corresponds to all pairwise comparisons between every two individuals .each of such comparisons has an error probability of at most which justifies one of the inequalities . 
for the lower bound , we follow similar arguments to those in the case of two individuals .let denote the distance between the first and last snps in genome .we divide this region into non - overlapping segments of length . in order to have a unique assembly ,it is necessary that at least dna reads from distinct individuals arrive in each segment and cover at least one discriminating snp .if this condition is violated , for at least two individuals , the two sides of the segment are informatically disconnected and therefore unique assembly is impossible . the probability of error for such event in a single segment can be derived as follows .assume that there are discriminating snps and dna reads from the individual in a particular segment , where .obviously , and each are random variables with corresponding _ poisson _ distributions .also , let denote the number of all possible permutations of snps and dna reads arrivals : an error happens whenever for at least two individuals all dna reads arrive after the last discriminating snp .we first consider the probability of this event for two particular individuals , namely the and the ones .the number of all possible permutations that correspond to such scenario can be calculated by first ordering the discriminating snps and then ordering dna reads associated with the and individuals .the rest of dna reads which belong to other individuals can then be arbitrarily distributed among these arrivals .consequently , the total number of _ bad _ permutations is , which implies that the probability of occurring an error with respect to only the and individuals is .one can readily show that when there are individuals with indices , this probability generalizes to . by using the inclusion - exclusion principle, it can be shown that the probability of occurring an error event in a single segment can be obtained via the following formulation : which can be simplified into the formulation given in . again , pursuing the same path we have followed in the case of two individuals one can show that : which completes the proof .next , we address asymptotic analysis of the attained bounds . forthe sake of simplicity , let us assume for all .the special cases where the equality holds for some lead to very similar analyses as the following arguments . based on this assumption, it can be shown that satisfies the following inequalities : consequently , when and , bridging error probability can be simply bounded as : and if , then the lower bound can be simply approximated by it is evident that the increase in both upper and lower bounds of are polynomial with respect to and , while the decrease in error remains exponential with respect to .the exponent of decay for the upper bound is .this result implies that if , then any further increase in sequencing depth does not lead to a significantly better asymptotic behavior . in this subsection, we combine lower and upper bounds associated with snp coverage and bridging conditions to obtain asymptotically tight bounds on the assembly error probability . based on ,when and a global upper bound on the assembly error probability can be obtained as follows : the lower bound can also be written as : if one aims at increasing the sequencing depth such that , then the above asymptotic lower and upper bounds can be further simplified resulting in the following bounds : it should be reminded that exact non - asymptotic lower and upper bounds can be found in theorem [ thm : mtheorem ] , and . 
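the sharp phase transition predicted by these bounds can also be observed empirically by testing the two conditions of theorem [ thm : maintheorem ] directly on simulated instances . the sketch below ( two individuals , a deliberately small genome , and illustrative rates chosen by us ) estimates , as a function of read length , the probability that either the snp coverage condition or the bridging condition fails ; bridging is checked by requiring a read that covers both discriminating snps flanking each identical region .

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(G, p_snp, maf, lam, L):
    # one simulated instance with two individuals; returns True on failure
    snp = np.sort(rng.uniform(0.0, G, size=rng.poisson(p_snp * G)))
    g = (rng.random((2, snp.size)) < maf).astype(int)
    starts = [np.sort(rng.uniform(-L, G, size=rng.poisson(lam * (G + L))))
              for _ in range(2)]                      # per-individual read starts
    # snp coverage condition: every snp covered by a read of each individual
    for j in range(2):
        if snp.size and not all(((starts[j] <= s) & (s < starts[j] + L)).any()
                                for s in snp):
            return True
    # bridging condition: every identical region between consecutive
    # discriminating snps is spanned by one read covering both flanks
    disc = snp[g[0] != g[1]]
    allstarts = np.concatenate(starts)
    for a, b in zip(disc[:-1], disc[1:]):
        if not ((allstarts <= a) & (allstarts + L >= b)).any():
            return True
    return False

G, p_snp, maf, lam, trials = 100_000, 1e-3, 0.3, 5e-4, 200
for L in (1_000, 2_000, 4_000, 8_000, 16_000, 32_000):
    err = np.mean([trial(G, p_snp, maf, lam, L) for _ in range(trials)])
    print(f"L = {L:6d}   estimated failure rate = {err:.2f}")
```

the estimated failure rate drops from essentially one to essentially zero over roughly a factor of a few in read length , in line with the phase transition discussed above .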
in this section ,we have derived asymptotically tight lower and upper bounds on when dna reads are tagless and noiseless .however , all practical sequencing machines produce dna reads that are contaminated with sequencing noise which alters the sequenced nucleotides by a particular error rate .next section deals with theoretical bounds on the assembly error rate in noisy scenarios .in this section , we consider the case of noisy dna reads where biallelic snp values are randomly altered with an error probability of .moreover , we have assumed binary snp values , although generalization to more than two alleles is straightforward .surprisingly , we have shown that for any and for sufficiently large number of reads , unique and correct genome assembly from noisy reads is possible .note that corresponds to the case where reads do not carry any information related to snps and assembly is impossible .the following theorem provides sufficient conditions for genome assembly in the noisy case .the conditions are so stringent that a simple greedy algorithm such as the one presented in the noiseless case can reconstruct all the genomes correctly and uniquely .[ thm : mainnoisytheorem ] assume genome is divided into a number of overlapping segments of length , where each two consecutive segments have an overlapping region of length ( ) .also assume the following conditions hold : * discrimination condition ( disc ) : in each overlapping region between two consecutive segments , every two individuals are distinguishable based on their true genomic content , * denoising condition ( den ) : in each segment , one can phase all the individuals based on reads covering the segment .then , correct and unique genome assembly is possible for all individuals .the proof is straightforward .if the above - mentioned conditions hold , then the conditions in theorem [ thm : maintheorem ] also hold and genome assembly becomes possible by the greedy algorithm .let us denote the error event in correct and unique assembly of all genomes in the noisy regime by .from the fact that the genome can be divided into overlapping segments of length , one can write where is the event that in an interval of length one of the sufficient conditions does not hold .let us denote and as the error events corresponding to discrimination and denoising conditions in a single segment , respectively .then , from we have : where ( a ) is obtained by conditioning on which is the number of snps within a given segment of length , and ( b ) is obtained by upper bounding by .the event is independent of reads .therefore , one can simply attain an upper bound on using the fact that the number of snps in an overlap region of length has a _ poisson _ distribution .therefore , ^n}{n ! } \left ( 1- \left(1-\eta^n\right)^{\binom{m}{2 } } \right ) \nonumber \\ & = \sum_{m=1}^{\binom{m}{2 } } \binom{\binom{m}{2}}{m}\left(-1\right)^{m-1 } e^{-p\left(d - d\right)\left(1-\eta^m\right)},\end{aligned}\ ] ] where ( a ) follows from that fact that is an upper bound on the probability that every two individuals are distinguishable based on a set of observed snps .the event depends on the reads as well as the algorithm used for denoising .we propose two algorithms in this regard . the first one , which leads to the optimal solution , is based on maximum likelihood and will be presented in section [ sec : perfect_denoising ] .this algorithm yields the best performance at the expense of a prohibitive computational complexity . 
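before turning to the two denoising algorithms , note that the closed form above for the discrimination error in a single overlap region is a finite alternating sum and is cheap to evaluate . the sketch below computes it for a few overlap lengths and cross - checks it against the underlying poisson average from which it was derived ; the snp rate and the per - snp collision probability ( denoted eta here , e.g. eta = q^2 + ( 1 - q )^2 for a biallelic snp with minor allele frequency q under the independence assumptions of the model ) are illustrative values chosen by us .

```python
import math

def disc_error_bound(mu, eta, M):
    """Closed-form upper bound on the discrimination failure probability in one
    overlap region: sum_{m=1}^{C(M,2)} C(C(M,2), m) (-1)^(m-1) exp(-mu (1 - eta^m)),
    where mu is the expected number of SNPs in the overlap."""
    K = math.comb(M, 2)
    return sum(math.comb(K, m) * (-1) ** (m - 1) * math.exp(-mu * (1.0 - eta ** m))
               for m in range(1, K + 1))

def disc_error_poisson(mu, eta, M, nmax=200):
    # the same quantity written as a poisson average over the number n of snps
    K = math.comb(M, 2)
    pmf, total = math.exp(-mu), 0.0
    for n in range(nmax):
        total += pmf * (1.0 - (1.0 - eta ** n) ** K)
        pmf *= mu / (n + 1)
    return total

q = 0.3                           # illustrative minor allele frequency
eta = q * q + (1 - q) * (1 - q)   # per-snp probability that two individuals agree
p_snp = 1e-3                      # illustrative snp rate per base
for overlap in (1_000, 3_000, 10_000, 30_000):
    mu = p_snp * overlap          # expected number of snps in the overlap region
    a, b = disc_error_bound(mu, eta, 4), disc_error_poisson(mu, eta, 4)
    print(f"overlap = {overlap:6d}  closed form = {a:.3e}  poisson sum = {b:.3e}")
```

the two columns agree to numerical precision , and the bound decays quickly once the expected number of snps per overlap region becomes moderately large .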
in section [ sec : spectral_denoising ] , we present an algorithm which is motivated by random graph theory as an alternative approach demonstrating high performance during the experiments while retaining similar interesting asymptotic properties . evidently , decision making based on maximum likelihood ( ml ) is the optimal denoising algorithm which searches among all possible choices of the hidden snp sets and chooses the one with the maximum probability of observation conditioned on the snp sets .theorem [ thm : noisytheorem ] guarantees that given the genomic contents of each two individuals are distinguishable in each segment , then denoising error associated with ml goes to zero when the sequencing depth goes to infinity .moreover , it has been proved that decoding error exponentially decays as the number of observations per segment is increased .[ thm : noisytheorem ] assume distinct and hidden snp sets each having snps . then , there exist non - negative real functions such that also , we have for all , if and only if . the problem model for theorem [ thm : noisytheorem ]is depicted in fig .[ fig : noisychannels ] .let us denote as the set of all possible sequences of a binary snp segment with length .also , let represent the set of all possible subsets of that have a cardinality of . in this regard , the true set of snp sequences underlying the segmentwill be denoted by .it is straightforward that for any , the probability of observing in the observation pool can be written as : where denotes the _ hamming _ distance between and the sequence in .we attempt to find through a series of pairwise hypothesis testings .in other words , maximum likelihood decoding implies the following maximization problem : where denotes the set of observations .an error in ml decoding occurs whenever for at least one and , we have . as a result, one can use the union bound to upper bound the decoding error probability as follows . forall , assume a subset of such that every member of has a _ hamming _ distance of from .in other words , through each alternation of out of snps in we obtain a member of .evidently , one can show that . in this regard , the error probability in decoding a segment consisting of snps with observations , , can be bounded as also , we have the inequality in is based on the deviation of a sum of i.i.d .random variables around its mean . by using chernoff bound, one can attain the following inequality which upper bounds the denoising error probability : \right ) \nonumber \\ & \triangleq \sum_{i=1}^{m\kappa } \binom{m\kappa}{i } e^{-n\mathcal{d}_i\left(\epsilon,\kappa\right)}. 
\label{eq : finalmlbound}\end{aligned}\ ] ] it is easy to show that the minimal ( worst ) exponent of error corresponds to , since it considers all two distinct members of that have the minimum _ hamming _ distance .furthermore , we need to show that the exponent of error is positive when .lemma [ lemma : positiveexponent ] and [ lemma : uniqueness ] guarantee the positivity of error exponent under the above - mentioned conditions .assume and are two arbitrary distributions over a finite -filed .then , we have : = \log\left[\left(\sum_{i}\sqrt{p_i q_i}\right)^{-1}\right ] \ge 0 .\label{eq : lemma1eq}\ ] ] in addition , the equality holds only for .[ lemma : positiveexponent ] the proof of lemma [ lemma : positiveexponent ] is given in appendix [ app : dmin ] .according to lemma [ lemma : positiveexponent ] , the error exponents in are strictly positive given that each two different members of result into different statistical distributions over . mathematically speaking :assume and , then .[ lemma : uniqueness ] the proof of lemma [ lemma : uniqueness ] is discussed in appendix [ app : dmin ] .we have also derived analytical formulations for the main exponent , i.e. for special cases of and .surprisingly , it can be shown that the main error exponent does not depend on and is only a function of and .the main error exponent for the special cases of and has the following analytical formulations : [ lemma : sampledmin ] again , the interested reader can find the proof of lemma [ lemma : sampledmin ] in appendix [ app : dmin ] .the procedure used for obtaining analytic formulation of can be used for cases where .moreover , it is easy to show that for all , we have : and is a decreasing function of . according to lemma [ lemma : positiveexponent ] , when then .inequality should be marginalized with respect to in order to attain a formulation for . indicates the number of dna reads that cover a segment of length , which has a _ poisson _ distribution with parameter : this completes the proof .although the algorithmic approach described in theorem [ thm : noisytheorem ] reaches the optimal result with respect to the sufficient conditions described in theorem [ thm : mainnoisytheorem ] , however , it is np - complete with respect to as it requires an exhaustive search among all members of .this amount of complexity is not affordable in real - world applications .motivated by community recovery techniques from random graphs , we propose an alternative denoising framework which requires much less computational burden for moderate values of and and leads to a very good performance in practice .the whole problem of snp block denoising can be viewed from a community detection perspective . in thisregard , each noisy observation of a snp block denotes a node in a weightless and undirected random graph , where represents the vertices of the graph .the set of edges is obtained from the adjacency matrix of denoted by which is constructed as follows .let denote a matrix corresponding to noisy observations of snps .without loss of generality , we have assumed , where denotes the presence of major allele .let us define the sample cross - correlation matrix as then , we define : 0 & { \text { o.w . 
} } \end{array}\right .i , j=1,2,\ldots , n , \label{eq : graphconstructformula}\end{gathered}\ ] ] where is a predefined threshold .let us picture the snp blocks belonging to an individual as a community in the graph .therefore , the ultimate goal is to partition into communities , , where represents the set of nodes ( equivalently snp blocks ) that belong to the individual .equivalently , we wish to find which maps each observation to its corresponding individual . when the correct is provided , one can denoise the observations via snp - wise majority voting among those nodes that belong to particular individuals .spectral techniques in random graph theory have shown promising performances in community detection .in particular , we will make use of the algorithm presented by which works very well in practice . in order to analyze the algorithm, we assume that is generated based on erds - rnyi graphs with inter ( resp .intra ) community connection probability of ( resp . ) . in this case, has shown that the hidden mapping function can be completely recovered with a probability less than where , is a constant depending on the algorithm employed for community detection . has also shown that for large the lower bound in is achievable via a spectral technique based on eigen - decomposition of graph adjacency matrix .we obtain an upper bound on , and a lower bound on as follows .let denote the sequence difference matrix , where represents the _ hamming _ distance between the and the individuals .we define as the minimal non - diagonal entry in .it is easy to show that from , we choose the threshold in edge connectivity as . since we are conditioned on the discrimination among individuals , .the worst case analysis can be carried out by simply setting . on the other hand ,the average case analysis is carried out by setting .the average case analysis is valid for scenarios where is large .let us denote and as the events in which two nodes that belong to the same community are not connected , and two nodes that belong to different communities become connected , respectively . since entries of cross - correlation matrix are sum of independent random variables , and can be bounded by chernoff - hoeffding theorem : it is easy to show that if , then the probability of reconstruction in can be bounded from above by in order to correctly cluster the read set in the asymptotic regime , i.e. , it suffices that the following condition holds : which simplifies to the following inequality : {\frac{\kappa\log 2}{\nu^2_{\min } } } \xrightarrow{\kappa\gg 1 } \frac{1}{2}\left(1-\sqrt[4]{\frac{\log 2}{\kappa\left(1-\eta\right)^2}}\right ) .\label{eq : spectralerrorineq}\ ] ] as a result , the maximum affordable error rate in ( [ eq : spectralerrorineq ] ) converges to as long as is chosen sufficiently large .given a procedure to perfectly cluster the read set , denoising can be performed by taking snp - wise majority vote among those observations that belong to the same individual . in this way, one can arbitrarily reduce the error rates if a sufficiently large number of independent noisy reads is provided . 
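for small instances the pipeline just described ( thresholded correlation graph , spectral partitioning , snp - wise majority vote ) fits in a short script . the sketch below is only an illustration of the idea for two individuals : it splits the nodes by the sign of the leading eigenvector of the centred adjacency matrix rather than by the exact community detection algorithm cited above , and the block size , read count , error rate , number of discriminating snps and correlation threshold are all assumptions made for the example .

```python
import numpy as np

rng = np.random.default_rng(2)

# toy data: two individuals, kappa snps per block, n noisy reads of the block
kappa, n_reads, eps, n_flip, thr = 12, 60, 0.1, 4, 0.4
truth = np.tile(rng.integers(0, 2, size=kappa), (2, 1))
truth[1, rng.choice(kappa, size=n_flip, replace=False)] ^= 1   # force 4 discriminating snps
members = rng.integers(0, 2, size=n_reads)                     # hidden membership
reads = truth[members] ^ (rng.random((n_reads, kappa)) < eps)  # flip each snp w.p. eps

# build the +/-1 observation matrix and the thresholded adjacency matrix
x = 2 * reads - 1
corr = x @ x.T / kappa                  # sample cross-correlations between reads
np.fill_diagonal(corr, 0.0)
adj = (corr > thr).astype(float)        # edge if correlation above the threshold

# spectral split: sign of the leading eigenvector of the centred adjacency
centered = adj - adj.mean()
_, vecs = np.linalg.eigh(centered)
cluster = (vecs[:, -1] > 0).astype(int)

# snp-wise majority vote inside each recovered community
denoised = np.array([(reads[cluster == c].mean(axis=0) > 0.5).astype(int)
                     for c in (0, 1)])
ok = {tuple(r) for r in denoised} == {tuple(r) for r in truth}
print("recovered both snp blocks exactly:", ok)
```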
for finite , probability of error event due to majority voting , denoted by , can be upper bounded via chernoff bound as follows : since the error event in spectral denoising is the union of error in community detection and majority voting , and by assuming that the lower bound in is achievable , one can simply write : where is defined as follows : so far we have derived upper bounds on the error probabilities associated to discrimination and denoising conditions , i.e. and , respectively .the derived bounds hold for any . in this part, we will integrate the obtained results in order to propose appropriate upper bounds on the assembly error rate .the achieved error upper bounds in addition to assembly regions are depicted for both maximum likelihood and spectral denoising algorithms . also , we have analyzed the asymptotic behavior of our bounds under certain circumstances .one can combine the attained results for the maximum likelihood ( ml ) denoising to obtain an upper bound on the assembly error probability in the noisy regime as follows : where the minimization is constrained to .evidently , is supposed to be _ poisson _ distribution with parameter . in the following ,we first derive the optimal and parameters which minimize the upper bound in the asymptotic regime .in fact , when , the term corresponding to can be consistently approximated by the first summand in the summation of , i.e. .same argument holds for the second term corresponding to denoising error bound in . as a result, the simplified upper bound on in the asymptotic case can be written as : an intuitive investigation of the upper bound in reveals that minimization with respect to parameters and has a non - trivial solution since all the terms , and must become large in order to lower the error . taking derivatives with respect to and and solving for the equations result in the following approximations for and : the relations imply that for large values of and , we have and . it should be highlighted that in the case of ml denoising , if one chooses a sufficiently large dna read density , then the term associated with denoising goes to zero independent of sequencing error . as a result ,the upper bound of error solely depends on the discrimination condition , which for large is highly close to the upper bound of the noiseless case in section [ sec : noiseless ] when we choose . 
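the maximum likelihood rule analysed above is practical only for very small blocks , but for such toy instances it can be written directly as an exhaustive search over all candidate sets of binary blocks . the snippet below is an illustrative brute force implementation under the mixture model of the noisy reads ( each read generated by one of the hidden blocks chosen uniformly and flipped entrywise with probability eps ) ; the toy parameters and the true blocks are assumptions made for the example .

```python
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(3)

def ml_denoise(reads, M, eps):
    """Exhaustive ML estimate of the M hidden binary SNP blocks of length kappa.

    Scores every size-M subset of {0,1}^kappa by the log-likelihood of the reads
    under the mixture P(y | psi) = (1/M) sum_j eps^d(y,psi_j) (1-eps)^(kappa-d),
    and returns the best subset.  Exponential in kappa: toy instances only."""
    kappa = reads.shape[1]
    seqs = np.array(list(product((0, 1), repeat=kappa)))        # all 2^kappa blocks
    # hamming distance of every read to every candidate block
    dist = (reads[:, None, :] != seqs[None, :, :]).sum(axis=2)
    like = eps ** dist * (1.0 - eps) ** (kappa - dist)          # per read, per block
    best, best_ll = None, -np.inf
    for subset in combinations(range(len(seqs)), M):
        ll = np.log(like[:, list(subset)].mean(axis=1)).sum()   # mixture log-likelihood
        if ll > best_ll:
            best, best_ll = subset, ll
    return seqs[list(best)]

# toy instance: 2 individuals, 6 snps, 40 noisy reads covering the block
kappa, M, eps, n_reads = 6, 2, 0.1, 40
truth = np.array([[0, 1, 1, 0, 0, 1], [0, 1, 0, 0, 1, 1]])
members = rng.integers(0, M, size=n_reads)
reads = truth[members] ^ (rng.random((n_reads, kappa)) < eps)
est = ml_denoise(reads, M, eps)
print("true blocks :", truth.tolist())
print("ml estimate matches:", sorted(est.tolist()) == sorted(truth.tolist()))
```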
in the case of employing spectral denoising algorithm, one achieves the following upper bound on the error assembly : where ( sd ) represents the association with spectral denoising and is defined as in .when is sufficiently large , one can simply replace the averaging over in with .solving for optimal and in this case is more complicated , however , numerical results indicate close values to those obtained in the ml denoising case in .in this paper , the information theoretic limits of tagless pooled - dna sequencing have been achieved and discussed .pooled - dna sequencing is gaining wide - spread attention as a tool for massive genetic sequencing of individuals .moreover , the problem of haplotype - phasing , which is addressed in this paper as a special case of tagless pooled - dna sequencing is an interesting area of research in many fields of bioinformatics .we have mathematically shown that the emergence of long dna read technologies in recent years has made it possible to phase the genomes of multiple individuals without relying on the information associated with linkage disequilibrium ( ld ) .this achievement is critically important since assessment of ld information is expensive , and can only be exploited for the inference of dna haploblocks , rather than whole genomes .moreover , ld information are different from one specie to another ; and even within a particular specie , from one sub - population to another one .we have derived necessary and sufficient conditions for whole genome assembly ( phasing ) when dna reads are noiseless .extensive theoretical analysis have been proposed to derive asymptotically tight upper and lower bounds on the assembly error rate for a given genetic and experimental setting . as a result, we have shown that genome assembly is impossible when dna read length is lower than a specific threshold . moreover , when dna reads satisfy certain lengths and densities , then a simple greedy algorithm can attain the optimal result with computations , where denotes the total number of dna reads .for the case of noisy dna reads , a set of sufficient conditions for correct and unique assembly of all genomes are proposed whose error bounds fall close to the noiseless scenario when dna read density is sufficiently large .we have employed two different decoding procedures to denoise dna reads , denoted by maximum likelihood and spectral denoising , respectively .tight asymptotic upper bounds are provided for the former method , while an approximate analysis is given in the latter case which is based on recent advances in community detection literature . in ourfuture works we will focus on deriving both necessary and sufficient conditions in the noisy case to present tight error bounds , in addition to the optimal assembly algorithm. moreover , the scenarios where snp values are statisitcally linked ( which resemble more realistic cases in real world applications ) are among the other possible approaches for research in this area .in this section , we present the exact error probability in bridging of all identical regions for the case of two individuals .the formulation of is based on mathematical analysis of bridging consecutive identical regions via markovian random process models . 
in this regard, one should consider the arrivals of both dna reads and discriminating snps to investigate the information flow from each discriminating snp to its next .this procedure is described as a foregoing process which is described as follows .we start with the first discriminating snp in genome and denote it as the _current snp_. also , we denote the last dna read in genome which includes the current snp as the _ current read_. in accordance with these definitions , one of the following arguments hold : * the current snp does not exist . * the current snp exists , but the current read does not exist . *both the current snp and read exist , and the current read contains the last discriminating snp in genome as well . * none of the above .obviously , occurrence of any of the conditions ( 1 ) or ( 3 ) indicates that the bridging condition holds , i.e. does not occur .on the other hand , occurrence of condition ( 2 ) indicates a failure in bridging condition , since at least one identical region can not be bridged . in the case of condition( 4 ) , let us denote the new current read as the last dna read that starts strictly after the current read and includes the last discriminating snp in the current read . in this regard, the new current read again falls into one of the conditions described so far .this procedure continues until bridging condition fails or is satisfied . in the following , we model the above - mentioned procedure via a pair of markovian random processes , facilitating the mathematical formulation of .let us assume at least two discriminating snps exist in genome . also , at least one dna read exists that includes the first discriminating snp .we denote the last read containing this snp as the current read .let us define two coupled and finite markovian random processes and , as follows .let be the distance of last discriminating snp in the current read from the read s end position .in order to define , we consider the last dna read starting strictly after the current read which includes the last discriminating snp in the current read . then , represents the distance of this read from the current read s starting position .based on these definitions , the probability of failure is defined as the probability that one can not find a pair , given that exists . therefore : the processes and terminate at step with probability , otherwise they continue to the next step . at each step , and sampled as follows : \ell_{n}\vert d_n~\sim~ & \frac{2\lambda e^{-2\lambda\left(l - d_{n}-\ell_n\right ) } } { 1-e^{-2\lambda\left(\ell_{n-1}+d_{n-1}-d_n\right ) } } ~,~ l-\ell_{n-1}-d_{n-1}\leq \ell_{n } \leq l - d_n.\end{aligned}\ ] ] if at the time of failure , all discriminating snps are covered by the sequence of current reads , then bridging condition is satisfied , otherwise it has failed .therefore , the probability of error in bridging all identical regions can be formulated as : where indicates the distance between the first and the last discriminating snps .although the relation in does not give an explicit mathematical formulation for , it provides a rigorous numerical procedure to approximate the error probability of the bridging condition with an arbitrary high precision .moreover , upper and lower bounds obtained in section [ sec : noiseless ] can be reattained by bounding the formulation in .in this section , we present the proofs of lemmas [ lemma : positiveexponent ] , [ lemma : uniqueness ] and [ lemma : sampledmin ] . 
as a result ,the optimal , denoted by , which minimizes must be within the range . according to this assumption , derivative of with respect to at the optimal point results in the following equation : it has been shown by that for a broad range of finite distribution functions and , the solution of with respect to corresponds to .consequently , we can use the cauchy - schwarz inequality to show that : where the equality holds if and only if for all .we will show that when and , then there exists such that .the proof is by contradiction .assume two sets that result in the same statistical distribution over all sequences in .mathematically speaking : or alternatively : where . and represent the _ hamming _ distances of from the snp sequence in and , respectively . by assuming ( ) , all the equations in hold regardless of the choice of and .however , if then .each equation in is a polynomial of degree at most , and therefore has at most roots .acceptable roots must be real and fall in the interval . moreover , the roots must be common among all equations .consequently , one can deduce that the constraints hold only when all the coefficients of the polynomial become zero .for any , if then one of the _ hamming _ distances becomes zero and one term appears in the equation . since the equation holds globally , i.e. for all , then a term must also appear in the summation .this implies that for one , is zero implying .consequently , and since , we can deduce that .this result leads to the fact that each is strictly positive when .we calculate for and by using lemma [ lemma : positiveexponent ] and discussions made in theorem [ thm : noisytheorem ] . in this regard , can be formulated as \nonumber \\ & = \min_{\psi,\psi'\in\psi_{\kappa}\atop \psi\neq\psi ' } \log \left [ \left ( \frac{\left(1-\epsilon\right)^{\kappa}}{m } \sum_{i=1}^{2^{\kappa } } \sqrt{\sum_{j , j'=1}^{m } \left(\frac{\epsilon}{1-\epsilon}\right)^{\rho_{i , j}+\rho'_{i , j ' } } } \right)^{-1 } \right ] .\label{eq : lemma3:dmin}\end{aligned}\ ] ] in the following we compute an explicit formulation for . evidently , the worst case analysis for which gives the minimal corresponds to a setting where is composed of two adjacent -ary snp sequences and differs only in one locus with . from a geometric point of view, members of form the vertices of a -dimensional hyper - cube .a hyper - cube is a symmetric structure .hence , without loss of generality , we can assume coordinations of and as follows : \\[2 mm ] \left[1,0,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \end{array}\right\}\quad,\quad \psi=\left\{\begin{array}{c } \left[0,0,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \\[2 mm ] \left[0,1,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \end{array}\right\},\ ] ] where denotes a row vector of zeros with length . in order to compute, we have to sum over all vertices of hyper - cube , where , and and denote the _ hamming _ distances of the vertice from the sequence in and sequence in , respectively . focusing on the last of each vertice , vertices of the hyper - cube have a _ hamming _ distance of with each of the sequences in and for all .for the first two coordinates which separates the vertices of and from each other , one can simply count the differences to attain analytic values for and . in this regard, it can be shown that : in the case of , many of the arguments are the same to those of scenario .therefore , one just needs to specify the worst case opponent subsets , i.e. 
and when there are three underlying individuals .similar to the previous part and taking into account the symmetry property of -dimensional hyper - cubes , it can be shown that the worst case and can be chosen as follows : \\[2 mm ] \left[1,0,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \\[2 mm ] \left[1,1,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \end{array}\right\}\quad,\quad \psi=\left\{\begin{array}{c } \left[0,0,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \\[2 mm ] \left[1,0,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \\[2 mm ] \left[0,1,\boldsymbol{0}_{1\times\left(\kappa-2\right)}\right ] \end{array}\right\}.\ ] ] as can be seen , which is the dominant exponent of error for large dna read densities , does not depend on , which simplifies many of the formulations in the asymptotic and total error analysis .
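The displayed expression for the dominant error exponent can be evaluated numerically for any fixed pair of candidate SNP-sequence sets, which may help when checking the worst-case claims above. The sketch below implements only the bracketed sum of the displayed equation (the outer minimization over pairs is omitted). Because the exact worst-case sets are garbled in the extracted text, the tuples in the example are placeholders chosen merely to exercise the formula; `pairwise_exponent`, `psi`, and `psi_prime` are names introduced here.

```python
from itertools import product
from math import log, sqrt

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def pairwise_exponent(psi, psi_prime, eps):
    """Evaluate minus the log of the bracketed quantity in the displayed d_min
    expression, for two candidate sets of m binary kappa-tuples."""
    kappa = len(psi[0])
    m = len(psi)
    r = eps / (1.0 - eps)
    total = 0.0
    for v in product((0, 1), repeat=kappa):            # all 2^kappa hyper-cube vertices
        inner = sum(r ** (hamming(v, s) + hamming(v, t))
                    for s in psi for t in psi_prime)
        total += sqrt(inner)
    bracket = (1.0 - eps) ** kappa / m * total
    return -log(bracket)

# Illustrative adjacent pair for m = 2, kappa = 4: the two sets share one
# sequence and differ in a single locus of the other (only a guess at the
# worst-case pair, since the extracted text is garbled at this point).
psi       = [(0, 0, 0, 0), (0, 1, 0, 0)]
psi_prime = [(0, 0, 0, 0), (1, 1, 0, 0)]
print(pairwise_exponent(psi, psi_prime, eps=0.01))
```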
In this paper, fundamental limits in sequencing a set of closely related DNA molecules are addressed. This problem is called pooled-DNA sequencing, and it encompasses several interesting problems such as haplotype phasing, metagenomics, and conventional pooled-DNA sequencing in the absence of tagging. From an information-theoretic point of view, we have proposed fundamental limits on the number and length of DNA reads in order to achieve a reliable assembly of all the pooled DNA sequences. In particular, pooled-DNA sequencing from both noiseless and noisy reads is investigated in this paper. In the noiseless case, necessary and sufficient conditions for perfect assembly are derived. Moreover, asymptotically tight lower and upper bounds on the error probability of correct assembly are obtained under a biologically plausible probabilistic model. For the noisy case, we have proposed two novel DNA read denoising methods, as well as corresponding upper bounds on assembly error probabilities. It has been shown that, under mild conditions, the performance of the reliable assembly converges to that of the noiseless regime when, for a given read length, the number of DNA reads is sufficiently large. Interestingly, the emergence of long-read DNA sequencing technologies in recent years suggests the applicability of our results in real-world settings.
game theory is a great theoretical outcome of the 20th century and is used widely in interpreting social and economic phenomena . because of the universality of games , games are also widely used in science and engineering . on one side , game theory has been applied in many domains successfully . on the other side , using different tools to interpret game theory is also an interesting direction. in computer science , there are various kind of tools to capture the computation concept concerned with the nature of computability .there is no doubt that process algebra is one of the most influential tools .milner s ccs ( calculus of communicating systems ) , hoare s csp ( communicating sequential processes ) and acp ( algebra of communicating process ) are three dominant forms of process algebra , and there are also several kinds of process calculi .such process algebras often have a formal deductive system based on equational logic and a formal semantics model based on labeled transition systems , can be suitable to reason about the behaviors of parallel and distributed systems .the combination of games and computer science is a fascinating direction , and it gains great successes , such as the so - called game semantics . since there exist lots of game phenomena in parallel and distributed systems , especially interactions between a system and its environment , interactions among system components , and interactions among system components and outside autonomous software agents , the introduction of games into traditional computation tools , such as the above mentioned process algebra , is attractive and valuable .the computation tools extended to support games can be used to reason about the behaviors of systems in a new viewpoint . using these computation tools to give game theoryan interpretation is an interesting problem .this direction has subtle difference with introducing games and ideas of games into the computation tools .it not only can make these tools having an additional ability to using games in computation , but also gives game theory a new interpretation which will help the human to capture the nature of games and also the development of game theory .although in some process algebras , such as csp , there are an internal choice and an external choice , process algebra acp does not distinguish the internal choice and the external choice . in this paper , we introduce external choice into process algebra acp in a game theory flavor .we give games an axiomatic foundation called gameacp based on process algebra acp .because of acp s clear semantic model based on bisimulation or rooted branching bisimulation and well designed axiomatic system , gameacp inherits acp s advantages in an elegant and convenient way .this is the first step to use computation tools to interpret games in an axiomatic fashion as far as we know .this paper is organized as follows . in section 2 , we analyze the related works .application scenarios called submittingorder , transacting and purchasing are illustrated in section 3 . in section 4, we briefly introduce some preliminaries , including equational logic , structural operational semantics , process algebra acp and also games . 
in section 5 , the extension of bpa ( basic process algebra ) for games is done , which is called gamebpa , including opponent s alternative composition operator and another new operator called playing operator of gameacp processes , and their transition rules and the properties of the extension , and we design the axioms of opponent s alternative composition and playing operator , including proving the soundness and completeness of the axiomatic system . in section 6 , we do another extension based on acp , which is called gameacp . we give the correctness theorem in section 7 . in section 8 , we show the support for multi - person games . finally , conclusions are drawn in section 9 .as mentioned above , the combination of computation tools and game semantics includes two aspects : one is introducing games or idea of games into these computation languages or tools to give them a new viewpoint , and the other is using these computation tools to interpret games .the first one has plenty of works and gained great successes , but the second one has a few works as we known .we introduce the two existing works in the following .it is no doubt that the so - called game semantics gained the most great successes in introducing games into computer science .game semantics models computations as playing of some kind of games , especially two person games . in the two person game, the player ( p ) represents the system under consideration and the opponent ( o ) represents the environment in which the system is located . in game semantics , the behaviors of the system ( acts as p ) and the environment ( acts as o )are explicitly distinguished .so the interactions between the system and the environment can be captured as game plays between the opponent and the player , and successful interactions can be captured by the game strategy .for example , the function where can be deemed as the games played in fig . [ f(x ) ] .firstly , the opponent ( the environment ) moves to ask the value of , then the player ( the function ) moves to ask the value of , and then the opponent moves to answer that the value of is 5 , the player moves to answer that the value of is 25 finally . where . ]game semantics has gained great successes in modeling computations , such as an initial success of modeling the functional programming language pcf ( programming computable functions ) , multiplicative linear logic , idealized algol , general reference , etc . to model concurrency in computer science with game semantics , a new kind of game semantics called asynchronous game is established and a bridge between the asynchronous game and traditional game semantics is founded . moreover , asynchronous games perfectly model propositional linear logic and get a full completeness result .another kind of game semantics to describe concurrency is concurrent game , and a work to bridge asynchronous game and concurrent game is introduced in .algorithmic game semantics is the premise of implementation of game semantics for further automatic reasoning machine based on some specific game semantics model . andgame semantics can be used to establish the so - called interaction semantics among autonomous agents , and can be used to model and verify compositional software .game semantics utilizes such dialogue games to model interactions between the system under consideration and the environment , and pays more attention to the playing process of the two players . 
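As a toy illustration of the dialogue-game reading sketched above (the opponent asks for the value of the function, the player asks for its argument, the opponent answers 5, the player answers 25), the following snippet replays that exchange for f(x) = x·x. It is only a mock-up of the question/answer protocol, not a game-semantics implementation; all names are invented for the example.

```python
def player_strategy(history):
    """The Player's strategy for f(x) = x * x, reacting to the Opponent's moves."""
    if history == [("O", "what is f(x)?")]:
        return ("P", "what is x?")
    if len(history) == 3 and history[-1][0] == "O":
        x = int(history[-1][1])              # the Opponent's answer to "what is x?"
        return ("P", str(x * x))
    return None

def play():
    history = [("O", "what is f(x)?")]       # the environment opens the dialogue
    history.append(player_strategy(history)) # P asks for the argument
    history.append(("O", "5"))               # the environment supplies x = 5
    history.append(player_strategy(history)) # P answers 25
    return history

for mover, move in play():
    print(mover, ":", move)
# O : what is f(x)?   P : what is x?   O : 5   P : 25
```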
Game semantics also develops some key concepts which have correspondents to traditional computation concepts, such as innocence corresponding to context independence and bracketing to the well-structured property.

Different from game semantics, there are also several works that use computation tools to model games between two agents. Game-CTR introduces games into CTR (concurrent transaction logic) to model and reason about runtime properties of workflows that are composed of non-cooperative services such as web services. Game-CTR includes a model and proof theory which can be used to specify executions under temporal and causality constraints, and also a game-solver algorithm to convert such constraints into equivalent Game-CTR formulas that can be executed more efficiently. Chatzikokolakis et al. develop a game semantics for a certain kind of process calculus with two interacting agents. Games and strategies on this process calculus are defined, and the strategies of the two agents determine the execution of the process. Moreover, a certain class of strategies corresponds to the so-called syntactic schedulers of Chatzikokolakis and Palamidessi. In these works, the games used are not dialogue games: there are no interactions such as questions and answers, and no winning concept.

Like Game-CTR and Chatzikokolakis's work, we introduce games into ACP, or rather we use ACP to give games an interpretation. Unlike those works, our work GameACP is an attempt at axiomatization through an extension of the process algebra ACP for games. It has the following characteristics:
1. We introduce the external choice into the process algebra ACP in a game-theory flavor.
2. As a result of axiomatization, GameACP has not only an equational logic, but also a bisimulation semantics.
3. The conclusions of GameACP hold without any assumption or restriction, such as the epistemic restrictions on strategies in the work of Chatzikokolakis et al.
4. Though the discussions of GameACP are aimed at two-person games, GameACP can be naturally used in multi-person games.
GameACP provides a new viewpoint to model interactions between one autonomous agent and other autonomous agents, and it can be used to reason about the behaviors of parallel and distributed systems with game theory supported.

In this section, we illustrate the universality of game phenomena in computer systems through three different examples. Using these examples throughout this paper, we illustrate our core concepts and ideas.

The graphical user interface is the most popular human-machine interface today. Fig. [submittingorder]-a illustrates the flow of submitting an order by a user through a graphical interface. The flow is as follows.
1. The interface program starts.
2. The user writes an order via the interface.
3. When the order is completed, the user can decide to submit the order or cancel the order.
4. If the order is submitted, then the order is stored and the program terminates.
5. If the order is canceled, then the program terminates.
In this SubmittingOrder example, the selection between submitting and canceling the order is made by the user, not by the program according to its inner states. This situation is suitably captured by a game between the user and the interface program.

Transaction processing is the core mechanism of database and business processing. A traditional transaction has the ACID properties and is illustrated in Fig. [transaction]-a. The flow of a traditional database transaction is as follows.
1. The transaction is started.
2. Operations on the data are done by a user.
3. The user can decide to submit the transaction or abort the transaction.
4. If the transaction is submitted, the data are permanently stored and the transaction terminates.
5. If the transaction is aborted, the data are rolled back and the transaction also terminates.
In this Transaction example, the selection between submitting and aborting the transaction is again made by the user, not by the database or business processing system according to its inner states. This situation is also suitably modeled by a game between the user and the database or business processing system.

Web services are a relatively new kind of distributed object, and web service composition creates new, larger web services from a set of smaller existing web services. A composite web service is defined by means of a web service composition language and is executed by interpreting the definition of the composite web service. WS-BPEL is one such language. In WS-BPEL, the atomic function units are called atomic activities, and the corresponding structural activities define the control flow among these atomic activities. The *pick* activity is a kind of choice structural activity in which the decision is made by outside autonomous web services; it differs from the *if* activity, in which the decision is made by the composite web service according to its inner states. In Fig. [purchasing1], a composite web service implements the following flow of purchasing goods and can be used by a user through a user agent web service.
1. The composite web service is started by a user through a user agent web service.
2. The user shops for goods.
3. After the shopping is finished, the user can select the shipping way: by truck, by train or by plane.
4. If the truck way is selected, then the user should order a truck and pay online for the fees.
5. If the train way is selected, then the user should order a train and pay online for the fees.
6. If the plane way is selected, then the user should order a plane; if the money amount is greater than 1000 dollars, he/she should pay offline for the fees, and if not, he/she should pay online.
The WS-BPEL skeleton of the purchasing composite web service is shown in Fig. [purchasing2]. Note that the first choice is modeled by a *pick* activity and the second choice is modeled by an *if* activity (a small code sketch of this distinction is given at the end of this section). In this purchasing composite web service, the selection of the shipping way is again made by the user through a user agent web service, and not by the composite web service according to its inner states. This situation is also suitably modeled by a game between the user (or the user agent web service) and the composite web service.

In this section, we introduce some preliminaries, including equational logic, structural operational semantics, the process algebra ACP, and games.
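Before turning to those preliminaries, the two kinds of choice exhibited by the purchasing scenario can be summarized in a small sketch: the shipping way is an external choice resolved by the user agent (the *pick* activity), while the payment mode in the plane branch is an internal choice resolved by the composite service from its own state (the *if* activity). The sketch is ours, not WS-BPEL or GameACP syntax, and the function and variable names are assumptions.

```python
def purchasing(user_agent_choice, amount):
    """A toy rendering of the purchasing flow above."""
    trace = ["start", "shop_goods"]
    # pick-style activity: the decision comes from the outside agent (the opponent)
    way = user_agent_choice(("truck", "train", "plane"))
    if way == "truck":
        trace += ["order_truck", "pay_online"]
    elif way == "train":
        trace += ["order_train", "pay_online"]
    else:
        trace.append("order_plane")
        # if-style activity: the decision comes from the service's inner state (the amount)
        trace.append("pay_offline" if amount > 1000 else "pay_online")
    trace.append("terminate")
    return trace

# the user agent (opponent) picks the shipping way; the composite service never does
print(purchasing(lambda options: "plane", amount=1500))
# ['start', 'shop_goods', 'order_plane', 'pay_offline', 'terminate']
```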
in the following , the variables range over the collection of process terms , the variables range over the set of atomic actions , , are closed items , is the special constant silent step , is the special constant deadlock , is the special constant non - determinacy , and the predicate represents successful termination after execution of the action .we introduce some basic concepts about equational logic briefly , including signature , term , substitution , axiomatization , equality relation , model , term rewriting system , rewrite relation , normal form , termination , weak confluence and several conclusions .these concepts are coming from , and are introduced briefly as follows . about the details ,please see .* definition [ elp].1 ( signature)*. a signature consists of a finite set of function symbols ( or operators ) , where each function symbol has an arity , being its number of arguments . a function symbol of arity _ zero _is called a constant , a function symbol of arity one is called unary , and a function symbol of arity two is called binary .* definition [ elp].2 ( term)*. let be a signature .the set of ( open ) terms over is defined as the least set satisfying : ( 1)each variable is in ; ( 2 ) if and , then .a term is closed if it does not contain free variables .the set of closed terms is denoted by .* definition [ elp].3 ( substitution)*. let be a signature .a substitution is a mapping from variables to the set of open terms .a substitution extends to a mapping from open terms to open terms : the term is obtained by replacing occurrences of variables in t by .a substitution is closed if for all variables .* definition [ elp].4 ( axiomatization)*. an axiomatization over a signature is a finite set of equations , called axioms , of the form with .* definition [ elp].5 ( equality relation)*. an axiomatization over a signature induces a binary equality relation on as follows .( 1)(substitution ) if is an axiom and a substitution , then .( 2)(equivalence ) the relation is closed under reflexivity , symmetry , and transitivity .( 3)(context ) the relation is closed under contexts : if and is a function symbol with , then .* definition [ elp].6 ( model)*. assume an axiomatization over a signature , which induces an equality relation . a model for consists of a set together with a mapping .( 1 ) is sound for if implies for ; ( 2 ) is complete for if implies for . *definition [ elp].7 ( term rewriting system)*. assume a signature .a rewrite rule is an expression with , where : ( 1 ) the left - hand side is not a single variable ; ( 2 ) all variables that occur at the right - hand side also occur in the left - hand side .a term rewriting system ( trs ) is a finite set of rewrite rules . *definition [ elp].8 ( rewrite relation)*. a trs over a signature induces a one - step rewrite relation on as follows .( 1 ) ( substitution ) if is a rewrite rule and a substitution , then .( 2 ) ( context ) the relation is closed under contexts : if and f is a function symbol with , then .the rewrite relation is the reflexive transitive closure of the one - step rewrite relation : ( 1 ) if , then ; ( 2 ) ; ( 3 ) if and , then . * definition [ elp].9 ( normal form)*. a term is called a normal form for a trs if it can not be reduced by any of the rewrite rules . *definition [ elp].10 ( termination)*. a trs is terminating if it does not induce infinite reductions .* definition [ elp].11 ( weak confluence)*. 
a trs is weakly confluent if for each pair of one - step reductions and , there is a term such that and .* theorem [ elp].1 ( newman s lemma)*. if a trs is terminating and weakly confluent , then it reduces each term to a unique normal form . * definition[ elp].12 ( commutativity and associativity)*. assume an axiomatization .a binary function symbol is commutative if contains an axiom and associative if contains an axiom .* definition [ elp].13 ( convergence)*. a pair of terms and is said to be convergent if there exists a term such that and .axiomatizations can give rise to trss that are not weakly confluent , which can be remedied by knuth - bendix completion .it determines overlaps in left hand sides of rewrite rules , and introduces extra rewrite rules to join the resulting right hand sides , witch are called critical pairs .* theorem [ elp].2*. a trs is weakly confluent if and only if all its critical pairs are convergent .the concepts about structural operational semantics include labelled transition system ( lts ) , transition system specification ( tss ) , transition rule and its source , source - dependent , conservative extension , fresh operator , panth format , congruence , bisimulation , etc .these concepts are coming from , and are introduced briefly as follows . about the details ,please see .we assume a non - empty set of states , a finite , non - empty set of transition labels and a finite set of predicate symbols .* definition [ sosp].1 ( labeled transition system)*. a transition is a triple with , or a pair ( s , p ) with a predicate , where .a labeled transition system ( lts ) is possibly infinite set of transitions .an lts is finitely branching if each of its states has only finitely many outgoing transitions .* definition [ sosp].2 ( transition system specification)*. a transition rule is an expression of the form , with a set of expressions and with , called the ( positive ) premises of , and an expression or with , called the conclusion of .the left - hand side of is called the source of .a transition rule is closed if it does not contain any variables . a transition system specification ( tss )is a ( possible infinite ) set of transition rules . *definition [ sosp].3 ( proof)*. a proof from a tss of a closed transition rule consists of an upwardly branching tree in which all upward paths are finite , where the nodes of the tree are labelled by transitions such that : ( 1 ) the root has label ; ( 2 ) if some node has label , and is the set of labels of nodes directly above this node , then ( a ) either is the empty set and , ( b ) or is a closed substitution instance of a transition rule in .* definition [ sosp].4 ( generated lts)*. we define that the lts generated by a tss consists of the transitions such that can be proved from .* definition [ sosp].5*. a set of expressions and ( where ranges over closed terms , over and over predicates ) hold for a set of transitions , denoted by , if : ( 1 ) for each we have that for all ; ( 2 ) for each we have that .* definition [ sosp].6 ( three - valued stable model)*. a pair of disjoint sets of transitions is a three - valued stable model for a tss if it satisfies the following two requirements : ( 1 ) a transition is in if and only if proves a closed transition rule where contains only negative premises and ; ( 2 ) a transition is in if and only if proves a closed transition rule where contains only negative premises and . *definition [ sosp].7 ( ordinal number)*. 
the ordinal numbers are defined inductively by : ( 1 ) is the smallest ordinal number ; ( 2 ) each ordinal number has a successor ; ( 3 ) each sequence of ordinal number is capped by a limit ordinal . * definition [ sosp].8 ( positive after reduction)*. a tss is positive after reduction if its least three - valued stable model does not contain unknown transitions . *definition [ sosp].9 ( stratification)*. a stratification for a tss is a weight function which maps transitions to ordinal numbers , such that for each transition rule with conclusion and for each closed substitution : ( 1 ) for positive premises and of , and , respectively ; ( 2 ) for negative premise and of , for all closed terms and , respectively .* theorem [ sosp].1*. if a tss allows a stratification , then it is positive after reduction . *definition [ sosp].10 ( process graph)*. a process ( graph ) is an lts in which one state is elected to be the root . if the lts contains a transition , then where has root state . moreover , if the lts contains a transition , then .( 1 ) a process is finite if there are only finitely many sequences .( 2 ) a process is regular if there are only finitely many processes such that . *definition [ sosp].11 ( bisimulation)*. a bisimulation relation is a binary relation on processes such that : ( 1 ) if and then with ; ( 2 ) if and then with ; ( 3 ) if and , then ; ( 4 ) if and , then .two processes and are bisimilar , denoted by , if there is a bisimulation relation such that .* definition [ sosp].12 ( congruence)*. let be a signature . an equivalence relation on is a congruence if for each , if for , then . *definition [ sosp].13 ( panth format)*. a transition rule is in panth format if it satisfies the following three restrictions : ( 1 ) for each positive premise of , the right - hand side is single variable ; ( 2 ) the source of contains no more than one function symbol ; ( 3 ) there are no multiple occurrences of the same variable at the right - hand sides of positive premises and in the source of .a tss is said to be in panth format if it consists of panth rules only .* theorem [ sosp].2*. if a tss is positive after reduction and in panth format , then the bisimulation equivalence that it induces is a congruence . *definition [ sosp].14 ( branching bisimulation)*. a branching bisimulation relation is a binary relation on the collection of processes such that : ( 1 ) if and then either and or there is a sequence of ( zero or more ) -transitions such that and with ; ( 2 ) if and then either and or there is a sequence of ( zero or more ) -transitions such that and with ; ( 3 ) if and , then there is a sequence of ( zero or more ) -transitions such that and ; ( 4 ) if and , then there is a sequence of ( zero or more ) -transitions such that and . two processes and are branching bisimilar , denoted by , if there is a branching bisimulation relation such that .* definition [ sosp].15 ( rooted branching bisimulation)*. a rooted branching bisimulation relation is a binary relation on processes such that : ( 1 ) if and then with ; ( 2 ) if and then with ; ( 3 ) if and , then ; ( 4 ) if and , then .two processes and are rooted branching bisimilar , denoted by , if there is a rooted branching bisimulation relation such that . *definition [ sosp].16 ( lookahead)*. a transition rule contains lookahead if a variable occurs at the left - hand side of a premise and at the right - hand side of a premise of this rule .* definition [ sosp].17 ( patience rule)*. 
a patience rule for the i - th argument of a function symbol is a panth rule of the form * definition [ sosp].18 ( rbb cool format)*. a tss is in rbb cool format if the following requirements are fulfilled .( 1 ) consists of panth rules that do not contain lookahead . ( 2 ) suppose a function symbol occurs at the right - hand side the conclusion of some transition rule in .let be a non - patience rule with source .then for , occurs in no more than one premise of , where this premise is of the form or with .moreover , if there is such a premise in , then there is a patience rule for the i - th argument of in .* theorem [ sosp].3*. if a tss is positive after reduction and in rbb cool format , then the rooted branching bisimulation equivalence that it induces is a congruence . *definition [ sosp].19 ( conservative extension)*.let and be tsss over signatures and , respectively .the tss is a conservative extension of if the ltss generated by and contain exactly the same transitions and with . *definition [ sosp].20 ( source - dependency)*. the source - dependent variables in a transition rule of are defined inductively as follows : ( 1 ) all variables in the source of are source - dependent ; ( 2 ) if is a premise of and all variables in are source - dependent , then all variables in are source - dependent .a transition rule is source - dependent if all its variables are .a tss is source - dependent if all its rules are . *definition [ sosp].21 ( freshness)*. let and be tsss over signatures and , respectively . a term in said to be fresh if it contains a function symbol from .similarly , a transition label or predicate symbol in is fresh if it does not occur in . * theorem [ sosp].4*. let and be tsss over signatures and , respectively , where and are positive after reduction . under the following conditions , is a conservative extension of .( 1 ) is source - dependent . ( 2 ) for each , either the source of is fresh , or has a premise of the form or , where , all variables in occur in the source of and , or is fresh .acp is a kind of process algebra which focuses on the specification and manipulation of process terms by use of a collection of operator symbols . 
in acp, there are several kind of operator symbols , such as basic operators to build finite processes ( called bpa ) , communication operators to express concurrency ( called pap ) , deadlock constants and encapsulation enable us to force actions into communications ( called acp ) , linear recursion to capture infinite behaviors ( called acp with linear recursion ) , the special constant silent step and abstraction operator ( called with guarded linear recursion ) allows us to abstract away from internal computations .bisimulation or rooted branching bisimulation based structural operational semantics is used to formally provide each process term used the above operators and constants with a process graph .the axiomatization of acp ( according the above classification of acp , the axiomatizations are , , , + rdp ( recursive definition principle ) + rsp ( recursive specification principle ) , + rdp + rsp + cfar ( cluster fair abstraction rule ) respectively ) imposes an equation logic on process terms , so two process terms can be equated if and only if their process graphs are equivalent under the semantic model .acp can be used to formally reason about the behaviors , such as processes executed sequentially and concurrently by use of its basic operator , communication mechanism , and recursion , desired external behaviors by its abstraction mechanism , and so on .acp can be extended with fresh operators to express more properties of the specification for system behaviors .these extensions are required both the equational logic and the structural operational semantics to be extended .then the extension can be done based on acp , such as its concurrency , recursion , abstraction , etc .the process graph of the submittingorder example is illustrated in fig .[ submittingorder]-b . since acpdoes not distinguish the choice decision made by outside agent or inner states , the process of the submittingorder example can be expressed by the following process term in acp . .the process graph of the transaction example is illustrated in fig .[ transaction]-b .the process of the transaction example can be expressed by the following process term in acp . .the process graph of the purchasing composite web service is illustrated in fig .[ purchasing3 ] .the process of the purchasing composite web service can be expressed by the following process term in acp . . in the above application scenarios , one agent interacts with other autonomous agents or human beings . in the agent s viewpoint ,some branch decisions are made by outside agents or human beings , but not the inner states .in this situation , a two person game is suitable to model the interaction . in the game ,the agent is modeled as the player ( denoted as p ) and the other agent or the human being is modeled as the opponent ( denoted as o ) .corresponding to a process graph , there exists a game tree , for example , the game tree corresponding to process graph in fig .[ purchasing3 ] is illustrated in fig .[ purchasinggametree ] .we define move and strategy as follows .* definition [ games].1 ( move ) * every execution of an action in the process graph causes a move in the corresponding game tree .and we do not distinguish the action and the move .* definition [ games].2 ( p strategy ) * a strategy of p in a game tree is a subtree defined as follows : 1 . the empty move ; 2 . if the move is a p move , then exactly one child move of and ; 3 .if the move is an o move , then all children of are in , that is . 
since p and o are relative , the strategy of o can be defined similarly .* definition [ games].3 ( o strategy ) * a strategy of o in a game tree is a subtree define as follows : 1 . the empty move ; 2 . if the move is a o move , then exactly one child move of and ; 3 . if the move is an p move , then all children of are in , that is . in the game tree illustrated in fig .[ purchasinggametree ] of purchasing example , there are two choice decisions .one is made by the user agent ( or the user ) , and the other is made by the composite service . in this game , we model the composite service as p and the user agent ( or the user ) as o. a strategy of p is illustrated as fig .[ purchasingplayer ] shows . and a strategy of o is as fig .[ purchasingopponent ] illustrates .we can see that the actual execution a game tree are acted together by the p and the o. for a p strategy and an o strategy of a game tree , has the form according to the definition of strategy .we can get that the maximal element of exactly defines an execution of the game tree . for the p strategy illustrated in fig .[ purchasingplayer ] and o strategy illustrated in fig .[ purchasingopponent ] , the maximal element of defines an execution of the process as illustrated in process graph fig .[ purchasing3 ] .this is shown in fig .[ execution ] .gamebpa is based on bpa . in bpa , there are two basic operators called alternative composition and sequential composition .we give the transition rules for bpa as follows . the axioms of bpa are in table [ axiomofbpa ] ..axioms of bpa [ cols= " < , < " , ] the axioms dl1-dl2 are presented for the deadlock constant , and the axioms po1-po14 are for the playing operator .there are not axioms for the association of the deadlock constant and the playing operator , just because the function of the playing operator is eliminating all non - deterministic factors .* theorem 7 * + dl1-dl2 + po1-po14 is sound for gamebpa with playing operator and deadlock constant modulo bisimulation equivalence .since bisimulation is both an equivalence and a congruence , we only need to check that the first clause in the definition of the relation is sound .that is , if is an axiom in gamebpa and is a closed substitution that maps the variables in and to process terms , then we need to check that .we only provide some intuition for soundness of the axioms in table [ axiomofpo ] . 1 .the axiom dl1 says that displays no behavior , so the process term is equal to the process term .the axioms dl2 , po3 and po4 say that blocks the behavior of the process term , and .the axioms po1 and po2 say that the co - action of two same actions will lead to the only action , otherwise , it will cause a deadlock .the axioms po5-po10 say that makes as initial transition a playing of initial transitions from and .if the execution sequence of is not matched with that of , a deadlock will be caused .the axioms po11-po12 say that the function of playing operator makes two non - deterministic gamebpa processes deterministic .the axioms po13-po14 say that the playing operator satisfies right and left distributivity to the operator .these intuitions can be made rigorous by means of explicit bisimulation relations between the left- and right - hand sides of closed instantiations of the axioms in table [ axiomofpo ] .hence , all such instantiations are sound modulo bisimulation equivalence . * theorem 8 * + dl1-dl2 + po1-po14 is complete for gamebpa with playing operator and deadlock constant modulo bisimulation equivalence . 
the proof is based on the proof of the theorem 4 .the proof consists of three main step : ( 1 ) we will show that the axioms dl1 , dl2 and po1-po14 can be turned in to rewrite rules ( see section [ elp ] ) , and the resulting trs is terminating ( see section [ elp ] ) ; ( 2 ) we will show that norm forms ( see section [ elp ] ) do not contain occurrences of the fresh opponent s alternative composition operator ; ( 3 ) we will prove that + dl1-dl2 + po1-po14 is complete for gamebpa with playing operator and deadlock constant modulo bisimulation equivalence .\(1 ) the axioms dl1-dl2 + po1-po14 is turned into rewriting rules directly from left to right , and added to the rewriting rules in the proof the completeness of ( see proof of theorem 4 ) .the resulting trs is terminating modulo ac ( associativity and commutativity ) of operator through defining new weight functions on process terms . we can get that each application of a rewriting rule strictly decreases the weight of a process term , and that moreover process terms that are equivalent modulo ac of + have the same weight . hence , the trs is terminating modulo ac of .( 2)we will show that the normal form are not of the form .the proof is based on induction with respect to the size of the normal form . *if n is an atomic action , then it does not contain . *suppose or .then by induction , the normal forms and do not contain , so does not contain .* suppose . by induction, the normal form does not contain .we distinguish the possible forms of the normal form : * * if , then the directed version of po1 , po2 , po5 or po6 apply to ; * * if , then the directed version of po7-po10 apply to ; * * if , then the directed version of po13 applies to ; * * if , then the directed version of po11 applies to .( actually , we already prove that can not occur in the norm forms , see the proof of theorem 4 ) .+ these four cases , which cover the possible forms of the normal form , contradict the fact that is a normal form .similarly , we can induce the possible forms of the normal form .so , we conclude that can not be of the form .we proved that normal forms are all basic process terms .( 3)we proceed to prove that the axiomatization + dl1-dl2 + po1-po14 is complete for gamebpa with playing operator and deadlock constant modulo bisimulation equivalence .let the process terms and be bisimilar .the trs is terminating modulo ac of the , so it reduces and to normal forms and , respectively . since the rewrite rules and equivalence modulo ac of the + can be derived from + dl1-dl2 + po1-po14 , and .soundness of + dl1-dl2 + po1-po14 then yields and , so .we shown that the normal forms and are basic process terms .then it follows that implies .hence , .gamebpa extends to process algebra bpa and does not use the full outcomes of acp , such as concurrency , recursion , abstraction , etc .now , we make gamebpa be based on the full acp ( exactly with guarded linear recursion ) and this extension is called gameacp .gameacp remains the opponent s alternative composition operator , the playing operator . because the deadlock constantis already existing in acp , we remove the duplicate definition of deadlock constant in gameacp .the transition rules of the opponent s alternative composition operator and the playing operator are the same as those in gamebpa . through defining , we extend to . 
we can get the following two conclusions .* theorem 9 * gameacp ( exactly with guarded linear recursion , opponent s alternative composition operator , playing operator is a conservative extension of acp ( exactly with guarded linear recursion ) ( see section [ acpp ] ) .the sources of transition rules of opponent s alternative composition operator and playing operator contain one fresh function symbol and . andit is known that the transition rules of with guarded linear recursion are source - dependent .according to the definition of conservative extension ( see section [ sosp ] ) , gameacp is a conservative extension of with guarded linear recursion .* theorem 10 * rooted branching bisimulation equivalence is a congruence with respect to gameacp ( exactly with guarded linear recursion , opponent s alternative composition operator , playing operator .we introduce successful termination predicate .a transition rule is added into transition rules of gameacp . replacing transition rules occurring by , the result transition rules of gameacp are in rbb cool format according to the definition of rbb cool format .so rooted branching bisimulation equivalence is a congruence with respect to gameacp according to the definition of congruence ( see section [ sosp ] ) . because of the remove of the deadlock constant in gameacp, the axiomatization of gameacp ( exactly with guarded linear recursion , opponent s alternative composition operator , playing operator ) only contains + rdp , rsp , cfar and oa1 , po1-po14 .now , we get the following two conclusions. * theorem 11 * ( + rdp , rsp , cfar + oa1 + po1-po14 ) is sound for gameacp ( exactly with guarded linear recursion , opponent s alternative composition operator , playing operator ) modulo rooted branching bisimulation equivalence .because rooted branching bisimulation is both an equivalence and a congruence , we only need to check that if is an axiom and a closed substitution replacing the variables in and to get and , then .we only provide some intuition for soundness of the axioms ao1 , po1-po14 . 1. the axiom oa1 says a gamebpa process term is the same as the process term in the view of o. 2 .the axioms po3 and po4 say that blocks the behavior of the process term , and .the axioms po1 and po2 say that the co - action of two same actions will lead to the only action , otherwise , it will cause a deadlock .the axioms po5-po10 say that makes as initial transition a playing of initial transitions from and .if the execution sequence of is not matched with that of , a deadlock will be caused .the axioms po11-po12 say that the function of playing operator makes two non - deterministic gamebpa processes deterministic .the axioms po13-po14 say that the playing operator satisfies right and left distributivity to the operator .these intuitions can be made rigorous by means of explicit rooted branching bisimulation relations between the left- and right - hand sides of closed instantiations of the axioms in table [ axiomofpo ] .hence , all such instantiations are sound modulo rooted branching bisimulation equivalence .* theorem 12 * ( + rdp , rsp , cfar + oa1 + po1-po14 ) is complete for gameacp ( exactly with guarded linear recursion , opponent s alternative composition operator , playing operator ) modulo rooted branching bisimulation equivalence . 
we need to prove that each process term in gameacp is equal to a process term with a guarded linear recursive specification .that is , if for guarded linear recursive specifications and , then can be gotten from .this proof is based on the completeness proof of + rdp , rsp , cfar .we apply structural induction the size of process term .the new case is .first assuming with a guarded linear recursive specification and with a guarded linear recursive specification , we prove the case of .let consists of guarded linear recursive equations for .let consists of guarded linear recursive equations for . then we can use the axioms po1-po10 into the above equation .this will lead to several cases and we do not enumerate all these cases .but , we can see that every case will lead to a guarded linear recursive specification .* theorem 13 * if is a p strategy and is an o strategy as illustrated in section [ games ] , and if the gameacp process term corresponds to and the gameacp process term corresponds to , then the process term exactly defines an execution of and .we will show that exactly results in the maximal element of .the axioms po11 , po12 make non - deterministic gameacp processes deterministic .the axioms po13 , po14 inspect all deterministic branches .the axioms dl2 , po2 , po3 , po4 , po6 , po8 , po10 assure that the mismatched execution sequence will cause a deadlock .the axiom dl1 eliminates the deadlock branches in a gameacp process .the axioms po1 , po5 , po7 and po9 assure occurrence of the matched execution sequence of two gameacp processes .the axioms po5 , po7 assure the selection the maximal execution sequence .we illustrate the correctness theorem through three examples in section [ as ] . for the p strategy corresponding to the gameacp process term and the o strategy corresponding to the gameacp process term in fig .[ submittingorder ] , the maximal element of defines an execution of process graph fig .[ submittingorder]-b . for the p strategy corresponding to the gameacp process term and the o strategy corresponding to the gameacp process term in fig .[ transaction ] , the maximal element of defines an execution of the process as illustrated in process graph fig .[ transaction]-b . for the p strategy corresponding to the gameacp process term in fig .[ purchasingplayer ] and the o strategy corresponding to the gameacp process term in fig .[ purchasingopponent ] , the maximal element of defines an execution of the process as illustrated in fig .[ execution ] .in fact , the axioms in table [ axiomofoac ] and table [ axiomofpo ] can be naturally used in multi - person games without any alternation . for a three - person games ,let be a gameacp process term corresponding to a strategy of the first player , a gameacp process term corresponding to a strategy of the second player and a gameacp process term corresponding to a strategy of the third player .the process term can be deduced to an execution of these strategies by use of the above axioms .we show this situation in the section .the process graph of the extended purchasing composite web service is illustrated in fig .[ purchasing21 ] .the process of the purchasing composite web service can be expressed by the following process term in acp . . the game tree to process graph in fig .[ purchasing21 ] is illustrated in fig .[ purchasinggametree2 ] . 
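The set-theoretic picture behind Theorem 13 can be checked on a toy example: encode the game tree and the two strategies as prefix-closed sets of move sequences, intersect them, and read off the maximal element as the resulting execution. The sketch below only mimics the purchasing example with invented labels; it illustrates Definitions [games].2 and [games].3 and the statement of Theorem 13, not the playing-operator axioms themselves.

```python
def prefix_closure(plays):
    """Close a set of move sequences under prefixes (a crude stand-in for a subtree)."""
    closed = {()}
    for play in plays:
        for i in range(1, len(play) + 1):
            closed.add(play[:i])
    return closed

# full game tree: O picks the shipping way, then P picks the payment mode in the plane branch
game_tree = prefix_closure({
    ("shop", "truck", "pay_online"),
    ("shop", "train", "pay_online"),
    ("shop", "plane", "pay_online"),
    ("shop", "plane", "pay_offline"),
})

# O strategy: exactly one child at the O move ("plane"), all children at P moves
o_strategy = prefix_closure({("shop", "plane", "pay_online"),
                             ("shop", "plane", "pay_offline")})

# P strategy: all children at the O move, exactly one child at the P move ("pay_offline")
p_strategy = prefix_closure({("shop", "truck", "pay_online"),
                             ("shop", "train", "pay_online"),
                             ("shop", "plane", "pay_offline")})

joint = p_strategy & o_strategy
execution = max(joint, key=len)       # the maximal element of the intersection
print(execution)                      # ('shop', 'plane', 'pay_offline')
```

The maximal element of the intersection is the single play that both strategies agree on, which is exactly the execution that the playing operator is meant to deduce.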
in the game tree illustrated in fig .[ purchasinggametree2 ] of the extended purchasing example , there are three choice decisions .the first is made by the user agent ( or the user ) , and the second is made by the composite service , and the third is made by the air corporation . in this game , we model the user agent as player 1 , the composite service as player 2 and the air corporation as player 3 . a strategy of player 1is illustrated as fig .[ purchasingplayer1 ] shows . and a strategy of player 2 is as fig .[ purchasingplayer2 ] illustrates . and[ purchasingplayer3 ] shows a strategy of player 3 .we can see that the actual execution a game tree are acted together by all players . for a strategy ,a strategy and a strategy of a game tree , has the form according to the definition of strategy .we can get that the maximal element of exactly defines an execution of the game tree . for the strategy illustrated in fig .[ purchasingplayer1 ] , the strategy illustrated in fig .[ purchasingplayer2 ] , and the strategy illustrated in fig .[ purchasingplayer3 ] the maximal element of defines an execution of the process as illustrated in process graph fig .[ purchasing2 ] .this is shown in fig .[ execution2 ] . in extended purchasing example , in the view of the player 1, the process can be expressed by the following process term in gameacp . .so the subtrees corresponding to term , and are all strategies of the player 1 . in the view of the player 2, the process can be expressed by the following process term in gameacp . .so the subtrees corresponding to term and are all strategies of the player 2 . in the view of the player 3, the process can be expressed by the following process term in gameacp . .so the subtrees corresponding to term and are all strategies of the player 3 .if is a strategy of the player 1 , is a strategy of player 2 and is a strategy of player 3 as illustrated in section [ games ] , and if the gameacp process term corresponds to , the gameacp process term corresponds to and the gameacp process term corresponds to , then the process term exactly defines an execution of , and . for the strategy corresponding to the gameacp process term in fig.[purchasingplayer1 ] , the strategy corresponding to the gameacp process term in fig.[purchasingplayer2 ] , and the strategy corresponding to the gameacp process term in fig.[purchasingplayer3 ] , the maximal element of defines an execution of the process as illustrated in fig.[execution2 ] . order to describe game theory and external choice in acp , we do extension of acp with an opponent s alternative composition operator which is called gameacp . to model the playing process of games , an extension of gameacp with a playing operator and a deadlock constant is also made . and two sound and complete axiomatic system are designed . as a result of axiomatization, gameacp has several advantages , for example , it has both a proof theory and also a semantics model , it is without any assumptions and restrictions and it can be used in multi - person games naturally .gameacp can be used to reason about the behaviors of parallel and distributed systems with game theory supported . andalso , gameacp gives games an axiomatization interpretation naturally , this will help people to capture the nature of games . it must be explained that any computable process can be represented by a process term in acp ( exactly with guarded linear recursion ) . 
that is , acp may have the same expressive power as turing machine .although gameacp can not improve the expressive power of acp , it still provides an elegant and convenient way to model game theory in acp . as pointed out, the combination of computation tools and game theory , not only includes using computation tools to interpret games or introducing games into the computation tools , but also includes using game theory to give computation concepts an interpretation .the second one remains an open problem and is a future work we would do .lam94 j. c. c. mckinsey : _ introduction to the theory of games . _ dover publications , 2003 . j. c. m. baeten : _ a brief history of process algebra _ theor comput sci in process algebra , 2005 , 335(2 - 3 ) : 131146 .luca aceto and kim g. larsen and anna inglfsdttir : _ an introduction to milner s ccs ._ http://www.cs.auc.dk/ luca / sv / intro2ccs.pdf , 2004 . c. a. r. hoare : _ communicating sequential processes . _http://www.usingcsp.com , 1985 .w. fokkink : _ introduction to process algebra 2nd ed ._ springer - verlag , 2007 .g. d. plotkin : _ a structural approach to operational semantics ._ aarhus university , 1981 , tech .report daimifn-19 .s. abramsky and r. jagadeesan and p. malacaria : _ full abstraction for pcf ( extended abstract ) ._ proc theoretical aspects of computer software 1994 , 1994 : 115 .h. nickau : _ hereditarily sequential functionals ._ proc the symposium on logical foundations of computer science , 1994 . s. abramsky and g. mccusker : _ game semantics ._ computational logic 1999 , 1999 .s. abramsky and r. jagadeesan : _ games and full completeness for multiplicative linear logic . _ j. symbolic logic , 1994 , 59(2 ) : 543 - 574 .s. abramsky and c. mccusker : _ linearity , sharing , and state : a fully abstract game semantics for idealized algol with active expressions ._ electronic notes in theoretical computer science , 1996 , 3(2 ) : 214 .s. abramsky and k. honda and g. mccusker : _ fully abstract game semantics for general reference ._ proc ieee symposium on logic in computer science , 1998 .a. mellis : _ innocence in 2-dimensional games ._ http://www.pps.jussieu.fr//papers.html , 2002 .a. mellis : _ asynchronous game 1 : uniformity by group invariance ._ http://www.pps.jussieu.fr//papers.html , 2003 .a. mellis : _ asynchronous game 2 : the true concurrency of innocence ._ proc concur 2004 , 2004 .a. mellis : _ asynchronous game 3 : an innocent model of linear logic ._ electronic notes in theoretical computer science , 2005 .a. mellis : _ asynchronous game 4 : a fully complete model of propositional linear logic. _ proc 20th annual ieee symposium on logic in computer science , 2005 : 386395 .p. a. mellis and s. mimram : _ asynchronous games : innocence without alternation ._ proc concur 2007 , 2007 .s. abramsky : _sequentiality vs. concurrency in games and logic ._ mathematical structures in computer science , 2003 , 13(04 ) : 531565 .s. abramsky and p. a. mellis : _ concurrent games and full completeness . _proc the fourteenth annual ieee symposium on logic in computer science , 1999 .s. abramsky : _ algorithmic game semantics : a tutorial introduction ._ proc marktoberdorf , 2001 . s. abramsky : _ semantics of interaction ._ proc the 21st international colloquium on trees in algebra and programming , 1996 : 130 . s. abramsky and d. ghica and a. murawski : _ applying game semantics to compositional software modeling and verifications ._ proc tacas , 2004 : 421435 .a. dimovski and r. 
lazi : _ compositional software verification based on game semantics and process algebra .j. softw tools technol transfer , 2007 , 9 : 3751 .h. davulcu : _ a game logic for workflows of non - cooperative services ._ state university of new york , 2002 .k. chatzikokolakis and s. knight and c. palamidessi and p. panangaden : _ epistemic strategies and games on concurrent processes .acm trans .computational logic , 2012 , 13(4 ) : 140 .oasis : _ web services business process execution language version 2.0 ._ oasis , 2007 .j. c. m. baeten and j. a. bergstra and j. w. klop : _ on the consistency of koomen s fair abstraction rule ._ theoretical computer science , 1987 , 51(1/2 ) : 129176 .d.e . knuth and p.b .bendix . : _simple word problems in universal algebras ._ computational problems in abstract algebra , pergamon press , 1970 , 263297 .
Using formal tools from computer science to describe games is an interesting problem. We give games, specifically two-person games, an axiomatic foundation based on the process algebra ACP (Algebra of Communicating Processes). A fresh operator called the opponent's alternative composition operator (OA) is introduced into ACP to describe game trees and game strategies; the resulting calculus is called GameACP, and a sound and complete axiomatic system for it is established. To model the outcomes of games (the co-action of the player and the opponent), and correspondingly the execution of GameACP processes, another operator called the playing operator (PO) is added to GameACP. We also establish a sound and complete axiomatic system for PO. Finally, we give a correctness theorem relating the outcomes of games to the deductions of GameACP processes. Keywords: games; process algebra; algebra of communicating processes; axiomatization.
emerging various widespread human diseases speeds up the growing rate of genomics .accordingly , analysis of deoxyribonucleic acid ( dna ) sequences , as a medium storing by far more important information about properties of an organism , has intrigued many researchers to extract significant knowledge about life sciences . as a common event in evolution process, mutation would modify dna data sequences comprising of a finite number of basic elements known as nucleotides , i.e. , adenine ( a ) , cytosine ( c ) , guanine ( g ) , and thymine ( t ) , which are independent of each other .since each sequence data conceals in a collection of one - dimensional ( 1d ) strings forming a genome , the role of string data alignment or pattern matching against a sequence of genomes is much more critical for comparison and interpretation of dna - based structures . due to rapidly evolving dna - sequencing , investigating through highly extensive dna databases to identify occurrence of exchange , deletion , and insertion of specific data , find target dna strings or newly genes and classify species is becoming a costly and challenging problem for researchers .all recent sequencing technologies , including roche/454 , illumina , solid and helicos , are able to produce data of the order of giga base - pairs ( gbp ) per machine day . however , with the emergence of such enormous quantities of data , even the fast digital electronic devices are not effective enough to align capillary reads .actually , today s electronics technology would not permit us to achieve high rate of analysis in sequence matching and information processing due to the time consuming nature of serial processing . to keep pace with the throughput of sequencing technologies ,many new alignment algorithms have been developed , but demands for faster alignment approaches still exist .as a result , the necessity of finding a novel implementation to provide high performance computational systems is undeniable .high data throughput , inherent parallelism , broad bandwidth and less - precise adjustment of optical computing provide highly efficient devices which can process information with high speed and low energy consumption .it is worth mentioning that visible light in optical computing systems realizes information visualization for human operators to more effectively carry out genome analysis . employing a powerful technique to encode dna information into an optical image besides optical computing capabilities would definitely guarantee to efficaciouslyperform genomes analysis .while recent implementations were static and relying on printed transparent sheets , herein , we theoretically and experimentally present dynamic string data alignment based on a spatially coded moir technique implemented on spatial light modulators ( slms ) which enables one to investigate useful hiding information in genomes .the remaining of the paper is organized as follows . in sectionii , the principle of string data matching using the spatially coded technique is explained . in sectioniii , bar and circular patterns as an effective scheme for string data alignment will be discussed . 
in section iv, the experimental optical architecture and obtained results will be appeared to verify practical feasibility of our proposed pattern , and section v concludes the paper .in this section , the principles of string alignment by moir technique are outlined .consider two data sequences .the goal of string alignment is evaluation of similarities and differences between them .in particular , we are interested in distinguishing insertion and deletion of elements in any strings with respect to each other .moir technique applies high speed parallel processing of light to perform string alignment . in this approach ,four components of strings , namely are encoded as , respectively .based on this coding , the strings are spatially coded into images where each component corresponds to four narrow stripes with one bright stripe as and three dark stripes as ( see fig .[ graphical ] ) .the coded images are then overlapped with a small relative angle , and by using this technique , correlating segments of the second string in various shifts of the first one can evidently be distinguished .the subsequent matched elements will be appeared as a bright line in the observed pattern of overlapped images . ,( b ) subsequent shifts of initial string , ( c ) output pattern by overlapping ( a ) and ( b ) , ( d ) output pattern by overlapping and ( b).,width=259 ] as an example , consider two strings s1 of length 40 and s2 of length 20 . now , we want to search for s2 in s1 .[ fig1](a ) shows with respect to the codes appeared in fig .[ graphical ] , and each row in fig .[ fig1](b ) shows subsequent shifts of initial string ; for example first row shows s1(1:20 ) , second row shows s1(2:21 ) , up to the last row .overlapping fig .[ fig1](a ) ans ( b ) results in the pattern shown in fig .[ fig1](c ) ; the bright line in the fourth row illustrates that a correlation has happened for a shift of 6 , i.e. , s2 and s1(6:25 ) are matched . .corresponding codes for polarized spatial patterns in figs .[ type1 ] and [ type2 ] . [ cols="^,^,^,^,^",options="header " , ] a two - dimensional array of photodetectors can be employed to capture the output pattern ; then digital processing would be done by a host computer to extract precise matching .alternatively , analyzing the output pattern can be realized by visual inspection or using a ccd camera . in order to verify our proposed method, we firstly show bar strings alignment between two dna - simulated sequences ; then the circular one will be demonstrated as our proposed new encoded pattern .one - dimensional strings to be aligned are illustrated in figs .[ exp1 ] and [ exp2 ] in which , , and were introduced earlier . to morestraightforwardly realize string alignment , a cylindrical lens could be employed between the third polarizer and the output display .it is well known that such a lens transforms plane wave to an ultra - thin line . as a result ,each horizontal bright line in the output pattern right behind the lens is mapped to a luminous point on the display which enables us to use a simple one - dimensional array of photodetectors to detect the occurrence of exact matching and the number of deleted or inserted elements .[ cylindrical](a ) and [ cylindrical](b ) respectively illustrate the transformed versions of the output patterns in figs . 
[ exp1](f ) and [ exp2](f ) at the focused plane of the cylindrical lens .additionally , simulated and experimental results for circular patterns presented in fig .[ circular ] are in good agreement .in conclusion , a simple and practical method based on spatially coded moir matching technique has been proposed for string alignment processing .easy interpretation and inherent parallelism with almost real - time processing are the main specifications of our approach which is compatible with digital devices . the processing gain and snr of the proposed patterns ,i.e. , bar and circular patterns , have numerically been calculated to show the effectiveness of our method . moreover , a preprocessing stage which remarkably decreases post - processing time needed for interpretation of output pattern has been introduced .the capability of our proposed method in dna sequence matching has been shown via simulation .finally , experimental results verify the performance of the method in genomics processing applications based on optical computing .j. m. kinser , `` mining dna data in an efficient 2d optical architecture , '' in _ 2000 international topical meeting on optics in computing ( oc2000)_.1em plus 0.5em minus 0.4eminternational society for optics and photonics , 2000 , pp .104110 . a. c. rajan , m. r. rezapour , j. yun , y. cho , w. j. cho , s. k. min , g. lee , and k. s. kim , `` two dimensional molecular electronics spectroscopy for molecular fingerprinting , dna sequencing , and cancerous dna recognition , '' _ acs nano _ , vol . 8 , no . 2 , pp .18271833 , 2014 .j. eid , a. fehr , j. gray , k. luong , j. lyle , g. otto , p. peluso , d. rank , p. baybayan , b. bettman _et al . _ ,`` real - time dna sequencing from single polymerase molecules , '' _ science _5910 , pp . 133138 , 2009 .j. m. rothberg , w. hinz , t. m. rearick , j. schultz , w. mileski , m. davey , j. h. leamon , k. johnson , m. j. milgrew , m. edwards _et al . _ , `` an integrated semiconductor device enabling non - optical genome sequencing , '' _ nature _ , vol .7356 , pp . 348352 , 2011 .j. tanida and k. nitta , `` string data matching based on a moir technique using 1d spatial coded patterns , '' in _ 2000 international topical meeting on optics in computing ( oc2000)_.1em plus 0.5em minus 0.4eminternational society for optics and photonics , 2000 , pp .1623 .j. tanida , k. nitta , and a. yahata , `` spatially coded moire matching technique for genome information visualization , '' in _ photonics asia 2002_. 1em plus 0.5em minus 0.4eminternational society for optics and photonics , 2002 , pp .k. niita , h. togo , a. yahata , and j. tanida , `` genome information analysis using spatial coded moire technique , '' in _ lasers and electro - optics , 2001 .cleo / pacific rim 2001 .the 4th pacific rim conference on _ , vol .2.1em plus 0.5em minus 0.4emieee , 2001 , pp .
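the shift-and-overlap matching described above can also be mimicked numerically. the following python sketch is not the optical implementation: the stripe codes, string lengths and scoring rule are illustrative assumptions, and the bright-line detection is replaced by counting how many bright stripes of the query survive the overlap at each shift.

```python
import numpy as np

# illustrative stripe codes: one bright stripe out of four per nucleotide; the exact
# stripe assignment used in the paper is not reproduced here, any one-hot choice works.
CODES = {"a": (1, 0, 0, 0), "c": (0, 1, 0, 0), "g": (0, 0, 1, 0), "t": (0, 0, 0, 1)}

def encode(seq):
    """spatially code a dna string as a 1d row of bright (1) / dark (0) stripes."""
    return np.array([bit for base in seq for bit in CODES[base]])

def moire_match(s1, s2):
    """overlap the coded query s2 with every shift of s1 and count, per shift, how many
    bright stripes of the query survive; a shift where all of them survive corresponds
    to the bright line observed in the overlapped pattern."""
    row2 = encode(s2)
    scores = []
    for k in range(len(s1) - len(s2) + 1):
        scores.append(int(np.sum(encode(s1[k:k + len(s2)]) & row2)))
    return np.array(scores)

rng = np.random.default_rng(0)
s1 = "".join(rng.choice(list("acgt"), size=40))
s2 = s1[5:25]            # plant the query at 0-based shift 5, i.e. s1(6:25) in 1-based indexing
scores = moire_match(s1, s2)
print("best shift:", int(scores.argmax()), "score:", int(scores.max()), "of", len(s2))
```

the per-shift score array plays roughly the role of the 1d column of pixels obtained optically with the cylindrical lens: each entry summarizes one row of the overlapped 2d pattern.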
in this paper, we present an optical computing method for string data alignment applicable to genome information analysis. by applying the moiré technique to spatially encoded patterns of deoxyribonucleic acid (dna) sequences, association information between the genome and the expressed phenotypes can be extracted more effectively. the moiré fringes reveal occurrences of matching, deletion and insertion between dna sequences, providing useful visual information for the prediction of gene function and the classification of species. furthermore, by adding a cylindrical lens, a new technique is proposed to map the two-dimensional (2d) association information to a one-dimensional (1d) column of pixels, where each pixel in the column represents the superposition of all bright and dark pixels in the corresponding row. with this preprocessing stage, local similarities between the two patterns can readily be found using just a 1d array of photodetectors, and post-processing can be restricted to the specified parts of the initial 2d pattern. we also evaluate a proposed circular encoding adapted to poor alignment conditions. our simulation results, together with an experimental implementation, verify the effectiveness of the proposed dynamic methods, which significantly improve system parameters such as processing gain and signal-to-noise ratio (snr). keywords: string data alignment, moiré pattern, dna sequencing, spatial light modulator.
the main aim of this paper is to let agents solve tasks by ultimately avoiding aversive signals forever .this approach entails an interesting and perhaps quite strong guarantee on the agent performance .the motivation is partly to understand how animals are successful in solving problems , like navigation , with limited sensory information and unpredictable effects in the environment .the animal should find food or return home before it gets lost or becomes exhausted .we study a general framework in which agents need to avoid problems in tasks .if the agent encounters a problem , an aversive signal is received . this way the agent could learn to avoid the problem , by avoiding the usage of actions and action - sequences that lead to aversive signals .the general idea is sketched in figure [ fig : aversive ] .before we discuss our approach , we first briefly discuss two important ingredients of the framework , namely , partial information and non - determinism . [ [ partial - information - and - non - determinism ] ] partial information and non - determinism + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + first , we assume that the agent is given only partial information , as follows : each encountered task state is projected to a set of features .this is a propositional representation , where each feature is a true / false question posed about the state .the number of features determines the granularity by which states can be perceived by agents .therefore , the behavior of the agent will be based on feature - action associations , and not on ( direct ) state - action associations .each application can choose its own features and its own way of computing them .examples of features are : detected edges in images , impulses through sensors , temporal events over streams , and - or combinations thereof , etc . in this paper , we assume that tasks have only a finite number of features , although there could still be many features .perhaps not surprisingly , theoretical investigations show how hard it is to solve tasks under partial information , see e.g. .. ] second , we allow tasks to be non - deterministic .this means that the effect of some action - applications to states can not be predicted . in this paper, we assume that non - determinism is an inherent property of tasks .although partial information also limits the reasoning within the agent , and therefore generally prevents accurate predictions , it remains a separate assumption to allow tasks themselves to be non - deterministic .for example , one may consider tasks in which features actually provide complete information , and where the agent could still struggle with non - determinism .[ [ strategies ] ] strategies + + + + + + + + + + the focus of this paper to understand agents based on their behavior in tasks , which could be a useful way to understand intelligence in general . 
as remarked earlier , in this paper , agent behavior will be based on feature - action associations .conceptually , we may think of the agent as having a set of possibly allowed feature - action pairs , and whenever the agent encounters a task state , the agent ( thinks it ) is allowed to perform all actions for which there is a feature observed in state such that .we also refer to as a policy .we say that a set of feature - action pairs constitutes a _ strategy _ for a start state if will never lead to an aversive signal when starting from that start state .we note that it is not always sufficient for the states near the aversive signals to steer away from them , because sometimes the agent may get trapped in a zone of the state space that does not immediately give aversive signals but from which it is impossible to reliably escape the aversive signals .the agent should avoid such zones , which could require that the agent anticipates aversive signals from very early on . our aim in this paper is to reason about the existence of such successful strategies for classes of tasks , and to discuss an algorithm to find such strategies automatically .a main challenge throughout this study is posed by the compression of state information into features and the uncontrollable outcomes due to non - determinism . [[ reward - based - value - estimation - seems - unsuitable ] ] reward - based value estimation seems unsuitable + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + before presenting more details of our algorithm , we first argue that algorithms based on ( numerical ) reward - based value estimation do not always appear suitable for reliably finding problem - avoiding strategies . on the theory side ,convergence proofs of value estimation algorithms often require the learning step - size to decrease over time , see e.g. .intuitively , convergence of the estimated values arises because the decreasing learning step - size makes it harder and harder for the agent to learn as time progresses .however , we would like to avoid putting such limits on the agent , because : ( 1 ) it is useful to also study more flexible agents because they might sometimes better describe real - world agents ; ( 2 ) in practice it might be difficult to estimate in what exact way the learning step - size should decrease ; and , ( 3 ) also in practice , there are no guarantees on what the estimated values will eventually be after a certain amount of time has passed , because the estimates depend strongly on random fluctuations during task performance ( due to non - determinism ) . in practice , a non - decreasing step - size , although potentially useful to model flexible agents that keep learning from their latest experiences , can lead to problems of its own .we illustrate this with the example task shown in figure [ fig : reward - task ] .there is a start state , and two actions and that lead back to state .we assume complete information for now , i.e. , state is presented completely to the agent as a feature with the same information , namely , the identity of state .suppose that the state - action pair always gives reward .but for the pair , the reward could be either or .although the pair is clearly preferable over the pair in case of positive reward , there is the risk of incurring a strong negative reward .the negative reward represents an aversive signal . 
in the perspective of strategies from above , note that constitutes a strategy : constantly executing action in state leads to an avoidance of aversive signals forever ..45 [ fig : reward - task ] .45 [ fig : aversive - task ] but the agent will not necessarily learn to avoid when a hidden task mechanism could periodically deceive the agent by issuing higher rewards under action . concretely , let be a strictly positive natural number . to represent the outcome of action , suppose that we constantly give reward during the first times is applied ; the next times we give ; the following times we again give , and so on .we call this the -swap semantics . for each outcome, the empirical probability would be : indeed , the observed frequency of each outcome converges to as we perform more applications of action .we can choose arbitrarily large ; this does not change the empirical probability of each outcome . without the restriction on learning step - size, it seems that value estimation algorithms can get into trouble on the above setting because we can set so large that after a while the agent starts to believe that the outcome would remain fixed .for example , we could start with reward for the pair during the first applications , and the agent starts believing that the reward really is . then come the next applications , where we repeatedly give reward , and the agent starts believing that the reward really is .we can swap the two outcomes forever , each for a period of applications , and the agent will never make up its mind about the behavior of action in state . this effect is illustrated in figure [ fig : reward - pattern ] ..45 , by alternating actions and , but with constant step - size ( and discounting factor ) .the resulting value estimates for the state - action pairs and are plotted against time .the outcome of was either or , as determined by the -swap semantics where ; this semantics is relative to the applications of , and not relative to the global time steps.,title="fig:",scaledwidth=100.0% ] .45 , by alternating actions and , but with constant step - size ( and discounting factor ) .the resulting value estimates for the state - action pairs and are plotted against time .the outcome of was either or , as determined by the -swap semantics where ; this semantics is relative to the applications of , and not relative to the global time steps.,title="fig:",scaledwidth=100.0% ] although the above example is very simple , real - world tasks could still exhibit problems similar to the -swap semantics . even if such problems are identified and understood , perhaps there are no good solutions for them as the problems might be outside the range of control for the agent . in this paperwe would like to learn to avoid the aversive signals forever , even under quite adversary semantics of tasks like the -swap semantics .[ [ avoidance - learning ] ] avoidance learning + + + + + + + + + + + + + + + + + + in the example of figure [ fig : reward - task ] , we would like the agent to make up its mind more quickly that action leads to aversive signals .an idea is to let the agent ( monotonically ) increase its estimate of the value of a feature - action pair .we should immediately observe , however , that this idea will not work when feedback remains to be modeled as reward , as in the example : once the outcome of is observed to be ; then remembering would lead to a preference of over , causing a reward - seeking agent to ( accidentally ) encounter negative rewards , i.e. 
, aversive signals , indefinitely under -swap semantics .fortunately , the idea of increasing estimates seems to work when feedback is modeled with aversive signals , even in face of non - determinism .indeed , has previously proposed a learning algorithm in tasks where actions have numeric costs , representing aversive signals . by repeatedly remembering the highest observed cost for a state - action pair ( with the -operator ) , and by choosing actionsto minimize such costs , the agent learns to steer away from high costs .we would like to further elaborate this idea and how it relates to the notion of aversion - avoiding strategies mentioned above . in our framework , we only explicitly model aversive signals , as boolean flags : the flag `` true '' would mean that an aversive signal is present .this leads to a framework that is conceptually neat and computationally efficient .because a policy is either successful in avoiding aversive signals forever , or it is not , the choice of a boolean model aligns well with our motivation to study the relationship between learning and successful strategies . to illustrate , the example of figure [ fig : reward - task ] would be represented by figure [ fig : aversive - task ] , where only the aversive signal is explicitly represented .in general , the boolean flags will act like borders , to demarcate undesirable areas in the state space .reward is now only implicit : by using a strategy , as mentioned earlier , the agent can stay away from the aversive signals forever . in the above setting with explicit aversive signals , we describe an avoidance learning algorithm , called a - learning , in which the agent repeatedly flags feature - action pairs that lead to aversive signals , or , as an effect thereof , to states for which all proposed actions are flagged ( based on the observed features ) .intuitively , the flags indicate `` danger '' . on the example of figure [ fig : aversive - task ] , a - learning flags at the first occurrence of an aversive signal under action ; and , importantly , the strategy is never flagged . ) , the flagged feature - action pairs are removed from the agent s memory . ]there is no second chance for changing the agents mind .this gives one of the strongest convergence notions in learning , namely , fixpoint convergence , where the agent eventually stops changing its mind about the outcome of actions .if there really is a strategy , avoidance learning will carve out a subset of good feature - action pairs from the mass of all feature - action pairs .this way , it seems that avoidance learning could be useful in making the agent eventually avoid aversive signals forever .this provides the guaranteed agent performance we would like to better understand , as remarked at the beginning of the introduction . [[ meaning - of - optimality ] ] meaning of optimality + + + + + + + + + + + + + + + + + + + + + in this paper we view an agent as being optimal if it can ( learn to ) avoid aversive signals forever .there is no explicit concept of reward .depending on the setting , or application , aversive signals can originate from diverse sources and together they can describe a very detailed image of what the agent is allowed to do , and what the agent is not allowed to do .one obtains a rich conceptual setting for reasoning about agent performance . 
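returning to the k-swap example of figures [fig:reward-task] and [fig:reward-pattern], the sketch below contrasts a constant-step-size running value estimate with the flag-based avoidance rule just described. all concrete numbers (the rewards of the two actions, k, the step size and the horizon) are placeholder assumptions, since the values used in the figures are not reproduced here, and the estimate is a plain running average without bootstrapping, which is enough to show the oscillation.

```python
# illustrative numbers only: the rewards, k, the step size and the horizon used in
# the paper's figures are not reproduced here.
R_A = 1.0                          # assumed constant reward of action a
R_B_GOOD, R_B_BAD = 2.0, -10.0     # assumed outcomes of action b (the bad one is the aversive signal)
K, ALPHA, STEPS = 500, 0.1, 20000  # k-swap period, constant step size, number of global steps

q = {"a": 0.0, "b": 0.0}           # constant-step-size running estimates (no bootstrapping)
flagged = set()                    # avoidance learning: pairs flagged after one aversive outcome
b_count = 0

for t in range(STEPS):
    action = "a" if t % 2 == 0 else "b"        # alternate the two actions, as in the figure
    if action == "a":
        r = R_A
    else:
        r = R_B_GOOD if (b_count // K) % 2 == 0 else R_B_BAD   # k-swap schedule for b
        b_count += 1
        if r < 0:
            flagged.add("b")                   # flagged once, never reconsidered
    q[action] += ALPHA * (r - q[action])

print(q)          # q["b"] keeps drifting back and forth between roughly R_B_GOOD and R_B_BAD
print(flagged)    # the flag-based agent excluded action b after its first aversive outcome
```

however large k is chosen, the empirical frequency of each outcome of b still tends to one half, while the running estimate never settles; the boolean flag, by contrast, reaches its fixpoint after the first bad outcome.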
for example , suppose a robotic agent should learn to move boxes in a storehouse as fast as possible .we could emit an aversive signal when the robot goes beyond a ( reasonable ) time limit .any other constraints , perhaps regarding battery usage , can be combined with the first constraint by adding more signals .[ [ outline ] ] outline + + + + + + + this paper is organized as follows .we discuss related work in section [ sec : relwork ] .we introduce fundamental concepts like tasks , and strategies , in section [ sec : fund ] .we present and analyze our avoidance learning algorithm in section [ sec : alg ] .one of our results is that if there is a strategy for a start state then the algorithm will preserve the strategy .this mechanism can be used to materialize strategies if they exist . to better understand the nature of strategies , we prove the existence of strategies for a family of grid navigation tasks in section [ sec : grid ] .the idea of avoiding aversive signals , or problems in general , is related to safe reinforcement learning . there, the goal is essentially to perform reinforcement learning , often based on approximation techniques for optimizing numerical reward , with the addition of avoiding certain problematic areas in the task state space .an example could be to train a robot for navigation tasks but while avoiding damage to the robot as much as possible . in the current paper ,feedback to the agent consists of the aversive signals .reward becomes more implicit , as it lies in the avoidance of aversive signals .therefore , the viewpoint in this paper is that the agent is called optimal when it eventually succeeds in avoiding all aversive signals forever ; there is no notion of optimizing reward .the approach is related to a trend identified by , namely , the modification of the optimality criterion . the work by is closely related to our work .the framework by provides feedback to the agent in the form of numerical cost signals , which , from the perspective of this paper , could be seen as aversive signals . similar to our -swapping example in the introduction ( figure [ fig : reward - task ] ), provides other examples to motivate that estimation of expected values is not suitable for reliably deciding actions . the learning algorithm proposed by maps each state - action pair to the worst outcome ( or cost ) , by means of the -operator . by remembering the highest incurred cost for a state - action pair, the agent in some sense learns about `` walls '' in the state space that constrain its actions towards lower costs .the avoidance learning algorithm discussed in this paper ( section [ sec : alg ] ) is similar in spirit to the one by .a deviation , however , is that we assume here a boolean interpretation of aversive signals , which leads to a neat and computationally efficient framework .we additionally identify the concept of strategies , under which the agent can avoid aversive signals forever . our interest lies in understanding such avoidance strategies and their relationship to the avoidance learning algorithm .moreover , we also focus on partial information , by letting the agent only observe features instead of full states .for a set , let denote the powerset of , i.e. , the set of all subsets of . a task is a tuple where * is a nonempty set of states ; * is a finite subset of start states ; stands for `` begin '' . 
] * is a nonempty finite set of actions ; * is a nonempty finite set of features ; * is the transition function ; * is the feature function ; and * is the set of aversive signals , where all states are reachable in the sense that there is a sequence with , , and for each . the function maps each pair to a set of possible successor states , representing non - determinism .the function associates a set of features to each state ; an agent interacting with the task can only observe states through features and can therefore not directly observe states .the meaning of a pair is that the agent could witness an aversive signal when performing action in state . in state , then the agent witnesses an aversive signal infinitely often during the application of at state , but this signal could sometimes be omitted .see also section [ sub : fairness ] . ][ ex : task - two - states ] we define an example task as follows : ; ; ; ; regarding , we define regarding , we define and , we define . the task is depicted in figure [ fig : task - two - states ] . .the basic graphical notation is explained in figure [ fig : intro - task ] . inside the circles , we write the state identifier followed by a semicolon and the features of the state.,scaledwidth=30.0% ] note that the function maps each state to a set of features .similarly , the function maps each state - action pair to a set of successor states .however , an agent interacts with each function in a different way , as follows . for a state , we assume that an agent can always observe all features in simultaneously .this way , the function may be viewed as being deterministic . in contrast , for a state - action pair , we select only one successor state from to proceed with the task .the function remains deterministic throughout this paper .the framework still allows us to consider tasks in which the agent can sometimes observe a certain feature and sometimes it can not .thereto we can define richer states , in which , say , the status of sensors is stored ; if a state says that a sensor is broken , then could omit the feature that would otherwise be generated by the sensor . definition of task resembles that of a standard markov decision process , but we have added features and aversive signals. there can be many features , actions , and start states .and we allow an infinite number of states . since the agent may only see features , and not states directly , agent behavior has to be based on feature - action associations .let be a task .policy _ for is a total function .we allow features to be mapped to empty sets of actions .if the task is understood from the context , for a state we define i.e. , is the set of all actions that are proposed by the policy based on the features in .we say that a state is _ blocked in _ if , i.e. , the policy does not propose actions for . 
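to make the definitions above concrete, here is a minimal python transcription of a task and of a policy. the field and function names are ours, not the paper's, and the (possibly infinite) set of states is left implicit; nothing here is meant to fix a particular representation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, Set, Tuple

State = Hashable
Action = Hashable
Feature = Hashable

@dataclass
class Task:
    # a direct transcription of the task tuple; the state set is left implicit.
    start_states: Set[State]
    actions: Set[Action]
    features: Set[Feature]
    trans: Callable[[State, Action], Set[State]]    # possible successor states of (state, action)
    feats: Callable[[State], Set[Feature]]          # features observed in a state
    aversive: Set[Tuple[State, Action]]             # pairs that may emit an aversive signal

Policy = Dict[Feature, Set[Action]]                 # each feature proposes its own set of actions

def proposed_actions(task: Task, policy: Policy, state: State) -> Set[Action]:
    """union of the actions proposed by the features observed in `state`."""
    acts: Set[Action] = set()
    for feature in task.feats(state):
        acts |= policy.get(feature, set())
    return acts

def is_blocked(task: Task, policy: Policy, state: State) -> bool:
    """a state is blocked when the policy proposes no action for it."""
    return not proposed_actions(task, policy, state)
```

keeping the policy as a feature-indexed dictionary mirrors the view, discussed next, of features as little actors that each propose their own actions.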
for a state , we do not view as an atomic signature to which actions should be associated .instead , the definition of indicates that each feature in may independently propose its own actions , regardless of what is proposed by other features .all proposed actions are collected into a set , by means of the union - operator .therefore , features are little actors that become active at appropriate times and that suggest to the agent what actions are ( supposedly ) allowed .this viewpoint resembles the way that an individual neuron ( or a small group of neurons ) in the brain could represent a distinct concept and could be individually linked to actions .it is the goal of the learning algorithm ( section [ sec : alg ] ) to remove feature - action associations that lead to aversive signals or , as a result of such removals , to blocked states .we now consider the following definition : [ def : strategy ] a policy is called a _strategy _ for a start state if 1 .[ enu : strategy - start ] ; 2 .[ enu : strategy - followup ] , , 1 .[ enu : strategy - successor ] we have ; and , 2 .[ enu : strategy - avs ] . in words : a policy is a strategy for a start state if the policy acts upon ; and , for any states upon which the policy acts , the reached successor states can also be acted upon , and the policy never causes aversive signals .intuitively , to use a strategy , for each encountered state we first select some ( arbritary ) feature that satisfies , and we subsequently select an arbitrary action .the definition of strategy demands properties in a global fashion , possibly also for states that would not be explored when strictly following the strategy .this condition however ensures that learning algorithms can never have negative experiences when they perform actions suggested by the strategy ; see section [ sec : alg ] .suppose is a strategy , and let be a feature with .intuitively , the definition of strategy says that is a reliable feature , in the sense that every time we see it , we may safely perform all actions in , without the risk of encountering blocked states and aversive signals .this is related to the markov assumption , because we do not have to remember any features that were seen during previous time steps , and we may instead choose actions based on just by itself . [ ex : partial - strategy ] consider the task from example [ ex : task - two - states ] .there is no strategy for start state , but there is a strategy for start state defined as : and . the following property illustrates that strategies are resilient to adding new features . in practical applications, this means that the addition of new kinds of features will not destroy previously existing strategies .[ result : features ] let be a task .let be a set of features that is disjoint from .let be another task that is almost the same as except that and for each state the constraint holds. uses the features of in the same way as .] let be a start state , and suppose that a policy is a strategy for in .then is also a strategy for in .we show that the conditions of strategy in definition [ def : strategy ] are satisfied for in . 
to better show which task is involved , for a state and a task index , we write when is used in , we assume for each .above we have also assumed that .[ [ conditionenustrategy - start ] ] condition [ enu : strategy - start ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + since is a strategy for in , we have .this implies there is some with .since by assumption , we obtain .[ [ conditionenustrategy - followup ] ] condition [ enu : strategy - followup ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let be a state and assume there is some action .first we argue that . there must be a feature with .but since only knows features in , we have .hence , .we first handle condition [ enu : strategy - successor ] .let . because and satisfies condition [ enu : strategy - successor ] in , we know .so there is a feature with . since , we know , as desirednow we handle condition [ enu : strategy - avs ] . because and satisfies condition [ enu : strategy - avs ] in , we know . since , we obtain , as desired .we present and study an avoidance learning algorithm , and its relationship to the concept of strategy introduced in section [ sub : strategy ] .algorithm [ alg : global ] is an avoidance learning algorithm .the algorithm describes how the agent interacts with the task , and how feature - action combinations are forgotten as the direct or indirect result of aversive signals .some aspects of the interaction are not under control of the agent , in particular how a successor state is chosen by means of function , and how features are derived from states by means of function .we now provide more discussion of the algorithm .henceforth , we will refer to algorithm [ alg : global ] as a - learning . [ line : init - mem ] : = choose from [ line : init - state ] the essential product of a - learning is a set that represents the allowed feature - action pairs ; the symbol stands for `` possibilities '' . at any time , the set uniquely defines a policy as follows : for each , we define . regarding notation , for any state , we write to denote the set of proposed actions , where is the unique policy defined by .we now explain the steps of a - learning in more detail .* line [ line : init - mem ] initializes with all feature - action pairs .we will gradually remove pairs if they lead to or to blocked states ( that are created by removals of the first kind ) .* line [ line : init - state ] selects a random start state .the control flow of the algorithm is redirected here each time we want to restart the task .but we never re - initialize .+ task restarts may be requested by a - learning itself ( see below ) , or externally by the training framework in which a - learning is running .* line [ line : start - fail ] requests a task restart in case the chosen start state is blocked .this allows more exploration from the other start states .as we will see later in theorem [ theo : learn]([enu : theo - learn - preserve ] ) , if no actions remain for a start state then this start state has no strategy .* at line [ line : loop ] , the algorithm enters a learning loop . the loop is only exited to satisfy task restart requests , at line [ line : desired - restart ] .* at line [ line : action ] , we choose an action to apply to current state based on the set of still allowed actions . 
at line [ line : succ - state ] , we are subsequently given a successor state , chosen arbitrarily from .* next , at line [ line : feedback ] , we check whether we have encountered or if successor state is blocked . in either casewe exclude from the feature - action pairs that caused us to apply action in state ( line [ line : exclude ] ) , and we restart the task ( line [ line : fail ] ) . * if we do not encounter and state is not blocked , then we proceed with the while loop ( line [ line : continue ] ). note that in general there are multiple runs of a - learning on a task , because of the choice on action selection and the choice on successor state .each run of a - learning is infinitely long .nonetheless , there is always an eventual fixpoint on the set because after the initialization we only remove feature - action pairs .there are only a finite number of possible feature - action pairs , although there could be many .when the run is clear from the context , we write to denote the fixpoint of obtained in that run . for conceptual convenience, we can divide each run of a - learning into trials by using the task restarts as dividers : whenever we execute line [ line : init - state ] , the previous trial ends and the next trial begins .each trial is thus a sequence , where is a start state , is the last state of the trial , and for each . ) to divide runs into trials , and not the encounter of start states .this means that in principle we allow for some or all . ]there is no stopping condition in the algorithm because in general we may not be able to detect when the agent has explored the task sufficiently to be successful at avoiding aversive signals . [ remark : greedy ] we would like to emphasize that a - learning is always greedy in avoiding .this is an important deviation from the -greedy exploration principle , where at each time step the agent chooses a random action with small probability $ ] .we do not use that mechanism here because otherwise the agent keeps running the risk of encountering aversive signals . the reason for requesting a task restart at line [ line : fail ] is that sometimes the agent could become stuck in a zone of the state space where there are only blocked states or aversive signals . in that case , if we want the agent to start removing feature - action pairs to prevent future aversive signals , we should first transport the agent to a zone in the state space without blocked states and aversive signals .for example , in a robot navigation problem , the robot could learn to avoid pits , but once it enters a pit it can perhaps not reliably escape without the help of an external supervisor . [ remark : memory ] algorithm [ alg : global ] explicitly stores the allowed feature - action pairs in a set .this is an intuitive perspective for the theory developed in this paper .however , in practice it may sometimes be more efficient to store the opposite information , namely , the removed feature - action pairs . this wayall allowed feature - action pairs can still be uniquely recovered . using the analogy of a planar map , where aversive signals are borders between neutral zones on the one hand and undesirable zones on the other hand, there could be a decreased memory usage in storing only the border ( i.e. , the removed feature - action pairs ) if the borders are simple shapes instead of irregular shapes with many protrusions . 
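the loop of algorithm [alg:global] can be sketched as follows, reusing the task and policy helpers from the earlier sketch. this is an approximation, not the algorithm verbatim: the infinite run is cut into a finite number of externally restarted trials of bounded length, the aversive signal is modeled by a caller-supplied predicate, and start states, actions and successors are chosen uniformly at random.

```python
import random
from itertools import product

def a_learning(task: Task, aversive_signal, n_trials: int = 1000, max_steps: int = 100) -> Policy:
    """sketch of the avoidance learning loop; `aversive_signal(state, action)` is a
    caller-supplied predicate that should return True only for aversive pairs."""
    P = set(product(task.features, task.actions))    # all feature-action pairs start out allowed

    def current_policy() -> Policy:
        pi: Policy = {f: set() for f in task.features}
        for f, a in P:
            pi[f].add(a)
        return pi

    for _ in range(n_trials):                        # finitely many externally restarted trials
        state = random.choice(sorted(task.start_states, key=str))
        for _ in range(max_steps):
            pi = current_policy()
            allowed = proposed_actions(task, pi, state)
            if not allowed:                          # blocked state: request a restart
                break
            action = random.choice(sorted(allowed, key=str))
            successor = random.choice(sorted(task.trans(state, action), key=str))
            if aversive_signal(state, action) or is_blocked(task, pi, successor):
                # forget the feature-action pairs that caused this application of `action`
                P -= {(f, action) for f in task.feats(state)}
                break                                # internal restart request
            state = successor
    return current_policy()
```

note that the set of allowed pairs is never re-initialized across restarts; the uniform random choices are one simple way to realize the fairness assumptions discussed further below.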
the following theorem helps to understand what a - learning computes .[ theo : learn ] for all tasks , for each , for each run of a - learning , where denotes the fixpoint , 1 .[ enu : theo - learn - preserve ] if there is a strategy for then .[ enu : theo - learn - discover ] if then every trial for after the fixpoint avoids blocked states and .we consider the two properties separately .[ [ propertyenutheo - learn - preserve ] ] property [ enu : theo - learn - preserve ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + suppose there is a strategy for .we show that the feature - action pairs of are preserved in , so that would imply . towards a contradiction ,suppose that a - learning removes a pair from where ; let be the first such pair that is removed .the removal has happened as follows : we reach a state with and we perform , and either the successor state is blocked or we receive an aversive signal .we discuss each case in turn .let denote the remaining feature - action pairs just before we remove .note that and together imply . *suppose that is blocked .since is a strategy , by condition [ enu : strategy - successor ] of definition [ def : strategy ] , we have assumed .so , there is a feature and an action .since is the first pair of that is removed , we still have .but then , and is actually not blocked ; we have found a contradiction .* suppose that an aversive signal was received when applying to , which implies .this immediately contradicts the assumption that satisfies condition [ enu : strategy - avs ] of definition [ def : strategy ] .[ [ propertyenutheo - learn - discover ] ] property [ enu : theo - learn - discover ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + suppose . towards a contradiction ,suppose that after the fixpoint there is a trial for start state where we encounter a state and we perform an action such that either the successor state is blocked or we receive an aversive signal .suppose we conceptually halt the offending trial at the first encountered problem . we have followed a path : for some .we have for each .we note in particular that .next we distinguish two cases , depending on the type of problem .* suppose that .then a - learning now removes from .but then we will no longer propose action for state , which was previously allowed by the fixpoint .then would be an invalid fixpoint , which is a contradiction . *suppose that an aversive signal is received when applying to , which implies .we make a similar reasoning as in the previous case : a - learning removes from .again would be an invalid fixpoint .suppose that a task has a strategy for each start state .in that case , theorem [ theo : learn ] tells us that every run of a - learning will eventually avoid blocked states and aversive signals .the agent therefore makes a transition from first discovering the strategies to later exploiting the strategies .the opposite is not necessarily true : there are tasks for which exist runs that eventually avoid blocked states and aversive signals , but without there being a strategy in the sense of definition [ def : strategy ] .this is illustrated by the task in figure [ fig : aversive - task - long ] .consider a run where the first application of action in state results in an aversive signal , and after which we immediately restart the task . 
in that run , there is no further exploration to state , which causes ; hence , .however , note that if the internal restart request at line [ line : fail ] of algorithm [ alg : global ] would sometimes not be handled immediately , but a few steps later , then some runs will not preserve the pair . .but there is a run of a - learning in which the feature - action pair is preserved .the graphical notation is explained in figure [ fig : task - two - states].,scaledwidth=40.0% ] the insights of theorem [ theo : learn ] could be used as follows .first , although proving that a strategy exists helps in understanding guarantees on the agent performance , programming the strategy by hand could be tedious and time - consuming .so , property [ enu : theo - learn - preserve ] could be used to materialize strategies once they are proven to exist .second , if one does not know whether a strategy exists , property [ enu : theo - learn - discover ] could be used to perform a preliminary search for strategies .although the discovered strategies might not be easily interpreted , they could serve as inspiration for a theoretical study of strategies for the tasks at hand . a practical consideration, however , is that it might not be possible to efficiently detect the fixpoint , i.e. , typically one does not know if a fixpoint has been reached when a - learning has not removed feature - action pairs for a while . so far we have silently allowed all possible runs of a - learning .for example , we did not explicitly demand that the agent actually must receive an aversive signal when applying an action to a state where .the aversive signal could also be omitted .this brings us to the topic of fairness .intuitively , for this paper , fairness would mean that there is sufficient exploration of the task . a practical application of a - learning ( algorithm [ alg : global ] )could take the following fairness assumptions into account : * if we execute line [ line : init - state ] infinitely often then we choose each start state infinitely often ; * to fully learn the task from each start state , we infinitely often issue external task restarts at line [ line : desired - restart ] ; those restarts are not requested by a - learning itself ; * at line [ line : action ] , if we encounter the same pair of a state and set infinitely often then we choose each action infinitely often ; * at line [ line : succ - state ] , if we apply action infinitely often to state then each successor state is visited infinitely often from an application of to ; * at line [ line : feedback ] , if we perform action in state infinitely often , where , then the agent should infinitely often receive an aversive signal when applying to ; the only aspect of fairness that can be directly influenced by the agent itself , is the action selection at line [ line : action ] . for this purpose, a random number generator can be used to select random indices in an array - representation of the proposed actions .note that theorem [ theo : learn ] also works for unfair runs .every run has a fixpoint on , whether the run is fair or not .but by exploring fewer states , or by issuing fewer aversive signals , an unfair run essentially makes it easier for the agent to avoid aversive signals .this way , some feature - action pairs could remain forever , even though a more fair exploration of the task could have removed them . 
also , because the notion of strategy in definition [ def : strategy ] is rather strong , it is not possible for a fair run or an unfair run to confront the agent with a situation that leads to the failure of a strategy .the agent will never be disappointed in the exploitation of the strategy .we study a simple class of grid navigation problems .let denote the set of integers .for any two points , denoting and , we recall the definition of -distance between and : a _ simple grid navigation problem _ is a quintuple , where * and are the dimensions of a terrain ; * is a set of start locations ; * is a set of possible target locations ; and , * is a time limit , with the following assumptions , * , we assume ; and , * , we assume .the intuition is that at the beginning of a session we select a start location and an initial active target location and we should navigate from to within time . whenever we reach the active target location we choose another target location and we should now navigate from to within time .this relocation of the active target may be repeated an arbitrary number of times .but at any moment we may also begin a new session , in which we again choose a start location and initial target location .there are infinitely many sessions .the available actions are : left , right , up , down , left - up , left - down , right - up , right - down , and wait . importantly : failure to respect the time results in an aversive signal ; we aim to eventually avoid such aversive signals . for a location and an action , we now define the possible successor locations that result from the application of to ; we denote this set as .a set of multiple possible successors is used to represent non - determinism .an empty set of of successors is used to say that the action would lead outside the considered terrain .we assume the following actions to be deterministic : left , right , up , and down . the other , `` diagonal '' , actions are non - deterministic .for example , for each , , we make the assumption that the direction of the positive y - axis corresponds to `` downward '' .we now define the task structure that corresponds to the above grid problem . hereit will be convenient to view states and features as structured objects , with components ; for an object with a component , we write to access the component .* the set consists of all triples with components _ agent _ , _ target _ , and _ time _ , satisfying the following constraints : and are both in the set , and ; * the set consists of those states where , , and ; * ; * the set consists of all pairs with components _ offset _ and _ time _ , satisfying the constraints : and ; * the transition function is described by algorithm [ alg : grid - trans ] ; for a state and action , the set consists of all states that could possibly be returned by algorithm [ alg : grid - trans ] upon receiving input ; * regarding , for each , we define where is the single feature for which and ; and , * . : = : = choose from [ result : grid ] for each grid problem , there is a strategy for each start state in .denote .we define one policy that is a strategy for all start states .first , we define an auxiliary set to consist of all features for which where is the -norm of a point .intuitively , such features indicate that the deterministic distance from the agent location to the target location where we only use the actions left , right , up , and down can be bridged within the remaining time .we now define a policy . 
for all define , and for each , denoting , we define as mentioned earlier , we define downwards as the direction of the positive y - axis .the case where occurs when the agent is located at the target .implies that the situation where only occurs when the agent reaches some target location and the next target location is the same as the old target location . ]let .we show that is a strategy for , according to definition [ def : strategy ] .[ [ conditionenustrategy - start - of - definitiondefstrategy ] ] condition [ enu : strategy - start ] of definition [ def : strategy ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we show that . by assumption on , we have , , and . by using the distance assumptions on locations in , we obtain . letting the single feature in , we see that , which implies that .hence , which implies .[ [ conditionenustrategy - successor - of - definitiondefstrategy ] ] condition [ enu : strategy - successor ] of definition [ def : strategy ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let .suppose there is some action .let denote the single feature of .we have , which implies .let .we must show that .let be the single feature of .we will show that , which implies , and further that . based on algorithm [ alg : grid - trans ] , we reason about what has happened during the application of action to state .* suppose the if - test at line [ line : grid - reach - target ] succeeds , i.e. , the agent reaches the target location .then , and where we use the distance assumption between target locations .overall , ; hence , .* suppose the if - test at line [ line : grid - reach - target ] does not succeed , i.e. , the agent did not yet reach the target location .it must be that , because otherwise , which implies , and the test at line [ line : grid - reach - target ] would have succeeded ( see previous case ) .so , .+ first , we observe that indeed , this property holds because ( 1 ) the locations and are inside the convex terrain ; ( 2 ) the action is given deterministic movement semantics ( i.e. , there is precisely one outcome ) , causing to be both inside the terrain and strictly closer to .+ second , we also observe that since by definition and ( which follows from ) .+ overall , we may now write in the second line we have used . we conclude that . [ [ conditionenustrategy - avs - of - definitiondefstrategy ] ] condition [ enu : strategy - avs ] of definition [ def : strategy ] + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let .suppose there is some action , which implies that the single feature of must be in .by definition of , we have .hence , , which implies , as desired .the policy defined in the proof of proposition [ result : grid ] is in general not the maximal strategy , in the sense that the policy could be extended with more actions than currently specified . for instance , if the time limit is high then the agent can randomly wander around before it becomes sufficiently urgent to reach a target location .the agent may also use the diagonal actions , like left - up , if the time limit is not violated under either of the three outcomes . 
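the strategy constructed in the proof can be summarized as a small feature-to-actions function. the sketch below fills in elided details with assumptions: the feature is read as the pair (target minus agent, remaining time), the elided norm is taken to be the manhattan (l1) distance, matching the phrase "deterministic distance ... where we only use the actions left, right, up, and down", and downward is the positive y direction; the proof's policy may propose one specific action per feature, whereas the sketch allows any axis step that strictly decreases that distance, which preserves the same invariant.

```python
from typing import Set, Tuple

Offset = Tuple[int, int]   # assumed feature content: (target - agent, remaining time)

def l1(offset: Offset) -> int:
    """manhattan distance covered by the deterministic actions left/right/up/down."""
    return abs(offset[0]) + abs(offset[1])

def safe(offset: Offset, time_left: int) -> bool:
    """membership in the 'good' feature set: the target is reachable with deterministic
    axis moves within the remaining time."""
    return l1(offset) <= time_left

def strategy_actions(offset: Offset, time_left: int) -> Set[str]:
    """actions proposed for a feature: nothing for unsafe features; 'wait' at the target;
    otherwise any deterministic axis step that strictly decreases the distance."""
    if not safe(offset, time_left):
        return set()
    dx, dy = offset
    if (dx, dy) == (0, 0):
        return {"wait"}
    acts: Set[str] = set()
    if dx > 0: acts.add("right")
    if dx < 0: acts.add("left")
    if dy > 0: acts.add("down")   # positive y is downward, as assumed in the text
    if dy < 0: acts.add("up")
    return acts

# between target relocations, every proposed step decreases l1(offset) by one while the
# remaining time decreases by one, so l1(offset) <= time_left is maintained and the
# deadline is never missed; a relocation resets the clock and, by the problem assumptions,
# keeps the new offset within the time limit.
```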
it is possible to extend the above setting of grid navigation to richer state representations , by including for example the locations of additional objects ( that do not influence the agent ) .if this new information would be communicated to the agent with a set of features that is disjoint from the set of old features in section [ sub : grid - defs ] , then proposition [ result : features ] tells us that the strategy described in the proof of proposition [ result : grid ] is still valid .we have used the notion of strategies to reason about the successful avoidance of aversive signals in tasks .we have shown that our avoidance learning algorithm always preserves those strategies .now we discuss some interesting topics for further work . in this paperwe have considered a framework in which features are essentially black boxes , in the sense that we do not assume anything about the way that they are computed .hence , we do not know how features are related to the task environment. it would be interesting to develop more detailed insights into how features can be designed , to ensure that strategies , or similarly successful policies , are possible .in particular , it seems fascinating to explore possible connections between our framework and neuron - like models , where features would be represented by neurons or by small groups of neurons .it is currently an open question whether or not feature learning in the brain is a completely unsupervised process , i.e. , it is not known whether feature creation is influenced by rewarding or aversive signals .so , in a general theory , it might be valid to consider feature learning as a separate , unsupervised , module .this approach could lead to a conceptually simple framework of agent behavior and feature detection simultaneously . concretely, the approach could enable the results in this paper to be linked to various feature detector algorithms .in this paper we have assumed that the set of features is fixed at the beginning of the learning process .this could be suitable for many applications , as there is no fixed limit on how many features there are , as long as there are finitely many .but it seems intriguing to introduce new features while the agent is performing the task . in the technical approach of this paper , however , a newly inserted feature likely proposes wrong actions if we would initially associate all actions to the feature .in general we still insist that aversive signals are avoided , and therefore the wrong actions need to be unlearned as soon as possible. a way to soften the introduction of new features , could be to reintroduce reward into the framework . concretely, a feature may only propose an action if the pair has been observed to be correlated to reward , either directly , or transitively by means of eligibility traces .this idea introduces a threshold for proposing actions .of course any feature - action pairs introduced in this way could still lead to aversive signals .for example , there could be spurious features ( e.g. features that randomly appear ) to which no actions should be linked , or perhaps the rewarding signals contradict the aversive signals , or some actions that give reward could also give aversive signals ( as in the example of the introduction ) . 
to resolve priority issues, one could give avoidance learning the highest precedence and use reward as a softer ranking mechanism over the allowed actions. possibly, an agent that keeps learning new features will keep making mistakes, so how to cope with new features seems a relevant question. the answers could perhaps also help in understanding animal behavior and consciousness. to that end, one could consider notions of success other than the avoidance of aversive signals investigated in this paper.
we study a framework in which agents have to avoid aversive signals. the agents are given only partial information, in the form of features that are projections of task states. additionally, the agents have to cope with non-determinism, defined as unpredictability in the way that actions are executed. the goal of each agent is to base its behavior on feature-action pairs that reliably avoid aversive signals. we study a learning algorithm, called a-learning, that exhibits fixpoint convergence: the belief about the allowed feature-action pairs eventually becomes fixed. a-learning is parameter-free and easy to implement.
at an early stage in the development of quantum mechanics , w. pauli raised the question whether the knowledge of the probability density functions for the position and momentum of a particle were sufficient on order to determine its state .that is , can we determine a unique if we are given and , where is the fourier transform of ?since position and momentum are the unique independent observables of the system , it was , erroneously , guessed that this pauli problem could have an affirmative answer .this was erroneous because there may be different quantum correlations between position and momentum that are not reflected in the distributions of position and momentum individually .indeed , many examples of pauli partners , that is , different states with identical probability distributions and , where found .a review of theses issues , with references to the original papers , and the treatment of the problem of state reconstruction for finite and infinite dimension of the hilbert space , can be found in refs .the general problem of the determination of a quantum state from laboratory measurements turned out to be a difficult one . in this workthe `` laboratory measurement '' means the complete measurement of an observable , that is , the determination of the probability distribution of the eigenvalues of the operator associated with the observable .given a state , the probability distribution ( assuming non - degeneracy ) is given by where are the eigenvectors of the operator .the state is not directly observable ; what can be measured , are the probability distributions of the eigenvalues of the observables and we want to be able to determine the state of the system using these distributions . besides the academic interest of quantum state reconstruction based on measurements of probability distributions , the issue has gained actuality in the last decade in the possible practical applications of quantum information theory . in order to state clearly the problem , let us consider a system described in an dimensional hilbert space .the determination of the state requires the determination of real numbers and a complete measurement of an observable provides equations . with the measurement of two observables ( like position and momentum in the pauli problem ) we have the same number of equations as unknowns .however the equations available are not linear and the system of equations will not have , in general , a unique solution . in many practical cases , a minimal additional information ( like the sign of an expectation value ) is sufficient to determine the state . in this workwe will not search the minimal extra information required , but instead , we will add a complete measurement of a third observable .one may think that this massive addition of information will make the system over - determined and that with three complete measurements we should always be able to find a unique state .this is wrong ; there are pathological cases where the complete measurement of observables , that is equations , is _ not sufficient _ for the determination of a _unique _ set of numbers ! 
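for concreteness, the counting behind the previous paragraph can be written out. the concrete numbers are elided in the text above, so the following is the standard count for a pure state in an n-dimensional hilbert space, stated here as an assumption about what is intended.

```latex
% standard counting for a pure state in an n-dimensional hilbert space
% (the concrete numbers are elided in the text; this block is an assumption about what is intended)
\begin{align*}
  \text{unknowns:}\quad & 2n-2
     && \text{real parameters of } |\psi\rangle \text{ (normalization and global phase removed)},\\
  \text{equations per observable:}\quad & n-1
     && \text{independent probabilities, since } \textstyle\sum_k |\langle a_k|\psi\rangle|^2 = 1,\\
  \text{two observables:}\quad & 2(n-1)=2n-2
     && \text{equations, matching the number of unknowns.}
\end{align*}
```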
in the other extreme ,if the state happens to be equal to one of the eigenvectors of the observable measured , then , of course , just one complete measurement is sufficient to fix the state .from these two cases we conclude that the choice of the observables to be measured is crucial for the determination of the state ; an observable with a probability distribution peaked provides much more information than an observable with uniform distribution .a pair of observables may provide redundant information and we expect that it is convenient to use observables as different as possible ; this happens when their eigenvectors build two unbiased bases as is the case , for example , with position and momentum ( two bases and are unbiased when , that is , every element of one basis has equal `` projection '' on all elements of the other basis ) .for this reason , unbiased bases have been intensively studied in the problem of state determination and also in quantum information theory .the number of mutually unbiased bases that one can define in an dimensional hilbert space is not known in general although it can be proved that if is equal to a power of a prime number , then there are unbiased bases . _unbiased observables _ , those represented by operators whose eigenvectors build unbiased bases , provide independent information ; there are however pathological cases where the measurement of several unbiased observables is useless to determine a unique state : assume for instance that the state belongs to a basis that is unbiased to several other mutually unbiased bases associated with the measured observables . in this case all the probability distributions are uniform and the state can not be uniquely determined because there are at least different states ( corresponding to the elements of the basis to which the state belongs ) all generating uniform distributions for the observables . if is a power of a prime number we could have up to observables with uniform distributions for different statesthis is the pathological case mentioned before : if there are mutually unbiased bases and we have unbiased observables with uniform distributions then we have pauli partners , that is , different states having the same distributions .if we make complete measurements of two or more observables we should be able to determine the state but it will not always be unique because there may be several different states having the same distributions for the measured observables .if we measure three observables , the mathematical problem would be to solve a set of nonlinear equations to determine numbers .one could blindly apply some numerical method to find the solution . 
instead of this , we present in this work an iterative method that is physically appealing because it involves the imposition of physical data to hilbert space elements that are approaching the solution .another advantage of this algorithm is that it does not involves the solution of a system of equations and therefore when we change the number of observables measured or the dimension of the hilbert space , we only have to make a trivial change in the algorithm .we will test the algorithm numerically by assuming an arbitrary state and two , three or four arbitrary observables , with them we generate the data corresponding to the distributions of the observables in the chosen state , and then we run the algorithm and we see how efficiently it returns the chosen state .in order to study the convergence of an iterative algorithm for the determination of a state , we will need a concept of _ distance _ that can tell us how close we are from the wanted solution .this criteria of approach can be applied in the space of states or in the space of probability distributions . in the first casewe want to know how close a particular state is from the state searched , that is , we need a _ metric in the space of states_. in the other case a particular state generates probability distributions for some observables and we want to know how close these distribution are from the corresponding distributions generated by the state searched . in this second casewe need a _ metric in the space of distributions_. the relation between these two distances in two different spaces has been studied for several choices of distances .however the application of some of theses `` distances '' that do not satisfy the mathematical requirements of a `` metric '' ( positivity , symmetry and triangular inequality ) in an iterative algorithm is questionable . in this workwe use a metric in the hilbert space of states in order to study the convergence of the algorithm but we also compare the final probability distributions with the corresponding distributions used as physical input for the algorithm because , as was explained before , there are cases of different states generating the same probability distributions .the usual hilbert space metric induced by the norm , itself induced by the internal product , is not an appropriate metric for states because the states are nor represented by hilbert space elements but by _ rays _ , that are sets of hilbert space elements with an arbitrary phase .that is , a state is given by the hilbert space element is a _ representant _ of the ray and it is common practice in quantum mechanics to say that the state is given by .however when we deal with distance between states we can not take the induced metric mentioned before , because this metric for two hilbert space elements , and , belonging to the _ same _ ray , that is , belonging to the same state , does not vanishes .a correct concept of distance between states is given by the distance between sets the minimization can be performed in general and we obtain we compare this result with and we conclude that and therefore every sequence converging in the induced metric is also convergent in the ray metric used here . 
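as a small numerical aid, the ray distance discussed above can be written down explicitly: minimising the induced norm over the arbitrary global phase gives $\sqrt{2 - 2|\langle\phi|\psi\rangle|}$ for normalised vectors. the sketch below assumes numpy arrays and normalised inputs; the closed form is our reading of that minimisation, and the function name is ours.

```python
import numpy as np

def ray_distance(psi, phi):
    """distance between the rays (physical states) spanned by two normalised
    vectors: the minimum over the global phase t of ||psi - exp(i t) phi||,
    which equals sqrt(2 - 2 |<phi|psi>|) and vanishes exactly when the two
    vectors differ only by a phase."""
    overlap = np.abs(np.vdot(phi, psi))
    return np.sqrt(max(0.0, 2.0 - 2.0 * overlap))  # max() guards round-off
```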
in order to have a rigourous concept of convergence we must check that the distance between states given above in eqs.([disray],[disray1 ] ) is really a metric ( in general , for arbitrary sets , the distance between sets is not always a metric since one can easily find examples that violate the triangle inequality ) .the requirement of symmetry and positivity are trivially satisfied but to prove that this distance satisfies the triangle inequality is not trivial. however we can be sure that the distance between states is a metric because , in this particular case where the sets are rays , the distance between rays has the same value as the hausdorff distance and one can prove that the hausdorff distance is a metric .the hausdorff distance between two sets and is defined by as a final comment in this section , notice that the square root in eq.([disray1 ] ) is a nuisance but it can not be avoided because expressions like or are _ not _ metrics . in order to simplify the notation , in what follows , we will denote the distance between rays simply by .a state , or a hilbert space element , contains encoded information about all the observables of the system . given a state , the probability distribution for an observable is given by where are the eigenvectors of the operator associated with the observable corresponding to the eigenvalue . given any state , we can impose to this state the same distribution that the observable has in the state by means of an operator , the _ physical imposition operator _, that involves the expansion of in the basis of and a change in the modulus of the expansion coefficients .that is if we assume zero phase , that is .the moduli of the expansion coefficients are changed in order to impose the distribution of the observable in the state but the phases are retained and therefore some information of the original state is kept in the phases . although the numerical treatment of this operator is straightforward , its mathematical features are not simple .the operator is idempotent , it has no inverse and it is not linear but . furthermore the operator is bounded because .the fix points of this nonlinear application is the set of states that have the same distribution for the observable as the state .we will use this operator in order to develop an iterative algorithm for the determination of a state using as physical input the distribution of several observables in this state .it is therefore interesting to study whether this operator , applied to an arbitrary hilbert space element , brings us closer to the state or not . for thiswe can compare the distance with the distance for some given observable and some state .let us then define an observable by choosing its eigenvectors ( a basis ) in a three dimensional hilbert space , , and in this space let us take an arbitrary state .now we consider a large number ( 8000 ) of randomly chosen states and draw a scatter plot of the distances of this state to before and after applying the imposition operator . 
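a minimal sketch of the physical imposition operator as described above, assuming the observable is specified by an orthonormal eigenbasis (the columns of `basis`) and that `target_probs` holds the measured probabilities of its eigenvalues in the state $\psi$; the function and variable names are ours.

```python
import numpy as np

def impose_distribution(phi, basis, target_probs):
    """physical imposition operator: expand phi in the eigenbasis of the
    observable, keep the phases of the expansion coefficients, and replace
    their moduli by the square roots of the measured probabilities (zero
    phase is used for vanishing coefficients).  applying it twice gives the
    same result, reflecting the idempotence noted in the text."""
    coeffs = basis.conj().T @ phi                  # c_k = <a_k | phi>
    phases = np.exp(1j * np.angle(coeffs))         # angle(0) = 0, i.e. zero phase
    return basis @ (np.sqrt(target_probs) * phases)
```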
in figure 1we see that there are more points below the diagonal , showing cases where the imposition operator brings us closer to the state but there are also many cases where the operator take us farther away from the searched state .we will later see that this has the consequence that the iterative algorithm will not converge for every starting point .the imposition operator will shift the state some distance that is smaller than the total distance to the state .that is , there is no `` overshoot '' that could undermine the convergence of the iterative algorithm . in order to prove this ,consider the internal product now , using this inequality in the definition of distance in eq.([disray1 ] ) we get we can notice in figure 1 that there is a bound for the distance at some value smaller than the absolute bound for the distance .we will see that this bound appears when the state is chosen close to one of the eigenvectors of . from the definition of the distance and of the imposition operator it follows easily that the distance of to any element of is the same as the distance of to the same element .that is , so that is something like a `` mirror image '' of reflected on .we can now use this in order to derive the bound mentioned .consider the triangle inequality .using eq.([dist2 ] ) , we get . now we specialize this inequality for the value of that minimizes the right hand side , that is , the value of that maximizes or equivalently . then we have if the state is close enough to one of the eigenvectors of , the corresponding maximum value of the distribution can be larger than and the bound derived is smaller than the absolute bound . with increasing dimension of the hilbert space , the probability that a randomly chosen state is close to one of the basis elements decreases .the physical imposition operator modifies the moduli of the expansion coefficients but leaves the phases unchanged .the reason for choosing this definition is that the moduli of the coefficients are measured in an experimental determination of the probability distribution of the eigenvalues of an observable and therefore this operator provides a way to impose physical properties to a state .it is unfortunate that the phases of the expansion coefficients are not directly accessible in an experiment because we could use the knowledge of the phases in a much more efficient algorithm . in a sense that will become clear later ,the phases have more information about the state than the moduli . in order to clarify thislet us define a _phase imposition operator _ that leaves the moduli of the expansion coefficients unchanged but imposes the phases of the state .that is the same as was done before , we study how efficiently this operator approaches to the state . 
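for comparison, the phase imposition operator just introduced admits an equally short sketch (same conventions and caveats as above); note that it needs the target state itself, which is why it is only of theoretical interest.

```python
import numpy as np

def impose_phases(phi, basis, psi):
    """phase imposition operator: keep the moduli of phi's expansion
    coefficients in the eigenbasis of the observable, but impose the phases
    of the target state psi.  not experimentally accessible, since phases
    are not measured."""
    c_phi = basis.conj().T @ phi
    c_psi = basis.conj().T @ psi
    return basis @ (np.abs(c_phi) * np.exp(1j * np.angle(c_psi)))
```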
in figure 2we see the corresponding scatter plot for the same operator and states of those in fig.1 , that shows that in _ all _ cases the application of this operator brings us closer to the wanted state .one can indeed prove that considering the internal product using this inequality in the definition of distance in eq.([disray1 ] ) we get the inequality above .as said before , if we had physical information about the phases of the expansion coefficients , we could devise a very efficient algorithm .unfortunately we do nt have experimental access to the phases and this , in principle interesting , operator will no be further studied here .in this section we will investigate an algorithm for state determination that uses as physical input the knowledge provided by the complete measurement of several observables .these measurements provide the probability distributions for the eigenvalues in the unknown state . in other words, we assume that we know the physical imposition operators for several observables .the algorithm basically consists in the iterative application of the physical imposition operators to some arbitrary initial state randomly chosen . applying the operator to the initial state , we will get closer to ( although not always ) but a second application of the operator is useless because is idempotent .then we use another operator for a closer approach , say , and another one afterwards , until all physical information is used ; then we start again with .that is , we calculate the iterations given by and the convergence is checked comparing the physical input , that is , the distributions associated with the observables , with the corresponding distributions generated in the state . in order to check the efficiency of the algorithm numerically ,we choose a state at random and with it we generate the distributions corresponding to some observables that we use as input in the algorithm . calculating the distance study how efficiently the algorithm returns the initial state .there are cases where the algorithm converges to a state different from but having the same physical distributions , that is , to a pauli partner of .an interesting feature of the algorithm is that we can span the whole hilbert space by choosing the starting states randomly and the algorithm delivers many , if not all , pauli partners .we can not be sure that all pauli partners are found because some of them could correspond to a starting state belonging to a set of null measure that will not be necessarily `` touched '' in a random sampling of the hilbert space .this seems to be quite unlikely but it can not be excluded . in this way , the algorithm presented is not only a numerical tool for state determination but is also a useful tool for the theoretical investigation of the appearance of pauli partners .an example of this is presented below .the algorithm is very efficient ; however there are some starting states where the algorithm fails to converge .it was not surprising to find these failures because , as was suggested in fig .1 , the physical imposition operator sometimes take us farther away from the wanted state .we are informed of this failure because the distributions used as input are not approached in each iteration . in the case of a failure, we can simply restart the algorithm with a different initial state or restart with another initial state orthogonal to the one that failed . 
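putting the pieces together, the iteration described above can be sketched as follows, reusing the `impose_distribution` helper from the earlier sketch; the convergence test compares the distributions generated by the current iterate with the measured ones, and exiting the loop without reaching the tolerance signals a failure (one would then restart, e.g. from an orthogonal vector). the tolerance and the restart policy are our choices.

```python
import numpy as np

def determine_state(bases, measured_probs, dim, max_sweeps=200, tol=1e-8, rng=None):
    """iteratively apply the physical imposition operators of several
    observables to a random starting vector and monitor the mismatch between
    the measured distributions and those generated by the current iterate."""
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    phi /= np.linalg.norm(phi)
    for sweep in range(1, max_sweeps + 1):
        for basis, probs in zip(bases, measured_probs):
            phi = impose_distribution(phi, basis, probs)
        mismatch = max(np.max(np.abs(np.abs(b.conj().T @ phi) ** 2 - p))
                       for b, p in zip(bases, measured_probs))
        if mismatch < tol:
            return phi, sweep          # converged to the state or to a pauli partner
    return phi, max_sweeps             # possible failure: restart from another state
```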
in this last case the probability of a repeated failure is much reduced and therefore it is a convenient choice .the appearance of a failure depends strongly on the choice of observables used to determine the state .if we use three unbiased observables we very rarely found a failure , in less than 1% of the cases , but if we use three random observables ( see below ) 40% of the randomly chosen starting states fail but only 10% of these fail again if we restart with an orthogonal state . in the case of four angular momentum observables in four arbitrary directions we had to restart the algorithm in some 10% of the cases .the appearance of failures also depends on the shape of the distributions : when one of the distributions is peaked , that is , the maximum value of the distribution has a large value for some , the application of the corresponding imposition operator bring us close to the wanted state as can be seen in eq.([dist3 ] ) , and figure 1 shows that then the algorithm has better convergence and no failures are found .this has been confirmed in the numerical tests .the convergence to the state , or to a pauli partner , was tested numerically in several hilbert space dimensions and for different choice of observables .these choices were random in some cases , that is , their associated orthonormal bases are randomly chosen , and in other cases we used physically relevant observables like angular momentum or position and momentum .position and momentum observables are usually represented by unbound operators in infinite dimensional hilbert spaces ; however there are also realizations of these observables in finite dimensions , for instance in a cyclic lattice , where they are represented by unbiased operators . in general the operators do not commute and the iteration of and are not necessarily equal .the algorithm was tested with several different choices in the ordering of the noncommuting physical imposition operators and also with random ordering and it turned out that the convergence of the algorithm is not much affected by the different orderings .the algorithm is robust under the noncommutativity of the observables .the physical imposition operator is idempotent so it is useless to apply it more than once ( successively ) in an attempt to approach the state . clearly , the complete measurement of just one observable is not sufficient to determine the state , except in the trivial case when the state happens to be equal to one of the eigenvector of the operator .therefore we consider the information provided by _ two _ observables and ( for two unbiased observables , like and , this is precisely the pauli problem ) .we studied then the convergence of towards or to a pauli partner , for an arbitrary . in a three dimensional hilbert space , , we applied the algorithm in several cases : for and random , unbiased ( that is , of the type and ) and also for angular momentum operators . as was expected , in all these cases the algorithm returned several pauli partners . choosing the starting state ( uniform distributed in the hilbert space ) we found that all pauli partners found are accessed with similar frequency . 
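as a concrete way to generate an unbiased pair of the kind used in these tests, the cyclic-lattice position/momentum realisation mentioned above corresponds to the computational basis together with the discrete fourier basis; the small sketch below builds the latter and checks the unbiasedness condition $|\langle e_j | f_k\rangle|^2 = 1/n$ (our notation).

```python
import numpy as np

def fourier_basis(dim):
    """columns are the discrete fourier basis of a dim-dimensional hilbert
    space; they are unbiased with respect to the computational basis, the
    finite-dimensional analogue of the position/momentum pair."""
    j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
    return np.exp(2j * np.pi * j * k / dim) / np.sqrt(dim)

f = fourier_basis(3)
assert np.allclose(np.abs(np.eye(3).conj().T @ f) ** 2, 1.0 / 3.0)  # unbiasedness
```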
as was mentioned before, we can not be sure that the algorithm will deliver _ all _ partners , however we may be confident that this may be so because in one particular case , where we can calculate exactly the number of partners , the algorithm returns them all .the particular case is the , so called , pathological case where we have uniform distributions for observables that correspond to partners . for several combinations of and ,the algorithm delivered all partners .next we studied the case with _operators providing physical information to determine the state ( also with hilbert space dimension ) .we studied then the iteration .when two of the observables are unbiased ( of the type and ) we always obtained a unique state , regardless of the choice of the third operator : either unbiased or of the type ( biased to the first two ) , or random .this means that the information provided by two unbiased observables _ almost _ fixes the state and any other additional information is sufficient to find a unique state .however we know that in the , so called , pathological cases we must find pauli partners and the algorithm does indeed finds them . in these pathological cases ,the distributions corresponding to three unbiased operators are all uniform ( that is , the generating state is unbiased to all three bases ) . spanning the hilbert space by choosing randomly as a starting state for the algorithm, we converge to all pauli partners with almost equal probability .the pathological case was also studied with two unbiased observables with uniform distributions . in this casethe algorithm also delivered all pauli partners with similar probability .for biassed operators , like angular momentum operators and also for random we sometimes found pauli partners showing that , although we have more equations ( six ) than unknowns ( four ) , the nonlinearity of the problem may cause non - unique solutions .the appearance of pauli partners in the angular momentum case is consistent with the result reported by amiet and weigert .an inspection of the numerical results for these pauli partners revealed a symmetry that could also be proved analytically : given a state ( in the basis of ) with the corresponding distributions for the observables , ( it is always possible to fix real an nonnegative ) when and , if any one of the following conditions is satisfied : then there is a pauli partner where if we can make real and positive and then , and finally if then , where can take three values : . spanning the hilbert space with generator states randomly chosen , in some of the cases the algorithm returned the state and a pauli partner covering all possibilities mentioned above .notice that the ability of the algorithm to detect pauli partners is due to the limited precision of the numerical procedure . 
among all possible states of the system , only a few of them have pauli partners , more precisely , the set of states with pauli partners has null measure and if we had infinite precision , we would never find partners by random sampling of the hilbert space .because of the limited precision of the algorithm , all points in the hilbert space within a small environment are equivalent and therefore the sets of points with null measure can be accessed in a random sampling of the hilbert space .we have found indeed that if we become more restrictive with conditions of convergence we need more tries in order to detect partners .usually the limited precision is considered a drawback however in this case it is an advantage that allows us to detect sets of null measure . with the information provided by the complete measurement of _ four _ operators weiterated and we found unique states , not only when two of them are unbiased ( consistent with the result obtained with three operators ) , but also in the case of random operators or angular momentum in _ arbitrary _ directions .of course , in this case of excessive physical information we could ignore one of the observables and determine the state with only three of them .however not all the pauli partners found with three observables will have the correct distribution for the fourth one and therefore the use of all observables may be needed for a unique determination of the state . in this casethe number of equations , eight if , uniquely determine the four unknowns in spite of the nonlinearity .notice that the convergence of the algorithm in this case is not trivial .it is true that we are using much more information than what is needed ( except for the pathological cases that can only appear if ) but we must consider that we are using this excessive information in an iterative and approximative algorithm and therefore the consistency of the data in the final state does not necessarily cooperates in the iterations .the fact that the over - determined algorithm converges is a sign of its robustness .the algorithm converges in a very efficient way , close to exponential , as we see in figure 3 where the distance to the converging state is given as a function of the number of iterations for the case of three unbiased operators with .this is a typical example showing the exponential convergence where the distance to the solution is divided by 4.5 in each iteration .however the speed of convergence , that is , the slope in the figure , is not always the same and depends on the operators used and on the generating state . for higher hilbert space dimensions we obtained similar behaviour . for three operators with physical relevance , like angular momentum or unbiased operators , the distance to the target statewas divided by 2 - 3 in each iteration in hilbert spaces with dimensions up to 20 . inthe fastest case found , the distance was divided by 126 in each iteration , approaching the solution within after three iterations . with random operatorsthe approach was not always so fast and in some unfavourable cases up to 100 iterations were required ( this took only a fraction of a second in an old pc ) .in this work we defined the _ physical imposition operator _ that imposes to any state the same distribution that the eigenvalues of an observable have in a state . for this operatorwe do nt need to know the state but we just need the probability distribution for the observable in this state , that can be obtained from a complete measurement . 
considering two or more observables , we applied their corresponding physical imposition operators iteratively to an arbitrary initial state and obtained a succession of states that converge to the unknown state , or to a pauli partner having the same distribution for the observables .varying the initial state we can find the pauli partners but we can not be sure that all of them are obtained although this is very likely because in the cases where we can know exactly all the pauli partners , the algorithm finds them all and therefore it becomes a useful tool for the investigation of pauli partners .this algorithm for state determination was tested numerically for different sets of observables and different dimensions of the hilbert space and it turned out to be quite an efficient and robust way to determine a quantum state using complete measurements of several observables .we would like to thank h. de pascuale for his help on mathematical questions .this work received partial support from `` consejo nacional de investigaciones cientficas y tcnicas '' ( conicet ) , argentina .this work , part of the phd thesis of dmg , was financed by a scholarship granted by conicet .99 w. pauli .`` quantentheorie '' handbuch der physik * 24*(1933 ) . s. weigert .`` pauli problem for a spin of arbitrary length : a simple method to determine its wave function '' phys rev . a * 45 * , 7688 - 7696 ( 1992 ) . s. weigert .`` how to determine a quantum state by measurements : the pauli problem for a particle with arbitrary potential '' phys rev .a * 53 * , 2078 - 2083 ( 1996 ) .m. keyl , `` fundamentals of quantum information theory '' phys .a * 369 * , 431 - 548 ( 2002 ) .i. d. ivanovic. `` geometrical description of quantum state determination '' journal of physics a , 14 , 3241 - 3245 ( 1981 ) .wootters , and b.d .fields , `` optimal state - determination by mutually unbiased measurements '' ann . phys .191 , 363 - 381 ( 1989 ) .s. bandyopadhyay , p. boykin , v. roychowdhury , and f. vatan , `` a new proof for the existence of mutually unbiased bases '' algorithmica 34 , 512 ( 2002 ) .arxiv : quant - ph/0103162 .a. majtey , p. w. lamberti , m. t. martin and a. plastino , `` wootter s distance revisited : a new distinguishability criterium '' phys .a * 32 * , 413 - 419 ( 2005 ) .arxiv : quant - ph/0408082 e. lages lima , _ espaos mtricos _ , projeto euclides , impa , isbn : 85 - 244 - 0158 - 3 3ed .rio de janeiro , 2003 .a. c. de la torre , d. goyeneche .`` quantum mechanics in finite dimensional hilbert space '' am .* 71 * , 49 - 54 , ( 2003 ) .a. c. de la torre , h. mrtin , d. goyeneche .`` quantum diffusion on a cyclic one dimensional lattice '' phys .e * 68 * , 031103 - 1 - 9 , ( 2003 ) .j. p. amiet , s. weigert .`` reconstructing a pure state of a spin s through three stern - gerlach measurements '' j. phys .a : math . gen . * 32 * , 2777 - 2784 ( 1999 ) .figure 1 . scatter plot of the distances and for 8000 random initial states .points below the diagonal indicate cases where brings closer to .+ + + figure 2 .scatter plot of the distances and for the same operator and states as in figure 1 .notice that the phase imposition operator always approaches the state .+ + + figure 3 .distance from the state to the state after iterations , showing exponential convergence of the algorithm for unbiased operators in a three dimensional hilbert space .
an iterative algorithm for state determination is presented that uses as physical input the probability distributions for the eigenvalues of two or more observables in an unknown state. starting from an arbitrary state, a succession of states is obtained that converges to the unknown state or to a pauli partner of it. this algorithm for state reconstruction is efficient and robust, as is seen in the numerical tests presented, and it is a useful tool not only for state determination but also for the study of pauli partners. its main ingredient is the physical imposition operator, which changes any state so that it has the same physical properties, with respect to an observable, as another state. keywords: state determination, state reconstruction, pauli partners, unbiased bases. pacs: 03.65.wj 02.60.gf
setting appropriate claims reserves to meet future claims payment cash flows is one of the main tasks of non - life insurance actuaries .there is a wide range of models , methods and algorithms used to set appropriate claims reserves . among the most popular methods there is the chain - ladder method , the bornhuetter - ferguson method and the generalized linear model methods . for an overview , see wthrich and merz ( 2008 ) and england and verrall ( 2002 ) .setting claims reserves includes two tasks : estimate the mean of future payments and quantify the uncertainty in this prediction for future payments . typically , quantifying the uncertainty includes two terms , namely the so - called process variance and the ( parameter ) estimation error .the process variance reflects that we predict random variables , i.e. it describes the pure process uncertainty .the estimation error reflects that the true model parameters need to be estimated and hence there is an uncertainty in the reliability of these estimates . in this paper , in addition to these two terms, we consider a third source of error / uncertainty , namely , we analyze the fact that we could have chosen the wrong model .that is , we select a family of claims reserving models and quantify the uncertainty coming from a possibly wrong model choice within this family of models .such an analysis is especially important when answering solvency questions .a poor model choice may result in a severe shortfall in the balance sheet of an insurance company , which requires under a risk - adjusted solvency regime an adequate risk capital charge .we analyze typical sizes of such risk capital charges within the family of tweedie s compound poisson models * * , * * see tweedie ( 1984 ) , smyth and jrgensen ( 2002 ) and wthrich ( 2003 ) .assume that are incremental claims payments with indices where denotes the accident year and denotes the development year . at time , we have observations and for claims reserving at time we need to predict the future payments see table [ tab1 ] .hence , the outstanding claims payment at time is given by its conditional expectation at time is given by = \sum_{i=1}^{i}e\left [ \left .r_{i}\right\vert \mathcal{d}_{i}\right ] = \sum_{i+j > i}e\left [ \left .y_{i , j}\right\vert \mathcal{d}_{i}\right ] .\ ] ] hereafter , the summation is for .therefore , we need to predict and to estimate ] .then , is used to predict the future payments and is the amount that is put aside in the balance sheet of the insurance company for these payments .prediction uncertainty is then often studied with the help of the ( conditional ) mean square error of prediction ( msep ) which is defined by .\ ] ] if is -measurable , the conditional msep can easily be decoupled as follows * * , * * see wthrich and merz ( 2008 ) , section 3.1 : -\widehat{r}\right ) ^{2}\label{vardecomp}\\ & = \text { process variance } + \text { estimation error.}\nonumber\end{aligned}\ ] ] it is clear that the consistent estimator which minimizes the conditional msep is given by ] -* saturated model*. * }=\left(p,\phi,\widetilde{{\beta}}_{0}\right ) ] with , , , . 
* }=\left(p,\phi,\widetilde{{\alpha}}_{1},\widetilde { { \alpha}}_{2},\widetilde{{\beta}}_{0},\widetilde{{\beta}}_{1},\widetilde { { \beta}}_{2}\right ) ] with , , + , , , , , * }=\left(p,\phi,\widetilde{{\alpha}}_{1},\widetilde { { \alpha}}_{2},\widetilde{{\alpha}}_{3},\widetilde{{\alpha}}_{4 } , \widetilde{{\beta}}_{0},\widetilde{{\beta}}_{1},\widetilde{{\beta } } _ { 2},\widetilde{{\beta}}_{3},\widetilde{{\beta}}_{4}\right ) ] with now , to determine the optimal model , we first consider the joint posterior distribution for the model probability and the model parameters denoted }~|~\mathcal{d}_{i}), ] is the parameter vector for model . ] as }},b_{\widetilde{{\theta } } _ { i,[k]}}\right ] . ] .it is no longer possible to run the standard mcmc procedure we described in section 3.4 for this variable selection setting .this is because the posterior is now defined on either a support consisting of disjoint unions of subspaces or a product space of all such subspaces , one for each model considered .a popular approach to run markov chains in such a situation is to develop a more advanced sampler than that presented above , typically in the disjoint union setting .this involves developing a reversible jump rj - mcmc framework , see green ( 1995 ) and the references therein .this type of markov chain sampler is complicated to develop and analyze .hence , we propose as an alternative in this paper to utilize a recent procedure that will allow us to use the above mcmc sampler we have already developed for a model the process we must follow involves first running the sampler in the simulation technique described in section 3.4 for each model considered .then the calculation of the posterior model probabilities is performed using the samples from the markov chain in each model to estimate ( [ modprobspost ] ) .furthermore , our approach here removes the assumption on the priors across models , made by congdon ( 2006 ) , p.348 , } ~|~m_{k}\right ) = 1,m\neq k\ ] ] and instead we work with the prior } ~|~m_{k})={\textstyle\prod\limits_{i=1 } ^{n_{\left [ m\right ] } } } \left [ b_{\widetilde{{\theta}}_{i,[m ] } } -a_{\widetilde{{\theta}}_{i,[m]}}\right ] ^{-1},m\neq k.\ ] ] that is , instead we use a class of priors where specification of priors for a model automatically specifies priors for any other model .this is a sensible set of priors to consider given our product space formulation and it has a clear interpretation in our setting where we specify our models through a series of constraints , relative to each other . in doing this we also achieve our goal of having posterior model selection insensitive to the choice of the prior and being data driven . 
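as a small illustration, the flat box prior just described is trivial to evaluate and, being specified identically across models, it drops out of the posterior model-probability ratio in the formula that follows; the box bounds are whatever wide ranges one specifies, and the function name is ours.

```python
import numpy as np

def log_flat_prior(theta, lower, upper):
    """uniform prior over the box [lower, upper] for a model's parameter
    vector: log pi(theta | m) = -sum_i log(b_i - a_i) inside the box and
    -inf outside."""
    theta, lower, upper = (np.asarray(x, dtype=float) for x in (theta, lower, upper))
    inside = np.all((theta >= lower) & (theta <= upper))
    return -np.sum(np.log(upper - lower)) if inside else -np.inf
```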
the modified version of congdon's (2006) formula a.3, which we obtain after relaxing congdon's assumption, allows the calculation of the posterior model probabilities using the samples from the markov chain in each model to estimate
\begin{aligned}
\pi(M_k \mid \mathcal{D}_I) &= \int \pi(M_k, \bm{\theta}_{[k]} \mid \mathcal{D}_I)\, d\bm{\theta}_{[k]}
= \int \pi(M_k \mid \bm{\theta}_{[k]}, \mathcal{D}_I)\, \pi(\bm{\theta}_{[k]} \mid \mathcal{D}_I)\, d\bm{\theta}_{[k]} \\
&\approx \frac{1}{T - T_b} \sum_{j = T_b + 1}^{T} \pi(M_k \mid \mathcal{D}_I, \bm{\theta}_{j,[k]}) \\
&= \frac{1}{T - T_b} \sum_{j = T_b + 1}^{T}
\frac{L_{\mathcal{D}_I}(M_k, \bm{\theta}_{j,[k]}) \prod_{l=0}^{K} \pi(\bm{\theta}_{j,[l]} \mid M_k)\, \pi(M_k)}
     {\sum_{m=0}^{K} L_{\mathcal{D}_I}(M_m, \bm{\theta}_{j,[m]}) \prod_{l=0}^{K} \pi(\bm{\theta}_{j,[l]} \mid M_m)\, \pi(M_m)} \\
&= \frac{1}{T - T_b} \sum_{j = T_b + 1}^{T}
\frac{L_{\mathcal{D}_I}(M_k, \bm{\theta}_{j,[k]})}{\sum_{m=0}^{K} L_{\mathcal{D}_I}(M_m, \bm{\theta}_{j,[m]})}. \label{modprobspost}
\end{aligned}
here $T$ is the number of mcmc iterations and $T_b$ the burn-in; for a proof, see congdon (2006), formula a.3. note that the prior of the parameters (given the model) contributes to the above only implicitly: $\pi(\bm{\theta}_{j,[k]} \mid M_k)$ is the prior for model $M_k$'s parameters evaluated at markov chain iteration $j$, and with the common flat priors and equal model priors adopted here these terms cancel in the last line. once we have the posterior model probabilities we can then take the map estimate for the optimal model (variable selection) for the given data set. in this paper we do not consider the notion of model averaging over differently parameterized models in the variable selection context. instead we simply utilize these results for optimal variable selection from a map perspective on the marginal posterior. in addition to this model selection criterion we also consider, in the bayesian framework, the deviance information criterion (dic), see bernardo and smith (1994). from a classical maximum likelihood perspective we present the likelihood ratio (lhr) p-values. application of this technique to the simulated mcmc samples for each of the considered models produced the posterior model probabilities given in table [tab4]. this suggests that, within the subset of models considered, the saturated model was the optimal model to utilize in the analysis of the claims reserving problem; it is followed by the next most probable model. additionally, this choice was also supported by the other criteria we considered: dic and lhr. in future research it would be interesting to extend the analysis to the full model space, which considers all models in the power set of possible parameter constraints. the mle results underestimate the uncertainties compared to the bayesian analysis. note that, while the mles for the uncertainties are proportional to the dispersion estimator, the corresponding bayesian estimators are averages over all possible values of the dispersion according to its posterior distribution. the uncertainty in the estimate for the dispersion is large, which is also highlighted by a bootstrap analysis in wüthrich and merz (2008), section 7.3. this indicates that the dispersion should perhaps also depend on the individual cells. however, in this case overparameterization needs to be considered with care and the bayesian framework should be preferred. the results demonstrate the development of a bayesian model for the claims reserving problem when considering tweedie's compound poisson model.
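in code, the final line of the estimator above reduces to a simple average of normalised likelihood weights across the per-model chains; the sketch below assumes the post-burn-in log-likelihood traces have been collected into one array (equal model priors and the common flat priors cancel, as in the formula).

```python
import numpy as np

def posterior_model_probs(loglik_traces):
    """estimate the posterior model probabilities from per-model mcmc output:
    loglik_traces has shape (number of models, number of post-burn-in
    iterations), row k holding the log-likelihood of model k evaluated along
    model k's own chain."""
    ll = np.asarray(loglik_traces, dtype=float)
    ll = ll - ll.max(axis=0, keepdims=True)            # stabilise the exponentials
    w = np.exp(ll)
    return (w / w.sum(axis=0, keepdims=True)).mean(axis=1)
```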
the sampling methodology of a gibbs sampler is applied to the problem to study the model sensitivity for a real data set .the problem of variable selection is addressed in a manner commensurate with the mcmc sampling procedure developed in this paper and the most probable model under the posterior marginal model probability is then considered in further analysis . under this modelwe then consider two aspects , model selection and model averaging with respect to model parameter .the outcomes from these comparisons demonstrate that the model uncertainty due to fixing plays a significant role in the evaluation of the claims reserves and its conditional msep .it is clear that whilst the frequentist mle approach is not sensitive to a poor model selection , the bayesian estimates demonstrate more dependence on poor model choice , with respect to model parameter .we use constant priors with very wide ranges to perform inference in the setting where the posterior is largely implied by data only .also , we run a large number of mcmc iterations so that numerical error in the bayesian estimators is very small . in the case of the data we studied , the mles for the claims reserve ,process variance and estimation error were all significantly different ( less ) than corresponding bayesian estimators .this is due to the fact that the posterior distribution implied by the data and estimated using mcmc is materially different from gaussian , i.e. more skewed .future research will examine variable selection aspects of this model in a bayesian context considering the entire set of possible parameterizations .this requires development of advanced approaches such as reversible jump mcmc and variable selection stochastic optimization methodology to determine if a more parsimonious model can be selected under assumptions of homogeneity in adjacent columns / rows in the claims triangle . * * + the first author is thankful to the department of mathematics and statistics at the university of nsw for support through an australian postgraduate award and to csiro for support through a postgraduate research top up scholarship .thank you also goes to robert kohn for discussions . ** + _ csiro mathematical and information sciences , sydney , locked bag 17 , north ryde , nsw , 1670 , australia _ + and + _ unsw mathematics and statistics department , sydney , 2052 , australia . + email : peterga.unsw.edu.au_ + * * ( corresponding author ) + _ csiro mathematical and information sciences , sydney , locked bag 17 , north ryde , nsw , 1670 , australia .+ email : pavel.shevchenko.au_ + * * + _ eth zurich , department of mathematics , ch-8092 zurich , switzerland . +email : wueth.ethz.ch_ + [ c]|c||rrrrrrrrrr|accident & + year & 0 & 1 & & & & & & & & + & + & & + & & + & & + & & + & & + & & + & & + & & + & & +
in this paper we examine the claims reserving problem using tweedie's compound poisson model. we develop the maximum likelihood and bayesian markov chain monte carlo simulation approaches to fit the model and then compare the estimated models under different scenarios. the key point we demonstrate relates to the comparison of reserving quantities with and without model uncertainty incorporated into the prediction. we consider both the model selection problem and the model averaging solutions for the predicted reserves. as part of this process we also consider the sub-problem of variable selection to obtain a parsimonious representation of the model being fitted. * keywords: * claims reserving, model uncertainty, tweedie's compound poisson model, bayesian analysis, model selection, model averaging, markov chain monte carlo. this is a preprint of an article to appear in astin bulletin 39(1), pp. 1-33, 2009.
accurate frequency measurements of thousands of modes of solar acoustic oscillation provide detailed information on the solar interior . in order to use these frequencies to derive internal structure and dynamics of the sun , it is crucial to understand and limit the uncertainties in the computation of solar models and mode frequencies. furthermore , it is of considerable interest to investigate the sensitivity of solar structure to changes in the input physical parameters and properties of the solar interior .one of the important physical properties in solar model calculations is the opacity , which is intimately linked to the properties of the solar oscillations through its effects on the mean structure of the sun .here we investigate the sensitivity of solar structure to local modifications to the opacity .several authors have investigated the effect of opacity on the solar models and oscillation frequencies . in an early paper , bahcall ,bahcall & ulrich ( 1969 ) studied the sensitivity of the solar neutrino fluxes to localised changes in the opacity and equation of state and concluded that the neutrino capture rates are more sensitive to the equation of state .the effects of artificial opacity modifications on the structure of solar models and their frequencies were also examined by christensen - dalsgaard ( 1988 ) .constructing static models of the present sun with an enhanced opacity near the base of the convection zone , he showed that the changes in structure and frequencies are approximately linear even to an opacity change of 60% .the linearity of the response of the model to opacity changes was later confirmed by christensen - dalsgaard & thompson ( 1991 ) . in a detailed investigation ,korzennik & ulrich ( 1989 ) attempted to improve the agreement between the theoretical and observed frequencies of oscillations by determining corrections to the opacity through inverse analysis .they found that the opacity inversion can only partially resolve the discrepancy . in a similar analysis ,saio ( 1992 ) obtained opacity corrections by fitting to low - degree frequency separations and helioseismically inferred sound speeds at a few points in the model ; he found that much of the discrepancy between the sun and the model could be removed by opacity changes of up to 50 % .more recently , elliott ( 1995 ) investigated the helioseismic inverse problem as expressed in terms of corrections to the opacity .he derived kernels relating the opacity differences to the changes in frequencies , on the assumption that the change in luminosity could be ignored ; he proceeded to carry out inversions for the opacity errors , based on the observed solar oscillation frequencies , and neglecting possible changes in composition associated with the change in opacity .he found that the differences between observed and computed oscillation frequencies could be accounted for by opacity changes of up to about 5 % .here we carry out a detailed investigation of the sensitivity of solar structure to localised changes in the opacity , both for static and evolutionary solar models .the ultimate goal , taken up in a subsequent paper , is to determine the opacity corrections required to account for the helioseismically inferred properties of the solar interior . 
in general the opacity is a function of density , temperature and composition .however , it is evident that information about the properties of the present sun can not constrain the opacity in such generality .thus , for simplicity , we consider only opacity modifications that are functions of temperature alone , being the logarithm to base 10 .if the opacity correction is sufficiently small that higher - order terms can be neglected , the response of any quantity related to solar structure can be expressed in terms of a differential kernel as = k_f(t ) ( t ) t .[ kernel ] here for simplicity we estimate the kernels from k_f(t_0 ) = ( f / f)(t_0 ) ( t ) t , [ kdef ] where is the change corresponding to a suitably change localised at .relations similar to eq .( [ kernel ] ) form the basis of inverse analysis ( gough 1985 ) : if is the difference between the observed and theoretical frequencies , eq .( [ kernel ] ) can in principle be inverted to determine corrections to the opacity in the model ( korzennik & ulrich 1989 ; saio 1992 ; elliott 1995 ) .the kernels also provide a powerful visualization of the response in a given physical quantity of the solar model to a small perturbation in the input physics . as an example of thiswe evaluate kernels relating opacity changes to the structural differences in the solar model , the neutrino fluxes , and the small frequency separation between low - degree modes .to illustrate the sensitivity of the models to opacity changes we have computed extensive sets of comparatively simple models .these were based on the opal opacity tables ( iglesias , rogers & wilson 1992 ) , the eggleton , faulkner & flannery ( 1973 ) equation of state , and nuclear reaction parameters from parker ( 1986 ) and bahcall & ulrich ( 1988 ) .convection was treated with the mixing - length formulation of bhm - vitense ( 1958 ) .the neutrino capture rates for the and experiments were obtained with the cross - sections given by bahcall ( 1989 ) .the heavy - element abundance was 0.01895 .all models were calibrated to a fixed radius ( and luminosity ( ) at the assumed solar age of years by adjusting the mixing - length parameter and the composition .diffusion and gravitational settling were ignored .details of the computational procedure were described by christensen - dalsgaard ( 1982 ) . it should be noted that our models are somewhat simplified , particularly in the choice of equation of state and the neglect of settling , compared with present state - of - the - art solar models ( for a review see , for example , christensen - dalsgaard 1996 ) .thus the values of the model parameters quoted , e.g. , in table 1 can not be regarded as representative of the actual solar structure .however , we are here concerned with the _ differential _ effects on the models of changes to the opacity ; for this purpose , the present simplified physics is entirely adequate .we have considered two types of models : static models of the present sun and proper evolution models , evolved from a chemically homogeneous initial zero - age main - sequence ( zams ) model . in the static models ,the hydrogen profile where = ( being the mass interior to the given point and is the total mass of the sun ) , was obtained by scaling by a constant factor the profile , obtained from a complete evolution model : x(q ) = x_r(q ) , [ xscale ] where was adjusted to fit the solar luminosity . 
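numerically, the kernel relation and its localised-perturbation estimate translate into a couple of one-liners. the sketch below assumes the run of $\log_{10} T$ through the model and the imposed change in $\ln\kappa$ are available on a common grid; the gaussian shape used for the probe is our assumption for the "narrowly confined" modification of eq. ([kapmod]), whose exact functional form is not recoverable here.

```python
import numpy as np

def predicted_response(kernel, delta_ln_kappa, log_t):
    """forward use of an opacity kernel, eq. ([kernel]):
    delta_f / f = integral of k_f(t) * delta ln kappa(t) over log10 t."""
    return np.trapz(kernel * delta_ln_kappa, log_t)

def kernel_estimate(delta_f_over_f, delta_ln_kappa, log_t):
    """estimate k_f at the temperature where the modification is localised,
    eq. ([kdef]): divide the fractional response by the integrated bump."""
    return delta_f_over_f / np.trapz(delta_ln_kappa, log_t)

def opacity_bump(log_t, log_t0, amplitude, width):
    """a gaussian-shaped localised change of ln kappa centred on log_t0
    (shape assumed; amplitude and width play the roles of the magnitude and
    width constants in eq. ([kapmod]))."""
    return amplitude * np.exp(-((log_t - log_t0) / width) ** 2)
```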
in the evolutionmodels the calibration to solar luminosity was achieved by adjusting the initial helium abundance .the sensitivity of solar structure to opacity was investigated by increasing in a narrowly confined region near a specific temperature according to = _ 0 + f(t_0 ) , where is the opacity as obtained from the opacity tables ; was varied over the temperature range of the model .the function has the form f(t_0 ) = a , [ kapmod ] where the constants and determine the magnitude and width of the opacity modification .henceforth the models computed with the same input physics but without opacity modifications will be referred to as reference models whereas the perturbed ones will be referred as modified models . [cols="^,<,^,^,^,^,^,^,^,^ " , ] it is evident that these kernels correspond precisely , apart from a scaling , to the neutrino - flux differences shown in fig . 4b .also , it is striking that even for the neutrino flux there is little difference between the results for static and evolution models .in contrast , kernels for the small frequency separation , which depends strongly on the composition gradient in the core , can not be estimated accurately from the static models .= 8.8 cm to test these kernels , we use the models of christensen - dalsgaard ( 1992 ) , with reduced core opacity .specifically , the opacity modification was determined from = _ 0 - a f(t ) , where \dis , & \mbox{if }\\ 1 & \mbox{otherwise ; } \end{array}\right.\ ] ] as did christensen - dalsgaard ( 1992 ) , we used and .this form of provides an opacity decrease over a well defined region in with a continuous transition to zero at lower temperature .( note that in these models the unmodified opacity was obtained using the los alamos opacity library of huebner 1977 . )it was found that the neutrino flux was reduced by a factor of more than three for ( see also table 3 ) .table 3 compares the values of the neutrino flux and the small frequency separation computed for the models with reduced opacity with those estimated by means of the kernels shown in fig .it is evident that there is good agreement between the values of for both and 0.4 .the agreement in neutrino flux is somewhat better for , the error being % for .however , it is remarkable that linearized expressions of the form given in eq .( [ kernel ] ) remain reasonably precise for a reduction in opacity by more than a factor of two . to test the precision with which the kernels reproduce the structure of the model , fig .11 compares differences in and obtained from the kernels in fig . 
9 and the corresponding density kernels , with the actual model differences .results are shown both for the static model sw2 and the evolution model ew2 , both corresponding to .as before , the agreement is quite close both for evolution and for static models .our understanding of the solar internal structure has improved significantly over the last few years due to the rapid progress in helioseismology as well as in the construction of solar models with improved input physics .although much of the knowledge has been obtained through inverse analysis , the starting point is always solar models ; thus it is important to investigate the sensitivity of the solar structure to the input physics and to determine the presence of any region which is particularly sensitive to a specific parameter .since the major uncertainties in the microphysics come from the opacity , we have focussed on the examination of the solar structure by means of a localised opacity change as a function of temperature .the sensitivity of the solar structure was represented by kernels relating the opacity changes to neutrino flux , frequency separation at low degree as parametrised by and the structure difference between modified and reference models .these kernels were subsequently used to derive the same parameters corresponding to a reduction in opacity by a factor of more than two in the models of christensen - dalsgaard ( 1992 ) , simulating the effects of weakly interacting massive particles .we found that the kernels were remarkably successful in estimating the changes in the solar structure caused by even such a large change in the input physics .a natural next step in this investigation is to study systematically how the oscillation frequencies are affected by opacity changes and also how well these frequency changes can be reproduced by kernels .furthermore , in a preliminary analysis tripathy , basu & christensen - dalsgaard ( 1997 ) found that much of the current discrepancy between the helioseismically inferred solar sound speed and the sound speed of a standard solar model can be understood in terms of modest modifications to the opacity .more detailed analyses of this nature are now under way .we thank s. basu and j. elliott for useful discussion as well as for comments on the manuscript .jc - d is grateful to the high altitude observatory for hospitality during much of the preparation of the paper .the work was supported in by danish natural science research council , and by the danish national research foundation through its establishment of the theoretical astrophysics center .sct acknowledges financial support from the department of science and technology , government of india .for convenience , we present some simple approximate properties of convective envelopes , which are useful for the interpretation of the numerical results in sect . 4.2 .further details were provided by christensen - dalsgaard ( 1997 ) .note also that baturin & ayukov ( 1995 , 1996 ) carried out a careful analysis of the properties of convective envelopes and their match to the radiative interior . 
in the convection zonethe stratification departs substantially from being adiabatic only in a very thin region just below the photosphere .thus the structure of the bulk of the convection zone is determined by the equation of hydrostatic equilibrium = - g m r^2 [ hydro ] ( being the gravitational constant and the mass interior to ) , and the relation .[ gamma ] to this approximation , the convection - zone structure therefore depends only on the equation of state , the composition and the constant specific entropy , the latter being fixed by the mixing - length parameter . in practice ,a more useful characterization of the bulk of the convection zone follows from noting that is approximately constant outside the dominant ionization zones of hydrogen and helium ; then eq . ( [ gamma ] ) shows that and are related by p k ^_1 , [ prho ] where the constant is closely related to .assuming again to be constant and neglecting the mass in the convection zone , it is readily shown from eqs ( [ hydro ] ) and ( [ gamma ] ) that is given by u g m ( 1 - 1 _ 1 ) ( 1 r - 1 r^ * ) , [ solu ] approximately valid in the lower parts of the convection zone ( e.g. christensen - dalsgaard 1986 ; dziembowski , pamyatnykh & sienkiewicz 1992 ; christensen - dalsgaard & dppen 1992 ; baturin & mironova 1995 ) ; here is the mass of the model and , its surface radius . from eqs ( [ prho ] ) and ( [ solu ] ) we also obtain p^1 - 1/_1 k^-1/_1 _ 1 -1 _ 1 g m ( 1 r - 1 r _ * ) . [ solp ] eq .( [ solu ] ) indicates that , and therefore , depend little on the details of the physics of the model ; in particular , if and are assumed to be fixed , as in the case of calibrated solar models , . this is confirmed by figs [ statdiff ] and [ evoldiff ] , which show that is very small in the bulk of the convection zone . from eq .( [ solp ] ) it furthermore follows that p - 1 _ 1 - 1 k [ deltap ] are approximately constant , again in accordance with figs [ statdiff ] and [ evoldiff ] . finally , assuming the ideal gas law, we have that t . [ deltat ] these relationsmay be used to investigate the changes in the model resulting from changes in the convection - zone parameters .of particular interest are conditions at the base of the convective envelope , defining the match to the radiative interior . neglecting possible convective overshoot , , the radiative temperature gradient , at this point . for calibrated modelsthe luminosity is unchanged ; then .assuming also that is constant and that the heavy - element abundance is fixed , we find that the change in the pressure at the base of the convection zone is related to the changes in and the envelope hydrogen abundance by & & - 4 - _ t ( 4 - _ t ) ( _ 1 - 1 ) - _ 1(_p + 1 ) k + & & - _ 1 [ ( 4 - _ t ) _ x - _ x ] ( 4 - _ t ) ( _ 1 - 1 ) - _ 1(_p + 1 ) [ delpcz ] ( see also christensen - dalsgaard 1997 ) ; here _ p = ( / p)_t , x ,_ t = ( / t)_p , x , + _ x = ( / x)_p , t , _ x = ( / x)_z . 
having obtained , the change in the depth of the convection zone can be determined from eq .( [ solp ] ) as [ k + ( _ 1 - 1 ) ] , [ deldcz ] where is the radius at the base of the convection zone .[ note that this relation differs from eq .( 11 ) of christensen - dalsgaard ( 1997 ) .there it was assumed that the interior properties of the model were largely unchanged while the surface radius was allowed to change ; this is the case relevant , for example , to the calibration of solar models to have a specific radius .here we have kept fixed and the change in corresponds to changes in the properties of the radiative interior of the model . ]it is of some interest to compare these simple relations with the numerical results obtained for the envelope models listed in table 2 .to do so , we first need to relate the change in to the changes in and . from the results in table 2 ( k ) _-1.40 , ( k ) _ 1.90 .[ numder ] as discussed by christensen - dalsgaard ( 1997 ) , the relation between and , at fixed composition , follows simply from the properties of the mixing - length theory ; the result of such an analysis , using the properties of the reference envelope model en1 , agrees quite closely with the value given above .it is less straightforward to derive a simple expression for the relation between and .to evaluate the changes in and , we need the derivatives of and . at the base of the convection zone in the reference envelope en1 we have the following values : , , , and .thus , from eq .( [ delpcz ] ) we obtain , and hence , from eq .( [ deldcz ] ) , -0.49 k + 0.87 .[ deldczknum ] using also eqs ( [ numder ] ) we find 0.69 - 0.07 . [ deldcznum ] for comparison , the results in table 2 give ( ) _ 0.72 , ( ) _ -0.18 .evidently , the -derivative is in reasonable agreement with eq .( [ deldcznum ] ) ; although the agreement appears less satisfactory for the derivative with respect to , it should be noticed that small coefficient in eq .( [ deldcznum ] ) arises from near cancellation between the contributions from the two terms in eq .( [ deldczknum ] ) . christensen - dalsgaard j. , 1997 , in : it solar convection and oscillations and their relationship : proc .score96 , aarhus , may 1996 , eds f. p. pijpers , j. christensen - dalsgaard , c. s. rosenthal , in the press christensen - dalsgaard j. , dppen w. , ajukov s. v. , 1996 , sci 272 , 1286 dziembowski w. a. , pamyatnykh a. a. & sienkiewicz r. , 1992 , acta astron .42 , 5 eggleton p. p. , faulkner j. , flannery b. p. , 1973, a&a 23 , 325 saio h. , 1992 , mnras 258 , 491 tripathy s. c. , basu s. & christensen - dalsgaard j. , 1997 , in : poster volume ; proc .iau symposium no 181 : sounding solar and stellar interiors , eds f. x. schmider , j. provost , nice observatory , in the press
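As a quick arithmetic check of the coefficients quoted in the appendix above, the snippet below combines the numerical derivatives of eq. [numder] with the relation of eq. [deldczknum]. The identification of the two independent variables with the mixing-length parameter and the envelope hydrogen abundance is an assumption inferred from context, as is the exact form of eq. [deldczknum].

```python
# Derivatives of ln K quoted in eq. [numder] (assumed to be with respect to
# ln(alpha) and X; this identification is inferred from the surrounding text).
dlnK_dlnalpha = -1.40
dlnK_dX = 1.90

# Assumed form of eq. [deldczknum]: delta d_cz ~ -0.49 delta ln K + 0.87 delta X.
c_lnK, c_X = -0.49, 0.87

coeff_alpha = c_lnK * dlnK_dlnalpha      # ~ 0.69, as quoted in eq. [deldcznum]
coeff_X = c_lnK * dlnK_dX + c_X          # ~ -0.06, quoted as -0.07
print(coeff_alpha, coeff_X)              # compare with the Table 2 values 0.72 and -0.18
```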
despite recent major advances , the opacity remains a source of substantial uncertainty in the calculation of solar models , and hence of solar oscillation frequencies . hence it is of substantial interest to investigate the sensitivity of solar structure to changes in the opacity . furthermore , we may hope from the precise helioseismic inferences of solar structure to obtain information about possible corrections to the opacities used in the model calculation . here we carry out detailed calculations of the influence on solar models of changes in the opacity , including also evolutionary effects . we find that over the relevant range the response of the model is approximately linear in the opacity change , allowing the introduction of _ opacity kernels _ relating a general opacity change to the corresponding model changes . changes in the convection zone can be characterized entirely by the change in the initial composition and mixing length required to calibrate the model . # 1]*#1 ] *
in large - scale cellular networks , the existence of interference at a receiver is an inevitable outcome of the concurrent operation of multiple transmitters at the same carrier frequency and time slot .it is a fundamental notion of wireless communications and thus methods to reduce it and even exploit it are of great interest .conventionally , multiuser interference is restricted with the use of orthogonal channels which prevent intra - cell interference , i.e. a user ( base station ( bs ) ) experiences interference only from out - of - cell bss ( users ) . however , even though orthogonality assists in the reduction of multiuser interference , it limits the available spectrum .given the increased use of wireless devices , especially mobile phones , the orthogonality schemes will be quite restrictive in the near future . towards this direction ,full - duplex ( fd ) is considered as a possible technology for the next generation of cellular networks .fd radio refers to the simultaneous operation of both transmission and reception using non - orthogonal channels and hence its implementation could potentially double the spectral efficiency .nevertheless , the use of non - orthogonal channels has the critical disadvantage of increasing the interference in a cellular network , which significantly degrades its performance .firstly , the existence of more active wireless links results in the escalation of both intra- and out - of - cell multiuser interference .secondly , the non - orthogonal operation at a transceiver creates a loop interference ( li ) between the input and output antennas .this aggregate interference at a receiver is the reason why fd has been previously regarded as an unrealistic approach in wireless communications .indeed , the primary concern towards making fd feasible was how to mitigate the li which has an major negative impact on the receiver s performance .recently , many methods have been developed which successfully mitigate the li ; these methods can be active ( channel - aware ) , e.g. , , passive ( channel - unaware ) , e.g. , , or a combination of the two , e.g. , . with regards to the multiuser interference , a well - known approach for reducing itis the employment of directional antennas . by focusing the signal towards the receiver s direction ,the antenna can increase the received power and at the same time decrease the interference it generates towards other directions .the significance of directional antennas in large - scale networks has been shown before . in , the authors studied an ad - hoc network s performance under some spatial diversity methods and showed the achieved gains .moreover , in , the impact on the performance of a downlink user in a heterogeneous cellular network with directional antennas was demonstrated .the employment of directional antennas in an fd context , provides the prospect of passively suppressing the li with antenna separation techniques , . in ,fd cellular networks with omnidirectional antennas are investigated where the terminals only make use of active cancellation mechanisms . in this paper, we consider fd cellular networks where the terminals employ directional antennas and therefore , in addition to the active cancellation , they can passively suppress the li and also reduce the multiuser interference .the main contribution of this work is the modeling of the passive suppression as a function of the angle between the transmit and receive antennas . 
by deriving analytical expressions of the outage probability and the average sum rate of the network ,we show the significant gains that can be achieved .the rest of the paper is organized as follows .section [ sec : model ] presents the network model together with the channel , interference and directional antenna model .section [ sec : analysis ] provides the main results of the paper and in section [ sec : validation ] the numerical results are presented which validate our analysis .finally , the conclusion of the paper is given in section [ sec : conclusion ] .: denotes the -dimensional euclidean space , denotes a two dimensional disk of radius centered at , denotes the euclidean norm of , represents the number of points in the area , denotes the probability of the event and represents the expected value of .fd networks can be categorized into two - node and three - node architectures .the former , referred also as bidirectional , describes the case where both nodes , i.e. , the user and the base station ( bs ) , have fd - capabilities .the latter , also known as relay or bs architecture , describes the case where only the bs ( or in other scenarios the relay ) has fd - capabilities . in what follows , we consider both architectures in the case where each node employs a number of directional antennas .the network is studied from a large - scale point of view using stochastic geometry .the locations of the bss follow a homogeneous poisson point process ( ppp ) of density in the euclidean plane , where denotes the location of the bs .similarly , let be a homogeneous ppp of the same density but independent of to represent the locations of the users .assume that all bss transmit with the same power and all users with the same power .a user selects to connect to the nearest bs in the plane , that is , bs serves user if and only if where and . assuming the user is located at the origin and at a distance to the nearest bs , the probability density function ( pdf ) of is .note that this distribution is also valid for the nearest distance between two users and between two bss .all channels in the network are assumed to be subject to both small - scale fading and large - scale path loss .specifically , the fading between two nodes is rayleigh distributed and so the power of the channel fading is an exponential random variable with mean .the channel fadings are considered to be independent between them .the standard path loss model is used which assumes that the received power decays with the distance between the transmitter and the receiver , where denotes the path loss exponent . throughout this paper, we will denote the path loss exponent for the channels between a bs and a user by .for the sake of simplicity , we will denote by the path loss exponent for the channels between bss and between users .lastly , we assume all wireless links exhibit additive white gaussian noise ( awgn ) with zero mean and variance .define as and the number of directional transmit / receive antennas employed at a bs and a user respectively .the main and side lobes of each antenna are approximated by a circular sector as in .therefore , the beamwidth of the main lobe is , .it is assumed that an active link between a user and a bs lies in the boresight direction of the antennas of both nodes , i.e. , maximum power gain can be achieved .note that refers to the omni - directional case . as in , we assume that the antenna gain of the main lobe is where , is the ratio of the side lobe level to the main lobe level . 
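Before continuing with the antenna model, here is a small self-contained check of the nearest-BS distance distribution used above, f(r) = 2*pi*lambda*r*exp(-lambda*pi*r^2), whose mean is 1/(2*sqrt(lambda)). The density and window size are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1e-4          # BS density per unit area (illustrative value)
side = 2000.0       # square observation window, large enough to avoid edge effects
user = np.array([side / 2.0, side / 2.0])   # typical user at the window centre

dists = []
for _ in range(5000):
    n = rng.poisson(lam * side ** 2)                       # number of BSs in the window
    bs = rng.uniform(0.0, side, size=(n, 2))               # uniform locations given n
    dists.append(np.linalg.norm(bs - user, axis=1).min())  # distance to the serving BS

# Empirical mean vs. theoretical mean 1/(2*sqrt(lambda)) = 50 for these values.
print(np.mean(dists), 1.0 / (2.0 * np.sqrt(lam)))
```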
therefore , the antenna gain of the side lobe is , . the total multiuser interference at a node is the aggregate sum of the interfering received signals from the bss of and the uplink users of . in the two - node architecture , multiuser interference at any node results from both out - of - cell users and bss . in the three - node architecture , the bs experiences multiuser interference from out - of - cell bss and users , whereas the downlink user experiences additional intra - cell interference from the uplink user . when or the transmitters can interfere with a receiver in four different ways :

* transmitting towards a receiver in the main sector ,
* transmitting away from a receiver in the main sector ,
* transmitting towards a receiver outside the main sector ,
* transmitting away from a receiver outside the main sector ,

where the main sector is the area covered by the main lobe of the receiver . consider the interference received at a node from all other network nodes , , . to evaluate the interference , each case needs to be considered separately . this results in each of the ppps and being split into four thinning processes and with densities . additionally , the power gain of the link between and changes according to . table [ tbl : thin ] provides the density and power gain for each case . note that and when the links have no gain , i.e. , .

[ table [ tbl : thin ] : thinned densities and power gains for the four interference cases ; the table entries are not recoverable from the source text . ]

in this section , we analytically derive the outage probability and the sum rate of an fd cellular network implementing the three - node architecture . the respective expressions for the two - node architecture are omitted since they can be derived in a similar way . the performance analysis is derived using similar procedures as in , and . without loss of generality and following slivnyak s theorem , we execute the analysis for a typical node located at the origin but the results hold for all nodes in the network . denote by , and the typical downlink user , uplink user and bs respectively .

[ figure : illustration of the directional antenna model ( circular - sector approximation ) and of the network geometry ; the original drawing commands are not recoverable from the source text . ]

the outage probability describes the probability that the instantaneous achievable rate of the channel is less than a fixed target rate , i.e. , ] .
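Since the entries of table [tbl:thin] did not survive extraction, the sketch below shows how the four thinned densities and the corresponding power gains would typically be computed under the circular-sector approximation. The gain normalisation (average gain over all directions equal to one) and all numerical values are assumptions for illustration, not the paper's table.

```python
import numpy as np

def sector_model(M, k):
    """Sectorised antenna with main-lobe width 2*pi/M and side-to-main ratio k.
    Normalised so the average gain over all directions is 1 (an assumption)."""
    p_main = (2 * np.pi / M) / (2 * np.pi)      # prob. a random direction hits the main lobe
    g_main = 1.0 / (p_main + (1 - p_main) * k)
    g_side = k * g_main
    return p_main, g_main, g_side

M_b, M_u, k = 4, 4, 0.1                         # example antenna counts and side-lobe ratio
p_t, gt_m, gt_s = sector_model(M_b, k)          # transmitter antenna
p_r, gr_m, gr_s = sector_model(M_u, k)          # receiver antenna
lam = 1e-4                                      # interferer density (illustrative)

cases = {
    "main -> main": (lam * p_t * p_r, gt_m * gr_m),
    "side -> main": (lam * (1 - p_t) * p_r, gt_s * gr_m),
    "main -> side": (lam * p_t * (1 - p_r), gt_m * gr_s),
    "side -> side": (lam * (1 - p_t) * (1 - p_r), gt_s * gr_s),
}
for name, (density, gain) in cases.items():
    print(f"{name}: thinned density {density:.2e}, power gain {gain:.2f}")
```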
, , and consider high power transmissions which result in an interference - limited network , that is .then , using firstly the transformations in and in and secondly the transformations and , the outage probability as approaches infinity changes to , for the uplink and , for the downlink .it is clear that is independent from the density of the network and depends only on the target rate and the ratio .also , when then , which is expected since in this case there will be no multiuser interference . on the other hand , heavily depends on the value of . when and , depends entirely on the function and on the density , since in this case and .when , behaves similarly to : it becomes independent of ( even though it is not entirely clear ) and its value converges to zero when .we validate and evaluate our proposed model with computer simulations .unless otherwise stated , the simulations use the parameters from section [ sec : asymptotic ] , together with , and . in figures [ fig : outage_vs_bpcu]-[fig : rate_vs_li ] , the dashed lines represent the analytical results .firstly , we illustrate in figure [ fig : outage_vs_bpcu ] the impact on the uplink outage probability from the employment of directional antennas .both architectures benefit greatly from directionality but the three - node achieves a better performance due to the bs s ability to passively suppress the li .this is shown in figure [ fig : outage_vs_li ] where we depict the performance of a bs in terms of the outage probability , with and without passive suppression , for different values of . in the two extreme cases , and , the two methods have the same performance . in the former case, converges to a constant floor and in the latter case .however , for moderate values , passive suppression provides significant gains , e.g. , for db it achieves about reduction of .finally , figure [ fig : rate_vs_li ] shows the average sum rate of each architecture with respect to .the sum rate of the three - node architecture is obviously greater than the sum rate of the two - node architecture .this is in part due to the li passive suppression at the uplink but also due to the half - duplex mode at the downlink which is not affected by li .when , the sum rate of the three - node converges to the rate of the downlink whereas the sum rate of the two - node converges to zero . on the other hand , when , the two - node outperforms the three - node as expected since both nodes operate fd mode but this scenario is difficult to achieve which is also evident from the figurethis paper has presented the impact of directional antennas on the performance of fd cellular networks .the ability of the three - node architecture to passively suppress the li at the uplink has significant gains to its efficiency .moreover , since the downlink user operates in half - duplex mode , the network can achieve high sum rates .the three - node architecture is regarded as the topology to be potentially implemented first in the case of fd employment in cellular networks .the main reason is the high energy requirements which fd will impose on future devices .the results of this paper , give insight as to how such an architecture will perform and provide another reason to support its implementation . 
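As a complement to the numerical validation discussed above, the following stripped-down Monte Carlo sketch estimates an uplink outage probability in an interference-limited Poisson network. It ignores directional gains, loop interference and the exact exclusion region of the paper, and uses illustrative parameter values, so it only reproduces qualitative trends rather than the paper's curves.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1e-4          # density of interfering uplink users (illustrative)
alpha = 4.0         # path loss exponent
R_target = 1.0      # target rate in bits/s/Hz
theta = 2 ** R_target - 1
side = 4000.0       # simulation window side length

def one_trial():
    # Serving user at the nearest-point distance from the typical BS at the origin:
    # R^2 is exponential with mean 1/(pi*lam).
    r0 = np.sqrt(rng.exponential(1.0 / (np.pi * lam)))
    signal = rng.exponential() * r0 ** (-alpha)            # Rayleigh fading -> exp. power
    n = rng.poisson(lam * side ** 2)
    pts = rng.uniform(-side / 2, side / 2, size=(n, 2))
    d = np.linalg.norm(pts, axis=1)
    d = d[d > r0]                                          # crude exclusion of closer nodes
    interference = np.sum(rng.exponential(size=d.size) * d ** (-alpha))
    return signal / max(interference, 1e-12) < theta       # outage event (no noise term)

outage = np.mean([one_trial() for _ in range(5000)])
print(f"estimated uplink outage probability: {outage:.3f}")
```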
conditioned on the distance to the nearest uplink user we have , \right]\\ & = 1 - 2\pi \lambda\int_0^\infty { \mathbb{p}}[\sinr^u \geq 2^r-1\ |\ r ] \;re^{-\lambda \pi r^2}{{\rm d}}r.\end{aligned}\ ] ] the coverage probability ] is evaluated as follows , = \prod_{i\in\{1,2,3,4\ } } { \mathbb{e}}_{\phi_i , g_j}\left[\prod_{j\in\phi_i}e^{-s p_b { \gamma}_{b , b , i } g_j d_j^{-\alpha_2}}\right]\nonumber\\ & \stackrel{(a)}{= } \prod_{i\in\{1,2,3,4\ } } { \mathbb{e}}_{\phi_i}\left[\prod_{j\in\phi_i}{\mathbb{e}}_g[e^{-s p_b { \gamma}_{b , b , i } g d_j^{-\alpha_2}}]\right]\nonumber\\ \nonumber&\stackrel{(b)}{= } \prod_{i\in\{1,2,3,4\}}e^{-2\pi{\lambda}_{b , b , i } \int_\rho^\infty \left(1-{\mathbb{e}}_g[\exp(-s p_b { \gamma}_{b , b , i } g y^{-\alpha_2 } ) ] \right ) y { { \rm d}}y}\\ & \stackrel{(c)}{= } \prod_{i\in\{1,2,3,4\}}e^{-2\pi{\lambda}_{u , u , i } \int_\rho^\infty \left(1 - \frac{\mu}{\mu + s p_u { \gamma}_{u , u , i } y^{-\alpha_2 } } \right ) y { { \rm d}}y}\label{eq : exp_i_b},\end{aligned}\ ] ] where follows from the fact that are i.i.d . and also independent from the point process ; follows from the probability generating functional ( pgfl ) of a ppp and the limits follow from the closest bs being at a distance ; follows from the mgf of an exponential random variable and since .the results follows by replacing $ ] with , with in and letting . can be derived in a similar way .m. wildemeersch , t. q. s. quek , m. kountouris , a. rabbachin , and c. h. slump , successive interference cancellation in heterogeneous networks , " _ ieee commun .44404453 , dec . 2014 .e. everett , m. duarte , c. dick , and a. sabharwal , empowering full - duplex wireless communication by exploiting directional diversity , " _ proc .asilomar conf .signals , systems and computers _ , pacific grove , ca , nov .2011 , pp . 20022006 .m. duarte and a. sabharwal , full - duplex wireless communications using off - the - shelf radios : feasibility and first results , " in _ proc .asilomar conf .signals , systems and computers _ , pacific grove , ca , nov .2010 , pp .15581562 .a. sabharwal , p. schniter , d. guo , d. w. bliss , s. rangarajan , and r. wichman , in - band full - duplex wireless : challenges and opportunities , " _ ieee jsac special issue on full - duplex wireless networks _ ,32 , pp . 16371652 , sept .
loop interference ( li ) in wireless communications , is a notion resulting from the full - duplex ( fd ) operation . in a large - scale network , fd also increases the multiuser interference due to the large number of active wireless links that exist . hence , in order to realize the fd potentials , this interference needs to be restricted . this paper presents a stochastic geometry model of fd cellular networks where the users and base stations employ directional antennas . based on previous experimental results , we model the passive suppression of the li at each fd terminal as a function of the angle between the two antennas and show the significant gains that can be achieved by this method . together with the reduction of multiuser interference resulting from antenna directionality , our model demonstrates that fd can potentially be implemented in large - scale directional networks . full - duplex , cellular networks , stochastic geometry , performance analysis , loop interference , passive suppression .
the last years have seen dramatic improvements in robotic capabilities relevant to household tasks such as putting items into a dishwasher , folding and ironing clothing , and cleaning surfaces .so far , however , robots have not been able to robustly perform household tasks involving liquids , such as pouring a glass of water .solving such tasks requires both robust control and detection of liquid during the pouring operation .humans often are not very accurate at this , requiring specialized containers to measure a specific amount of liquid .instead people often use vague , relative terms such as `` pour me a half cup of coffee '' or `` just a little , please . ''while there has been recent success in robotics on controlling a manipulator to pour liquids simulated by small balls and on detecting liquids using optical flow or deep learning , the task of pouring certain amounts of actual liquids has not been addressed . in this paper, we introduce a framework that enables robots to robustly pour specific amounts of a liquid into containers typically found in a home environment , such as coffee mugs , cups , glasses , or bowls .we achieve this in the most general setting , without requiring specialized hardware , such as highly accurate force sensors for measuring the amount of liquid held by a robot manipulator , scales placed under the target container , or sensors designed for detecting liquids .however , while we avoid requiring specialized environmental augmentation , our investigation is on how accurate a robot could pour under relatively controlled conditions , such as having been able to train on the target containers .the intuition behind our approach is based on the insight that people strongly rely on visual cues when pouring liquids .for example , a health study revealed that the amount of wine people pour into a glass is strongly biased by visual factors such as the shape of the glass or the color of the wine .we thus propose a framework that uses visual feedback in a closed - loop pouring controller .specifically , we train a deep neural network structure to estimate the amount of liquid in a cup from raw visual data .our network structure has two stages . in the first stage ,a network detects which pixels in a camera image contain water .the output of the detection network is fed into another network that estimates the amount of liquid already in the container .this amount is used as real - time feedback in a pid controller that is tasked to pour a desired amount of water into a cup . 
to generate labeled data needed for the neural networks, we developed an experimental setup that uses a thermal camera calibrated with an rgbd camera to automatically label which pixels in the color frames contain ( heated ) water .experiments with a baxter robot pouring water into three different containers ( two mugs and one bowl ) indicate that this approach allows us to train deep networks that provide sufficiently accurate volume estimates for the pouring task .our main contributions in this paper are ( 1 ) an overall framework for determining the amount of liquid in a container for real - time control during a pouring action ; ( 2 ) the use of thermal imagery to generate ground truth data for pixel level labeling of ( heated ) water ; ( 3 ) a deep neural network that uses such labels to detect liquid pixels in raw color images ; ( 4 ) a model - based method to determine the volume of liquid in a target container given pixel - wise liquid detection ; ( 5 ) a neural network to regress to the volume of liquid given pixel - wise liquid detections as input ; and ( 6 ) an extensive evaluation that shows that our methodology is suitable for control by deploying it on a robot for use in a pouring task .4.cm 4.cm 4.cm 4.cm there is prior work related to robotic pouring , however , most of it either uses coarse simulations disconnected from real liquid perception and dynamics or constrained task spaces that bypass the need to perceive and reason directly about liquids .additionally , all of these works with the exception of pour the entire contents of the source container into the target container , with the focus on other factors such as spillage or the overall motion trajectory .in contrast , in this work we focus primarily on pouring a specific amount of liquid from the source into the target rather than simply emptying the source container into the target . to do this , the robot requires some method for estimating the volume of liquid in the target .et al._ utilized force sensors in the robot s arm to measure how much had been poured out , however this requires a robot with very precise torque sensors , which are not available on our baxter robot . in our own prior work we placed a digital scale under the target container . but this method presents many of its own challenges , such as delay in the scale measurement ( often 1 - 2 seconds ) and no information about where the liquid is or how it is moving .humans , on the other hand , are able to accomplish this task purely from visual feedback , which strongly suggests that robots should be able to as well .there is some prior work related to directly perceiving liquids from sensory feedback .work by yamaguchi and atkeson utilizes optical flow to detect liquids as they flow from a source into a target container .however , for the tasks in this paper , the robot must also be able to detect standing water with no motion , for which optical flow is poorly suited .instead , we build on our own prior work relating to liquid detection in simulation .we developed a method utilizing fully - convolutional neural networks to label pixels in an image as either _ liquid _ or _ not - liquid_. 
here we utilize the recurrent network with long short - term memory ( lstm ) layers that we used in that work to detect and label liquid in an image .in this paper , the robot is tasked with pouring a specific amount of liquid from a source container into a target container .this task is more difficult than prior work on robotic pouring which primarily focuses on pouring all the contents of the source container into the target container , whereas we focus on pouring only a limited amount from the source . to accomplish this, the robot must use visual feedback to continuously estimate the current volume of liquid in the target container .our approach has 3 main components : first the robot detects which pixels in its visual field are liquid and which are not .next the robot uses these detections to estimate the volume of liquid in the target container . finally , the robot feeds these volume estimates into a controller to pour the liquid into the target .figure [ fig : model ] shows a diagram of this process .we structure the problem in this manner as opposed to simply training one end - to - end network as it allows us to train and evaluate each of the individual components of the system , which can give us better insight into its operation . in order for both the model - based and model - free volume estimation methods to work, the robot must classify each pixel in the image as _ liquid _ or _ not - liquid_. we developed two methods for acquiring these pixel labels : a thermographic camera in conjunction with heated water , and a fully - convolutional neural network with color images . while the thermal camera works well for generating pixel labels ,it is also rather expensive and must be registered to an rgbd sensor . in our prior work ,we developed a method for generating pixel labels on simulated data for liquid from color images only , which we briefly describe here .given color images , we train a convolutional neural network ( cnn ) to label each pixel as _ liquid _ or _ not - liquid _ ( we use the thermal camera to acquire the ground truth labeling ) .the network is fully - convolutional , that is , all the learned layers are convolution layers with no fully - connected layers .the output of the network is a heatmap over the image , with real values in the range $ ] , where higher values indicate a higher likelihood of liquid . 
in tested 3 network structures and found that a recurrent network utilizing a long short - term memory ( lstm ) layer performs the best of the 3 . here we use the lstm - cnn from that paper , which is shown in the top row of figure [ fig : model ] . we refer the reader to for more details .

[ figure [ fig : model ] : network diagrams for the lstm - cnn detection network ( top row ) , the multi - frame volume - estimation network ( center row ) and the hmm / pid control stage ; the original drawing commands are not recoverable from the source text . ]

we propose two different methods for estimating the volume of liquid in a target container . the first is a model - based method , which assumes we have access to a 3d model of the target container and infers the height of the liquid
based on the camera pose and binary pixel labels .the second is a model - free method that trains a neural network to regress to the volume of liquid in the target container given labeled pixels .our model - based method for estimating the volume of liquid in a target container assumes we have a 3d model of the container and that we can use the pointcloud from our rgbd sensor to find its pose in the scene . to determine the volume of liquid in the container ,we first acquire the pixel - wise liquid labels as described in the previous section .next we use these classifications to compute the height of the liquid in the container at time .if we assume that the liquid is resting level in the container , then there is a one - to - one correspondence between the height of the liquid and the volume , and we can use the 3d model to compute the volume given the height . we use a discrete bayes s filter with observations only to estimate a probability distribution over for all timesteps .that is , for all , we wish to estimate where are all the observations ( pixel labels ) up to time .we make the markovian assumption that each state is conditionally independent of all prior observations given the previous state , thus the resulting bayes filter is equivalent to a corresponding hidden markov model ( hmm ) .we can estimate the posterior distribution over using bayes s rule as follows : since we make the conditional independence assumption , we can drop the term from the conditional in the observation probability , resulting in .we compute as the expectation of the previous distribution times the transition probability , i.e , where is the probability of transitioning from height and is the prior probability of at time .we can compute the observation probability as where is the set of all pixels that see the inside of the target container , as determined by the model s pose in the scene .we make the naive bayes assumption that each pixel is conditionally independent of all the others given , which , while not technically correct , works well in practice .to compute the observation probability of an individual pixel being either liquid or not liquid given the height of the surface of the water , we compare what the robot would expect to see if were the true height to what the observed pixel labels are .to do this , we use the pose of the model to fit a plane to the surface height and project that surface back into the camera s pixel space , generating expected pixel labels .an example of this is shown in figure [ fig : therm_layout ] .based on this , we set as follows : [ cols=">,>,^,^ " , ] where is the expected label at pixel for .we allow slightly more error for classifying a pixel as _ liquid _ when it is above the level of the water due to the stream of liquid falling from the source container during the pour .we discretize the height into 1000 values . 
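A minimal numerical sketch of the discrete Bayes filter described above is given below. The per-pixel observation model, the Gaussian transition model and the container geometry are simplified placeholders; in particular the asymmetric per-pixel likelihoods are illustrative values rather than the ones used in the paper.

```python
import numpy as np

H = 1000                                  # number of discretised liquid heights
heights = np.linspace(0.0, 1.0, H)        # normalised height inside the container
belief = np.full(H, 1.0 / H)              # uniform prior over the height

def transition(belief, sigma=2.0):
    """Diffuse the belief slightly between frames (simple Gaussian transition model)."""
    idx = np.arange(H)
    T = np.exp(-0.5 * ((idx[None, :] - idx[:, None]) / sigma) ** 2)
    T /= T.sum(axis=1, keepdims=True)
    return T.T @ belief

def observation_likelihood(labels, pixel_heights, p_liq_below=0.9, p_liq_above=0.3):
    """p(labels | h) for every candidate height h, assuming (naively) independent pixels.
    labels: binary liquid / not-liquid labels for the pixels seeing the container interior.
    pixel_heights: container height each pixel corresponds to (from the 3D model)."""
    like = np.ones(H)
    for lab, ph in zip(labels, pixel_heights):
        # Expected probability of a 'liquid' label: high below the surface, lower above
        # it (to allow for the falling stream during the pour).
        p_liq = np.where(heights >= ph, p_liq_below, p_liq_above)
        like *= p_liq if lab else (1.0 - p_liq)
    return like

# One filter step with a toy observation: 40 interior pixels, liquid seen below height 0.3.
pixel_heights = np.linspace(0.0, 1.0, 40)
labels = pixel_heights < 0.3
belief = transition(belief)
belief *= observation_likelihood(labels, pixel_heights)
belief /= belief.sum()
est_height = heights[np.searchsorted(np.cumsum(belief), 0.5)]   # median of the posterior
print(est_height)
```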
after computing the distribution over ,we take the median as the height of the liquid , and thus the volume , at each time step .our model - free method replaces the object pose inference of the model - based method with a neural network .the neural network takes in pixel labels and produces a volume estimate .we use only the output of the detection network described in section [ sec : classification ] for the pixel labels , so we directly feed the heatmap over the pixels into the volume estimation network .we also evaluate adding as inputs either the color or depth images , which we append channel - wise to the pixel labels before feeding into the network .we crop the input to the network around the target container .the output of the volume estimation network is a discrete distribution over a class label .we treat this as the observation for a discrete bayes filter and process it in a similar manner as the previous section ( notably we make the markovian independence assumption ) .we can compute the distribution over volumes as where is a distribution over the volume of liquid in the target container at time .we compute in the same manner as in the previous section .to compute , we treat the output of the network as a probability distribution over a discrete observation .there are multiple methods we can utilize to compute the observation probability from the network output .for example , we could take the maximum probability and consider that value the observed value .however , a more robust and principled method would be to compute the expectation of the observation probability , taking the entire distribution output by the network into account .we can compute this as = \displaystyle\sum_i p(z_t = i | v_t)p(z_t = i ) \vspace{-0.2cm}\ ] ] where is the probability that the observation is given the volume is , and is the probability of state derived from the output of the network .we use the median of as the volume at time .we evaluated three different network architectures : a single - frame cnn , a multi - frame cnn , and a recurrent lstm cnn .we use the caffe deep learning framework to implement our networks _ single - frame cnn : _ the single - frame network is a standard cnn that takes as input a single image. it then passes the image through 5 convolution layers , each of which is followed by a max pooling and rectified linear layer .every layer has a stride of 1 except for the first 3 max pooling layers .it passes the result through 3 fully connected layers , each followed by a rectified linear layer .these last 3 layers are also followed by dropout layers during training , with a drop rate of 10% .the single - frame network ( cnn ) is similar to the multi - frame network shown in the center row of figure [ fig : model ] , with the exception that it only takes a single frame and does not have the concatenation layer or the convolution layer immediately following it . 
_multi - frame cnn : _ the multi - frame network ( mf - cnn ) is shown in the center row of figure [ fig : model ] .it takes as input a set of temporally sequential images .each image is passed independently through the first 5 layers of the network , which are identical to the first 5 convolutional layers in the single - frame network .next , the result of each image is concatenated channel - wise and passed through another convolution layer ( which is also followed by max pooling and rectified linear layers ) .this is then fed into 3 fully connected layers , which are identical to the last 3 layers of the single - frame cnn ._ recurrent lstm cnn : _ the lstm - cnn is identical to the single - frame network , with the exception that we replace the first fully connected layer with the lstm layer .in addition to the output of the convolution layers , the lstm layer also takes as input the recurrent state from the previous timestep , as well as the cell state from the previous timestep . each gate in the lstm layer is a 256 node fully connected layer .please refer to figure 1 of for a detailed layout of the lstm layer . for this paper , we want to investigate whether , given good real - time feedback , pouring can be performed with a simple controller .we place a table in front of the robot , and on the table we place the target container .we fix the source container in the robot s right gripper and pre - fill it with a specific amount of water not given to the robot .we also fix the robot s arm such that the source container is above and slightly to the side of the target container . to pour , the robot controls the angle of its wrist joint , thus directly controlling the angle of the source container .we use a modified pid controller to execute the pour .the robot first tilts the container to a pre - specified angle ( we use 75 degrees from vertical ) , then begins running the pid controller , using the difference between the target volume and the current volume in the target container as its error signal .since pouring is a non - reversible task ( liquid can not return to the source once it has left ) , we set the integral gain to 0 , and we set the proportional and derivative gains to and respectively .once the target volume has been reached , the robot rotates the source container until it is vertical once again .both our model - based and model - free methods require finding the target container on the table in front of the robot ( though only the model - based needs a 3d model ) . to find the container , we use the robot s rgbd camera to capture a pointcloud of the scene in front of the robot and then utilize functions built - in to the pointcloud library ( pcl ) to find the plane of the table and cluster the points on top of the table . to acquire the pose for the model - based method, we use iterative closest points to find the 3d pose of the model in the scene .next we use this pose to label each pixel in the image as either _ inner _( inside of the container ) , _ outer _ ( outside of the container ) , or _neither_. we use a thermal camera in combination with water heated to approximately 93celsius to get the ground truth pixel labels for the liquid . 
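Returning to the pouring controller described above, the following self-contained toy simulation illustrates the PD update on the wrist angle. The outflow model, the gain values and the measurement noise are invented for illustration; the paper's actual gains are not reproduced here.

```python
import numpy as np

def simulate_pour(target_ml, kp=0.002, kd=0.02, dt=1.0 / 30.0, start_deg=75.0):
    """Toy simulation of the PD pouring loop (integral gain set to 0, as in the text)."""
    angle = start_deg            # container tilt in degrees; pouring starts at 75 degrees
    poured = 0.0                 # volume already in the target container (ml)
    prev_err = None
    for _ in range(int(25.0 / dt)):                     # 25-second pour, as in the experiments
        flow = max(0.0, angle - 70.0) * 2.0             # hypothetical outflow rate (ml/s)
        poured += flow * dt
        measured = poured + np.random.normal(0.0, 2.0)  # noisy visual volume estimate
        err = target_ml - measured
        if err <= 0.0:
            break                                       # target reached: stop pouring
        derr = 0.0 if prev_err is None else (err - prev_err) / dt
        angle = min(110.0, angle + kp * err + kd * derr)
        prev_err = err
    return poured                                       # source is then returned to vertical

print(simulate_pour(250.0))   # should land near the 250 ml target
```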
to register the thermal image to the color image , we use a paper checkerboard pattern attached to a centimeter metal aluminum sheet .we then direct a small , bright spotlight at the pattern , causing a heat differential between the white and black squares , which is visible as a checkerboard pattern in the thermal image and we use opencv s built - in function for finding corners of a checkerboard to find correspondence points and compute an affine transformation .we use an adaptive threshold based on the average temperature of the pixels associated with the target container ( which includes the pixels for the liquid in the container ) .the result of this is a binary image with each pixel classified as either _ liquid _ or _ not - liquid_. figure [ fig : therm_examples ] shows a color image , its corresponding thermal image transformed to the color pixel space , and a simple temperature threshold of the thermal image . note that the thermal camera provides quite reliable pixel labels for liquid detection with minimal false positives in order to train our networks in the previous section , and to evaluate both our model - based and model - free methods ,we need a baseline ground truth volume estimation . to generate this baseline, we utilize the thermal camera in combination with the model - based method described in section [ sec : model - based ] .however , since this analysis can be done _ a posteriori _ and does not need to be real - time , we can use the benefit of hindsight to improve our estimates , i.e. , future observations can improve the current state estimate . while we acknowledge that this method does not guarantee perfect volume estimates , the combined accuracy of the thermal camera and after - the - fact processing yield robust estimates suitable for training and evaluation . to compute this baselinewe replace the forward method for hmm inference described in section [ sec : model - based ] with viterbi decoding .we replace the summation in equation [ eq : prior ] in the computation of the prior with a to compute the probability of each sequence .we use a corresponding to compute the previous state from the current state , starting at the last time step and working backwards . at the last time step , we start with the most probable state . thus using this methodwe can generate a reliable ground truth estimate of the volume of liquid in the target container over the duration of a pouring sequence to use for training our learning algorithms and evaluating our methodology .all of our experiments were performed on our rethink robotics baxter research robot , shown in figure [ fig : robot_setup ] .it is equipped with two 7-dof arms , each with an electric parallel gripper .for the experiments in this paper , we use exclusively the right arm .the robot has an asus xtion pro mounted on its upper - torso , directly below its screen , which includes both an rgb color camera and a depth sensor , each of which produce images at 30hz .mounted on the robot immediately above the xtion sensor is an infrared cameras inc .8640p thermal imaging camera , which reads the temperature of the image at each pixel and outputs a image at 30hz . 
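The registration and adaptive-threshold steps described above can be sketched with standard OpenCV calls, as below. File names, the checkerboard size, the container mask and the threshold offset are assumptions; only the overall pipeline (corner detection in both modalities, affine warp, temperature threshold over the container region) follows the text.

```python
import cv2
import numpy as np

PATTERN = (7, 5)   # inner-corner count of the heated checkerboard target (assumed)

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
color = cv2.imread("color_frame.png")
if thermal is None or color is None:
    raise SystemExit("example frames not available")

ok_t, corners_t = cv2.findChessboardCorners(thermal, PATTERN)
ok_c, corners_c = cv2.findChessboardCorners(cv2.cvtColor(color, cv2.COLOR_BGR2GRAY), PATTERN)
if ok_t and ok_c:
    # Affine transform mapping thermal pixels into the color image.
    M, _ = cv2.estimateAffine2D(corners_t, corners_c)
    warped = cv2.warpAffine(thermal, M, (color.shape[1], color.shape[0]))

    # Adaptive threshold: pixels noticeably hotter than the average over the container
    # region are labelled as (heated) liquid.  In practice the mask would come from the
    # fitted container model; here it is a placeholder covering the whole image.
    container_mask = np.ones(warped.shape, dtype=bool)
    mean_temp = warped[container_mask].mean()
    liquid_labels = (warped > mean_temp + 10).astype(np.uint8)   # offset of 10 is assumed
```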
for all experiments ,the robot poured from the cup shown in its gripper in figure [ fig : robot_setup ] .we used three target containers , also shown in figure [ fig : robot_setup ] .we collected a dataset of pours using this setup in order to both train and evaluate our methodologies .we collected a total of 279 pouring sequences , in which the robot attempted to pour 250ml of water into the target using the thermal camera with the model - based method , with the initial amount in the varied between 300ml , 350ml , and 400ml .each sequence lasted exactly 25 seconds and was recorded on both the thermal and rgbd cameras at 30hz .we randomly divided the data 75%-25% into train and evaluation sets .after the data was collected , we used the thermal images to generate ground truth pixel labels as well as we used the viterbi decoding method described in section [ sec : get_gt ] to generate ground truth volume estimates , which we compare against for the remainder of this section .we use this ground truth estimate to directly infer used in the hmms described previously . 4.0 cm ( 4.0,3.5 ) ( 0.0,0.0 ) 4.0 cm ( 4.0,3.5 ) ( 0.0,0.0 )before we can evaluate our methodologies , we must first verify that our method for generating ground truth volume estimates is accurate .we can compare a static volume measurement with a scale to static estimates from the thermal camera combined with the model - based method to gauge the accuracy of our method .figure [ fig : thermal_verification ] shows a comparison between measurements from a scale ( x - axis ) and the corresponding measurement from the thermal camera with model ( y - axis ) for each of the three target containers .the black dashed line shows a 1:1 correspondence for reference . from the figure it is clear that the model - based method overestimates the volume for each container . in order to make our baseline as accurate as possible ,we fit a linear model for each container and use that to calibrate the baseline ground truth estimates described in section [ sec : get_gt ] .next we must verify that the neural network we trained to labels pixels as _ liquid _ or _ not - liquid _ from color images is accurate enough to utilize for volume estimation .our prior work showed that neural networks can label liquid pixels in an image reasonably well on data generated by a realistic liquid simulator , and so we expect that this will carry over to the data we collected for these experiments .we trained the recurrent lstm cnn using the mini - batch gradient descent method adam with a learning rate of 0.0001 and default momentum values , for 61,000 iterations .we unrolled the recurrent network during training for 32 frames and used a batch size of 5 .we scaled the input images to resolution .the error signal was computed using softmax loss . 
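As an aside on the calibration step described earlier in this section, the per-container linear correction of the thermal/model-based volume estimates amounts to a simple least-squares fit. The sketch below uses made-up scale/estimate pairs purely for illustration, not the paper's measurements.

```python
import numpy as np

# Scale readings (ml) vs. static thermal/model-based estimates (ml) for one container.
# Illustrative values only.
scale = np.array([50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
thermal_est = np.array([62.0, 118.0, 171.0, 228.0, 281.0, 334.0])

# Fit a per-container linear model mapping the thermal estimate back to the scale value.
slope, intercept = np.polyfit(thermal_est, scale, 1)
calibrated = slope * thermal_est + intercept
print(slope, intercept)
print(np.abs(calibrated - scale).max())   # worst-case residual after calibration
```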
as in , we found the best results are achieved when we first pre - train the network on crops of liquid in the images , and then train on full images . figure [ fig : detection_verification ] shows the performance of the detection network , and the image in figure [ fig : detection_example ] shows an example of the output of the network . these results clearly show that our detection network is able to classify pixels with high precision and recall . this suggests that the network will work well for estimating the volume of liquid in the target container . we should note , however , that due to the relatively small size of the training set , this detection network will work well only for the tasks described in this paper and will not generalize to other environments or tasks .

[ figures [ fig : detection_verification ] and [ fig : detection_example ] : detection - network performance curves and an example detection output ; image content not recoverable from the source text . ]

for our model - free methodology , every network was trained using the mini - batch gradient descent method adam with a learning rate of 0.0001 and default momentum values . each network was trained for 61,000 iterations , at which point performance tended to plateau . all single - frame networks were trained using a batch size of 32 ; all multi - frame networks with a window of 32 and batch size of 5 ; and all lstm networks with a batch size of 5 and unrolled for 32 frames during training . the input to each network was a resolution crop of either the liquid detections only , the color image and detections appended channel - wise , or the depth image and detections appended channel - wise . we discretize the output to 100 values for the range of 0 to 400ml ( none of our experiments use volumes greater than 400ml ) and train the network to classify the volume . the error signal was computed using the softmax with loss layer built into caffe .
in our data we noticed that approximately of the time during each pouring sequence was spent either before or after pouring had occurred , with little change in the volume .we found that the best results could be achieved by first pre - training each network on data from the middle of each sequence during which the volume was actively changing , and then training on data sampled from the entire sequence .we discretize and the output of the network into 20 values and compute used in the model - free method for all and from the output of the networks on the data ( we compute a separate observation probability distribution for each network ) .figure [ fig : aggregate_all ] shows the root mean squared error in milliliters on the testing data for each method with respect to our baseline ground truth comparison described in section [ sec : get_gt ] .it should be noted that although both our baseline ground truth estimate and the thermal estimate in figure [ fig : model_based ] are derived from the same data , the difference between the two can be largely attributed to the fact that the baseline method is able to look backwards in time and adjust its estimates , whereas the thermal model - based method can only look forward ( which is necessary for control ) .for example , in the initial frames of a pour , as the water leaves the source container , it can splash against the side of the target container , causing the forward thermal estimate to incorrectly estimate a spike in the volume of liquid , whereas the baseline method can smooth this spike by propagating backwards in time .while the error for both model - based methods are relatively small , it is clear that many of the model - free methods are actually better able to estimate the volume of liquid in the target container . surprisingly , the best performing model - free estimation network is the multi - frame network that takes as input only the pixel - wise liquid detections from the detection network .the networks trained on detections only are the only networks that receive no shape information about the target container ( both the depth and color images contain some information about shape ) , so intuitively , it would be expected that they would be unable to estimate the volume of more than a single container , and thus perform more poorly than the other networks .however , a lot of the temporal and perceptual information used by our methodology is already provided in the pixel - wise liquid detections , thus temporal information in addition to either color or depth images are not as beneficial to the networks .we can verify that this is indeed the case by looking at the volume estimates on randomly selected pouring sequences from the test set , one for each target container .figure [ fig : exs ] shows the volume estimates for the two model - based methods and the multi - frame detection only method as compared to the baseline .it is clear from the plots that the multi - frame network is better able to match the baseline ground truth than either of the model - based methods .not only does the multi - frame network outperform the model - based methods , but unlike them , it does not require either an expensive thermal camera or a model of the target container . 
for these reasons ,we utilize this method in the next section for carrying out actual pouring experiments with closed - loop visual feedback .( 7.0,5.5 ) ( 0.0,0.0 ) 2.5 cm 2.5 cm 2.5 cm 2.5 cm 2.5 cm 2.5 cm estimating the volume _ a posteriori _ and using a volume estimator as input to a pouring controller are two very different problems. a volume estimation method may work well analyzing the data after the pouring is finished , but that does not necessarily mean it is suitable for control .for example , if the estimator outputs an erroneous value at one timestep , it may be able to correct in the next since the trajectory of the pour does not change .however , if this happens during a pour and the estimator outputs an erroneous value , this may result in a negative feedback loop in which the trajectory deviates more and more from optimal , leading to more erroneous volume estimates , etc . to verify that our chosen method from the previous section is actually suitable for control, we need to execute it on a real robot for real - time control . to test the multi - frame network with detections only , we executed 30 pours on the real robot using the pid controller described in section [ sec : robot_controller ] .we ran 10 sequences on each of the three target containers . for each sequence , we randomly selected a target volume in milliliters and we randomly initialized the volume of water in the source container as either 300 , 350 , or 400 milliliters , always ensuring at least a 100ml difference between the starting amount in the source and the target amount ( so the robot can not simply dump out the entire source and call it a success ) .each pour lasted exactly 25 seconds , and we evaluated the robot based on the actual amount of liquid in the target container ( as measured by a scale ) after the pour was finished .figure [ fig : control_points ] shows a plot of each pour , where the x - axis is the target amount and the y - axis is the actual volume of liquid in the target container after the pour finished .note that the robot performs approximately the same on all containers .this is particularly interesting since the volume estimation network is never given any information about the target container , and must simply infer it based on the motion of the liquid .additionally , almost all of the 30 pours were within 50ml of the target .in fact , the average error over all the pours was 38ml . for reference , figure[ fig : refrence_amounts ] shows 50ml differences for each of our 3 containers from the robot s perspective . as is apparent from this figure ,50ml is a small amount , and a human solving the same task would be expected to have a similar error .in this paper , we introduce a framework for visual closed - loop control for pouring specific amounts of liquid into a container . to provide real - time estimation of the amount of liquid poured so far ,we develop a deep network structure that first detects the presence of water in individual pixels of color videos and then estimates the volume based on these detections . 
to generate the data and labels required to train the deep networks , we collect training videos with heated water observed by both an rgb - d camera and a calibrated thermal camera .a model - based approach allows us then to estimate the volume of liquid in a container based on the pixel - level water detections .our experiments indicate that the deep network architecture can be trained to provide real - time estimates from color only data that are slightly better than the model - based estimates using thermal imagery .furthermore , once trained on multiple containers , our volume estimator does not require a matched shape model of the target container any more .we incorporated our approach into a pid controller and found that it on average only missed the target amount by 38ml .while this is not accurate enough for some applications ( e.g. , some industrial settings ) , it is well suited for similar pouring tasks in standard home environments and is on par with what a human would be expected to do in a similar setting . to our knowledge , this is the first work that has combined visual feedback with control in order to pour specific amounts of liquids into everyday containers .this work opens up various directions for future research .one important avenue is to develop methods to improve the robustness of the neural networks .currently , because of the limited size of our dataset , the networks only work properly on the three target containers they were trained on .we believe that the networks will be able to generalize to arbitrary containers when trained on sufficiently many examples , but this still has to be shown .other interesting directions include generalization to different liquids , such as pouring a glass of soda or a cup of coffee , representing target amounts in _ relative _ terms , such as in `` pour me a half cup of water '' , and more sophisticated control schemes on top of our perception .
Pouring a specific amount of liquid is a challenging task. In this paper we develop methods for robots to use visual feedback to perform closed-loop control for pouring liquids. We propose both a model-based and a model-free method utilizing deep learning for estimating the volume of liquid in a container. Our results show that the model-free method is better able to estimate the volume. We combine this estimator with a simple PID controller to pour specific amounts of liquid, and show that the robot is able to achieve an average deviation of 38 ml from the target amount. To our knowledge, this is the first use of raw visual feedback to pour liquids in robotics.
multiplex networks are tools for modeling networked systems in which units have heterogeneous types of interaction , making them members of distinct networks simultaneously .the multiplex framework envisages different layers to model different types of relationships between the same set of nodes .for example , we can take a sample of individuals and constitute a social media layer , in which links represent interaction on social media , a kinship layer , a geographical proximity layer , and so on .examples of real systems that have been conceptualized so far using the multiplex framework include citation networks , online social media , airline networks , scientific collaboration networks , and online games . theoretical analysis of multiplex networks was initiated by the seminal papers that invented and introduced theoretical measures for quantifying multiplex networks .consequently , multiplex networks were utilized for the theoretical study of phenomena such as epidemics , pathogen - awareness interplay , percolation processes , evolution of cooperation , diffusion processes and social contagion . for a thorough review ,see . in the present paper we focus on the problem of growing multiplex networks . in , the case where two layers are homogeneously growing ( that is , the number of links that each newly - born node establishes is the same for both layers ) according to preferential attachmentis considered , and it is shown that ( which is the average layer-2 degree of nodes whose layer-1 degree is ) is a function of .previous results on growing multiplex networks are confined to homogeneously - growing layers . in the present paper , we consider heterogeneously - growing layers : each incoming node establishes links in layer 1 and links in layer 2 .we also solve the problem for the case where growth is uniform , rather than preferential .we demonstrate that , surprisingly , the expression for is identical to that of the preferential case .we verify the theoretical findings with monte carlo simulations .the two - layer multiplex network we consider in the present paper possesses one set of nodes and two distinct sets of links .the network comprises two layers , corresponding to the two sets of links .each node resides in both layers .the degree of node in layer 1 is denoted by , and its degree in layer 2 is denoted by .the number of nodes at time is denoted by and the number of links at layer is denoted by , and is the number of nodes that have degrees and at time .we denote the fraction of these nodes by .each incoming node establishes links in layer 1 and links in layer 2 . at the inception, there are links in the first layer and links in the second layer .the network grows by the successive addition of new nodes .each node establishes links in each layer .so the number of links in layer at time is .in the first model , incoming nodes choose their destinations according to the preferential attachment mechanism posited in .the probability that an existing node ( call it ) receives a layer-1 link from the newly - born node is proportional to , and similarly , the probability for it to receive a layer-2 link is proportional to . note that to obtain the normalized link - reception probabilities at time , the former should be divided by and the latter should be divided by number of links in the first and second layers , respectively .the addition of a new node at time can alter the values of . 
if a node with layer-1 degree and layer-2 degree receives a layer-1 link , its layer-1 degree increments to , and increments as a consequence .if a node with layer-1 degree and layer-2 degree receives a link , its layer-2 degree increments and consequently , increments .there are two events which would result in a decrease in : if a node with layer-1 degree and layer-2 degree receives a link in either layer .finally , each incoming node has an initial layer-1 degree and layer-2 degree of , and increments when it is introduced .the following rate equation quantifies the evolution of the expected value of upon the introduction of a single node by addressing the aforementioned events with their corresponding probabilities of occurrence : alternatively , we can write the rate equation for . using the substitution , we obtain \big [ n _ { k,\ell}(t+1 ) - n_{k,\ell}(t ) \big ] + n _ { t+1}(k,\ell ) = \nonumber \\ & + \beta_1 { \ , \displaystyle \frac { ( k-1 ) n_{k-1 , \ell}(t)- k n_{k \ell}(t ) } { l_1(0)+ 2\beta_1 t } } \nonumber \\ & + \beta_2 { \ , \displaystyle \frac { ( \ell-1 ) n_t(k,\ell-1)- \ell n_t(k,\ell ) } { l_2(0)+ 2\beta_2 t } } + \delta_{k \beta_1 } \delta_{\ell \beta_2 }. \label{rate_2 } \end{aligned}\ ] ] now we focus on the limit as , when the values of reach steady states , and we have in this limit transforms into rearranging the terms , this can be equivalently expressed as follows this difference equation is solved in appendix [ app : sol_1 ] .the solution is this is depicted in figure [ fig_3 ] . as a measure of correlation between the two layers ,we find the average layer-2 degree of the nodes whose layer-1 degree is .let us denote this quantity by . to calculate , we need to perform the following summation : in appendix [ app : nk_1 ] , we perform this summation .the answer is in the special case of , this reduces to , which is consistent with the previous result in the literature .note that if we take the expected value of , we obtain which coincides with the mean degree in layer 2 .now let us analyze how adding a layer affects inequality in degrees .we ask , what is the probability that a node has higher degree in layer 2 than in layer 1 ( on average ) ?that is , we seek . analyzing the inequality , we observe that if , then for every the inequality holds , if , then must be less than .so a node with degree below is on average more connected in layer 2 than in layer 1 .note that since the minimum degree in layer 1 is , we should impose an additional constraint on , namely , .this leads to . since and can only take integer values , since yields .so in order for a node with degree to have greater expected degree in layer 2 than its given degree in layer 1 , first we should have , and second , . in short , there are three distinct cases to discern : * ( a ) * if , the inequality holds for all , that is , on average , every node is more connected in layer 2 than in layer 1 . * ( b ) * if , then the inequality never holds .that is , everyone is on average more connected in layer 1 . 
*( c ) * if , then for nodes whose degree in layer 1 is smaller than ( which coincides with ) , the inequality holds , and for others it does not .so in the case of homogeneous growth , nodes whose degree in one layer is below the mean degree are on average more connected in the other layer , and nodes with degree higher are on average less connected in the other layer .these three cases are depicted in figure [ fig_4 ] .the purple area pertains to case ( a ) , where curves are are always below , regardless of and .the green area corresponds to case ( c ) , where is always above .the middle region is the one that curves for the cases of reside in .those curves are depicted in red .it is visible that for each red curve , there is a cutoff degree above which ..48 0.48in this model , we assume that each incoming node establishes links in both layers by selecting destinations from existing nodes uniformly at random . the rate equation should be modified to the following : \big [ n _ { k,\ell}(t+1 ) - n_{k,\ell}(t ) \big ] + n _ { t+1}(k,\theta,\ell ) = \nonumber \\ & + \beta_1 { \ , \displaystyle \frac { n_{k-1 , \ell}(t)- n_{k \ell}(t ) } { n(0)+ t } } + \beta_2 { \ , \displaystyle \frac { n_t(k,\theta,\ell-1)- n_t(k,\theta,\ell ) } { n(0)+ t } } + \delta_{k \beta_1 } \delta_{\ell \beta_2 } .\label{rate_2_u } \end{aligned}\ ] ] using the substitution , this becomes \big [ n _ { k,\ell}(t+1 ) - n_{k,\ell}(t ) \big ] + n _ { t+1}(k,\theta,\ell ) = \nonumber \\ & \beta_1 { \ , \displaystyle \frac { n_{k-1 , \ell}(t)- n_{k \ell}(t ) } { n(0)+ t } } + \beta_2 { \ , \displaystyle \frac { n_t(k,\theta,\ell-1)- n_t(k,\theta,\ell ) } { n(0)+ t } } + \delta_{k \beta_1 } \delta_{\ell \beta_2 } .\label{rate_3_u } \end{aligned}\ ] ] in the steady state , that is , in the limit as , this becomes this can be simplified and equivalently expressed as follows this difference equation is solved in appendix [ app : sol_2 ] .the solution is to find the conditional average degree , that is , , we first need the degree distribution of single layers in order to constitute the conditional degree distribution .this is found previously for example in .the degree distribution in the first layer is .we need to compute we have performed this summation in appendix [ app : nk_2 ] .the result is this is identical to .we performed monte carlo simulations to verify the results .figure [ fig_1 ] depicts as a function of for both uniform and preferential attachment for .the two curves are visibly linear and overlapping .figure [ fig_2 ] depicts for both uniform and preferential attachment for for the cases .it can be observed from figure [ fig_2 ] that in all cases the curves for preferential and uniform growth overlap , and that the slope increases as increases .this is consistent with the predictions of and , where the slope is given by .this attains its minimum at , and reaches unity for ..48 0.48we studied the problem of multiplex network growth , where two layers were heterogeneously growing .we considered the cases of preferential and uniform growth separately .we obtained the inter - layer joint degree distribution for both settings .we calculated , and observed that it is identical in both scenarios .we corroborated the theoretical findings with monte carlo simulations . 
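For readers who wish to reproduce the Monte Carlo checks, a minimal R sketch of the heterogeneous two-layer growth process is given below. The clique seed, the network size, and the use of weighted sampling without replacement (as an approximation to strict preferential attachment) are simplifying assumptions; finite-size deviations are to be expected. The conditional average lbar can then be compared with the closed-form slope beta_2 (k+2)/(beta_1+1) derived in the text, and setting preferential = FALSE gives the uniform-growth variant.

....
# sketch of heterogeneous two-layer growth (preferential or uniform), illustrative only
grow_multiplex <- function(n = 20000, beta1 = 2, beta2 = 4, preferential = TRUE) {
  n0 <- max(beta1, beta2) + 1                    # small fully connected seed in both layers
  k1 <- k2 <- numeric(n)
  k1[1:n0] <- k2[1:n0] <- n0 - 1
  for (t in (n0 + 1):n) {
    old <- 1:(t - 1)
    p1  <- if (preferential) k1[old] else NULL   # NULL means uniform sampling
    p2  <- if (preferential) k2[old] else NULL
    t1  <- sample(old, beta1, prob = p1)         # layer-1 targets of the new node
    t2  <- sample(old, beta2, prob = p2)         # layer-2 targets of the new node
    k1[t1] <- k1[t1] + 1; k1[t] <- beta1
    k2[t2] <- k2[t2] + 1; k2[t] <- beta2
  }
  data.frame(k1 = k1, k2 = k2)
}

set.seed(7)
g    <- grow_multiplex()
lbar <- tapply(g$k2, g$k1, mean)                 # average layer-2 degree vs layer-1 degree
k    <- as.numeric(names(lbar))
head(cbind(k = k, simulated = lbar,
           formula = 4 * (k + 2) / 3))           # beta2*(k+2)/(beta1+1) for beta1=2, beta2=4
....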
while the average degree are calculated to be the same in eqs .( 8) and ( 16 ) , it does not mean the two cases have entirely the same correlation properties .note , for example , that it was obtained in that the two cases have different inter - degree correlation coefficients .plausible extensions of the present analysis are as follows .first , there is no closed - form solution in the literature for the inter - layer joint degree distribution of growing multiplex networks with nonzero coupling , where the link reception probabilities in one layer depends on the degrees in both layers .second , it would be informative to analyze the growth problem in arbitrary times , to grasp the finite size effects and to understand how evolves over time , and how the time evolution differs in the preferential and uniform settings .third , it would plausible to endow the nodes with initial attractiveness , that is , to consider a shifted - linear kernel for the preferential growth mechanism .fourth , a more realistic and practical model would require intrinsic fitness values for nodes , so it would be plausible to analyze the multiplex growth problem with intrinsic fitness .finally , since most real systems are multi - layer , it would be plausible to extend the bi - layer results to arbitrary layers ..9 100 de domenico , m. , sole - ribalta , a. , cozzo , e. , kivela , m. , moreno , y. , porter , m. a. , gomez , s. , arenas , a. : mathematical formulation of multilayer networks , phys .x 3 , 041022 ( 2013 ) .kivela , a. , arenas , a. , barthelemy , m. , gleeson , j. , moreno , y. , porter , m. : multilayer networks , j. complex netw . 2 , 203 - 271 ( 2014 ) .son , s. w. , bizhani , g. , christensen , c. , grassberger , p. , paczuski , m. : percolation theory on interdependent networks based on epidemic spreading , europhysics lett .97 , 16006 ( 2012 ) .granell , c. , gomez , s. , arenas , a. : dynamical interplay between awareness and epidemic spreading in multiplex networks , phy .111 , 128701 ( 2013 ) .cellai , d. , lopez , e. , zhou , j. , gleeson , j. p. , bianconi , g. : percolation in multiplex networks with overlap , phys .e , 88 , 052811 ( 2013 ) .gomez - gardenes , j. , reinares , i. , arenas , a. , floria , l. m. : evolution of cooperation in multiplex networks , sci . rep . 2 , 620 ( 2012 ) .gomez , s. , diaz - guilera , a. , gomez - gardenes , j. , perez - vicente , c. j. , moreno , y. , arenas , a. : diffusion dynamics on multiplex networks .110 , 028701 ( 2013 ) .cozzo , e. , banos , r. a. , meloni , s. , moreno , y. : contact - based social contagion in multiplex networks .e 8 , 050801 ( 2013 ) .boccaletti , s. , bianconi , g. , criado , r. , del genio , c. i. , gmez - gardenes , j. , romance , m. , sendina - nadal , i , zanin , m. : the structure and dynamics of multilayer networks , phys . rep .544 , 1122 .( 2014 ) .barabasi , a. l. , albert , r. : emergence of scaling in random networks , science , 286 , 509512 ( 1999 ) .nicosia , v. , bianconi , g. , latora , v. , barthelemy , v. : non - linear growth and condensation in multiplex networks , phys .e 90 , 042807 ( 2014 ) kim , jung yeol , and k - i .goh . : coevolution and correlated multiplexity in multiplex networks .lett . 111.5( 2013 ) : 058702 .nicosia , v. , bianconi , g. , latora , v. , barthelemy , m. : growing multiplex networks , phys .111 , 058701 ( 2013 ) .fotouhi , b. , rabbat , m. 
, network growth with arbitrary initial conditions : degree dynamics for uniform and preferential attachment , phys .e 88 , 062801 ( 2013 ) .we need to solve we define the new sequence the following holds plugging these into , we can recast it as now define the z - transform of sequence as follows : taking the z transform of every term in , we arrive at this can be rearranged and rewritten as follows the inverse transform is given by } } .\label{m_1_1 } \end{aligned}\ ] ] first we integrate over .we get now note that the residue of for positive integer equals , where the numerator denotes the derivative of the function , evaluated at .also , note that the -th derivative of the function , for integer and , equals .combining these two facts , we obtain using , we arrive at this can be equivalently expressed as follows : this can be inverted through the following steps there is a single simple pole at , which renders the integral trivial : after inserting the expressions for from , this becomes need to calculate we use the following identity : to rewrite the binomial reciprocal of the coefficient as follows also , from taylor expansion , it is elementary to show that this identity will be used in the steps below . plugging into , we have dt = { \ , \displaystyle \frac { \beta_2(\beta_2 + 1 ) } { ( 2+\beta_1+\beta_2 ) } } { \ , \binom{\beta_1+\beta_2 + 2}{\beta_1 + 1 } } { \ , \displaystyle}\int_0 ^ 1 ( 1-t)^{k+2 } t^{-k-2 } { \ , \displaystyle \frac { d } { dt } } \left[t^{3+\beta_1+\beta_2 } { \ , \displaystyle}\sum_{\ell } t^{k-\beta_1+\ell-\beta_2 } { \ , \binom{k-\beta_1+\ell-\beta_2}{k-\beta_1 } } \right ] dt } \nonumber \\ & = { \ , \displaystyle \frac { \beta_2(\beta_2 + 1 ) } { ( 2+\beta_1+\beta_2 ) } } { \ , \binom{\beta_1+\beta_2 + 2}{\beta_1 + 1 } } { \ , \displaystyle}\int_0 ^ 1 ( 1-t)^{k+2 }t^{-k-2 } { \ , \displaystyle \frac { d } { dt } } \left [ { \ , \displaystyle \frac { t^{k+\beta_2 + 3 } } { ( 1-t)^{k-\beta_1 + 1 } } } \right ] dt \nonumber \\ & \resizebox{.9 \linewidth}{!}{ } \nonumber \\ & \resizebox{.95\linewidth}{!}{ } \nonumber \\ &\resizebox{.95\linewidth}{!}{ } \nonumber \\ & = \frac { \beta_2(\beta_2 + 1 ) \beta_1 ! \beta_2 ! } { ( 2+\beta_1+\beta_2 ) ( 1+\beta_1+\beta_2 ) ! } { \ , \binom{\beta_1+\beta_2 + 2}{\beta_1 + 1 } } \left [ ( k+\beta_2 + 3 ) -(\beta_2 + 1 ) \right ] \nonumber \\ & = { \ , \displaystyle \frac { \beta_2 } { \beta_1 + 1 } } ( k+2 ) \label{lbar_k_1_app } \end{aligned}\ ] ] let us denote by and by .also let us denote by .we need to evaluate the following sum : .let us use and define .we have : .\label{sum11 } \end{aligned}\ ] ] replacing with and inserting this result into , we get ^{k-\beta_1 + 2 } } } \big[\beta_2+\frac{\beta_2}{1+\beta_1+\beta_2 } ( k-\beta_1 + 1-\beta_2 ) \big ] \nonumber \\ & = { \ , \displaystyle \frac { ( 1+\beta_1+\beta_2)^{k-\beta_1 + 2 } } { ( 1+\beta_1)^{k-\beta_1 + 2 } } } \big[\beta_2 + 2+\frac{\beta_2}{1+\beta_1+\beta_2 } ( k-\beta_1 + 1-\beta_2 ) \big ] \nonumber \\ & = { \ , \displaystyle \frac { ( 1+\beta_1+\beta_2)^{k-\beta_1 + 2 } } { ( 1+\beta_1)^{k-\beta_1 + 2 } } } \big[\frac{\beta_2(k+2)}{1+\beta_1+\beta_2 } \big ] \label{sum_11_2 } \end{aligned}\ ] ] plugging this into , we get
The multiplex network growth literature has hitherto been confined to homogeneous growth, where the number of links that each new incoming node establishes is the same across layers. This paper focuses on heterogeneous growth in a simple two-layer setting. We first analyze the case of two preferentially growing layers, find a closed-form expression for the inter-layer joint degree distribution, and demonstrate that non-trivial inter-layer degree correlations emerge in the steady state. Then we focus on the case of uniform growth. We observe that inter-layer correlations arise in the random case, too. Also, we observe that the expression for the average layer-2 degree of nodes whose layer-1 degree is k is identical for the uniform and preferential schemes. Throughout, theoretical predictions are corroborated with Monte Carlo simulations.
recently , kish and sethuraman proposed a new method , which is claimed to be an absolute secure data encryption , by using only classical information .sethuraman has identified a mechanical analogy of the method with a mail - carrier - box and two - padlocks , see fig .the sender is using a padlock to lock the box ( operation ) before the mail is sent . when the receiver receives the box , he is using another padlock to lock the box again ( operation ) .then the double - locked box is sent back to the sender where the first padlock is removed ( operation ) .then the sender sends the box again , which is still locked by the receiver s padlock , back to the receiver where the box is opened ( operation ) thus the mail is delivered . in conclusion , at the kish - sethuraman ( ks ) chiper ( so named by klappenecker ) , the following operations are carried out on the bit ( or word , etc . ) , m(t ) of the message : where the arrows with s and r mean that the resulting signal was generated by the sender and the receiver , respectively . at the third step ,the presumed mathematical condition is : which was identified by bergou as commutativity of the operators : the commutativity is necessary to get through the message to the receiver .another mathematical condition concerns the security of the ks cipher : this condition simply ensures that a comparison of the data sequences before and after the application of the operators will not provide enough information to identify the data or the actual operator code .a. klappenecker pointed out that operators in the rsa public key cryptosystem satisfy eq .( [ eq3 ] ) and condition 1 .further discussions with a. singer made it obvious that , for the absolute security , one more condition is required which has been implicitly assumed but not explicitly stated : indeed , the rsa system is not absolutely secure because of the shared public information about the operators . thus the system can be broken if the eavesdropper has extraordinary calculational power for factoring ( is , for example , in the possession of a quantum computer ) . in conclusion , with conditions 1 and 2 the ks cipher ( or any cipher , for that matter ) would be absolutely secure , and eq .( [ eq3 ] ) is needed for the functioning of this crypto system .however , it seems to be extremely difficult to find classical operators which satisfy all three of these requirements .for example , the classical ks cipher does not meet condition 1 .the eavesdropper can record the three sequences , , and with no difficulty . then simply by comparing the first and the second sequence , she can read out and applying the inverse , , to the third sequence she will have full access to the key . similarly ,the rsa operators satisfy eq .( [ eq3 ] ) and condition 1 but not condition 2 .in the following we will introduce a straightforward quantum generalization of the above two - padlock method and show that the quantum version provides absolute security for qkd .in particular , as we shall demonstrate , all of the above requirements , eq .( [ eq3 ] ) and conditions 1 and 2 are met .first we replace the individual classical bits in the message by qubits , i.e. we communicate the single bits by single quantum systems with two states , such as polarization or spin or two - level atoms , etc . 
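As a toy numerical illustration of the operator conditions above (and anticipating the polarization-rotation realization described next), the following R sketch treats a linearly polarized qubit as a real two-component vector and applies two random rotations as the sender's and receiver's "padlocks". It only checks the algebra of eq. (1) and the commutativity condition (3); it is not a simulation of the full quantum protocol (no measurement statistics, no eavesdropper).

....
# toy check of the ks operator algebra with polarization rotations (illustrative)
rot <- function(theta) matrix(c(cos(theta), -sin(theta),
                                sin(theta),  cos(theta)), 2, 2, byrow = TRUE)

m <- c(1, 0)                      # bit "0" encoded as horizontal polarization
a <- runif(1, 0, 2 * pi)          # alice's secret rotation angle ("padlock" A)
b <- runif(1, 0, 2 * pi)          # bob's secret rotation angle   ("padlock" B)

all.equal(rot(a) %*% rot(b), rot(b) %*% rot(a))   # eq. (3): the two operators commute

# the four exchanges of eq. (1): lock A, lock B, unlock A, unlock B
out <- rot(-b) %*% rot(-a) %*% rot(b) %*% rot(a) %*% m
round(out, 12)                    # recovers (1, 0): the original bit is delivered
....

Every intermediate vector on the "channel" has a uniformly random orientation, which is the classical counterpart of the statement that the travelling states carry no information about the bit.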
for the sake of a concrete example , but by no means restricting generality , let us suppose that we communicate with single photons and the information - bit is represented by two orthogonal polarization states , for horizontal and for vertical polarization , respectively .the quantum realization of eq .( 1 ) and that of the two padlock arrangement of fig . 1is shown in fig .alice , the sender , generates a qubit randomly in either the state 0 or the 1 .then , in the next step she generates a random unitary operator , , which , in this case is a polarization rotation by a random amount and applies it to the qubit she generated .she then sends this qubit which has now a completely random orientation over to bob , the receiver .bob generates another random unitary operator , , which rotates the polarization by an additional random amount and applies it to the qubit he just received and then sends it back to alice .alice , at her end , applies to the qubit she receives and redirects it to bob . in the final step bobapplies to the qubit and then measures the qubit in the orthogonal basis 0 and 1 .whichever detector clicks will tell the state of the initially prepared photon uniquely .it is easy to see that these steps exactly correspond to the ones in eq .( [ eq1 ] ) .furthermore , the operators and are polarization rotations , which are randomly generated at the communication of each bits , and therefore they obviously satisfy eq .( [ eq3 ] ) .condition 1 is also clearly satisfied ; at each step of the communication scheme the sates that travel between alice and bob are completely random and unknown quantum states can not be determined from a single measurement .condition 2 is also trivially met ; the keys , and , never leave alice s and bob s site , respectively .the mirrors at the sender s site and at the receiver s site have the only role of redirecting the photons and can be replaced by optical fibers or similar devices .the polarization beam splitter pbs directs the photons to the relevant detector .the resulting quantum communicaton scheme has superior properties .it is very simple , provides absolutely no information to the eavesdropper , has no inherent detection noise like other quantum communicators , and does not need rely on the use of a classical channel or entangled states .the new method has several advantages over exisiting proposals and we will briefly list them here , along with a short discussion of each of these features . \1 .the photons seen by the eavesdropper have random polarizations .moreover , from a quantum measurement on a single photon the evasdropper is unable to identify the polarization .thus the absolute security conditions 1 and 2 are both satisfied and the eavesdropper has zero information even if she measures the photons .this is in sharp contrast to previously proposed schemes where the eavesdropper could get a finite amount of information but her presence could be detected .the ultimate encoding of the key is into polarization eigenstates .thus the inherent quantum detection noise originating from the necessity of detecting non - orthogonal polarization states in other quantum communication systems does not exist here .another feature is that the presence of an eavesdropper can be found out in a simple manner . 
although it should be noted that in this scheme the eavesdropper has no information to gain , her ultimately unsuccessful activities can still be detected .for example , if the eavesdropper is using a photon amplifier to gain information , she introduces a quantum detection ( cloning , amplification ) noise in the system , which can immediately be detected , provided each bit is sent at least two times and the results are then publicly compared .let us consider quantum cloning as an example . since perfect cloning of a quantum state is impossible eve has to rely on the next best thing which is approximate cloning . in each step , the eavesdropper can reach a maximum fidelity of 5/6 using the optimal universal quantum cloning machine . since there are three exchanges between alice andbob this means that eve can reach a maximum fidelity of which is hardly better than a completely random guess and is , thus , useless for any security breach .in fact , even this small window of opprtunity can be removed by using an obvious four - padlock extension of the above scheme ( alice and bob each applies two quantum padlocks initially , and in each subsequent exchange they remove one of them , requiring five exchanges altogether ) . in summary, the above scheme provides absolute security for secret communication , qkd in particular , between alice and bob since the communication is via pure noise from the point of view of an eavesdropper . in no step ofthe scheme is there a possibility for eve to access useful information and , thus , the need for classical communication is also eliminated .the proposed scheme can be generalized in many different ways and opens up several novel possibilities for absolutely secure communication .note added : recently we became aware that a very similar idea has actually been published already but the order of adding and removing the quantum padlocks is different and therefore we feel that our method provides more security .this research was partially supported by a grant from the humboldt foundation ( jb ) and by a grant from psc - cuny .jb also acknowledges helpful discussions with prof .w. schleich and his group during a visit to the university of ulm .the material presented in this paper is described in a cuny - tamu patent disclosure .kish , s. sethuraman and p. heszler , non breakable data encryption with classical information ?fourth international conference on unsolved problems of noise , lecce , italy , june 6 - 10 , 2005 , american institute of physics press ( 2005 ) , in press .
A new quantum communication scheme is introduced which is the quantum realization of the classical Kish-Sethuraman (KS) cipher. The sender first encrypts the message; the receiver bounces it back with an additional layer of encryption; the sender then removes the original encryption and resends the message, which the receiver can finally decrypt. The mechanical analogy of this operation is the use of two padlocks, one applied by the sender and one by the receiver. We show that rotation of the polarization is an operator which satisfies the conditions required of the KS encryption operators, provided single photons are communicated. The new method is not only simple but has several advantages. The eavesdropper extracts zero information even if she performs a quantum measurement on the state. The communication can be done with two publicly agreed orthogonal states; therefore, there is no inherent detection noise. No classical channel and no entangled states are required for the communication.
the opening quotes set up the frame in which this paper has been written : in the sciences we always deal with uncertainties ; being in condition on uncertainty we can only state ` somehow ' how much we believe something ; in order to do that we need to build up probabilistic models based on good sense .for example , if we are uncertain about the value we are going to _ read on _ an instrument , we can make probabilistic assessments about it .but in general our interest is the _ numerical value of a physics quantity_. we are usually in great condition of uncertainty before the measurement , but we still remain with some degree of uncertainty after the measurement has been performed .models enter in the construction of the the causal network which connects physics quantities to what we can observe on the instruments .they are also important because it is convenient to use , whenever it is possible , probability distributions , instead than to assign individual probabilities to each individual ` value ' ( after suitable discretization ) that a physics quantity might assume .as we know , there are good reasons why in many cases the gaussian distribution ( or _ normal _ distribution ) offers a _reasonable _ and _ convenient _ description of the probability that the quantity of interest lies within some bounds .but it is important to remember that , when he derived the famous distribution for the measurement errors , one should not take literally the fact that the variable appearing in the formula can range from minus infinite to plus infinite : an apple can not have infinite mass , or a negative one ! sticking hereafter to gaussian distributions , it is clear that if we are only interested to the probability density function ( pdf ) of a variable at the time , we can only describe our uncertainty about that quantity , and nothing more .the game becomes interesting when we study the joint distribution of several variables , because this is the way we can learn about some of them assuming the values of the others .for example , if we assume the joint pdf of variables and under the state of information ( on which we ground our assumptions ) , we can evaluate , that is the pdf adding the extra condition , which is usually not the same as , that is the pdf of for any value might assume .is called _ marginal _ , although there is never special about this name , since all distributions of a single variable can be thought as being ` marginal ' to all other possible quantities which we are not interested about . is instead ` called ' _ conditional _ , although it is a matter of fact that distributions are conditional to a given state of information , here indicated by . note that throughout this paper will shall use the same symbol for all pdf s , as it is customary among physicists i have met mathematics oriented guys getting mad by the equation because , they say , `` the three functions can not be the same '' ... ] let us take for example the three diagrams of fig .[ fig : modelli_base ] to which we give a physical interpretation : 1 . 
in the diagram on the leftthe variable might represent the numerical value of a physics quantity , on which we are in condition on uncertainty , modelled by where and are suitable parameters to state our ` ignorance ' about ( ` complete ignorance ' , if it does ever exist , is recovered in the limit ) .instead , is then what we read on an instrument when we apply it to .that is , even if we knew , we are still uncertain about what we can read on the instrument , as it is well understood .modelling this uncertainty by a normal distribution we have , for any value of where is a compact symbol for and which is in general different from .in fact our uncertainty about ( for any possible value of ) must be larger than that about itself , for obvious reasons we shall see later the details .2 . in the diagram on the center might represent a second observation done _ independently _ applying in general a second ( possibly different ) instrument to the identical value .this means that and are independent , although and are , as we shall see .3 . in the diagram on the right the observation read on the instrument applies to , but possibly influenced by , that might then represent a kind of _ systematics_. note , how it has been precisely stated , that of the first and of the second diagrams , as well as of the other two , are the _ readings _ on the instruments and the result of the measurement !this is because by `` result of the measurement '' we mean statements about the quantity of interest and not about the quantities read on the instruments ( think for example at the an experiment measuring the higgs boson mass , making use of the information recorded by the detector ! ) . in this casethe `` result of the measurement '' would be where * data * stands for the set of observed variables. the diagrams of the figure can be complicated , using sets of data , with systematics effects common to observations in each subset .the aim of this paper is to help in developing some intuition of what is going on in problems of this kind , with the only simplification that all pdf s of interest are normal .we assume that the reader is familiar with some basic concepts related to _ uncertain numbers _ and _ uncertain vectors _ , usually met under the name of `` random variables '' . : + \ ] ] with & = & \mu \\\mbox{var}[x ] & = & \sigma^2 \\\sigma[x ] = \sqrt{\mbox{var}[x ] } & = & \sigma\,.\end{aligned}\ ] ] ( we remind that in most physics applications simply means . ) in the r language there are functions ( dnorm ( ) , pnorm ( ) and qnorm ( ) , respectively ) to calculate the pdf , the cumulative function , usually indicated with `` '' , as well as its inverse , as shown in the following , self explaining examples ( ` ' is the r console prompt ) : + + [ 1 ] 0.3989423 + + [ 1 ] 0.3989423 + + [ 1 ] 0.5 + + [ 1 ] 0.6826895 + + [ 1 ] 5 + + [ 1 ] inf + + [ 1 ] -inf + note the capability of the language to handle infinities , as it can be cross checked by + + [ 1 ] 1 + and here are the instructions to produce the plots of figure [ fig : gaussian_f - f ] . 
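The listing referred to here does not survive in this copy of the text; a minimal sketch that produces comparable plots of the normal pdf and cumulative function is given below. The plotting range, labels and layout are guesses, not the original figure code.

....
# sketch: standard normal pdf f(x) and cumulative function F(x)
x <- seq(-4, 4, by = 0.01)
par(mfrow = c(2, 1))
plot(x, dnorm(x), type = "l", ylab = "f(x)", main = "normal pdf")
plot(x, pnorm(x), type = "l", ylab = "F(x)", main = "normal cumulative function")
par(mfrow = c(1, 1))
....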
the joint distribution of a bivariate normal distribution is given by \right\ } \, , \label{eq : bivar}\end{aligned}\ ] ] where & = & \mu_i \\ \mbox{var}[x_i ] & = & \sigma_i^2 \\\sigma[x_i ] \equiv \sqrt{\mbox{var}[x_i ] } & = & \sigma_i\\ \rho_{12}&= & \frac{\mbox{cov}[x_1,x_2]}{\sigma_1\,\sigma_2}\ , , \end{aligned}\ ] ] with variances and covariances forming the _ covariance matrix _ & \mbox{cov}[x_1,x_2 ] \\ & \\\mbox{cov}[x_1,x_2 ] & \mbox{var}[x_2 ] \end{array } \!\right ) = \left(\!\begin{array}{cc } \sigma_1 ^ 2 & \rho_{12}\,\sigma_1\,\sigma_2 \\ & \\\rho_{12}\,\sigma_1\,\sigma_2 & \sigma_2 ^ 2 \end{array } \!\right ) \end{aligned}\ ] ] the bivariate pdf ( [ eq : bivar ] ) can be rewritten in a compact form as \ , , \label{eq : normale_multivariata_gen}\end{aligned}\ ] ] where stands for det( ) .this expression is valid for any number of variables and it turns , , into \ , .\label{eq : normale_multivariata_ind}\ ] ] ( for an extensive , although mathematically oriented treatise on multivariate distribution see ref . , freely available online . )functions to calculate multivariate normal pdf s , as well as cumulative functions and random generators are provided in r via the package mnormt that needs first to be installed issuing + + and then loaded by the command + + then we have to define the values of the parameters and built up the vector of the central values and the covariance matrix .here is an example : + + + + + + + then we can evaluate the joint pdf in a point , e.g. + + [ 1 ] 0.1645734 + or we can evaluate , or , respectively , with + + [ 1 ] 0.140636 + and + + [ 1 ] 0.3524164 + if we like to visualize the joint distribution we need a 3d graphical package , for example rgl or plot3d .we need to evaluate the joint pdf on a grid of values ` ' and ` ' and provide them to the suited function . hereare the instructions that use the persp3d ( ) of the rgl package : + + + + + + + after the plot is shown in the graphics window , the window can be enlarged and the plot rotated at wish .figure [ fig : normale_bivariata ] shows in the upper two plots two views of the same distribution .+ here are also the instructions to use plot3d ( ) : + + + + + the result is shown in the lower plot of fig .[ fig : normale_bivariata ] . another convenient and often used representation of normal bivariates is to draw iso - pdf contours , i.e. lines in correspondence of the points in the plane such as .this requires that the _ quadratic form _ at the exponent of eq .( [ eq : bivar ] ) [ that is what is written in general as has a fixed value .in the two dimensional case of eq .( [ eq : bivar ] ) we recognize the expression of an ellipse .we have in r the convenient package ellipse to evaluate the points of such an ellipse , given the vector of expected values , the covariance matrix and the probability that a point falls inside it .here is the script that applies the function to the same bivariate normal of fig .[ fig : normale_bivariata ] , thus producing the contour plots of fig . [ fig : normale_bivariata_ellipse ] : the probability to find a point inside the ellipse contour is defined by the argument level .the ellipses drawn with solid lines define , in order of size , 50% , 90% and 99% contours . 
for comparisonthere are also the contours at 68.3% , 95.5% and 99.73% , which define the _ highly confusing _1- , 2- and 3- contours .indeed , the probability that each of the variable falls in the interval of \pm k\,\sigma[x_i] ] the probability needs to be calculated making the integral of the joint distribution inside the rectangle ( some of these rectangles are shown in fig .[ fig : normale_bivariata_ellipse ] by the dotted lines , that indicate 1- , 2- and 3- bound in the individual variable ) .let us see how to evaluate in r the probability that a point falls in a rectangle , making use of the cumulative probability function pmnorm ( ) . in fact the probability in a rectangle is related to the cumulative distribution by the following relation + & = & p[\ , ( x_1 \le x_{1_m})\ , \&\ , ( x_2 \le x_{2_m } ) ] \nonumber \\ & & - p[\ , ( x_1 \le x_{1_m})\ , \&\ , ( x_2 \le x_{2_m } ) ] \nonumber \\ & & - p[\ , ( x_1 \le x_{1_m})\ , \&\ , ( x_2 \le x_{2_m } ) ] \nonumber \\ & & + p[\ , ( x_1 \le x_{1_m})\ , \&\ , ( x_2 \le x_{2_1 } ) ] \ , , \hspace{0.9cm}\end{aligned}\ ] ] + + that can be implemented in an r function : for example 51313 ] + + [ 1 ] 0.5138685 + + [ 1 ] 0.5138685 + as a cross check , let us calculate the probabilities in strips of plus / minus one standard deviations around the averages ( the ` strips ' provide a good intuition of what a ` marginal ' is ) : + + [ 1 ] 0.6826895 + + [ 1 ] 0.6826895 a nice feature of the multivariate normal distribution is that if we are just interested to a subset of variables alone , neglecting which value the other ones can take ( ` marginalizing ' ) , we just drop from and from the uninteresting values , or the relative rows and columns , respectively .for example , if we have see subsection [ sss : syst_x3 ] marginalizing over the second variable ( i.e. being only interested in the first and the third ) we obtain here is a function that returns expected values and variance of the multivariate ` marginal ' .... marginal.norm < - function(mu , v , x.m ) { # x.m is a vector with logical values ( or non zero ) indicating # the elements on which to marginalise ( the others are 0 , na or false ) x.m[is.na(xm ) ] < - false v < - which ( as.logical(x.m ) ) list(mu = mu[v ] , v = v[v , v ] ) } .... ( note how the function has been written in a very compact form , exploiting some peculiarities of the r language. in particular , the elements of x.m to which we are interested can be true , or can be a numeric value different from zero ; the others can be false , 0 or na . ) a different problem is the pdf of one of variables , say , for a given value of the other .this is not as straightforward as the marginal ( and for this reason in this subsection we only consider the bivariate case ) .fortunately the distribution is still a gaussian , with _ shifted central value _ and _ squeezed width _ : i.e. 
& = & \mu_1 + \rho_{12}\,\frac{\sigma_1}{\sigma_2}\ , ( x_2-\mu_2 ) \label{eq : x1_cond1_e } \\\mbox{var}[x_1 ] & = & \sigma_1 ^ 2\cdot(1-\rho_{12}^2 ) \label{eq : x1_cond1_var } \\\sigma[x_1 ] & = & \sigma_1 \cdot\sqrt{1-\rho_{12}^2}\ , .\end{aligned}\ ] ] and , by symmetry , mnemonic rules to remember eqs .( [ eq : x1_cond1_e ] ) and ( [ eq : x1_cond1_var ] ) are * the shift of the expected value depends linearly on the correlation coefficient as well on the difference between the value of the conditionand ( ) and its expected value ( ) ; the ratio can be seen as a minimal dimensional factor in order to get a quantity that has the same dimensions of ( remember that and have in general different physical dimensions ) ; * the variance is reduced by a factor which depends on the absolute value of the correlation coefficient , but not on its sign .in particular it goes to zero if , limit in which the two quantities become linear dependent , while it does not change if , since the two variables become independent and they can not effect each other .( in general independence implies . for the normal bivariate it is also true the other way around . )an example of a bivariate distribution ( from , with and indicated as customary with and ) is given in fig .[ fig : bivar ] , which shows also the marginals and some conditionals . as an exercise ,lets prove ( [ eq : x1_cond1 ] ) , with the purpose of show some useful tricks to simplify the calculations .if we take literally the rule to evaluate knowing that is given by ( [ eq : bivar ] ) we need to calculate the trick is to make the calculations neglecting all irrelevant multiplicative factors , starting from the whole denominator , which given ( whatever its value might be ! ) . hereare the details ( note that additive terms in the exponential are factors in the function of interest!):} ] if none of the non - normal components dominates the overall variance , i.e. if \ll \sum_i c_i^2\ , \mbox{var}[x_i]cxvcc\muvva\muvxxvvvv ] and ] .it is convenient to model our uncertainty about with a normal distribution , with a standard deviation much larger than if we make a measurement we want to gain knowledge about that quantity ! and centered around the values we roughly expect . in order to simplify the calculations , in the exercise that followslet us assume that is centered around zero .we shall see later how to get rid of this limitation .the joint distribution is then given by \times \frac{1}{\sqrt{2\,\pi}\,\sigma_1}\ ,\exp\left[-\frac{x_1 ^ 2}{2\,\sigma_1 ^ 2}\right ] \label{eq : joint_x1x1}\end{aligned}\ ] ] as an exercise , let us see how to evaluate .the trick , already applied before , is to manipulate the terms in the exponent in order to recover a well known pattern . 
hereare the details , starting from ( [ eq : joint_x1x1 ] ) rewritten dropping all irrelevant factors : \\ & \propto & \exp\left [ -\frac{1}{2}\left ( \frac{x_2 ^ 2 - 2\,x_1x_2+x_1 ^ 2}{\sigma_{2|1}^2 } + \frac{x_1 ^ 2}{\sigma_1 ^ 2 } \right)\right ] \\ & \propto & \exp\left [ -\frac{1}{2}\left ( \frac{x_2 ^ 2}{\sigma_{2|1}^2 } -\frac{2\,x_1x_2}{\sigma_{2|1}^2 } + x_1 ^ 2\cdot\left(\frac{1}{\sigma_{2|1}^2 } + \frac{1}{\sigma_{1}^2}\right ) \right)\right ] \\ & \propto & \exp\left [ -\frac{1}{2}\left ( \frac{x_2 ^ 2}{\sigma_{2|1}^2 } -\frac{2\,x_1x_2}{\sigma_{2|1}^2 } + x_1 ^ 2\cdot\frac{\sigma_{2|1}^2+\sigma_{1}^2 } { \sigma_{2|1}^2\cdot\sigma_{1}^2 } \right)\right ] \\ & \propto & \exp\left [ -\frac{1}{2}\,\frac{\sigma_{2|1}^2+\sigma_{1}^2 } { \sigma_{2|1}^2 } \,\left ( \frac{x_2 ^ 2}{\sigma_{2|1}^2+\sigma_1 ^ 2 } -\frac{2\,x_1x_2}{\sigma_{2|1}^2+\sigma_1 ^ 2 } + \frac{x_1 ^ 2}{\sigma_{1}^2 } \right)\right ] \\ & \propto & \exp\left [ -\frac{1}{2}\,\frac{1}{\frac{\sigma_{2|1}^2 } { \sigma_{2|1}^2+\sigma_{1}^2 } } \,\left ( \frac{x_2 ^ 2}{\sigma_{2|1}^2+\sigma_1 ^ 2 } -\frac{2\,x_1x_2}{\sigma_{2|1}^2+\sigma_1^ 2 } + \frac{x_1 ^ 2}{\sigma_{1}^2 } \right)\right]\end{aligned}\ ] ] in this expression we recognize a bivariate distribution centered around , provided we interpret and after having checked the consistency of the terms multiplying .indeed we have and then the second term within parenthesis can be rewritten as then \end{aligned}\ ] ] is definitively a bivariate normal distribution with as a cross check , let us evaluate expected value and variance of if we assume a certain value of , for example : & = & 0 + \frac{\sigma_1 ^ 2}{\sigma_1 ^ 2}\cdot(x_1 - 0 ) = x_1\\ \mbox{var}[\left.x_2\right|_{x_1=x_1 } ] & = & \sigma_1 ^ 2 + \sigma_{2|1}^2 - \frac{\sigma_1 ^ 2}{\sigma_1 ^ 2 } \,\sigma_1 ^ 2 = \sigma_{2|1}^2 \,,\end{aligned}\ ] ] as it should be : provided we know the value of our expectation of is around its value , with standard uncertainty .more interesting is the other way around , that is indeed the purpose of the experiment : how our knowledge about is modified by : & = & 0 + \frac{\sigma_1 ^ 2}{\sigma_1 ^ 2+\sigma_{2|1}^2}\cdot(x_2 - 0 ) = x_2 \cdot \frac{1}{1+\sigma_{2|1}^2/\sigma_1 ^ 2 } \label{eq : e_x1|x2 } \\\mbox{var}[\left.x_1\right|_{x_2=x_2 } ] & = & \sigma_1 ^ 2 - \frac{\sigma_1 ^ 2}{\sigma_1 ^ 2 + \sigma_{2|1}^2 } \,\sigma_1 ^ 2 = \sigma_{1|2}^2 \cdot \frac{1}{1+\sigma_{2|1}^2/\sigma_1 ^ 2 } \, , \label{eq : var_x1|x2}\end{aligned}\ ] ] contrary to the first case , this second result is initially not very intuitive : the expected value of is not exactly equal to the ` observed ' value , unless , that models our _ prior standard uncertainty _ about , is much larger than the experimental resolution .similarly , the _ final standard uncertainty _ is in general a smaller than , unless , again , .[multiblock footnote omitted ] although initially surprising , these result are in qualitative agreement with the good sense of experienced physicists .the next step is to see what happens when we are in the conditions to make several _ independent measurements _ on the same quantity , possibly with different instruments , each one characterized by a conditional standard uncertainty and perfectly calibrated , that is = x_1 mu [ 1 ] 2 0 2 2 2 2 2 v ) ) ) [ 1 ] 0.000000 1.000000 1.414214 1.414214 1.4142141.414214 1.414214 > round ( outmu , 4 ) [ 1 ] 1.9608 0.0196 2.0000 1.9804 1.98041.9804 1.9804 > round ( outv ) ) , 4 ) [ 1 ] 1.4003 0.9951 0.0000 1.4107 1.41071.4107 1.4107 > round 
( outmu , 4 ) [ 1 ] 1.4778 0.0148 2.0000 1.0000 1.49261.4926 1.4926 > round ( outv ) ) , 4 ) [ 1 ] 1.2157 0.9951 0.0000 0.0000 1.2237 1.2237 1.2237> round ( outmu , 4 ) [ 1 ] 2.0000 -0.3333 2.0000 1.0000 1.6667 1.66671.6667 > round ( outv ) ) , 4 ) [ 1 ] 0.0000 0.5774 0.0000 0.0000 1.15471.1547 1.1547 > round ( outmu , 4 ) [ 1 ] 1.9999 1.0000 2.5000 2.5000 1.9999 1.9999 1.99992.0000 1.9999 > round ( out.s < - sqrt(diag(outv , 4 ) [ , 1 ] [ , 2 ] [ , 3 ] [ , 4 ] [ , 5 ] [ , 6 ] [ , 7 ] [ , 8 ] [ , 9 ] [ 1 , ] 0.3333 0 0.0 0.0 0.3333 0.3333 0.33330 0.3333 [ 2 , ] 0.0000 0 0.0 0.0 0.0000 0.0000 0.00000 0.0000 [ 3 , ] 0.0000 0 0.5 -0.5 0.0000 0.0000 0.00000 0.0000 [ 4 , ] 0.0000 0 -0.5 0.5 0.0000 0.0000 0.0000 0 0.0000 [ 5 , ] 0.3333 0 0.0 0.0 1.3333 0.3333 0.33330 0.6667 [ 6 , ] 0.3333 0 0.0 0.0 0.3333 1.3333 0.33330 0.6667 [ 7 , ] 0.3333 0 0.0 0.0 0.3333 0.3333 1.33330 0.6667 [ 8 , ] 0.0000 0 0.0 0.0 0.0000 0.0000 0.0000 0 0.0000 [ 9 , ] 0.3333 0 0.0 0.0 0.6667 0.6667 0.6667 0 0.6667 > round ( out ] .the covariance matrix is diagonal with all terms equal to .we make then the transformation to , where , and then condition on .the transformation matrix is then from which we obtain conditioning on , that is , using eqs .( [ eq : eaton_e ] ) and ( [ eq : eaton_v ] ) , we get with . in practicethe resulting rule is the most naive one could imagine : subtract to each value one third of the excess of their sum above .( if you think that this rule is to simplistic , the reason might be that your model of uncertainty in this kind of measurements is different than that used here , implying for example scale type errors .but this kind of errors are beyond the aim of this note , because they imply non - linear transformations . )this is the conditioned covariance matrix written in a form that highlights the correlation matrix .the result is finally and similar expression for and , thus yielding with . in analogy of what we have previously done in several cases ,we start from independent quantities , , , , and . for the true values of the anglewe choose a flat prior , modelled with a gaussian : it is just a trick to have a pdf that is practically flat between 0 and .the trick allows us to use the normal multivariate formulae of reconditioning .obviously , one has to check that the final results are consistent with our assumptions and that the tails of the gaussian posterior distributions are harmless , as it is the case in our example .] of central value ( all values in degrees ) and .the expected values of the fluctuations of the observations around the true values are instead 0 , with standard deviations equal to the experimental resolutions , called sigma.gonio in the code , so that it can be changed at wish .the transformation rules are from which we get the transformation matrix here is the r code to calculate the expected values and covariance matrix of the three angles : .... 
mu.priors < - rep(60 , 3 ) ; sigma.priors < - rep(1000 , 3 ) # priors sigma.gonio < - c(2 , 2 , 2 ) # experimental resolutions m=6 ; mu0 < - c(mu.priors , rep(0 , 3 ) ) sigma< - c(sigma.priors , sigma.gonio ) v0 < - matrix(rep(0 , m*m ) , c(m , m ) ) # diagonal matrix diag(v0 ) < - sigma^2 c < - matrix(rep(0 , m*m ) , c(m , m ) ) # tranformation matrix diag(c ) < - 1 for(i in 1:3 ) c[3+i , i ] < - 1 c < - rbind(c , c ( rep(1 , 3 ) , rep(0,3 ) ) ) v < - c % * % v0 % * % t(c ) # transformed matrix mu < - as.vector(c % * % mu0 ) # expected values out < - norm.mult.cond(mu , v , c(na , na , na , 58 , 73 , 54 , 180 ) ) angles < - marginal.norm(outv , rep(1,3 ) ) .... and these are , finally , the results , shown as an r session : .... > anglesv ) ) ) [ 1 ] 1.63299 1.63299 1.63299 > ( corr < - anglesv[1,2 ] , respectively equal to and , would become identical and equal to .nevertheless since this check has been done only at this stage of the paper and being the result absolutely negligible , the original matrix inversion function solve ( ) has been used also through all this section . [fn : choleski ] ] .... > ( out < - norm.mult.cond(mu , v , c(na , na , y , na , na , na ) , full = false ) ) v [ , 1 ] [ , 2 ] [ , 3 ] [ , 4 ] [ , 5 ] [ 1 , ] 0.44997876 -0.09999521 -0.34998295 -0.5499734 0.44997876 [ 2 , ] -0.09999524 0.02499899 0.09999666 0.1499946 -0.09999524 [ 3 , ] -0.34998318 0.09999669 0.69999034 0.6499837 -0.34998318 [ 4 , ] -0.54997366 0.14999467 0.64998368 1.1999730 -0.54997366 [ 5 , ] 0.44997876 -0.09999521 -0.34998295 -0.5499734 0.69997876 .... from which we extract standard uncertainties and correlation coefficient : .... > ( sigmas < - sqrt ( diag(outv / outer(sigmas , sigmas ) ) [ , 1 ] [ , 2 ] [ , 3 ] [ , 4 ] [ , 5 ] [ 1 , ] 1.0000000 -0.9428053 -0.6235982 -0.7484450 0.8017770 [ 2 , ] -0.9428055 1.0000000 0.7559242 0.8660217 -0.7559198 [ 3 , ] -0.6235986 0.7559244 1.0000000 0.7092032 -0.4999870 [ 4 , ] -0.7484454 0.8660219 0.7092032 1.0000000 -0.6000863 [ 5 , ] 0.8017770 -0.7559195 -0.4999867 -0.6000860 1.0000000 .... our resulting parametric inference on intercept and slope is then with the correlation coefficient far from being negligible , and in fact crucial when we want to evaluate other quantities that depend on and , as we shall see in a while .we can check our result , at least as far expectations are concerned , against what we obtain using the r function lm ( ) , based on ` least squares ' : .... > lm(y ~ x ) call : lm(formula = y ~ x ) coefficients : ( intercept ) x 3.6 1.9 .... the data points , together with the best fit line and the intercept are reported in fig .[ fig : linear_fit_plot ] .the expectations about the future measurements are instead with interesting correlations : * = 0.71 ] , = -0.60 mu[1:2 ] ) ) [ 1 ] 18.800107 22.600169 3.599857 > ( v.mu.f < - c.mu.f % * % outv and out ] we recognize a familiar pattern ( see also footnote [ fn : media_pesata ] ) : } & = & \frac{1}{\sigma_1 ^ 2 } + \frac{1}{\sigma_{2|1}^2}\,.\end{aligned}\ ] ] c.f .`` theoria motus corporum coelestium in sectionibus conicis solem ambientum '' _ , hamburg 1809 , n.i 172179 ; reprinted in werke , vol .7 ( gota , gttingen , 1871 ) , pp 225234 .+ ( see e.g. in http://www.roma1.infn.it/~dagos/history/gaussmotus/index.html )
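A quick simulation check of this "familiar pattern" is sketched below, with illustrative numbers (prior standard deviation 2, instrument resolution 1) and the prior centred at zero as in the derivation above. The conditional mean and variance estimated from the simulated pairs should reproduce the reconditioning formulas and the combination of inverse variances; the selection window and sample size are arbitrary choices.

....
# sketch: checking e[x1|x2] and var[x1|x2] by simulation (illustrative values)
set.seed(123)
sigma1  <- 2          # prior standard uncertainty on the true value x1
sigma21 <- 1          # instrument resolution sigma_{2|1}
n  <- 1e6
x1 <- rnorm(n, 0, sigma1)        # true values (prior centred at zero)
x2 <- rnorm(n, x1, sigma21)      # readings, given the true values

sel <- abs(x2 - 1.5) < 0.05      # condition (approximately) on an observed x2 = 1.5
mean(x1[sel]); 1.5 / (1 + sigma21^2 / sigma1^2)      # shifted expected value
var(x1[sel]);  1 / (1 / sigma1^2 + 1 / sigma21^2)    # inverse-variance combination
....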
The properties of the normal distribution under linear transformations, as well as the easy way to compute the covariance matrix of marginals and conditionals, offer a unique opportunity to gain insight into several aspects of uncertainties in measurements. The way to build the overall covariance matrix is illustrated in a few, but conceptually relevant, cases: several observations made with (possibly) different instruments measuring the same quantity; the effect of systematics (although limited to offsets, in order to stick to linear models) on the determination of the 'true value', as well as on the prediction of future observations; the correlations which arise when different quantities are measured with the same instrument affected by an offset uncertainty; inferences and predictions based on averages; inference about constrained values; fits under some assumptions (linear models with known standard deviations). Many numerical examples are provided, exploiting the ability of the R language to handle large matrices and to produce high-quality plots. Some of the results are framed in the general problem of 'propagation of evidence', crucial in analyzing graphical models of knowledge.
many authors argue that superluminal fields are not causal ( but see refs. ) .this is not true , unless one refers to an indefensible notion of causality . indeed , as the notion of causality evolves from newtonian dynamics to special relativity ( sr ) , one must as well reconsider the notion of causality from special or general relativity ( gr ) , in which spacetime is only endowed with the flat ( resp .gravitational ) metric , to the case where it is endowed with a finite set of lorentzian metrics ( notably then , if there are superluminal fields ) . in this short communication based on the more detailed paper, we thus look for an expression of causality in such a multi - metric framework . the gravitational metric fieldis denoted by , and is a four - dimensional differentiable manifold .the analysis of the notion of causality leads in particular to the following : * observation 1 * : since causes must precede effects , causally connected events must be time - ordered .causality thus needs a notion of chronology to be expressed .* observation 2 * : any lorentzian metric over defines a local chronology ( in the tangent space ) , through the special relativistic notions of absolute future and past . gluing these two points together , we get the following * main point * : in relativistic field theories , there are _ as many _ notions of causality as there are non - conformally related metrics over .these metrics are the one along which the various fields propagate , with velocities .this plurality of the notion of causality is the crucial feature of multi - metric theories ._ indeed , it does not make any sense to assert that a given theory is -or not- causal , if one does not define to which metric ( i.e. to which chronology ) he refers to_. a theory which appear to be non causal w.r.t some metric may be causal w.r.t another metric . to face this issue ,one may be tempted to assume that there exists a preferred metric field over .in other words , one may fix a preferred chronology and its associated causal structure .most of the literature on causality and superluminal fields is based -often implicitly- on this first approach . in their famous textbook , hawking and ellis recognize explicitly that their notion of causality is defined w.r.t the gravitational metric .this constitutes a `` _ _ postulate _ _ which sets the metric * g * apart from the other fields on and gives it its distinctive geometrical character'' ( p.60 ) . as a consequence , fields that propagate faster than gravitons are not causal .thus , `` the null cones of the matter equations [ must ] coincide or lie within the null cone of the spacetime metric '' ( p.255 ) .although such an attitude does not pose any problem when spacetime is endowed with only one metric , as is the case of gr plus matter fields that couple to , it becomes highly problematic in the multi - metric case .first , indeed , there is no way to find which metric should be favored , and which should not .thus , by invoking causality , different authors may find opposite requirements on the theory .second , let us consider two fields propagating along the metrics ( ) , such that travels faster than . following the above reasonning , we can define causality w.r.t the metric . 
thentwo observers that are spacelike related w.r.t ( and hence , non time - ordered ) but timelike related w.r.t must be considered as causally disconnected , whereas they can interact thanks to the field .the only way to avoid so an absurd conclusion is to define causality w.r.t to the metric that defines the wider cone in the tangent space ( see below ) .third , any choice of a preferred metric is equivalent to a choice of preferred coordinates which , locally , diagonalize it .but the existence of preferred coordinates , or equivalently , of preferred rods and clocks , is in great conflict with the whole spirit of gr , namely diffeomorphism invariance ; coordinates are meaningless in gr .the above attitude is thus irrelevant in the multi - metric case . as an application, one should not invoke such a notion of causality to put constraints on the theory ( notably in order to fix various signs ) , contrary to what is done in the literature .there is only one relevant notion of ( extended ) chronology that does not refer to a given metric .this consists in defining the extended future of a point as the union of the futures of defined by each metric .the corresponding ( extended ) notion of causality is thus in accordance with the notion of interaction .it would however allow non - causally connected observers to interact , as in the previous section . ] .it is very permissive in the sense that , by construction , any field theory is _ a priori _ causal provided that the various fields propagate along lorentzian metrics , so that the ( extended ) spacelike region is never empty .moreover , interactions can not threaten this causal behavior , since , by construction , the extended future and past are defined at _ each _ point of . which metric defines the wider cone may thus depend on the location on spacetime .in particular , superluminal fields are _ a priori _ causal .of course , this construction is not sufficient .causality also requires , first , that the whole theory has an initial value formulation .this is generically the case if the field equations form a quasilinear , diagonal and second order hyperbolic system .beware however that initial data must be assigned on hypersurfaces that are spacelike in the extended sense , that is spacelike w.r.t to _ all _ metrics .all the difficulties in the cauchy problem of superluminal fields found in the literature arise from an irrelevant choice of initial data surfaces .second , a _ local _chronology is not enough .we must have at hand a global chronology over spacetime , in order to prevent , e.g. the existence of closed timelike curves . in the multi - metric case, we shall also require that our extended chronology is a global one , that is that no closed extended - timelike curves exist .it has been shown that a particular superluminal scalar field may suffer from such a global pathology .this is however not enough to kill this theory , for the very reason that gr itself may suffer from such causal anomalies .therefore , difficulties at a global level _ do not _ signal an intrinsic disease of superluminal fields .rather , they originate from the fact that the global topology of the universe is not imposed by local field equations .it is therefore necessary to _ assume _ that spacetime does not involve any closed ( extended ) timelike curves to ensure causality .y. aharonov , a. komar and l. susskind , _ phys .* 182 * , 1400 ( 1969 ) a. adams , n. arkani - hamed , s. dubovsky , a. nicolis and r. 
Rattazzi, _JHEP_ *0610*, 014 (2006); N. Straumann, _Mod. Phys. Lett._ *A21*, 1083 (2006); G. Calcagni, B. de Carlos and A. De Felice, _Nucl. Phys._ *B752*, 404 (2006); C. Bonvin, C. Caprini and R. Durrer, _Phys. Rev. Lett._ *97*, 081303 (2006); V. A. Rubakov, hep-th/0604153; A. De Felice, M. Hindmarsh and M. Trodden, _JCAP_ *0608*, 005 (2006); A. Jenkins and D. O'Connell, hep-th/0609159; G. Gabadadze and A. Iglesias, _Phys. Lett._ *B639*, 88 (2006); B. A. Bassett, S. Liberati, C. Molina-París and M. Visser, _Phys. Rev._ *D62*, 103518 (2000); C. Armendáriz-Picón, T. Damour and V. Mukhanov, _Phys. Lett._ *B458*, 219 (1999); C. Armendáriz-Picón and E. A. Lim, _JCAP_ *0508*, 007 (2005); A. D. Rendall, _Class. Quantum Grav._ *23*, 1557 (2006)
The expression of causality depends on an underlying choice of chronology. Since a chronology is provided by any Lorentzian metric in relativistic theories, there are as many expressions of causality as there are non-conformally related metrics over spacetime. Although tempting, a definitive choice of a preferred metric to which one may refer is not satisfying: it would be in great conflict with the spirit of general covariance, and a theory which appears to be non-causal with respect to (hereafter, w.r.t.) this metric may well be causal w.r.t. another metric. In a theory involving fields that propagate at different speeds (e.g. due to some spontaneous breaking of Lorentz invariance), spacetime is endowed with such a finite set of non-conformally related metrics. In that case one must look for a new notion of causality such that (1) no particular metric is favored and (2) there is a unique answer to the question ``is the theory causal?''. This new causality is unique and defined w.r.t. the metric drawing the widest cone in the tangent space at a given point of the manifold. Moreover, which metric defines the widest cone may depend on the location in spacetime. In that sense, superluminal fields are generically causal, provided that some other basic requirements are met.
the smart grid ( sg ) is envisioned to be a large - scale next generation cyber - physical system that will improve the efficiency , reliability , and robustness of future power and energy grids by integrating the consumers as one of its key management components , and thus , achieve a system which is clean , safe , reliable , resilient and sustainable .this heterogeneous network will motivate the adoption of advanced technologies that will increase the participation of its consumers to overcome various technical challenges at different levels of demand - supply balance . in this respect ,game theory , which is an analytical framework to capture the complex interactions among rational players is studied in this paper to model an energy trading scheme for the sg .the model uses the two - way communication facility of the sg , and inspires the customers to _ spontaneously _ take part in _ supplying _ their surplus energy ( se ) to the grid so as to assist the power grid ( pg ) in balancing the excess energy demand at the peak hour .this voluntary participation of consumers in energy trading is very important in the context of sg because of its ability to greatly enhance the sg s reliability , and thus , improve the social benefit of the electricity market .we use the framework of a stackelberg game for this model in which the pg is considered as the leader and energy users ( eus ) are the followers . here , on the one hand , the pg decides on the total amount of energy it wants to buy , and also on the price per unit of energy it needs to pay to each eu . on the other hand , the eu decides on its amount of energy to be sold to the pg in response to the price offered to it .we note that energy management in the context of sg has been receiving considerable attention recently .for example , energy management for sgs in a vehicle - to - grid ( v2 g ) scenario have been studied in and the references therein , whereby the application of game theory for demand - supply balance in sgs can be found in and .however , little has been done in prioritizing the consumers benefit in management modeling where the main priority of the energy management scheme is to benefit the consumers .we stress that consumers are the core element of the evolution of sg as explained in , and hence , their benefit is one of the most important concerns of any demand - supply modeling scheme . in this respect ,we propose an energy management scheme that prioritizes the consumers in the sg and balances demand with supply at peak hours .we first formulate a noncooperative _ stackelberg _ game ( nsg ) to study the interactions between the pg and eus in the sg , and show that the optimal demand - supply balance can be achieved at the solution of the game ; then we analyze properties of the game in terms of existence and optimality , and it is shown that the game possesses a socially optimal solution ; finally , we propose a distributed algorithm to reach the solution of the game , and the effectiveness of the proposed scheme is demonstrated via numerical experiments .consider an sg network that consists of a single pg and a number of eus .the set of eus is , where . here, the pg refers to the main electricity grid which is servicing a group of customers at peak hours of the day ( i.e. 
, pm to pm ) , and each eu is a group of similar idle energy users , connected via an aggregator , such as smart homes , electric vehicles , wind mills , solar panels and bio - gas plants that have some se for sale after regular usage .it is assumed that the pg can communicate with the eus through smart meters via an appropriate communication protocol . due to frequent change of energy state in the grid ,energy management in the sg needs to be carried out frequently , and therefore , the total peak hour duration can be divided into multiple time slots . as energy demands by the customers are very high during the peak hours , the pg may be unable to balance some of the demands from its own generation in some of these time slots .meanwhile , the pg needs to buy energy from alternative energy sources such as idle eus who have se , and may agree to sell it to the pg with appropriate incentives . for the rest of the paper , we will concentrate on the energy management in a single time slot .it is assumed that the energy deficiency of the pg , , at any time slot is fixed although the deficiency may vary from one time slot to the next .however , as the required energy by the pg is fixed during a time slot , the pg would not be interested in buying more energy than to keep its cost at a minimum .thus , if each eu with se provides the pg with energy , these quantities need to satisfy to buy an offered amount of energy , the pg pays a price per unit of energy to eu as an incentive .however , the pg may need to pay different incentives to different eus due to their different amounts of se .for instance , a lower incentive may not affect the intended revenue of an eu with higher se as it can sell more , but could severely affect the revenue of eus with smaller amounts of se .moreover , the pg may also want to minimize its total cost of purchase as it would further enable the pg to sell this energy at a cheaper rate to its customer .this would facilitate the trading of energy between the pg and the eus in the network rather than establishing more expensive generators or bulk capacitors to meet any excess demand , and also , the cheaper rate would benefit the consumers who buy the energy from the pg . to this end , we assume that the pg estimates a total price per unit of energy , analogous to the _ total cost per unit production _ in economics , in each time slot using a real time price estimation technique as proposed in .the pg uses this to optimize the price it will pay to each eu to order to minimize its total cost while maintaining the constraint the equality constraint in establishes that the announced total price per unit energy must be paid to all eus , and thus motivates the eus to take part in energy trading with the pg . here , is the minimum price that the pg needs to pay eu to incentivize it to trade energy , and is the maximum price that the pg can pay .although is fixed , can be different for different based on .in a consumer - prioritized sg , the beneficiaries of the energy management scheme are the consumers in the network . in this regard , we propose an nsg , in which on the one hand , the objective of each eu is to voluntarily sell an amount of energy to the pg based on and the offered price so as to maximize its own utility . on the other hand, the pg wants to minimize its total cost of purchase by optimizing for different as explained in section [ system - model ] . 
to this end, we now define the objective functions of the leader and followers of the game .the considered utility function of eu , , is based on a linearly decreasing marginal benefit , which is appropriate for energy users .in addition , the utility function is also assumed to possess the following properties : i ) the utility of eu increases with the amount of se , i.e. , ; and ii ) the utility of an eu increases as increases , i.e. , . to meet the above properties , in this work we consider the following utility function for eu : from, we note that the addition of the quantity to in does not affect the solution .consequently , all the eus equivalently maximize subject to the constraint . here ,^t ] and ^t ] , by a constraint optimization technique for the offered .thereafter , the eus again decide on their gne energy vector ^t ] kilowatt hour ( kwh ) , and thus covers both the lowest battery capacity of a group of solar panels ( 3.6 kwh per panel ) and the highest battery capacity of a wind turbine group ( 12.25 kwh per turbine ) . the total price per unit energyis assumed to be us cents eus , the average price per unit energy is cents / kwh . ] , and cents , unless stated otherwise .all the statistical results are averaged over random values of the eus capacities using independent simulation run . in fig .[ fig - utility - se ] , the convergence of the utility achieved by each eu from selling its energy is shown to reach the nse .we consider that eus are connected to the pg , and as shown in the figure , the utility achieved by each eu reaches its nse after the iteration .importantly , the eu with a larger amount of se has a higher utility , which is due to the fact that it can sell more energy to the pg , and hence , it is being paid more . consequently , its utility is larger .next , we demonstrate the effectiveness of the proposed scheme by comparing its performance with a standard feed - in tariff scheme ( fit ) . we note that an fit is an incentive based energy trading scheme which is designed to increase the use of renewable energy systems providing power to the main grid when it is required .a higher tariff is paid to the energy producers to encourage them to take part in energy trading . for comparison , here we assume that the contract between the eus and the pg is such that the eus are capable of providing the pg with the required energy with a tariff of cents per kwh . in fig .[ fig - utilitywith - eu ] , we show the performance comparison between the proposed and fit schemes for the average achieved utility per eu as the number of eus varies in the sg .as shown in this figure , an increase in the number of eus subsequently increases the freedom of the pg to buy its energy from more eus , and hence , the amount of energy sold by each eu decreases . as a result ,average utility decreases for both the schemes .however , the proposed nsg , due to its capability of choosing an optimal energy for maximizing the eus benefits , always shows a considerable improved performance over the fit scheme in terms of average utility per eu . as seen in fig .[ fig - utilitywith - eu ] , the utility per eu for the proposed nsg is times , on average , better than the utility achieved by an fit scheme . the effect of the number of eus on the average total cost to the pg is shown in fig .[ fig - costwith - eu ] for both the nsg and the fit schemes for the same total price per unit of energy . 
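Stepping back from the numerical results for a moment, the leader-follower loop described above can be illustrated with a deliberately simplified Python sketch. The quadratic utility, the price-raising rule, and every number below are hypothetical stand-ins (the paper's algorithm optimizes the full price vector under the total-price constraint and lets the EUs respond with their generalized Nash equilibrium energies), so this is only meant to show the shape of the iteration.

```python
import numpy as np

# Toy follower side: each EU n sells e_n in [0, s_n] to maximize a
# hypothetical utility p_n*e_n - 0.5*k_n*e_n**2 (linearly decreasing
# marginal benefit). All numbers are illustrative, not from the paper.
s = np.array([4.0, 8.0, 12.0])      # surplus energy per EU (kWh)
k = np.array([1.0, 0.6, 0.4])       # curvature of the toy utility
p_min, p_total = 1.0, 6.0           # price floor and total unit price (cents)

def eu_best_response(p):
    """Unconstrained optimum p_n / k_n, clipped to the feasible set [0, s_n]."""
    return np.clip(p / k, 0.0, s)

# Naive leader heuristic (not the paper's algorithm): start from the floor
# price and raise prices uniformly until the offered supply covers the deficit
# or the total-price budget is exhausted.
deficit = 15.0                       # energy the grid must buy (kWh)
p = np.full(3, p_min)
while eu_best_response(p).sum() < deficit and p.sum() < p_total:
    p += 0.01
e = eu_best_response(p)
print("prices:", p.round(2), "energy sold:", e.round(2), "total:", e.sum().round(2))
```

In the actual scheme the leader's step is a constrained optimization of the price vector rather than a uniform price sweep, and convergence to the socially optimal Stackelberg equilibrium is established analytically in the paper.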
for a fixed , increasing the number of eus from to the pg to buy its energy from more eus , and thus , enables the pg to pay a cheaper rate .consequently , the total cost incurred by the pg decreases .however , to keep all the eus participating , the pg needs to pay the minimum mandatory price to each eu .thus , as the number of eus increases from to , the total cost to the pg increases due to the mandatory payment to a large number of eus .[ fig - costwith - eu ] shows that the proposed scheme has significantly lower total cost to the pg at small network sizes , e.g. , for eus the average total cost for the proposed scheme is half the total cost incurred by the fit scheme . however , as the network size increases , the average total cost for the proposed nsg becomes closer to the fit scheme .in fact , as the network size increases , the pg needs to optimize its price for a large number of eus while maintaining the minimum payment .hence , due to the constraint , a large number of eus causes the pg to choose a price close to its minimum payment and consequently , the total cost for the proposed nsg becomes closer to that of the fit scheme .in this paper , we have studied a demand - supply balance technique by prioritizing consumer benefits , and have proposed a stackelberg game which leads to a socially optimal stackelberg equilibrium .we have shown that the proposed scheme maximizes the utility of the end users at the solution of the game , and at the same time keeps the total cost to the power grid to a minimum .we have studied the properties of the game analytically including the existence and the social optimality of the studied scenario .the effectiveness of the scheme has been demonstrated with considerable performance improvement when compared to a standard feed - in tariff scheme .a. mohsenian - rad , v. wong , j. jatskevich , r. schober , and a. leon - garcia , `` autonomous demand - side management based on game - theoretic energy consumption scheduling for the future smart grid , '' _ ieee transactions on smart grid _, vol . 1 , no . 3 , pp .320 331 , dec .r. walawalkar , s. fernands , n. thakur , and k. r. chevva , `` evolution and current status of demand response ( dr ) in electricity markets : insight from pjm and nyiso , '' _ energy journal _ , vol .35 , no . 4 , pp .15531560 , apr . 2010 .p. w. farris , n. t. bendle , p. e. pfeifer , and d. j. reibstein , _ marketing metrics : the definitive guide to measuring marketing performance_.1em plus 0.5em minus 0.4emupper saddle river , nj , usa : pearson prentice hall ., 2010 .z. yun , z. quan , s. caixin , l. shaolan , l. yuming , and s. yang , `` rbf neural network and anfis - based short - term load forecasting approach in real - time price environment , '' _ ieee transactions on power systems _ , vol .23 , no . 3 , pp . 853 858 , aug . 2008 .p. samadi , a. mohsenian - rad , r. schober , v. wong , and j. jatskevich , `` optimal real - time pricing algorithm based on utility maximization for smart grid , '' in _ proc .of the first ieee international conference on smart grid communications _ , gaithersburg , md , oct .2010 , pp .415 420 .d. arganda , b. panicucci , and m. passacantando , `` a game theoretic formulation of the service provisioning problem in cloud system , '' in _ proc . of the international worldwide web conference _ , hyderabad , india , apr .2011 , pp .177 186 .s. 
Choice, ``Which electricity retailer is giving the best solar feed-in tariff,'' website, 2012, http://www.solarchoice.net.au/blog/which-electricity-retailer-is-giving-the-best-solar-feed-in-tariff/.
This paper explores an idea of demand-supply balance for smart grids in which consumers are expected to play a significant role. The main objective is to _motivate_ consumers, by _maximizing_ their benefit both as sellers and as buyers, to trade their surplus energy with the grid so as to balance the demand at peak hours. To that end, a Stackelberg game is proposed to capture the interactions between the grid and consumers, and it is shown analytically that the optimal energy trading parameters that maximize customers' utilities are obtained at the solution of the game. A novel distributed algorithm is proposed to reach the optimal solution of the game, and numerical examples are used to assess the properties and effectiveness of the proposed approach. Smart grid, two-way communication, demand management, Stackelberg game, consumer's benefit, variational equilibrium.
astrophysical structure formation and the dynamics of astrophysical systems involve nonlinear gas dynamical processes which can not be modeled analytically but require numerical methods .one would like to address the challenging problem of star formation and how this process produces planetary systems .observations of the x - ray emission from hot gas in galaxy clusters , the sunyaev - zeldovich effect in the cmb spectrum , and the lyman alpha forest in the spectra of quasars are only meaningful if we understand the gas dynamical processes involved .the evolution of complex systems is best modeled using numerical simulations .a large class of astrophysical problems involve collisional systems where the mean free path is much smaller than all length scales of interest .hence , one can appropriately adopt an ideal fluid description of matter where the thermodynamical properties of the fluid obey well known equations of state .conservation of mass , momentum , and energy allows one to write down the euler equations which govern fluid mechanics ( see * ? ? ?this formalism is an ideal basis for simulating astrophysical fluids .hydrodynamical simulations are faced with challenging problems , but advancements in the field have made it an important tool for theoretical astrophysics .one of the main challenges in simulating complex fluid flows is the capturing of strong shocks , which frequently occur and play an important role in gas dynamics .the differential euler equations are ill - defined at shock discontinuities where derivatives are infinite .much effort has been devoted to solving this problem and a field of work has resulted from it .computational fluid dynamics ( cfd ) is a powerful approach to simulating fluid flow with emphasis on high resolution capturing of shocks and prevention of numerical instabilities .both eulerian and lagrangian methods have been developed .lagrangian methods based on smoothed particle hydrodynamics ( sph ; * ? ? ?* ; * ? ? ?* ) consider a monte - carlo approximation to solving the fluid equations , somewhat analogous to -body methods for the vlasov equation .sph schemes follow the trajectories of particles of fixed mass which represent fluid elements .the lagrangian forms of the euler equations are solved to determine smoothed fluid variables like density , velocity , and temperature .the particle formulation does not naturally capture shocks and artificial viscosity is added to prevent unphysical oscillations .however , the addition of artificial viscosity broadens shocks over several smoothing lengths and degrades the resolution .the lagrangian approach has a large dynamic range in length but not in mass .it achieves good spatial resolution in high density regions but does poorly in low density regions .sph schemes must smooth over a large number of neighbouring particles , making it computationally expensive and challenging to implement in parallel .the standard approach to eulerian methods is to discretize the problem and solve the integral euler equations on a cartesian grid by computing the flux of mass , momentum , and energy across grid cell boundaries . 
in conservative schemes ,the flux taken out of one cell is added to the neighbouring cell and this ensures the correct shock propagation .flux assignment schemes based on the _ total variation diminishing condition _ have been designed for high order accuracy and high resolution capturing of shocks , while preventing unphysical oscillations .the eulerian approach has a large dynamic range in mass but not in length , opposite to that of lagrangian schemes . in general , eulerian algorithms are computationally faster by several orders of magnitude .they are also easy to implement and to parallelize .the purpose of this paper is to present a pedagogical review of some of the methods employed in eulerian computational fluid dynamics . in briefly review the euler equations and discuss the standard approach to discretizing conservation laws .we describe traditional central differencing methods such as the lax - wendroff scheme in and more modern flux assignment methods like the tvd scheme in . in review the relaxing tvd method for systems of conservation laws like the euler equations , which has been successfully implemented for simulating cosmological astrophysical fluids by . in we apply a self - gravitating hydro code to simulating the formation of blue straggler stars through stellar mergers .a sample 3-d relaxing tvd code is provided in the appendix .the euler equations which govern hydrodynamics are a system of conservation laws for mass , momentum , and energy . in differentialconservation form , the continuity equation , momentum equation , and energy equation are given as : \frac{{\partial}(\rho v_i)}{{\partial t}}+\frac{{\partial}}{{\partial x}_j}(\rho v_iv_j+p\delta_{ij})=0\ , \\[8pt ] \frac{{\partial}e}{{\partial t}}+\frac{{\partial}}{{\partial x}_j}[(e+p)v_j]=0\ .\end{gathered}\ ] ] we have omitted gravitational and other source terms like heating and cooling .the physical state of the fluid is specified by its density , velocity field , and total energy density , in practice , the thermal energy is evaluated by subtracting the kinetic energy from the total energy . for an ideal gas , the pressure related to the thermal energy by the equation of state , where is the ratio of specific heats .another thermodynamic variable which is of importance is the sound speed which is given by the thermodynamical properties of an ideal gas obey well known equations of state , which we do not fully list here .the differential euler equations require differentiable solutions and therefore , are ill - defined at jump discontinuities where derivatives are infinite . in the literature , nondifferentiable solutions are called _weak solutions_. the differential form gives a complete description of the flow in smooth regions , but the integral form is needed to properly describe shock discontinuities . in integral conservation form , the rate of change in mass , momentum , and energy is equal to the net flux of those conserved quantities through the surface enclosing a control volume . for simplicity of notation , we will continue to express conservation laws in differential form , as a shorthand for the integral form .the standard approach to eulerian computational fluid dynamics is to discretize time into discrete steps and space into finite volumes or cells , where the conserved quantities are stored . 
in the simplest case ,the integral euler equations are solved on a cartesian cubical lattice by computing the flux of mass , momentum , and energy across cell boundaries in discrete time steps .consider the euler equations in vector differential conservation form , where contains the conserved physical quantities and represents the flux terms . in practice , the conserved cell - averaged quantities and fluxes are defined at integer grid cell centres .the challenge is to use the cell - averaged values to determine the fluxes at cell boundaries . in the following sections, we describe flux assignment methods designed to solve conservation laws like the euler equations . for ease of illustration ,we begin by considering a 1-d scalar conservation law , where and is a constant advection velocity . equation ( [ eqn : advect ] ) is referred to as a linear advection equation and has the analytical solution , the linear advection equation describes the transport of the quantity at a constant velocity . in integral flux conservation form, the 1-d scalar conservation law can be written as where and for our control cells .let denote the flux of through cell boundary at time .note then that the second integral is simply equal to .the rate of change in the cell - integrated quantity is equal to the net flux of through the control cell . for a discrete time step ,the discretized solution for the cell - averaged quantity is given by the physical quantity is conserved since the flux taken out of one cell is added to the neighbouring cell which shares the same boundary .note that equation ( [ eqn : conservation ] ) has the appearance of being a finite difference scheme for solving the differential form of the 1-d scalar conservation law .this is why the differential form can be used as a shorthand for the integral form .central - space finite - difference methods have ease of implementation but at the cost of lower accuracy and stability . for illustrative purposes ,we start with a simple first - order centered scheme to solve the linear advection equation .the discretized solution is given by equation ( [ eqn : conservation ] ) where the fluxes at cell boundaries , are obtained by taking an average of cell - centered fluxes .the discretized first - order centered scheme can be equivalently written as in this form , the discretization has the appearance of using a central difference scheme to approximate spatial derivatives .hence , centered schemes are often referred to as central difference schemes . in practicewhen using centered schemes , the discretization is done on the differential conservation equation rather than the integral equation .this simple scheme is numerically unstable and we can show this using the _ von neumann _ linear stability analysis . consider writing as a discrete fourier series : where is the number of cells in our periodic box . in plane - wave solution form , we can write this as \ , \ ] ] where are the fourier series coefficients for the initial conditions . equivalently , the time evolution of the fourier series coefficients in equation ( [ eqn : dfte ] ) can be cast into a plane - wave solution of the form , where the numerical dispersion relation is complex in general .the imaginary part of represents the growth or decay of the fourier modes while the real part describes the oscillations .a numerical scheme is linearly stable if .otherwise , the fourier modes will grow exponentially in time and the solution will blow up . 
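As a concrete check of the statements above, the short Python sketch below advects a square wave with the conservative update and the first-order centred boundary flux; the grid size, CFL number, and initial condition are arbitrary illustrative choices. Running it shows the solution growing without bound, as the von Neumann analysis predicts.

```python
import numpy as np

# Conservative update u_n <- u_n - (F_{n+1/2} - F_{n-1/2}) * dt/dx for linear
# advection f(u) = v*u on a periodic grid, with the (unstable) first-order
# centred flux F_{n+1/2} = (f_n + f_{n+1}) / 2.
N, v, dx = 100, 1.0, 1.0
dt = 0.9 * dx / abs(v)                       # CFL number 0.9
u = np.where((np.arange(N) > 40) & (np.arange(N) < 60), 1.0, 0.0)  # square wave

def centered_flux(u):
    f = v * u
    return 0.5 * (f + np.roll(f, -1))        # flux at boundary n+1/2

for _ in range(100):
    F = centered_flux(u)
    u -= (F - np.roll(F, 1)) * dt / dx       # flux in minus flux out

print("max |u| after 100 steps:", np.abs(u).max())  # grows without bound
```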
the exact solution to the linear advection equation can be expressed in the form of equation ( [ eqn : advectsol ] ) or by a plane - wave solution where the dispersion relation is given by .the waves all travel at the same phase velocity in the exact case .the centrally discretized linear advection equation ( equation [ eqn : cfde ] ) is exactly solvable .after times steps , the time evolution of the independent fourier modes is given by where and .the dispersion relation is given by \ , \ ] ] for any time step , the imaginary part of will be . the fourier modes will grow exponentially in time and the solution will blow up . hence , the first - order centered scheme is numerically unstable .the lax - wendroff scheme is second - order accurate in time and space and the idea behind it is to stabilize the unstable first - order scheme from the previous section . consider a taylor series expansion for : + \frac{{\partial u}}{{\partial t}}{\delta t}+\frac{{\partial}^2u}{{\partial t}^2}\frac{{\delta t}^2}{2}+{\cal o}({\delta t}^3)\ , \end{gathered}\ ] ] and replace the time derivatives with spatial derivatives using the conservation law to obtain -\frac{{\partial f}}{{\partial x}}{\delta t}+\frac{{\partial}}{{\partial x}}\left(\frac{{\partial f}}{{\partial u}}\frac{{\partial f}}{{\partial u}}\frac{{\partial u}}{{\partial x}}\right)\frac{{\delta t}^2}{2}+{\cal o}({\delta t}^3)\ .\label{eqn : taylor}\end{gathered}\ ] ] for the linear advection equation , the eigenvalue of the flux jacobian is .discretization using central differences gives + \left(\frac{f_{n+1}^t - f_n^t}{{\delta x}}-\frac{f_n^t - f_{n-1}^t}{{\delta x}}\right)\frac{v{\delta t}^2}{2{\delta x}}\ .\label{eqn : lwe}\end{gathered}\ ] ] in conservation form , the solution is given by equation ( [ eqn : conservation ] ) , where the fluxes at cell boundaries are defined as compare this with the boundary fluxes for the first - order scheme ( equation [ eqn : cfdf ] ) .the lax - wendroff scheme obtains second - order fluxes , by modifying the first - order fluxes with a second - order correction .the stability of the lax - wendroff scheme to solve the linear advection equation can also be determined using the von neumann analysis . the discretized lax - wendroff equation( equation [ eqn : lwe ] ) is exactly solvable and after time steps , the fourier modes evolve according to ^mc_k^\circ\ , \label{eqn : lwce}\ ] ] where is called the _ courant _ number and .the dispersion relation is given by \\[8pt ] + \frac{in}{4\pi{\delta t}}\ln\,\left[1 - 4\lambda^2(1-\lambda^2)\sin^4\left(\frac{\phi}{2}\right)\right]\ .\label{eqn : lwdr}\end{gathered}\ ] ] it is important to note three things .first , the lax - wendroff scheme is conditionally stable provided that , which is satisfied if this constraint is a particular example of a general stability constraint known as the _ courant - friedrichs - lewyor _ ( cfl ) condition .the courant number is also referred to as the cfl number .second , for the dispersion relation is exactly identical to that of the exact solution and the numerical advection is exact .this is a special case , however , and it does not test the ability of the lax - wendroff scheme to solve general scalar conservation laws . 
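For comparison, the same toy setup with the Lax-Wendroff boundary flux defined above is stable at a CFL number of 0.9, but it exhibits the dispersive over- and undershoots discussed next; again, all numerical choices here are illustrative.

```python
import numpy as np

# Lax-Wendroff boundary flux for linear advection on a periodic grid:
# F_{n+1/2} = (f_n + f_{n+1})/2 - (f_{n+1} - f_n) * v * dt / (2 * dx)
N, v, dx = 100, 1.0, 1.0
dt = 0.9 * dx / abs(v)
u = np.where((np.arange(N) > 40) & (np.arange(N) < 60), 1.0, 0.0)

def lax_wendroff_flux(u):
    f = v * u
    return 0.5 * (f + np.roll(f, -1)) - 0.5 * (np.roll(f, -1) - f) * v * dt / dx

for _ in range(100):                          # 100 steps = 90 cells at CFL 0.9
    F = lax_wendroff_flux(u)
    u -= (F - np.roll(F, 1)) * dt / dx

# Stable, but dispersion produces spurious oscillations near the discontinuities.
print("min, max after 100 steps:", u.min().round(3), u.max().round(3))
```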
normally , one chooses to satisfy the cfl condition .lastly , for the dispersion relation for the lax - wendroff solution is different from the exact solution where .the dispersion relation relative to the exact solution can be parametrized by the second - order truncation of the taylor series ( equation [ eqn : taylor ] ) results in a phase error which is a function of frequency . in the lax - wendroff solution ,the waves are damped and travel at different speeds .hence the scheme is both diffusive and dispersive . in figure ( [ fig : lwdispersion ] ) we plot the phase error and the amplification term for the lax - wendroff scheme with parameters , , and .a negative value of represents a lagging phase error while a positive value indicates a leading phase error . for the chosen cfl number ,the high frequency modes have the largest phase errors but they are highly damped .some of the modes having lagging phase errors are not highly damped .we will subsequently see how this becomes important .a rigourous test of the 1-d lax - wendroff scheme and other flux assignment schemes we will discuss is the linear advection of a square wave .the challenge is to accurately advect this discontinuous function where the edges mimic riemann shock fronts . in figure ( [ fig : lax ] ) we show how the lax - wendroff scheme does at advecting the square wave once ( dashed line ) and ten times ( dotted line ) through a periodic box of 100 grid cells at speed and . note that this scheme produces numerical oscillations . recall that a square wave can be represented by a sum of fourier or sine waves .these waves will be damped and disperse when advected using the lax - wendroff scheme .figure ( [ fig : lwdispersion ] ) shows that the modes having lagging phase errors are not damped away .hence , the lax - wendroff scheme is highly dispersive and the oscillations in figure ( [ fig : lax ] ) are due to dispersion .we leave it as an exercise for the reader to advect a sine wave using the lax - wendroff scheme .since there is only one frequency mode in this case , there will be no spurious oscillations due to dispersion , but a phase error will be present . for a comprehensive discussion on the family of lax - wendroff schemes and other centered schemes, see and .upwind methods take into account the physical nature of the flow when assigning fluxes for the discrete solution . this class of flux assignment schemes , whose origin dates back to the work of , has been shown to be excellent at capturing shocks and also being highly stable . we start with a simple first - order upwind scheme to solve the linear advection equation .consider the case where the advection velocity is positive and flow is to the right .the flux of the physical quantity through the cell boundary will originate from cell .the upwind scheme proposes that , to first - order , the fluxes at cell boundaries be taken from the cell - centered fluxes , which is in the upwind direction .if the advection velocity is negative and flow is to the left , the boundary fluxes are taken from the cell - centered fluxes .the first - order upwind flux assignment scheme can be summarized as follows : f_{n+1}^t & \text{if . }\end{cases}\ ] ] unlike central difference schemes , upwind schemes are explicitly asymmetric. 
the cfl condition for the first - order upwind scheme can be determined from the von neumann analysis .we consider the case of a positive advection velocity .after time steps , the fourier modes evolve according to ^mc_k^\circ\ , \label{eqn : uwce}\ ] ] where and .the dispersion relation is given by \\[8pt ] + \frac{in}{4\pi{\delta t}}\ln\,\left[1 - 4\lambda(1-\lambda)\sin^2\left(\frac{\phi}{2}\right)\right]\ .\label{eqn : updr}\end{gathered}\ ] ] the cfl condition for solving the linear advection equation with this scheme is to have , identical to that for the lax - wendroff scheme .for the dispersion relation for the first - order upwind scheme is different from the exact solution where .this scheme is both diffusive and dispersive .since it is only first - order accurate , the amount of diffusion is large . in figure ( [ fig : uwdispersion ] ) we compare the dispersion relation of the upwind scheme to that of the lax - wendroff scheme .the fourier modes in the upwind scheme also have phase errors but they will be damped away .the low frequency modes which contribute to the oscillations in the lax - wendroff solution are more damped in the upwind solution .hence , one does not expect to see oscillations resulting from phase errors . in figure ( [ fig : uw ] ) we show how the first - order upwind scheme does at advecting the riemann shock wave .this scheme is well - behaved and produces no spurious oscillations , but since it is only first - order , it is highly diffusive .the first - order upwind scheme has the property of having monotonicity preservation . when applied to the linear advection equation, it does not allow the creation of new extrema in the form of spurious oscillations .the lax - wendroff scheme does not have the property of having monotonicity preservation .the flux assignment schemes that we have discussed so far are all linear schemes . showed that all linear schemes are either diffusive or dispersive or a combination of both .the lax - wendroff scheme is highly dispersive while the first - order upwind scheme is highly diffusive .godunov s theorem also states that linear monotonicity preserving schemes are only first - order accurate . in order to obtain higher order accuracy and prevent spurious oscillations ,nonlinear schemes are needed to solve conservation laws . proposed the _total variation diminishing _( tvd ) condition which guarantees that a scheme have monotonicity preservation . applying godunov s theorem , we know that all linear tvd schemes are only first - order accurate . in fact , the only linear tvd schemes are the class of first - order upwind schemes . therefore , higher order accurate tvd schemes must be nonlinear .the tvd condition is a nonlinear stability condition .the total variation of a discrete solution , defined as is a measure of the overall amount oscillations in . the direct connection between the total variation and the overall amount of oscillations can be seen in the equivalent definition where each maxima is counted positively twice and each minima counted negatively twice ( see * ? ? ?. the formation of spurious oscillations will contribute new maxima and minima and the total variation will increase .a flux assignment scheme is said to be tvd if which signifies that the overall amount of oscillations is bounded . in linear flux - assignment schemes, the von neumann linear stability condition requires that the fourier modes remain bounded . 
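The total variation is easy to monitor numerically. The sketch below (same illustrative setup as before) applies the first-order upwind flux for a positive advection velocity and verifies that TV(u) does not grow, in contrast to the Lax-Wendroff run above.

```python
import numpy as np

# Total variation TV(u) = sum_n |u_{n+1} - u_n| on a periodic grid, and the
# first-order upwind update for v > 0, where F_{n+1/2} = f_n.
N, v, dx = 100, 1.0, 1.0
dt = 0.9 * dx / abs(v)
u = np.where((np.arange(N) > 40) & (np.arange(N) < 60), 1.0, 0.0)

def total_variation(u):
    return np.abs(np.roll(u, -1) - u).sum()

tv0 = total_variation(u)
for _ in range(100):
    F = v * u                                # upwind flux for v > 0
    u -= (F - np.roll(F, 1)) * dt / dx

print("TV before:", tv0, "TV after:", total_variation(u))  # does not increase
```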
in nonlinear schemes, the tvd stability condition requires that the total variation diminishes .we now describe a nonlinear second - order accurate tvd scheme which builds upon the first - order monotone upwind scheme described in the previous section . the second - order accurate fluxes at cell boundariesare obtained by taking first - order fluxes from the upwind scheme and modifying it with a second order correction .first consider the case where the advection velocity is positive .the first - order upwind flux comes from the averaged flux in cell .we can define two second - order flux corrections , { \delta f}_{n+1/2}^{r , t}&=\frac{f_{n+1}^t - f_n^t}{2}\ , \end{aligned}\ ] ] using three local cell - centered fluxes .we use cell and the cells immediately left and right of it .if the advection velocity is negative , the first - order upwind flux comes from the averaged flux in cell . in this case ,the second - order flux corrections , { \delta f}_{n+1/2}^{r , t}&=-\frac{f_{n+2}^t - f_{n+1}^t}{2}\ .\end{aligned}\ ] ] are based on cell and the cells directly adjacent to it .near extrema where the corrections have opposite signs , we impose no second - order correction and the flux assignment scheme reduces to first - order . a flux limiter then used to determine the appropriate second - order correction , which still maintains the tvd condition .the second - order correction is added to the first - order fluxes to get second - order fluxes .the first - order upwind scheme and second - order tvd scheme will be referred to as _ monotone upwind schemes for conservation laws _ ( muscl ) .time integration is performed using a second - order runge - kutta scheme .we first do a half time step , using the first - order upwind scheme to obtain the half - step values .a full time step is then computed , using the tvd scheme on the half - step fluxes .the reader is encouraged to show that is second - order accurate .we briefly discuss three tvd limiters .the minmod flux limiter chooses the smallest absolute value between the left and right corrections : \min(|a|,|b|)\ .\ ] ] the superbee limiter chooses between the larger correction and two times the smaller correction , whichever is smaller in magnitude : minmod(2a , b ) & \text{otherwise . }\end{cases}\ ] ] the van leer limiter takes the harmonic mean of the left and right corrections : the minmod limiter is the most moderate of all second - order tvd limiters . in figure ( [ fig : mm ] ) we see that it does not do much better than first - order upwind for the square wave advection test .superbee chooses the maximum correction allowed under the tvd constraint .it is especially suited for piece - wise linear conditions and is the least diffusive for this particular test , as can be seen in figure ( [ fig : sb ] ) .note that no additional diffusion can be seen by advecting the square wave more than once through the box .it can be shown that the minmod and superbee limiters are extreme cases which bound all other second - order tvd limiters .the van leer limiter differs from the previous two in that it is analytic .this symmetrical approach falls somewhere inbetween the other two limiters in terms of moderation and diffusion , as can be seen in figure ( [ fig : vl ] ) .it can be shown that the cfl condition for the second - order tvd scheme is to have . 
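A compact way to compare the three limiters is to implement them directly on a pair of left/right second-order corrections. The sketch below follows the verbal definitions given above (minmod; superbee as the smaller in magnitude of the larger correction and twice the smaller correction; van Leer as the harmonic mean); the test values are arbitrary.

```python
import numpy as np

# TVD flux limiters acting on the left/right second-order corrections
# dF^L (a) and dF^R (b) at each cell boundary; zero near extrema (a*b <= 0).
def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def superbee(a, b):
    # larger correction vs. twice the smaller one, whichever is smaller in magnitude
    return np.where(np.abs(a) > np.abs(b), minmod(a, 2.0 * b), minmod(2.0 * a, b))

def van_leer(a, b):
    denom = np.where(a + b != 0.0, a + b, 1.0)   # guard; only used when a*b > 0
    return np.where(a * b > 0.0, 2.0 * a * b / denom, 0.0)

dF_L = np.array([0.4, -0.2, 0.3, 0.5])
dF_R = np.array([0.1,  0.3, 0.3, 2.0])
for name, lim in [("minmod", minmod), ("superbee", superbee), ("van Leer", van_leer)]:
    print(name, lim(dF_L, dF_R))
```

As expected, the van Leer correction always falls between the minmod and superbee values, which bound the second-order TVD limiters from below and above.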
for a comprehensive discussion on tvd limiters ,see and .we now describe a simple and robust method to solve the euler equations using the monotone upwind scheme for conservation laws ( muscl ) from the previous section .the relaxing tvd method provides high resolution capturing of shocks using computationally inexpensive algorithms which are straightforward to implement and to parallelize .it has been successfully implemented for simulating cosmological astrophysical fluids by .the muscl scheme is straightforward to apply to conservation laws like the advection equation since the velocity alone can be used as a marker of the direction of flow .however , applying the muscl scheme to solve the euler equations is made difficult by the fact that the momentum and energy fluxes depend on the pressure . in order to determine the direction upwind of the flow, it becomes necessary to calculate the flux jacobian eigenvectors using riemann solvers .this step requires computationally expensive algorithms .the relaxing tvd method offers an attractive alternative .we first present a motivation for the relaxing scheme by again considering the 1-d scalar conservation law .the muscl scheme for solving the linear advection equation is explicitly asymmetric in that it depends on the sign of the advection velocity .we now describe a symmetrical approach which applies to a general advection velocity .the flow can be considered as a sum of a right - moving wave and a left - moving wave . for a positive advection velocity ,the amplitude of the left - moving wave is zero and for a negative advection velocity , the amplitude of the right - moving wave is zero . in compact notation , the waves can be defined as : u^l&=\left(\frac{1-v / c}{2}\right)u\ , \end{aligned}\ ] ] where .the two waves are traveling in opposite directions with advection speed and can be described by the advection equations : the muscl scheme is straightforward to apply to solve equations ( [ eqn : rmw ] ) and ( [ eqn : lmw ] ) since the upwind direction is left for the right - moving wave and right for the left - moving wave .the 1-d relaxing advection equation then becomes where and .for the discretized solution given by equation ( [ eqn : conservation ] ) , the boundary fluxes are now a sum of the fluxes and from the right - moving and left - moving waves , respectively .note that the relaxing advection equation will correctly reduce to the linear advection equation for any general advection velocity . using this symmetrical approach ,a general algorithm can be written to solve the linear advection equation for an arbitrary advection velocity .this scheme is indeed inefficient for solving the linear advection equation since one wave will have zero amplitude .however , the euler equations can have both right - moving and left - moving waves with non - zero amplitudes .we now discuss the 1-d relaxing tvd scheme and later generalize it to higher spatial dimensions .consider a 1-d system of conservation laws , where for the euler equations , we have and the corresponding flux terms .we now replace the vector conservation law with the relaxation system where is a free positive function called the freezing speed .the relaxation system contains two coupled vector linear advection equations . in practice , we set and use it as an auxiliary vector to calculate fluxes . equation ( [ eqn : relax1 ] ) reduces to our 1-d vector conservation law and equation ( [ eqn : relax2 ] ) is a vector conservation law for . 
in order to solve the relaxed system, we decouple the equations through a change of variables : { \boldsymbol{u}}^l&=\frac{{\boldsymbol{u}}-{\boldsymbol{w}}}{2}\ , \end{aligned}\ ] ] which then gives us equations ( [ eqn : w1 ] ) and ( [ eqn : w2 ] ) are vector linear advection equations , which can be interpreted as right - moving and left - moving flows with advection speed .note the similarity with their scalar counterparts , equations ( [ eqn : rmw ] ) and ( [ eqn : lmw ] ) .the 1-d vector relaxing conservation law for becomes where and .the vector relaxing equation can now be solved by applying the muscl scheme to equations ( [ eqn : w1 ] ) and ( [ eqn : w2 ] ) .again , note the similarity between the vector relaxing equation and its scalar counterpart , equation ( [ eqn : rlae ] ) .the relaxed scheme is tvd under the constraint that the freezing speed be greater than the characteristic speed given by the largest eigenvalue of the flux jacobian . for the euler equations ,this is satisfied for considered the freezing speed to be a positive constant in their relaxing scheme while we generalize it to be a positive function .time integration is again performed using a second - order runge - kutta scheme and the time step is determined by satisfying the cfl condition , note that a new freezing speed is computed for each partial step in the runge - kutta scheme .the cfl number should be chosen such that will be larger than and .we now summarize the steps needed to numerically solve the 1-d euler equations . at the beginning of each partial step in the runge - kutta time integration scheme , we need to calculate the cell - averaged variables defined at grid cell centres .first for the half time step , we calculate the fluxes and the freezing speed .we then set the auxiliary vector and construct the right - moving waves and left - moving waves .the half time step is given by where the first - order upwind scheme is used to compute fluxes at cell boundaries for the right - moving and left - moving waves .for the full time step , we construct the right - moving waves and left - moving waves , using the half - step values of the appropriate variables . the full time step , is computed using the second - order tvd scheme .this completes the updating of to .we have found that a minor modification to the implementation described above gives more accurate results .consider writing the flux of the right - moving and left - moving waves as : { \boldsymbol{f}}^l = c{\boldsymbol{g}}^l\ , \end{aligned}\ ] ] where is the flux of and is the flux of .the linear advection equations for and are similar to equations ( [ eqn : w1 ] ) and ( [ eqn : w2 ] ) , but where we replace with and with . for each partial step in the runge - kutta scheme , the net fluxes at cell boundariesare then taken to be where we use . in practice , this modified implementation has been found to resolve shocks with better accuracy in certain cases . note that the two different implementations of the relaxing tvd scheme are identical when a constant freezing speed is used .the 1-d relaxing tvd scheme can be generalized to higher dimensions using the dimensional splitting technique by . 
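The decomposition into right- and left-moving waves is easiest to see in the scalar case. The sketch below advects the same toy square wave, now with a negative velocity, using the relaxing form with first-order upwind fluxes on each wave and a constant freezing speed; it deliberately omits the second-order corrections, the Runge-Kutta half step, and the vector case described above.

```python
import numpy as np

# Relaxing form of a scalar law u_t + f(u)_x = 0 with f(u) = v*u:
# set w = f(u)/c and advect u_R = (u + w)/2 to the right and
# u_L = (u - w)/2 to the left, both at the freezing speed c >= |v|.
N, v, dx = 100, -1.0, 1.0                  # works for either sign of v
c = abs(v)                                 # freezing speed (must be positive)
dt = 0.9 * dx / c
u = np.where((np.arange(N) > 40) & (np.arange(N) < 60), 1.0, 0.0)
mass0 = u.sum()

for _ in range(100):
    w = v * u / c
    uR, uL = 0.5 * (u + w), 0.5 * (u - w)
    FR = c * uR                            # right-moving wave: upwind cell is n
    FL = -c * np.roll(uL, -1)              # left-moving wave: upwind cell is n+1
    F = FR + FL                            # net flux through boundary n+1/2
    u -= (F - np.roll(F, 1)) * dt / dx

print("mass conserved:", np.isclose(u.sum(), mass0))
```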
in three dimensions ,the euler equations can be dimensionally split into three separate 1-d equations which are solved sequentially .let the operator represent the updating of to by including the flux in the direction .we first complete a forward sweep , and then perform a reverse sweep using the same time step to obtain second - order accuracy .we will refer to the combination of the forward and reverse sweeps as a double sweep .a more symmetrical sweeping pattern can be used by permutating the sweeping order when completing the next double time step .the dimensional splitting or operator splitting technique can be summarized as follows : u^{t_3}&=u^{t_2 + 2{\delta t}_2}=l_zl_xl_yl_yl_xl_zu^{t_2}\ , \\[8pt ] u^{t_4}&=u^{t_3 + 2{\delta t}_3}=l_yl_zl_xl_xl_zl_yu^{t_3}\ , \end{aligned}\ ] ] where , , and are newly determined time steps after completing each double sweep .the cfl condition for the 3-d relaxing tvd scheme is similarly given by equation ( [ eqn : relaxcfl ] ) , but with \ .\ ] ] where .note that since is on average a factor of smaller than , a dimensionally split scheme can use a longer time step compared to an un - split scheme .the dimensional splitting technique also has other advantages .the decomposition into a 1-d problem allows one to write short 1-d algorithms , which are easy to optimize to be cache efficient .a 3-d hydro code is straightforward to implement in parallel .when sweeping in the x direction , for example , one can break up the data into 1-d columns and operate on the independent columns in parallel .a sample 3-d relaxing tvd code , implemented in parallel using openmp directives , is provided in the appendix .a rigourous and challenging test for any 3-d eulerian or lagrangian hydrodynamic code is the sedov - taylor blast wave test .we set up the simulation box with a homogeneous medium of density and negligible pressure and introduce a point - like supply of thermal energy in the centre of the box at time .the challenge is to accurately capture the strong spherical shock wave which sweeps along material as it propagates out into the ambient medium .the sedov - taylor test is used to model nuclear - type explosions . in astrophysics , it is often used as a basic setup to model supernova explosions and the evolution of supernova remnants ( see * ? ? ?the analytical sedov solution uses the self - similar nature of the blast wave expansion ( see * ? ? ?consider a frame fixed relative to the centre of the explosion .the spherical shock front propagates outward and the distance from the origin is given by where for an ideal gas with .the velocity of the shock is given by since the ambient medium has negligible pressure , the shocks will be very strong .the density , velocity , and pressure directly behind the shock front are : v_2&=\left(\frac{2}{\gamma+1}\right)v_{sh}\ , \\[8pt ] \ p_2&=\left(\frac{2}{\gamma+1}\right)\rho_1v_{sh}^2\ .\end{aligned}\ ] ] the immediate post - shock gas density is constant in time , while the shocked gas velocity and pressure decrease as and , respectively .the full analytical sedov - taylor solutions can be found in .the 3-d relaxing tvd code based on the van leer flux limiter is applied to capturing the sedov - taylor blast wave .we set up a box with cells and constant initial density . at time , we inject a supply of thermal energy into one cell at the centre of the box .the simulation is stopped at time , in which the shock front has propagated out to a distance of cells from the centre . 
in figure ( [ fig : st1 ] ) and ( [ fig : st2 ] ) we plot the radial distributions of density , momentum , and pressure , normalized to , , and respectively .the data points are taken from a random subset of cells and the solid lines are the analytical sedov - taylor solutions . despite solving a spherically symmetric problem on an explicitly non - rotationally invariant cartesian grid ,the anisotropic scatter in the results is small .the distance of the shock front from the centre of the explosion as a function of time is indeed given by equation ( [ eqn : stre ] ) , demonstrating that the 3-d relaxing tvd code ensures the correct shock propagation .the resolution of the shock front is roughly two grid cells .the numerical shock jump values of , , and are resolution dependent and come close to the theoretical values for our test with cells .we leave it as an exercise for the reader to test the code using the minmod and superbee flux limiters .for astrophysical applications , both hydrodynamical and gravitational forces are included .the gravitational forces arise from the self - gravity of the fluid and can also come from an external field .the euler equations with the gravitational source terms included are given as : \frac{{\partial}(\rho v_i)}{{\partial t}}+\frac{{\partial}}{{\partial x}_j}(\rho v_iv_j+p\delta_{ij})=-\rho\frac{{\partial}\phi}{{\partial x}_i}\ , \\[8pt ] \frac{{\partial}e}{{\partial t}}+\frac{{\partial}}{{\partial x}_j}[(e+p)v_j]=-\rho v_i\frac{{\partial}\phi}{{\partial x}_i}\ .\end{gathered}\ ] ] where is the gravitational potential .poisson s equation , relates the gravitational potential to the density field .the general solution can be written as where the kernel is given by in the discrete case , the integral in equation ( [ eqn : poisson ] ) becomes a sum and poisson s equation can be solved using fast fourier transforms ( fft ) to do the convolution .the forces are then calculated by finite differencing the potential ( see * ? ? ?the addition of gravatitational source terms in the euler equations is easily handled using the operator splitting technique described in [sec : mdscl ] .consider the double sweep : where the operator represents the updating of by including the flux in the direction and the operator represents the gravitational acceleration of the fluid . during the gravitational step ,the flux terms in the euler equations are ignored .the density distribution does not change and only the fluid momenta and total energy density are updated .the stellar density in the cores of globular and open clusters is high enough for stellar collisions to take place with significant frequency .current observations and simulations suggest that the merger of two main sequence stars produces a blue straggler .the blue stragglers are out - lying main sequence stars which lie beyond the main sequence turnoff in the colour - magnitude diagram ( cmd ) of a star cluster .the blue stragglers are more massive , brighter , and bluer than the turnoff stars .since more massive stars evolve faster than lower mass stars and are not expected to lie beyond the turnoff , this suggests that blue stragglers must have formed more recently . in principlethe merger of two main sequence stars can produce a young remnant star provided that significant mixing occurs in the process .the mixing produces a higher hydrogen fraction in the core of the remnant than that of the parent stars which have already burnt most of the hydrogen to helium in their cores . 
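Returning to the gravity step described earlier in this section: a compact way to solve Poisson's equation on a periodic grid is to divide by -k^2 in Fourier space and then difference the potential, as sketched below in Python/NumPy. This spectral kernel is a common choice but only one of several possible discrete kernels (the text convolves with a real-space kernel), so the sketch is illustrative; the units, grid size, and G are arbitrary.

```python
import numpy as np

# Periodic FFT Poisson solver: lap(phi) = 4*pi*G*rho  ->  phi_k = -4*pi*G*rho_k / k^2
def poisson_fft(rho, dx=1.0, G=1.0):
    n = rho.shape[0]                               # assume a cubic n^3 grid
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                              # avoid division by zero
    phi_k = -4.0 * np.pi * G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                           # zero mode (mean of phi) is arbitrary
    return np.real(np.fft.ifftn(phi_k))

# Gravitational acceleration from central differences of the potential.
def acceleration(phi, dx=1.0):
    return np.stack([-(np.roll(phi, -1, axis=a) - np.roll(phi, 1, axis=a)) / (2 * dx)
                     for a in range(3)])

rho = np.zeros((32, 32, 32)); rho[16, 16, 16] = 1.0    # point-mass test
phi = poisson_fft(rho)
g = acceleration(phi)
print("potential minimum at the mass:", np.unravel_index(phi.argmin(), phi.shape))
print("max |g|:", np.abs(g).max().round(4))
```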
used low resolution sph simulations with particles to simulate the merging of polytropes and found that they fully mixed .however , medium resolution sph simulations with particles of or polytropes showed only weak mixing .it is worth noting that polytropes are more representative of low mass main sequence stars with large convective envelopes while polytropes resemble main sequence stars near the turnoff which have little mass in their convective envelopes .high resolution sph simulations involving particles have now been applied to simulating stellar collisions .the merging stars process is mostly subsonic and strong shocks are not expected . in the absence of shocks , sph particles will follow flow lines of constant entropy due to the lagrangian nature of the method . as a result , the particles may experience sedimentation .in addition , the mixing can also depend on the adopted smoothing length and the form of artificial viscosity .for a sph fluid , the reynolds number is of order , where is the total number of particles and is the number of particles over which the smoothing is done . for and ,the reynolds number is .however , a fluid with a low reynolds number will tend to experience laminar flow .hence , sph may under mix .it is a worthwhile exercise to model the merging process using eulerian hydrodynamical simulations .the differences between eulerian and lagrangian approaches may lead to very different results on mixing . as of present, no such work has been reported in the literature .we consider the off - axis collision of two main sequence stars with and , which are modeled using polytropes .a polytrope with polytropic index has equilibrium density and pressure profiles which are related by the density profile is determined by solving the lane - emden equation ( see * ? ? ?we adopt an ideal gas equation of state with adiabatic index .note that for an polytrope , 90% of the total mass is contained within .we define the dynamical time to be , where is the average density . for the chosen parent stars ,the dynamical time is approximately one physical hour .the collision is simulated in a box with cells and the orbital plane coincides with the plane .initially , each parent star has a radius of 96 grid cells .the stars are set up on zero - energy parabolic orbits with a pericentre separation equal to .the initial trajectories are calculated assuming point masses . in an eulerian simulation, the vacuum can not have zero density .we set the minimum density of the cells to be of the central density of the parent stars .the hydrodynamics is done in a non - periodic box with vacuum boundary conditions .a non - trivial test of a self - gravitating eulerian hydro code is the advection of an object in hydrostatic equilibrium .the challenge is to maintain the equilibrium profile over a large number of time steps .one of the parent stars is placed in a periodic box with cells and given some initial momentum .we make the test rigorous by having the polytrope move in all three directions . in figure[ fig : poly ] we compare the mass and entropy profiles of the initial and advected polytrope .the entropic variable is used in place of the specific entropy .the parameter is defined to be the minimum entropy of the parent polytrope . 
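as an aside on the polytropic initial conditions described above , the dimensionless structure of the parent stars follows from integrating the lane-emden equation $\theta'' + (2/\xi)\theta' = -\theta^n$ with $\theta(0)=1$ , $\theta'(0)=0$ out to the first zero of $\theta$ , after which the density profile is $\rho \propto \theta^n$ . the short java sketch below does this with a simple second-order integrator for $n=1.5$ ; it only illustrates how the profile can be tabulated and does not reproduce the actual setup code .

```java
// LaneEmden.java -- tabulate theta(xi) for an n = 1.5 polytrope (illustrative sketch).
public class LaneEmden {
    public static void main(String[] args) {
        double n = 1.5, h = 1e-4;
        // series expansion near the centre to avoid the 1/xi singularity
        double xi = 1e-3, theta = 1.0 - xi*xi/6.0, dtheta = -xi/3.0;
        while (theta > 0) {
            // second-order midpoint (RK2) step for theta'' = -theta^n - (2/xi) theta'
            double k1t = dtheta;
            double k1d = -Math.pow(theta, n) - 2.0/xi * dtheta;
            double tm  = theta + 0.5*h*k1t, dm = dtheta + 0.5*h*k1d, xm = xi + 0.5*h;
            double k2t = dm;
            double k2d = -Math.pow(Math.max(tm, 0.0), n) - 2.0/xm * dm;
            theta  += h * k2t;
            dtheta += h * k2d;
            xi     += h;
        }
        // the first zero lies near xi1 ~ 3.65 for n = 1.5
        System.out.printf("first zero of theta at xi1 = %.4f%n", xi);
    }
}
```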
after 1000 timestepsin which the polytrope has moved 256 cells in each direction , the advected polytrope has still retained its equilibrium profile .shock heating can occur in the outer envelope as the polytrope moves through the false vacuum .however , by setting the density of the false vacuum to be of the central density of the polytrope , we can minimize the spurious shock heating . in figure [fig : merge ] we show four snapshots of the merging process taken at time , 2 , 4 , and .the 2-d density maps are created by averaging over 4 planes taken about the orbital mid - plane .the density contours are spaced logarithmically with 2 per decade and covering 3 decades down from the maximum .the parent stars are initially separated by and placed on zero - energy orbits with a pericentre separation of . during the collision process ,the outer envelopes of the parent stars are shock heated and material gets ejected . in less than , the merger remnant establishes hydrostatic equilibrium .the merger remnant is a rotating oblate with mass approximately 90% of the combined mass of the parent stars .a large fraction of the mass loss is due to the vacuum boundary conditions .ejected material do not have the opportunity to fall back onto the merger remnant .however , the additional mass loss in the envelope does not present a problem since we are interested in the question of mixing in the interior of the star . in figure[ fig : profiles ] we plot the thermodynamic profiles of the merger remnant and the parent stars .the central density and pressure in the core of the merger remnant is lower than the corresponding values in the parent stars by approximately half .the entropy floor has risen by a factor of 1.6 .shock heating is expected to be minimal in the core so a change in entropy suggests that some mixing has taken place .however , it is difficult to quantify the amount of mixing from examining the thermodynamic profiles alone . to help address the question of mixing, we are implementing a particle - mesh ( pm ) scheme where test particles can be used to track passively advected quantities such as chemical composition .initially , each parent star is assigned a large number of particles with known chemical composition .the test particles are passively advected along velocity field lines . for each time step , the velocity of each particle is interpolated from the grid using a `` cloud - in - cell '' ( cic ) scheme and the equations of motions are solved using second - order runge - kutta integration .the cic interpolation scheme is also used to determine the local density , pressure , and entropy associated with each particle . with this setup, we have the benefit of being able to track thermodynamic quantities like in an sph scheme but avoid the under mixing problem since the fluid equations are solved using the eulerian scheme . future work ( trac , sills , & pen 2003 ) will have higher resolution simulations .collisions will be simulated in a box with cells .each parent star will have a radius of 192 grid cells and be assigned test particles .we will also be doing a detailed comparison between eulerian and sph simulations of stellar mergers .the self - gravitating hydro code used for the simulations is very memory friendly . for the grid , 10 gbis required to store the hydro variables , 2 gb for the potential , and less than 1 gb for the test particles . 
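returning to the test-particle scheme sketched above , the following java fragment illustrates cloud-in-cell interpolation of a grid velocity field to a particle position and a second-order runge-kutta position update . the unit grid spacing and periodic wrapping are simplifying assumptions of this sketch rather than details taken from the actual particle-mesh implementation .

```java
// CicAdvect.java -- illustrative CIC interpolation and RK2 advection of test particles.
public class CicAdvect {
    // trilinear (cloud-in-cell) interpolation of one velocity component at (x,y,z),
    // assuming unit grid spacing and periodic boundaries
    static double cic(double[][][] v, double x, double y, double z) {
        int n = v.length;
        int i = (int) Math.floor(x), j = (int) Math.floor(y), k = (int) Math.floor(z);
        double fx = x - i, fy = y - j, fz = z - k;
        double val = 0.0;
        for (int di = 0; di <= 1; di++)
            for (int dj = 0; dj <= 1; dj++)
                for (int dk = 0; dk <= 1; dk++) {
                    double w = (di == 0 ? 1 - fx : fx) * (dj == 0 ? 1 - fy : fy) * (dk == 0 ? 1 - fz : fz);
                    val += w * v[(i + di + n) % n][(j + dj + n) % n][(k + dk + n) % n];
                }
        return val;
    }

    // second-order Runge-Kutta (midpoint) update of a particle position p = {x,y,z}
    static void rk2Step(double[] p, double[][][] vx, double[][][] vy, double[][][] vz, double dt) {
        double kx = cic(vx, p[0], p[1], p[2]), ky = cic(vy, p[0], p[1], p[2]), kz = cic(vz, p[0], p[1], p[2]);
        double mx = p[0] + 0.5 * dt * kx, my = p[1] + 0.5 * dt * ky, mz = p[2] + 0.5 * dt * kz;
        p[0] += dt * cic(vx, mx, my, mz);   // advance using the midpoint velocity
        p[1] += dt * cic(vy, mx, my, mz);
        p[2] += dt * cic(vz, mx, my, mz);
    }
}
```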
for every double timestep, approximately 1000 floating point operations per grid cell is needed to carry out the tvd hydro calculations .the potential is computed once for every double step and this requires two ffts .since eulerian codes are very memory friendly , have low floating point counts , are easily parallelized , and scale very well on shared - memory , multiple - processor machines , they can be used to run very high resolution simulations .we have presented several numerical schemes for solving the linear advection equation and given the cfl stability conditions for each scheme . we have implemented the relaxing tvd scheme to solve the euler system of conservation laws .the second - order accurate tvd scheme provides high resolution capturing of shocks , as can be seen in the riemann shock test and the sedov - taylor blast wave test .the 1-d relaxing tvd scheme can be easily generalized to higher dimensions using the dimensional splitting technique .a dimensionally split scheme can use longer time steps and is straightforward to implement in parallel .we have presented a sample astrophysical application .a 3-d self - gravitating eulerian hydro code is used to simulate the formation of blue straggler stars through stellar mergers .we hope to have convinced the reader that eulerian computational fluid dynamics is a powerful approach to simulating complex fluid flows because it is simple , fast , and accurate .we thank joachim stadel and norm murray for comments and suggestions on the writing and editing of this paper .we also thank alison sills , phil arras , and chris matzner for discussions on stellar mergers .benz , w. & hills , j. g. , 1987 , , 323 , 614 chandrasekhar , s. , 1957 , an introduction to the study of stellar structure ( new york : dover publications ) courant , r. , isaacson , e. , & reeves , m. , 1952 , comm .pure and applied math ., 5 , 243 gingold , r. a. & monaghan , j. j. , 1977 , , 181 , 375 godunov , s. k. , 1959 , math .sbornik , 47 , 271 harten , a. , 1983 , j. comp .phys . , 49 , 357 hill , j. g. & day , c. a. , 1976 , , 17 , 87 hirsch , c. , 1990 , numerical computation of internal and external flows , vol .2 : computational methods for inviscid and viscous flows ( new york : john wiley ) hockney , r. w. & eastwood , j. w. , 1988 , computer simulation using particles ( philadelphia : iop publishing ) jin , s. & xin , z. , 1995 , comm . pure and applied math ., 48 , 235 landau , l. d. & lifshitz , e. m. , 1987 , fluid mechanics ( 2nd ed . ; oxford : pergamon press ) laney , c. b. , 1998 , computational gas dyanamics ( cambridge : cambridge university press ) lax , p. d. & wendroff , b. , 1960 , comm . pure and applied math . , 10 , 537 lombardi , j. c. , jr . , rasio , f. a. , & shapiro , s. l , 1996 , , 468 , 797 lucy , l. b. , 1977 , , 82 , 1013 pen , u. , 1998 , , 115 , 19 roe , p. l.,1985 , in large - scale computations in fluid mechanics , lectures in applied mathematics , eds . b. e. engquist , s. osher , & r. c. j. somerville ( providence , ri : american mathematical society ) sandquist , e. l. , bolte , m. , & hernquist , l. , 1997 , , 477 , 335 shu , f. h. , 1992 , gas dynamics ( mill valley : university science books ) sills , a. , adams , t. , davies , m. b. , & bate , m. r. , 2002 , , 332 , 49 sills , a. , lombardi , j. c. , jr . , bailyn , c. d. , demarque , p. d. , rasio , f. a. , & shapiro , s. l. , 1997 , , 487 , 290 strang , g. , 1968 , siam j. num ., 5 , 506 van leer , b. , 1974 , j. 
comp .we provide a sample 3-d relaxing tvd code written in fortran 90 .the code is implemented using openmp directives to run in parallel on shared memory machines .the code is fast and memory friendly .the array u(a , i , j , k ) stores the five conserved hydro quantities for each cell ( i , j , k ) in the cartesian cubical lattice with side length nc . for each sweep, we first call the subroutine timestep to determine the appropriate time step dt which satisfies the cfl condition . the updating of u by including the flux in the x direction is performed by the sweepx subroutine .the data array u is divided into 1-d array sections u1d(a , i ) which are operated on by the relaxing subroutine .the independent columns are distributed amongst multiple processors on a shared memory machine by the openmp directives .the relaxing tvd subroutine in this sample code is written for ease of readability and therefore , is not fully optimized . at the beginning of each partial step in the runge - kutta time integration scheme ,the cell - averaged variables defined at grid cell centres are calculated by the averageflux subroutine .the fluxes at cell boundaries for the right - moving and left - moving waves are stored in fr and fl , respectively .we have implemented the minmod , superbee , and van leer flux limiters and the user of the code can easily switch between them .we have provided some initial conditions for the sedov - taylor blast wave test .the reader is encouraged to test the code and compare how the various flux limiters do at resolving strong shocks .this sample code does not implement the modified relaxing tvd scheme described at the end of , which has been found work very well with the van leer flux limiter but unstable with superbee for the 3-d sedov taylor test .we have found that the superbee limiter is often unstable for 3-d fluid simulations .please contact the authors regarding any questions on the implementation of the relaxing tvd algorithm .+ ` program main ` + ` implicit none ` + ` integer , parameter : : nc=64,hc = nc/2 ` + ` real , parameter : : gamma=5./3,cfl=0.9 ` + ` ` + ` real , dimension(5,nc , nc , nc ) : : u ` + + ` integer nsw , stopsim ` + ` real t , tf , dt , e0,rmax ` + + ` t=0 ` + ` dt=0 ` + ` nsw=0 ` + ` stopsim=0 ` + + ` e0=1e5 ` + ` rmax=3*hc/4 ` + ` tf = sqrt((rmax/1.15)**5/e0 ) ` + + ` call sedovtaylor ` + ` do ` + ` call timestep ` + ` call sweepx ` + ` call sweepy ` + ` call sweepz ` + ` call sweepz ` + ` call sweepy ` + ` call sweepx ` + ` if ( stopsim .eq .1 ) exit ` + ` call timestep ` + ` call sweepz ` + ` call sweepx ` + ` call sweepy ` + ` call sweepy ` + ` call sweepx ` + ` call sweepz ` + ` if ( stopsim .eq .1 ) exit ` + ` call timestep ` + ` call sweepy ` + ` call sweepz ` + ` call sweepx ` + ` call sweepx ` + ` call sweepz ` + ` call sweepy ` + ` if ( stopsim .eq .1 ) exit ` + ` enddo ` + ` call outputresults ` + + + ` contains ` + + + ` subroutine sedovtaylor ` + ` implicit none ` + ` integer i , j , k ` + + ` do k=1,nc ` + ` do j=1,nc ` + ` do i=1,nc ` + ` u(1,i , j , k)=1 ` + ` u(2:4,i , j , k)=0 ` + ` u(5,i , j , k)=1e-3 ` + ` enddo ` + ` enddo ` + ` enddo ` + ` u(5,hc , hc , hc)=e0 ` + ` return ` + ` end subroutine sedovtaylor ` + + + ` subroutine outputresults ` + ` implicit none ` + ` integer i , j , k ` + ` real r , x , y , z ` + + ` open(1,file=sedovtaylor.dat,recl=200 ) ` + ` do k=1,nc ` + ` z = k - hc` + ` do j=1,nc ` + ` y = j - hc ` + ` do i=1,nc ` + ` x = i - hc ` + ` r = sqrt(x**2+y**2+z**2 ) ` + ` write(1 , * ) r , u(:,i , j , k ) ` 
+ ` enddo ` + ` enddo ` + ` enddo ` + ` close(1 ) ` + ` return ` + ` end subroutine outputresults ` + + + ` subroutine timestep ` + ` implicit none ` + ` integer i , j , k ` + ` real p , cs , cmax ` + ` real v(3 ) ` + + ` cmax=1e-5 ` + ` ! ` `omp end parallel do ` + ` ` + ` dt = cfl / cmax ` + ` if ( t+2*dt .gt .tf ) then ` + ` dt=(tf - t)/2 ` + ` stopsim=1 ` + ` endif ` + ` t = t+2*dt ` + ` nsw = nsw+1 ` + ` write(*,````(a7,i3,a8,f7.5,a6,f8.5 ) ` '' ` ) nsw = ,nsw, dt = ,dt, t = ,t ` + ` return ` + ` end subroutine timestep ` + + + ` subroutine sweepx ` + ` implicit none ` + ` integer j , k ` + ` real u1d(5,nc ) ` + + ` ! ` `omp end parallel do ` + ` return ` + ` end subroutine sweepx ` + + + ` subroutine sweepy ` + ` implicit none ` + ` integer i , k ` + ` real u1d(5,nc ) ` + ` ` + ` ! ` `omp end parallel do ` + ` return ` + ` end subroutine sweepy ` + + + ` subroutine sweepz ` + ` implicit none ` + ` integer i , j ` + ` real u1d(5,nc ) ` + ` ` + ` ! `end parallel do ` + ` return ` + ` end subroutine sweepz ` + + + ` subroutine relaxing(u ) ` + ` implicit none ` + ` real , dimension(nc ) : : c ` + ` real , dimension(5,nc ) : : u , u1,w , fu , fr , fl , dfl , dfr ` + + ` ! ! do half step using first - order upwind scheme ` + ` call averageflux(u , w , c ) ` + ` fr=(u*spread(c,1,5)+w)/2 ` + ` fl = cshift(u*spread(c,1,5)-w,1,2)/2 ` + ` fu=(fr - fl ) ` + ` u1=u-(fu - cshift(fu,-1,2))*dt/2 ` + + ` ! !do full step using second - order tvd scheme ` + ` call averageflux(u1,w , c ) ` + + ` ! !right - moving waves ` + ` fr=(u1*spread(c,1,5)+w)/2 ` + ` dfl=(fr - cshift(fr,-1,2))/2 ` + ` dfr = cshift(dfl,1,2 ) ` + ` call vanleer(fr , dfl , dfr ) ` + ` ! call minmod(fr , dfl , dfr ) ` + ` !call superbee(fr , dfl , dfr ) ` + + ` ! ! left - moving waves ` + ` fl = cshift(u1*spread(c,1,5)-w,1,2)/2 ` + ` dfl=(cshift(fl,-1,2)-fl)/2 ` + ` dfr = cshift(dfl,1,2 ) ` + ` call vanleer(fl , dfl , dfr ) ` + ` ! call minmod(fl , dfl , dfr ) ` + ` ! call superbee(fl , dfl , dfr ) ` + + ` fu=(fr - fl ) ` + ` u = u-(fu - cshift(fu,-1,2))*dt ` + ` return ` + ` end subroutine relaxing ` + + + ` subroutine averageflux(u , w , c ) ` + ` implicit none ` + ` integer i ` + ` real p , v ` + ` real u(5,nc),w(5,nc),c(nc ) ` + + ` ! !calculate cell - centered fluxes and freezing speed ` + ` do i=1,nc ` + ` v = u(2,i)/u(1,i ) ` + ` p = max((gamma-1)*(u(5,i)-sum(u(2:4,i)**2)/u(1,i)/2),0 . ) ` + ` c(i)=abs(v)+max(sqrt(gamma*p / u(1,i)),1e-5 ) ` + ` w(1,i)=u(1,i)*v ` + ` w(2,i)=(u(2,i)*v+p ) ` + ` w(3,i)=u(3,i)*v ` + ` w(4,i)=u(4,i)*v ` + ` w(5,i)=(u(5,i)+p)*v ` + ` enddo ` + ` return ` + ` end subroutine averageflux ` + + + ` subroutine vanleer(f , a , b ) ` + ` implicit none ` + ` real , dimension(5,nc ) : : f , a , b , c ` + + ` c = a*b ` + ` where ( c .gt .0 ) ` + ` f = f+2*c/(a+b ) ` + ` endwhere ` + ` return ` + ` end subroutine vanleer ` + + + ` subroutine minmod(f , a , b ) ` + ` implicit none ` + ` real , dimension(nc ) : : f , a , b ` + + ` f = f+(sign(1.,a)+sign(1.,b))*min(abs(a),abs(b))/2 . ` + ` return ` + ` end subroutine minmod ` + + + ` subroutine superbee(f , a , b ) ` + ` implicit none ` + ` real , dimension(5,nc ) : : f , a , b ` + + ` where ( abs(a ) .gt .abs(b ) ) ` + ` f = f+(sign(1.,a)+sign(1.,b))*min(abs(a),abs(2*b))/2 . `+ ` elsewhere ` + ` f = f+(sign(1.,a)+sign(1.,b))*min(abs(2*a),abs(b))/2 .` + ` endwhere ` + ` return ` + ` end subroutine superbee ` + + + ` end program main ` +
we present a pedagogical review of some of the methods employed in eulerian computational fluid dynamics ( cfd ) . fluid mechanics is governed by the euler equations , which are conservation laws for mass , momentum , and energy . the standard approach to eulerian cfd is to divide space into finite volumes or cells and store the cell - averaged values of conserved hydro quantities . the integral euler equations are then solved by computing the flux of the mass , momentum , and energy across cell boundaries . we review both first - order and second - order flux assignment schemes . all linear schemes are either dispersive or diffusive . the nonlinear , second - order accurate total variation diminishing ( tvd ) approach provides high resolution capturing of shocks and prevents unphysical oscillations . we review the relaxing tvd scheme , a simple and robust method to solve systems of conservation laws like the euler equations . a 3-d relaxing tvd code is applied to the sedov - taylor blast wave test . the propagation of the blast wave is accurately captured and the shock front is sharply resolved . we apply a 3-d self - gravitating hydro code to simulating the formation of blue straggler stars through stellar mergers and present some numerical results . a sample 3-d relaxing tvd code is provided in the appendix .
in this report , we are extending the basic typing concepts of traditional software component systems with means for specifying possible behavior of components . as with traditional types , like primitive datatypes and their composition, our _ behavioral types _ can be used for eliminating possible sources of errors at development time of software systems .this is analog to classical static type checks performed by a compiler .furthermore , we can use behavioral types for eliminating possible sources of errors at runtime .this is analog to dynamic type checks performed when accessing pointers that reference data with types that can not be statically determined in some classical programming languages .behavioral types also provide additional information about components which can be used for tool based operations that allow the discovery of components and the dynamic reconfiguration of systems .we are focusing on the osgi component framework .the following topics are covered and have been partially published before : * a discussion on behavioral types in general , including different usages . * our eclipse based implementation work on behavioral types that is manifested in the beht framework .the implementation work covers : editors , means for comparing types at development and runtime , a tool connection to resolve incompatibilities , and an aspectj based infrastructure to ensure behavioral type correctness at runtime of a system .furthermore , the implementation comprises various auxiliary operations .* we present some evaluation work based on examples .we present some core concepts on behavioral types to support a development process of component based systems . in our opinion( behavioral ) types should provide a number of core concepts to justify their classification as a _ type system _ : abstraction : : : behavioral types represent aspects of ( models of ) programs , components , or systems , providing an _ abstraction from details concerning the interaction with their environment as well as their internal structure_. type conformance : : : as in model - based development behavioral types are abstractions of components , models , or other entities .type conformance is used to _ correctly relate a component to its behavioral type_. type refinement : : : for supporting stepwise refinement , behavioral types should provide the concept of refinement to _ ensure the correct implementation of abstract specifications by concrete components_. type compatibility : : : for supporting the combination of components , behavioral types should provide the concept of type compatibility to _ help ensure the useful composition of components to systems . 
_type inference : : : furthermore , for the same reason , behavioral types should provide the concept of type inference to _ allow to infer the type of a composed system from the types of its constituents ._ to be useful in a development process , of course , a suitable type conformance notion has to be selected with respect to type refinement : for a pair of models conforming to a pair of types with the second model implementing the first , the second type should be a refinement of the first .furthermore , type refinement , type compatibility and type inference should agree : if a type compatible to a given type is refined by another , the later type should be compatible to the given one ; similarly , if in a composed type one type is replaced by a more refined type , the inferred type of the first composition should be a refinement of the second composition . also , for practical application in a development process , a behavioral type should not only be explicitly provided for a component by the user and checked for conformance , but ( automatically ) constructed for this component .this is especially desirable in a seamless model - based development process .finally , as type checking of expressive behavioral types is in general undecidable , an adequate level of expressiveness is needed making type checking feasible without over - restricting the expressiveness of the behavioral types . using the above concepts, behavioral types can be helpful for different aspects of in the development process of a component based system . here, we present a general motivation for the concept without speaking about our implemented system . [[ correctness - of - implementation ] ] correctness of implementation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + behavioral types can be used to relate specifications and code , e.g. , as products of different stages in a development process , to ensure a certain aspect of behavior is preserved .figure [ fig : scenrefine ] illustrates this for a model based development process with models of different degrees of abstraction state - machines and source code representing the same system . explicitly providing or automatically constructing corresponding conforming types , correctness of refinement can be checked by using these types , ensure the correctness of the implementation with respect to the abstraction implied by the type system .furthermore , refinement checking is also used in structural refinement when implementing a component by a collection of subcomponents .as shown in figure [ fig : compounds ] , the refinement relation is checked between the types of the composed components and the type of the collection of sub - components .the type of the composed component can also be derived from the respective behavioral types of the sub - components .[ [ compositionality - and - interfaces ] ] compositionality and interfaces + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + behavioral types could furthermore be used to check whether components to be composed are compatible with each other , as shown in figure [ fig : scencomprefine ] . additionally , using type coercion i.e., the inference of the least abstract types refining the investigated pairs and being compatible basically incompatible components can be composed .this generally involves an adaption of the corresponding models by providing `` glue code '' similar to automatic type casts for , e.g. 
, integers of different length to make the two components interact with each other .[ [ runtime - verification ] ] runtime verification + + + + + + + + + + + + + + + + + + + + finally , behavioral types can be used for runtime - verifying systems , supplying a monitor being executed in parallel with a system implementation .the monitor corresponding to a behavioral type and checking all behavioral constraints specified via the type observes the system behavior , reporting violations .monitors may be generated from behavioral types automatically .figure [ fig : scenrv ] shows a usage scenario for behavioral types in runtime verification . using an explicitly provided type or inferring it from the specification ( a more abstract model in the model based development process shown ) , this type serves as the basis for generating a runtime monitor which is then deployed with the compiled model as a runtime - verified system . this aspect is not treated in our current implementation . [[ additional - benefits ] ] additional benefits + + + + + + + + + + + + + + + + + + + additional benefits comprise the dynamic reconfiguration of systems based on type information and the discovery of components in a soa like setting .figure [ fig : sscpic ] shows a state - machine speed control as part of an adaptive cruise control system in a car , using the graphical model of the eclipse - based papyrus uml tool .it is taken from .this state - machine provides an abstract component specification created during requirements specification in the development process .it specifies that this component shall be able to perform acceleration and braking .it can be compared with other types for uml diagrams that specify some aspects of the behavior of `` speed control '' .this comparison can be used during the abstract and detailed design and not covered in this paper in a later implementation , supporting a stepwise refinement .for example in the next phase the active state can be specified in a more detailed way supporting several modes as shown in figure [ fig : ascpic ] .the standard , eco and sport mode may show different acceleration and braking behavior thereby supporting , e.g. , more fuel - efficient driving in the eco mode .however , when abstracting from possible transition guards ( mode switch ) , other behavioral functionality and other events these may limit the order of possible executions the original behavior specification still applies : each mode supports braking and acceleration .we can now extract a behavioral type of this more refined model and compare it with the first one . on this abstraction level regarding only brake and acceleration guards both specifications have the same set of execution traces . as an ultimate goal, the development environment should support the extraction and checking of behavior automatically and provide a means of informing the developer about any behavioral incompatibilities , i.e. 
, understandable behavioral type errors .we present an overview on osgi following our description in and refer to our semantics report for our approach to cover the semantics of osgi ( parts of this has also been published in ) .the osgi framework is a component and service platform for java .it allows the aggregation of java packages and classes into bundles ( cf .figure [ fig : osgiexample ] ) and comes with additional deployment information .the deployment information triggers the registration of services for the osgi framework .bundles provide means for dynamically configuring services , their dependencies and usages .osgi bundles are used as the basis for eclipse plugins but also for embedded applications including solutions for the automotive domain , home automation and industrial automation .bundles can be installed and uninstalled during the runtime .for example , they can be replaced by newer versions .hence , possible interactions between bundles can in general not be determined statically .bundles are deployed as .jar files containing extra osgi information .this extra information is stored in a special file inside the .jar file .bundles generally contain a class implementing an osgi interface that contains code for managing the bundle , e.g. , code that is executed upon activation and stopping of the bundle . upon activation, a bundle can register its services to the osgi framework and make it available for use by other bundles .services are implemented in java .the bundle may itself start to use existing services .services can be found using dictionary - like mechanisms provided by the osgi framework .typically one can search for a service which is provided using an object with a specified java interface . in the context of this report, we use the term osgi component as a subordinate concept for bundles , objects and services provided by bundles .the osgi standard only specifies the framework including the syntactical format specifying what bundles should contain .different implementations exist for different application domains like equinox for eclipse , apache felix or knopflerfish .if bundles do not depend on implementation specific features , osgi bundles can run on different implementations of the osgi framework .section [ sec : beht ] discusses and presents a general work on behavioral types .the use of behavioral types an the development of osgi components is described in section [ sec : osgidevproc ] .beht , our tool is discussed in section [ sec : tool ] together with related implementation questions .an evaluation is described in section [ sec : eval ] .related work is discussed in section [ sec : rw ] and a conclusion featured in section [ sec : concl ] .here , we present and discuss a general implementation independent concept of behavioral types .our behavioral types essentially support finite automata and regular expressions as the main specification format .finite automata and regular expressions can easily be transformed in one another .finite automata are used for specifying expected incoming , potential outgoing method calls , the creation and deletion of components during a time span and other events that may occur in the lifetime of a system .a component s behavior can be specified by one or multiple automata each one describing a behavioral aspect .formally , we have an alphabet of labels , a set of locations , an initial location and a set of transition edges where each transition is a tuple with and .these are aggregated into a tuple to form a 
behavioral specification : this view abstracts from the specifications given in section [ sec : osgi ] .our intention is to define interaction protocols or some aspects of them like the expected order of incoming and outgoing method calls for a component .specifications for different components are independent of each other as long as there is no method call ( e.g. , indicated by the same label name ) in the specifications .[ [ example - two - components - interacting ] ] example : two components interacting + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + specifications can be used for different behavioral aspects .figure [ fig : oldnewprot ] shows two excerpts of automata for outgoing and expected method calls from two different component specifications : + and + here , the first component can do two different method calls in its initial state : newprtcl , oldprtcl .the second component expects one method call newprtcl in its initial state . in this caseboth components may interact with each other , if both components use the newprtcl .[ [ interaction - protocols - for - bundles - and - objects ] ] interaction protocols for bundles and objects + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + objects and bundles can register a service protocol describing , e.g. , incoming method calls that they expect . this can be done by using : * _ regular expressions ._ thereby bundles and objects can indicate expected events .events can be incoming or outgoing method calls .thus , the regular expression specifies their order .regular expressions are terms over an alphabet of events using the for alternatives , the for concatenation and the as the star operator . *_ finite automata ._ regular expression can be described by an equivalent finite automaton , too .we define our finite automata as a set of locations , an initial location and a transition relation comprising a predecessor and a successor location labeled with an event . while in our applications the event is typically a method call or a set of method calls , other possibilities like timing events , or creation and deletion of objects and bundles are also possible . for examplethe protocol given in figure [ fig : osgiservicefile2 ] can be described as a regular expression as follows : ( ( inc : lock)(inc : read inc : write)(inc : unlock)) the expression describes a sequence that can be repeated .it starts with a lock and ends with an unlock . between lock andunlock an arbitrary number of read and write operations can occur .the inc denotes expected incoming method call . the actions from figure [ fig : osgiservicefile2 ]describe outgoing method calls .this can be written using our regular expressions as : ( out : lockf1)(out : lockf2) + ( ( out : readf1 ) ( out : readf2 ) ( out : writef1 ) ( out : writef2)) + ( out : unlockf2)(out : unlockf1 ) and ( out : lockf2)(out: lockf1) + ( ( out : readf1 ) ( out : readf2 ) ( out : writef1 ) ( out : writef2)) + ( out : unlockf1)(out : unlockf2 ) one can now use these protocol specifications , e.g. , for checking : * _ compatibility _ this addresses the question if the operations that one object expects to be called are called by another object . furthermore , the correct order of calls is of interest . *_ additional properties _ properties that relate distinct semantical aspects of bundles and objects are of interest . in the given example, the question arises whether a deadlock can occur or not . 
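to illustrate how such a protocol can be checked mechanically , the fragment below encodes the lock / read / write / unlock specification as a small transition table and replays an observed sequence of events against it . class , enumeration and method names are our own illustration and are not part of the framework described in this report .

```java
// ProtocolCheck.java -- illustrative encoding of the ((inc:lock)(inc:read|inc:write)*(inc:unlock))* protocol.
import java.util.Arrays;
import java.util.List;

public class ProtocolCheck {
    enum Loc { UNLOCKED, LOCKED, ERROR }

    // transition function of the behavioural type automaton
    static Loc next(Loc s, String event) {
        switch (s) {
            case UNLOCKED:
                return event.equals("inc:lock") ? Loc.LOCKED : Loc.ERROR;
            case LOCKED:
                if (event.equals("inc:read") || event.equals("inc:write")) return Loc.LOCKED;
                if (event.equals("inc:unlock")) return Loc.UNLOCKED;
                return Loc.ERROR;
            default:
                return Loc.ERROR;
        }
    }

    // returns true if the observed trace conforms to the protocol
    static boolean conforms(List<String> trace) {
        Loc s = Loc.UNLOCKED;
        for (String e : trace) {
            s = next(s, e);
            if (s == Loc.ERROR) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(conforms(Arrays.asList("inc:lock", "inc:read", "inc:write", "inc:unlock"))); // true
        System.out.println(conforms(Arrays.asList("inc:read", "inc:unlock")));                          // false
    }
}
```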
in order to perform these checks andanalysis one has to match elements of a specification for one component with elements of a specification from another component . in the given examplethe protocol comparison has to deal with two instances of a file component and has for example to relate the ( out : lockf1 ) and ( out : lockf2 ) with instances of ( inc : lock ) . [[ parameterized - specifications ] ] parameterized specifications + + + + + + + + + + + + + + + + + + + + + + + + + + + + for facilitating the relation of specifications we define parameterized specifications .these comprise : * _ parameterized regular expressions . _ here , each event used in a regular expression can be augmented with a parameter . for our examplefile component specification this results in the following expression , parameterized with .+ ( ( inc : lock)(inc : read inc : write)(inc : unlock)) * _ parameterized automata . _ similar to regular expressions , locations and events in transitions of automata can be augmented with parameters .instantiation is done , by substituting concrete values for the parameter .instantiation of parameters is dependent on concrete application scenarios .[ [ example - instantiations - of - parameterized - specifications ] ] example instantiations of parameterized specifications + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we regard two kinds of instantiations as particularly useful for describing a protocol of expected incoming method calls .consider the refined version of figure [ fig : osgiservicefile2 ] in figure [ fig : instparamspec ] for locking and unlocking a resource .the lock state as well as the method calls that lead to the lock state are parameterized . *a first instantiation is shown in figure [ fig : instparam ] . here, the parameter is instantiated by instances .each of them gets its own lock state and its own method call that lead to this lock state .+ * in case only one lock state is wanted , one can still deal with different parameterized method calls and use the instantiation shown in figure [ fig : instparamex ] +a potential major advantage by using behavioral types is the support of a seamless integration of behavioral specification throughout the development phase and the life cycle of a system .our behavioral types can be used for different purposes ( we proposed them partially in ) at development and runtime .the main idea of using behavioral types at development time is to derive them from requirements as shown in figure [ fig : devchain ] and use them for * refinement checking of different forms of specification for the same entity that are supposed to have some semantical meaning in common .for example , the abstract specification , source code and compiled code of the same component represent different abstraction levels and should fulfill the same behavioral type .checking this could be done by using static analysis at development time . 
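one possible , purely illustrative way to realize the two instantiation styles is to substitute concrete instance names for the parameter in every event label , producing either one automaton per instance or one automaton whose labels carry the instance name . the helper below only shows the label substitution step and makes no claim about the concrete file format or api used by the framework .

```java
// Instantiate.java -- substitute concrete instance names for the parameter x in event labels.
import java.util.ArrayList;
import java.util.List;

public class Instantiate {
    // e.g. template "inc:lock(x)" with instances ["f1","f2"] yields ["inc:lock(f1)", "inc:lock(f2)"]
    static List<String> instantiate(String template, List<String> instances) {
        List<String> labels = new ArrayList<>();
        for (String inst : instances) {
            labels.add(template.replace("(x)", "(" + inst + ")"));
        }
        return labels;
    }

    public static void main(String[] args) {
        System.out.println(instantiate("inc:lock(x)", List.of("f1", "f2")));
        System.out.println(instantiate("inc:unlock(x)", List.of("f1", "f2")));
    }
}
```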
at the end, a developed osgi bundle is deployed including the behavioral type files .these can now be used for additional ( dynamic ) operations in the running system .a feature that can be performed at compile and at runtime is * the compatibility checking for the composition of software components and the generation of glue code for behavioral type coercion to overcome possible incompatibilities at runtime of a system this feature can help dynamic reconfiguration .figure [ fig : behtrt ] shows two operations which can be carried out at runtime of a system : * the registration and discovery of components using the osgi framework , * the compatibility , e.g. , deadlock checking of bundle interaction protocols .furthermore , another operation that could be invoked at runtime is * the adaptation of a component to act according to a required protocol .this can be a solution for dynamic type coercion .behavioral runtime monitors ( figure [ fig : behtrvmon ] ) as featured in this paper comprise * the generation of the behavioral runtime monitor and its connection using aspects at development time and * the actual monitoring at runtime .this section presents our implementation work on beht : the eclipse based framework for behavioral types of osgi components .some parts are already published in .our behavioral types provide an abstract description of a components behavior and thus provide a way of formalizing specifications associated with the component .they can be used as a basis for checking the compatibility of components for composing components into new ones , and interaction of different components and for providing ways to make components compatible using coercion .type conformance can be enforced at compile time ( e.g. , like primitive datatypes int and float in a traditional typing system ) if decidable and feasible or at runtime of a system e.g. , like whether a pointer is assigned to an object of a desired type at runtime in a traditional typing system . in our work behavioral typesare realized as files that contain a description of ( parts of the ) behavior of an osgi component .typically , there should be one file per bundle , or class definition .but different aspects of behavior may also be realized using different files . in eclipsethe files are associated with an osgi bundle by putting them in the same project folder in the eclipse workspace . here , behavioral types are formally defined using the following ingredients .[ [ behavioral - type - automaton ] ] behavioral type automaton + + + + + + + + + + + + + + + + + + + + + + + + + a behavioral type automaton is a finite automaton represented as a tuple comprising an alphabet of labels , a set of locations , an initial location and a set of transition edges where each transition is a tuple with and .a consistency condition on our types is that all appear in some transition in . 
in this paper , since we are interested in method calls , is the set of method names of components .the definition presented here can be used for specifying the behavior of single objects , all objects from a classes , bundles and their interactions .it can be used for monitoring incoming method calls , outgoing method calls , or both .[ [ maximal - execution - time - table ] ] maximal execution time table + + + + + + + + + + + + + + + + + + + + + + + + + + + + in addition to the protocol defined by the behavioral type automaton , we define the maximal execution time of methods as a mapping from the set of method names to their maximal execution time in milliseconds . the specification of a maximal execution time is optional , thus , the entry indicates that no maximal execution time is set .[ [ behavioral - types ] ] behavioral types + + + + + + + + + + + + + + + + a behavioral type in our framework may comprise a behavioral type automaton and a maximal execution table .in addition to this , it may comprise parameterized specifications , ltl formula , regular expressions and information on what is specified . here, indications on the nature of events and textual descriptions are available .we have implemented a behavioral runtime monitor generator as described in section [ sec : osgidevproc ] for beht following the outline of figure [ fig : behtrvmon ] .regardless of what we intend to monitor , the monitor generation from a specification is the same .it is done automatically from a behavioral type file and generates a single java file that defines a single monitor class .figure [ fig : genmonex ] shows a generated monitor .monitors are generated as classes bearing a name derived from the original behavioral type .they comprise a map maxtimes that maps method names to their maximal execution time in milliseconds .this entry is optional .if present , this map is initialized by the constructor public clientinstance_out_realistic_simple_mon ( ) in the example of the monitor with the values specified for methods in the behavioral type file .generated from an automaton from the behavioral type our behavioral runtime monitors comprise a static enumeration type with the location names of the automaton . in the automaton ,the locations locs0 , locs1 are present . 
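the following java sketch indicates the general shape such a generated monitor class might take , combining the location enumeration and the maxtimes map described above with the state transition function discussed in the next paragraph ; all identifiers and the example entries are simplified stand-ins for the actual generated names and values .

```java
// Illustrative shape of a generated behavioural runtime monitor (names simplified).
import java.util.HashMap;
import java.util.Map;

public class ClientProtocolMonitor {
    // locations of the behavioural type automaton
    public enum Location { LOCS0, LOCS1 }

    // optional maximal execution times per method, in milliseconds
    protected final Map<String, Long> maxTimes = new HashMap<>();
    protected Location state = Location.LOCS0;   // initial location

    public ClientProtocolMonitor() {
        maxTimes.put("book", 200L);              // illustrative entry; real values come from the type file
    }

    public Long maxTime(String method) {
        return maxTimes.get(method);             // null means: no bound specified
    }

    // state transition function generated from the transition relation;
    // returns false if the observed call violates the specified protocol
    public synchronized boolean nextState(String method) {
        switch (state) {
            case LOCS0:
                if (method.equals("book")) { state = Location.LOCS1; return true; }
                return false;
            case LOCS1:
                if (method.equals("pay"))  { state = Location.LOCS0; return true; }
                return false;
        }
        return false;
    }
}
```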
using this type a state transition function generated from the transition relationis generated .the state transition function takes a string encoding a method name event name and updates a state field protected location state of the method .this field is initialized on object creation with the name of the initial state : locs0 in the example .the generated monitors are connected to the component that shall be observed using aspectj aspects .aspectj is an extension of java that features aspect oriented programming .aspects are specified in separate files and feature pointcuts that allow the specification of locations where java code specified in the aspect shall be added to existing java code .this weaving of aspect code into existing java code is done on bytecode level .monitors are created and called from aspects .all extra code needed to integrate the monitors is defined in the aspectj files or in libraries accessed through the aspectj files .there is no need to touch the source code of a component .this independence of source code and specification is a design goal of our framework .we distinguish different kinds of monitor deployment .each kind requires its own aspect and especially its adaptation .[ [ singleton - monitors ] ] singleton monitors + + + + + + + + + + + + + + + + + + in some cases it is sufficient to use a singleton instance of a monitor .this is the case when monitoring all the method calls that occur in a bundle , within all objects of a class , or within a singleton object . for monitoring method call orders , we use a before pointcut in aspectj .figure [ fig : exaspect ] shows an example aspect : here , before the calls to methods specified in the execution pattern after the `` : '' in the pointcut of all objects of class middlewareproc an update on the state transition function the com.nextstate is inserted .we extract the name of the called method using reflection and a helper method ajmonhelpers.getmethodname and pass it to the state transition function .in addition to updating the state field in the monitor we get a boolean value indicating whether the monitored property is still fulfilled . in case of a deviation the behavioraltypeviolationexception a runtime exception is thrown .the implementation of the middlewareproc class may or may not catch this exception and react to it .[ [ multiple - monitor - instances ] ] multiple monitor instances + + + + + + + + + + + + + + + + + + + + + + + + + + in same cases we want to monitor each object of a class with an independent monitor . here, we create on call of the object s constructor an individual monitor for the object .it is added to a ( hash)map ( object monitor ) . since the aspectj pointcuts are defined with respect to the static control flow information specified in the source code of a class , on each call of a method belonging to the class to be monitored , we use the same code in each object and chose the monitor for the particular object by looking it up from the map and advance the respective monitor state . [ [ monitoring - of - time ] ] monitoring of time + + + + + + + + + + + + + + + + + + monitoring time is done using java timers within the java code associated with the pointcuts . on call of a methodwe create a timer that is scheduled to throw an exception after the specified maximal execution time . 
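building on the monitor sketch above , here is a condensed aspectj-style fragment of a connecting aspect that advances the monitor before each call and , anticipating the next paragraph , cancels a watchdog timer in the corresponding after advice . aspectj is the java extension described in the text ; the package , class and helper names used here are placeholders rather than the actual generated artefacts .

```java
// MonitorAspect.aj -- illustrative AspectJ glue between a component and its monitor.
import java.util.Timer;
import java.util.TimerTask;

public aspect MonitorAspect {
    private final ClientProtocolMonitor mon = new ClientProtocolMonitor();
    private final ThreadLocal<Timer> watchdog = new ThreadLocal<>();

    pointcut monitored(): execution(* middleware.MiddlewareProc.*(..));

    before(): monitored() {
        String name = thisJoinPoint.getSignature().getName();
        // advance the protocol monitor; signal a violation via a runtime exception
        if (!mon.nextState(name)) {
            throw new RuntimeException("behavioural type violation at " + name);
        }
        // start a watchdog if a maximal execution time is specified for this method
        Long max = mon.maxTime(name);
        if (max != null) {
            Timer t = new Timer(true);
            t.schedule(new TimerTask() {
                public void run() { System.err.println("maximal execution time exceeded: " + name); }
            }, max);
            watchdog.set(t);
        }
    }

    after(): monitored() {
        Timer t = watchdog.get();
        if (t != null) { t.cancel(); watchdog.remove(); }  // method finished in time
    }
}
```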
using the after pointcut, the timer is canceled if the method s execution finishes on time and thus , no exception is thrown in this case .the adaptation of an aspect for monitoring a particular component is simple .one has to take the appropriate aspectj .aj file and adapt it , by inserting the names of the classes and packages that shall be monitored and the correct monitor names .weaving of the aspects is done automatically on java bytecode level and no additional configuration needs to be done .we have developed and implemented different operations for handling and comparing behavioral types , for deciding compatibility and for deadlock freedom .simple comparison for equality of types and comparison for refinement between two automata based specifications involves the following steps . * a basis for the comparison of two types is the establishment of a set of semantical artifacts ( e.g., method calls ) that shall be considered .the default is to use the union of all semantical artifacts that are used in the two types .comparison for refinement is achieved by eliminating certain semantical artifacts from this set . for consistencythis also requires eliminating associated transitions from the types or , depending on the desired semantics , replacing an edge with an empty or label .* it is convenient to complete specifications for further comparison : specification writer may only have specified method calls or other semantical artifacts that trigger a state change . here , we automatically add an error location .we collect possible labels and for locations that do not have an edge for a label leading to another location indicating a possible semantical artifact , we add edges with the missing label to the error location . * in case of specifications which have been completed and that have no locations with two outgoing edges with the same labels , we perform a minimization of automata based specifications .this way , we merge locations and get rid of unnecessary complexity automatically . *normalization of automata based specifications .this , involves the ordering of edges and in some cases locations with respect to the lexicographic order of their labels / location names .* checking for equality involves the checking of equality of the labels on edges .optionally , one can also consider the equality of location names of an automaton . location names may imply some semantics but in our standard settings they only serve as ids .when location names serve only as ids , we construct a mapping between location names of the two automata involved in the comparison operation .these operations have been implemented in java .they do not need additional tools or non - standard plugins .in addition to the operations described in section [ sec : deccomp ] we have adapted a sat and game - based tool vissbip presented in to serve as a compatibility and deadlock checker for our behavioral types for osgi .our framework uses vissbip to support the checking of the following properties : * deadlocks checking : deadlocks resulting from potential sequences of method calls can be detected . *compatibility : a component anticipating a certain behavior of incoming method calls matches potential behavior of outgoing method calls by other components . vissbip uses a simplified version of the bip semantics . a system comprises concurrent automata with labeled edges .the automata synchronize with each other by performing edges with the same labels in parallel . 
otherwise , the default case is that automata do not synchronize with each other . for comparing method call based behavioral specifications we use vissbip on specifications that comprise expected incoming and outgoing method calls of components . in osgi synchronization between components happens only when one component calls a method of the other component as indicated in the behavioral specification and the osgi semantics . on the vissbip sidethis corresponds to same labels in the automata that represent the behavior .in addition to the label compatibility checking , vissbip is able to perform the introduction of priorities . one way of runtime adaption is the reaction to potential deadlocks or incompatibilities .recall figure [ fig : oldnewprot ] : it shows behavioral specifications of two components which intend to communicate with each other .possible outgoing method calls of one component and expected incoming method calls of the other component are shown .it can be seen that the first component is able to communicate using two different protocols : one starts by calling an initialization method newprtcl , the other one starts by calling an initialization method oldprtcl .the other component expects the newprtcl call .when we give these two specifications to vissbip , it will return a list of priorities where the newprtcl edge is favored over the oldprtcl edge in the first specification . in a javaimplementation the first component can use this to dynamically decide at runtime which protocol to use .* first , the component loads its own behavioral specification and the specification of the expected method calls of the second component .technically , we support loading files and the registration of models as properties / attributes of bundles as provided by the osgi framework . *next , we invoke vissbip or another checking routine . passing the behavioral specifications as parameters . *the checking routine gives us a list of priorities .in the java code we have a switch statement as a starting point for handling the different protocols .we check the priorities and go to the case for the appropriate protocol .thus , in addition to deadlock detection , we can use behavioral specifications for coping with different versions of components and desired interacting protocols .a central feature of our behavioral descriptions for osgi components is registering them to a central osgi instance . in order to inform other components of the existence of a bundle with behavioral offers and needs ,we register its behavioral properties using the osgi service registry belonging to a bundlecontext which is accessible for all bundles in the osgi system : .... registerservice(java.lang.string [ ] clazzes , java.lang.object service , java.util.dictionary<java.lang.string , ?> properties ) .... here , we register a collection of behavioral objects as properties for a service representing a bundle under a string based key . 
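a compact sketch of how a bundle might attach its behavioural models when registering a service is given below . registerservice itself is the standard osgi api quoted above , and the key `` behavior '' is the one used by the framework as described next ; the model-loading helper and the service classes are illustrative placeholders .

```java
// Activator.java -- illustrative registration of behavioural models as OSGi service properties.
import java.util.Dictionary;
import java.util.Hashtable;
import java.util.List;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    public void start(BundleContext context) {
        // load the behavioural models shipped with the bundle (loading code not shown, placeholder helper)
        List<Object> behaviouralModels = BehaviouralTypes.loadFromBundle(context.getBundle());

        Dictionary<String, Object> props = new Hashtable<>();
        props.put("behavior", behaviouralModels);   // key under which the models are published

        // register the bundle's service together with its behavioural description
        context.registerService(new String[] { BookingService.class.getName() },
                                new BookingServiceImpl(), props);
    }

    public void stop(BundleContext context) { }
}
```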
in our framework, we register a collection of behavioral models as `` behavior '' .the behavioral models are loaded from xml files that are integrated into the bundle .the behavioral models come with meta information which identify the parts of the behavior of a bundle which they describe .the service itself is represented as an object .additional interface information is passed using the clazzes argument .our evaluation features a booking system ( ) as an example .it is evaluated with respect to different aspects of behavioral types including behavioral runtime monitoring .we present the use of behavioral types to highlight some features and usages of our work on an example : a flight booking system .figure [ fig : fbcomps ] shows the main ingredients of our flight booking system .clients are served by middleware processes which are created and managed by a coordination process .middleware processes use concurrently a flight database and a payment system .the described system is an example inspired by realistic systems where the middleware is implemented using java / osgi .in addition to the middleware components we describe databases and parts of the frontend using our behavioral types to make checks of these parts possible .the following means of behavioral interaction can be distinguished : * * component calls between methods / communication protocol * in our flight booking system , a client can call a coordination process and middleware processes .middleware processes can call methods providing access to the flight database and the payment subsystem .the method calls need to respect a distinct protocol which can be encoded using our behavioral types . * * creation and deletion of new components * the coordination process creates and removes middleware process such that there is one process per client . providing support for analysis of such dynamic aspects is a long term goal for our behavioral types but not in the scope of this work . * * concurrent access to shared resources * middleware processes perform reservations , cancellations , rebookings , seat reservations and related operations on the flight database .these operations do require the locking of parts of the data while an operation is performed .for example , during a seat reservation a certain amount of the available seats in an aircraft is locked so that a customer can chose one without having to fear that another customer will chose the same seat at the very same time . in the current statewe are able to provide some behavioral types support here .[ [ example - specification - of - outgoing - method - calls - of - a - middleware - process ] ] example : specification of outgoing method calls of a middleware process + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + specifications of possible expected incoming and potential outgoing method calls give information about a communication protocol that is to be preserved .typically different interaction sequences are possible , especially since we are dealing with abstractions of behavior . 
in the booking system ,a middleware process communicates with a flightdatabase ( db ) and the payment system ( pay ) .the expected order of method calls for a flight booking to these systems is shown in figure [ fig : commprot1 ] .the figure shows only an excerpt of the possible states and transitions .in addition to this , the initial state allows the start of a seat reservation process and a cancellation process .moreover , figure [ fig : commprot1 ] shows only the state changing method calls of the behavioral specification of the booking process .our real behavioral specification completely lists all possible method calls in each state .this way , we can further analyze compatibility issues for example with database systems that do not support all possible method calls of a middleware process . in comparison to the outgoing methodcalls of a middleware process , the incoming method call specification is much simpler : a constructor call is performed by the coordination process upon initialization .after that , the communication with the client is done using a webserver interface comprising method calls that send raw request data to the middleware process and return raw response data that trigger , e.g. , displaying selected flights by the client where no states in the communication process can be distinguished .[ [ example - specification - of - database - elements ] ] example : specification of database elements + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + access to our database is done using method calls to a database process and is formalized using our automata based specification formalisms .the method calls result in locking and unlocking database elements .seat reservation in a flight requires that a certain partition of the available seats is blocked during the selection process so that a client can make a choice .figure [ fig : seatres ] shows our behavioral model of seat reservation for a single flight .different loads are distinguished : low means that many seats are still available , while high means that only a few seats are available .the full state indicates that no additional seat reservations can be made , only cancellations are possible .the model is an abstraction of the reality since instead of treating each seat potentially hundreds of available seats independently we only distinguish their partitioning into four equivalence classes : low , medium , high and full .[ [ example - database - elements - and - deadlocks ] ] example : database elements and deadlocks + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + access to the flight database can result in deadlocks . the model from figure [ fig : seatres ] can serve as a basis for deadlock analysis .consider the scenario shown in figure [ fig : concseatres ] : for each flight a different instance of the seat reservation model exists .given three airports a , b and c : suppose two people person 1 and person 2 want to fly from a to c via b. seats for two flights need to be reserved : from a to b and from b to c. it is not desirable to reserve a seat from b to c if no seat is available for the flight to a to b. otherwise , it might not be desirable to fly from a to b if no seat is available for the flight from b to c. 
during the seat reservation a deadlock may occur : if person 1 reserves the last seat for the a to b flight before doing reservations for the b to c flight and person 2 reserves the last seat for the b to c flight before a seat reservation for the a to b flight a deadlock may occur , which may result in the cancellation of both journeys although one person could have taken the journey .if it is known before to the seat reservation system that person 1 and person 2 will fly from a to c which is a reasonable assumption given the fact that they have entered their desired start and end destination into the system we are able to detect such deadlocks. they can occur if both behavioral models of the seat reservation system are already in the high state given that no other participants are doing reservations at this time we may also take compensating actions .different scenarios for the use and deployment of behavioral types have been tested by us .one example scenario is the flight booking system .osgi components and their interactions are shown .the entire system could be deployed as an osgi based middleware that offers its services to the external world using webservices .clients are represented as proxy components in the system and served by middleware processes which are created and managed by a coordination process .middleware processes use concurrently a flight database and a payment system which are represented by proxy osgi components .we have investigated the communication structure between the components and investigated deployment of monitors .this comprises the following cases : * the use of multiple monitors running in parallel and being created at runtime for different objects which are created dynamically . in the example system this is the case for the middleware processes , where processes are created as separate objects on demand and are monitored independently of each other . * the monitoring of all objects of a single class using a single monitor and the monitoring of singleton objects and the monitoring of bundle behaviorthis is , e.g. , the case in the payment subsystem .different aspects as described in section [ sec : aspects ] were adapted for this .monitors together with an implementation of the system that realized the communication between the components was deployed using the osgi equinox implementation .furthermore , we have investigated the monitoring of maximal execution time of methods . 
in the example systemthis is the case in the payment subsystem and access to the flight database .we did not find any major problems in our approach .[ [ behavioral - runtime - monitors - and - osgi ] ] behavioral runtime monitors and osgi + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + as described in section [ sec : beht ] our behavioral type efforts are particularly aimed towards osgi .while features that are not subject to the contribution of this paper like component discovery using behavioral types are only feasible in an osgi like component framework , we did find no principal issues that prevent the use of our types for behavioral runtime monitoring in other java contexts .technically , the osgi framework gives us with bundles yet another structuring layer for software components , which we use in the specification of our types and we believe has a good granularity for the communication protocol specifications that we primarily regard .the generation of entire classes for each monitor instead of integrating the complete monitor inside of aspects like in is also justified on this granularity .the editors and generation mechanism depend on eclipse which is realized on - top of osgi .the behavioral monitor connection using aspects depends on aspectj , some technical issues are mentioned above .interface automata are one form of behavioral types . like in this work ,component descriptions are based on automata .the focus is on communication protocols between components which is one aspect that we also address in this paper .while the used formalism for expressing behavior in interface automata is more powerful ( timed automata vs. automata vs. timing annotation per method ) , interface automata do not target the main focus of this paper : checking the behavior at runtime of a component by using some form of monitoring .they are especially aimed at compatibility checks of different components interacting at compile time of a system .the term behavioral types is used in the ptolemy framework . here , the focus is on real - time systems .specification and contract languages for component based systems have been studied in the context of web services . a process algebra like language and deductive techniquesare studied in .another process algebra based contract language for web services is studied in .emphasize in the formalism is put on compliance , a correctness guaranty for properties like deadlock and livelock freedom .another algebraic approach to service composition is featured in .jml provides assertions , pre- and postconditions for java programs .it can be used to specify aspects of behavior for java methods .assertion like behavioral specifications have also been studied in the context of access permissions .behavioral types as means for behavioral checks at runtime for component based systems have been investigated in . in this work ,the focus is rather put on the definition of a suitable formal representation to express types and investigate their methodical application in the context of a model - based development process . a language for behavioral specification of components , in particular of object oriented systems but not osgi , is introduced in .compared to the requirement - based descriptions proposed in our paper , the specifications used in are still relatively close to an implementation .recent work regarding refinement of automata based specifications is , e.g. 
, studied in .the runtime verification community has developed frameworks which can be used for similar purpose as our behavioral type based monitors .the mop framework allows the integration of specifications into java source code files and generates aspectj aspects which encapsulate monitors .compared to this work , the intended goals are different . while we keep the specification and implementation part separate , in order to be able to use the specification for different purposes at development , compile and runtime , a close integration of specification and codeis often desired and realized in the runtime verification frameworks .a framework taking advantage of the trade - off between checking specifications at runtime and at development time has been studied in .a framework that generates independent java monitors leaving the instrumentation aspect to the implementation is described in .other topics explored in this context comprise , e.g. , the efficiency and expressiveness of monitoring but are less focused on software engineering aspects compared to this paper .monitoring of performance and availability attributes of osgi systems has been studied in . here , a focus is on the dynamic reconfiguration ability of osgi .another work using the .net framework for runtime monitor integration is described in .runtime monitors for interface specifications of web - service in the context of a concrete e - commerce service have been studied in .behavioral conformance of web - services and corresponding runtime verification has also been investigated in .runtime monitoring for web - services where runtime monitors are derived from uml diagrams is studied in .runtime enforcement of safety properties was initiated with security automata that are able to halt the underlying program upon a deviation from the expected behaviors . in our behavioral types framework , the enforcement of specifications is in parts left to the system developer , who may or may not take potential java exceptions resulting from behavioral type violations into account .our behavioral types represent an abstract view on the semantics of osgi .we have summarized our work on the osgi semantics in a report .other work does describe osgi and its semantics only at a very high level . a specification based on process algebrasis featured in .means for ensuring osgi compatibility of bundles realized by using an advanced versioning system for osgi bundles based on their type information is studied in .some investigations on the relation between osgi and some more formal component models have been done in .aspects on formal security models for osgi have been studied in .we presented our beht framework for behavioral types for osgi systems , a development process for osgi applications and some motivation and evaluation .so far , we are concentrating on eclipse / osgi systems .other application areas for the future comprise 1 ) work towards behavioral types for distributed software services 2 ) work towards real - time embedded systems .this might require leaving the java / osgi setting , since these applications typically involve c code which communicates directly with if at all an operating system .there is , however , work on extensions for real - time applications of osgi using real - time java ( e.g. , ) .additional specification formalisms and the integration of new checking techniques are another challenge .j. c. amrico , w. rudametkin , and d. donsez . 
managing the dynamism of the osgi service platform in real - time java applications .proceedings of the 27th annual acm symposium on applied computing , acm , 2012 .h. barringer , y. falcone , k. havelund , g. reger , d. rydeheard .quantified event automata : towards expressive and efficient runtime monitors .18th international ssmposium on formal methods , vol .7436 of lncs , springer - verlag , 2012 .( fm12 ) j. o. blech . towards a framework for behavioral specifications of osgi components .10th international workshop on formal engineering approaches to software components and architectures .electronic proceedings in theoretical computer science , 2013 .j. o. blech , y. falcone , h. rue , b. schtz .behavioral specification based runtime monitors for osgi services . leveraging applications of formal methods , verification and validation ( isola ) , vol .7609 of lncs , springer - verlag , 2012 .j. o. blech and b. schtz .towards a formal foundation of behavioral types for uml state - machines .5th international workshop uml and formal methods .paris , france , acm sigsoft software engineering notes , august 2012 .n. catao and i ahmed .lightweight verification of a multi - task threaded server : a case study with the plural tool .proceeding of formal methods for industrial critical systems ( fmics ) , vol 6959 of lncs , springer , 2011 .p. chalin , j.r .kiniry , g.t . leavens , e. poll . beyond assertions : advanced specification and verification with jml and esc / java2 .formal methods for components and objects , fmco , vol .4111 of lncs , springer 2005 . c. cheng , h. rue , a. knoll , c. buckl .synthesis of fault - tolerant embedded systems using games : from theory to practice .verification , model checking , and abstract interpretation , vol 6538 of lncs , springer 2011 .o. gadyatskaya , f. massacci , a. philippov . security - by - contract for the osgi platform .information security and privacy conference , ifip advances in information and communication technology , vol .376 , 2012 .y. gan , m. chechik , s. nejati , j. bennett , b. ofarrell , j. waterhouse .runtime monitoring of web service conversations .proceedings of the 2007 conference of the center for advanced studies on collaborative research , acm 2007 . e. b. johnsen and r. hhnle and j. schfer and rudolf schlatte and martin steffen .abs : a core language for abstract behavioral specification .post conf .proceedings 9th intl .symposium on formal methods for components and objects 2010 .springer - verlag 2010 .p. oneil meredith , d. jin , d. griffith , f. chen , g. rou .an overview of the mop runtime verification framework .international journal on software techniques for technology transfer , springer - verlag , 2011 .c. prehofer . behavioral refinement and compatibility of statechart extensions . formal engineering approaches to software components and architectures .electronic notes in theoretical computer science , 2012 .f. souza , d. lopes , k. gama , n. rosa , r. lima .dynamic event - based monitoring in a soa environment . on the move to meaningful internet systems ,7045 of lncs , springer - verlag , 2011 .tchinda , n. stouls , j. ponge .spcification et substitution de services osgi . technical report ,inria ( 2011 ) http://hal.inria.fr/inria-00619233 .
This report presents our work on behavioral types for OSGi component systems. It extends previously published work with features and details that have not been published before. In particular, we cover a discussion of behavioral types in general and Eclipse-based implementation work on behavioral types. The implementation comprises editors, means for comparing types at development time and at runtime, a tool connection for resolving incompatibilities, an AspectJ-based infrastructure for ensuring behavioral type correctness at runtime of a system, and various auxiliary operations. We also present evaluation work based on examples.
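As a hedged illustration of the AspectJ-based runtime checking mentioned in the summary above, a monitor for the outgoing call protocol of a middleware process could be woven in roughly as follows. The automaton states, the method names lockFlight and bookSeat, the type pattern FlightDatabase and the choice to signal violations with an unchecked exception are all assumptions made for illustration; only the AspectJ annotations and the JoinPoint API are standard.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Sketch of a protocol automaton with a current state and allowed transitions.
class ProtocolMonitor {
    private String state = "INIT";

    // Advances the automaton or signals a behavioral type violation.
    synchronized void onCall(String method) {
        if ("INIT".equals(state) && "lockFlight".equals(method)) {
            state = "LOCKED";
        } else if ("LOCKED".equals(state) && "bookSeat".equals(method)) {
            state = "BOOKED";
        } else {
            throw new IllegalStateException(
                "protocol violation: " + method + " in state " + state);
        }
    }
}

// Hypothetical aspect connecting the monitor to calls towards the flight database.
@Aspect
public class BookingMonitorAspect {
    private final ProtocolMonitor monitor = new ProtocolMonitor();

    @Before("call(* *..FlightDatabase.*(..))")
    public void checkOutgoingCall(JoinPoint jp) {
        monitor.onCall(jp.getSignature().getName());
    }
}

Signaling a violation with an exception matches the remark above that enforcement is partly left to the system developer, who may or may not catch such exceptions.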
the tractability of affine models , such as the vasiek and the cox ingersoll ross models , has made them appealing for term structure modeling .affine term structure models are based on a ( multidimensional ) factor process , which in turn describes the evolution of the spot rate and the bank account processes .no - arbitrage arguments then provide the corresponding zero - coupon bond prices , yield curves and forward rates .prices in these models are calculated under an equivalent martingale measure for known static model parameters .however , model parameters typically vary over time as financial market conditions change .they may , for instance , be of a regime switching nature and need to be permanently re - calibrated to the actual financial market conditions . in practice ,this re - calibration is done on a regular basis ( as new information becomes available ) .this implies that model parameters are not static and , henceforth , may also be understood as stochastic processes . the re - calibration should preserve the no - arbitrage condition , which provides side constraints in the re - calibration .the aim of this work is to discuss these side constraints with the help of the discrete - time multifactor vasiek interest rate model , which is a tractable , but also flexible model .we show that re - calibration under the side constraints naturally leads to heath jarrow morton models with stochastic parameters , which we call consistent re - calibration ( crc ) models .these models are attractive in financial applications for several reasons . in risk management and in the current regulatory framework ,one needs realistic and tractable models of portfolio returns .our approach provides tractable non - gaussian models for multi - period returns on bond portfolios .moreover , stress tests for risk management purposes can be implemented efficiently in our framework by selecting suitable models for the parameter process .while an in - depth market study of the performance of crc models remains to be done , we provide in this paper some evidence of improved fits .the paper is organized as follows . in section [ sec : hwe ] , we introduce hull white extended discrete - time multifactor vasiek models , which are the building blocks for crc in this work .we define crc of the hull white extended multifactor vasiek model in section [ sec : crc ] . section [ sec : real world dynamics ] specifies the market price of risk assumptions used to model the factor process under the real - world probability measure and the equivalent martingale measure , respectively . in section [ sec : parameters ] , we deal with parameter estimation from market data . in section [ sec : numerical example ] , we fit the model to swiss interest rate data , and in section [ sec : conclusion ] , we conclude .all proofs are presented in appendix [ sec : proofs ] .choose a fixed grid size and consider the discrete - time grid .for example , a daily grid corresponds to if there are 252 business days per year .choose a ( sufficiently rich ) filtered probability space with discrete - time filtration , where refers to time point .assume that denotes an equivalent martingale measure for a ( strictly positive ) bank account numeraire . denotes the value at time of an investment of one unit of currency at time into the bank account ( i.e. 
, the risk - free rollover relative to ) .we use the following notation .subscript indices refer to elements of vectors and matrices .argument indices refer to time points .we denote the identity matrix by .we also introduce the vectors and .we choose fixed and introduce the -dimensional -adapted factor process : which generates the spot rate and bank account processes as follows : where ; empty sums are set equal to zero . the factor process is assumed to evolve under according to : with initial factor , , , and being -adapted .the following assumptions are in place throughout the paper .[ assumption ] we assume that the spectrum of matrix is a subset of and that matrix is non - singular .moreover , for each , we assume that is independent of under and has standard normal distribution . in assumption [ assumption ], the condition on matrix ensures that is invertible and that the geometric series generated by converges .the condition on ensures that is symmetric positive definite . under assumption[ assumption ] , equation defines a stationary process ; see , section 11.3 .the model defined by equations and is called the discrete - time multifactor vasiek model . under the above model assumptions, we have for : for , the conditional distribution of , given , depends only on the value at time and on lag . in other words ,the factor process is a time - homogeneous markov process . at time , the price of the zero - coupon bond ( zcb ) with maturity date with respect to filtration and equivalent martingale measure is given by : = \e^\ast \left[\left .\exp \left\{-\delta\sum_{s = t}^{m-1}\boldsymbol1^\top\boldsymbol x(s ) \right\}\right|{\cal f}(t)\right].\ ] ] for the proof of the following result , see appendix [ sec : proofs ] .[ theo : arn prices ] the zcb prices in the discrete - time multifactor vasiek models and with respect to filtration and equivalent martingale measure have an affine term structure : with , and for : in the discrete - time multifactor vasiek models and , the term structure of interest rates ( yield curve ) takes the following form at time for maturity dates : with the spot rate at time given by .the possible shapes of the vasiek yield curve are restricted by the choice of the parameters , and .these parameters are not sufficiently flexible to exactly calibrate the model to an arbitrary observed initial yield curve .therefore , we consider the hull white extended version ( see ) of the discrete - time multifactor vasiek model .we replace the factor process defined in as follows . for fixed ,let satisfy : with starting factor , and function .model assumption corresponds to , where the first component of is replaced by the time - dependent coefficient and all other terms ceteris paribus . without loss of generality, we choose the first component for this replacement .note that parameter is redundant in this model specification , but for didactical reasons , it is used below .the time - dependent coefficient is called the _ hull white extension _ , and it is used to calibrate the model to a given yield curve at a given time point . the upper index denotes that time point and corresponds to the time shift we apply to the hull white extension in model .the factor process generates the spot rate process and the bank account process as in .the model defined by ( [ eq : spot rate ] , [ eq : arn+ ] ) is called the hull white extended discrete - time multifactor vasiek model . 
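Since the displayed formulas of theorem [ theo : arn prices ] did not survive extraction, the following worked equations sketch how such an affine recursion is typically obtained for the discrete-time multifactor Vasicek model; this is our reconstruction under the stated Gaussian assumptions, and the paper's own notation, signs and normalization for the coefficients may differ. With factor dynamics and spot rate
\[
\boldsymbol X(t+1) = \boldsymbol b + \beta\,\boldsymbol X(t) + \Sigma^{1/2}\boldsymbol\varepsilon^\ast(t+1),
\qquad r(t) = \boldsymbol 1^\top\boldsymbol X(t),
\]
the ansatz $P(t,m)=\exp\{A(m-t)+\boldsymbol B(m-t)^\top\boldsymbol X(t)\}$ with $A(0)=0$ and $\boldsymbol B(0)=\boldsymbol 0$, combined with the tower property and the Gaussian moment generating function, yields for $\tau\ge 1$
\[
\boldsymbol B(\tau) = \beta^\top\boldsymbol B(\tau-1) - \delta\,\boldsymbol 1,
\qquad
A(\tau) = A(\tau-1) + \boldsymbol B(\tau-1)^\top\boldsymbol b + \tfrac12\,\boldsymbol B(\tau-1)^\top\Sigma\,\boldsymbol B(\tau-1),
\]
and the corresponding yield reads $Y(t,m) = -\frac{1}{(m-t)\delta}\big(A(m-t)+\boldsymbol B(m-t)^\top\boldsymbol X(t)\big)$.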
under these model assumptions, we have for : for , the conditional distribution of , given , depends only on the factor at time . in this case ,factor process is a time - inhomogeneous markov process .note that the upper index in the notation is important since the conditional distribution depends explicitly on the lag .[ theo : arn+ prices ] the zcb prices in the hull white extended discrete - time multifactor vasiek model ( [ eq : spot rate ] , [ eq : arn+ ] ) with respect to filtration and equivalent martingale measure have affine term structure : with as in theorem [ theo : arn prices ] , and for : in the hull white extended discrete - time multifactor vasiek model ( [ eq : spot rate ] , [ eq : arn+ ] ) , the yield curve takes the following form at time for maturity dates : with spot rate at time given by . note that the coefficient in theorem [ theo : arn+ prices ] is not affected by the hull white extension and depends solely on , whereas the coefficient depends explicitly on the hull white extension .we consider the term structure model defined by the hull white extended factor process and calibrate the hull white extension to a given yield curve at time point .we explicitly introduce the time index in model because the crc algorithm is a concatenation of multiple hull white extended models , which are calibrated at different time points , see section [ sec : crc ] below .assume that there is a fixed final time to maturity date and that we observe at time the yield curve for maturity dates .for these maturity dates , the hull white extended discrete - time multifactor vasiek yield curve at time , given by theorem [ theo : arn+ prices ] , reads as : for given starting factor and parameters , and , our aim is to choose the hull white extension such that we get an exact fit at time to the yield curve , that is , the following theorem provides an equivalent condition to , which allows one to calculate the hull white extension explicitly .[ theo : calibration ] denote by the yield curve at time obtained from the hull white extended discrete - time multifactor vasiek model ( [ eq : spot rate ] , [ eq : arn+ ] ) for given starting factor , parameters , and and hull white extension . for given ,identity holds if and only if the hull white extension fulfills : where , and + are defined by : with and given by theorem [ theo : arn prices ] .theorem [ theo : calibration ] shows that the hull white extension can be calculated by inverting the lower triangular positive definite matrix .the crucial extension now is the following : we let parameters , and vary over time , and we re - calibrate the hull white extension in a consistent way at each time point , that is according to the actual choice of the parameter values using theorem [ theo : calibration ] .below , we show that this naturally leads to a heath jarrow morton ( hjm ) approach to term structure modeling .assume that , and are -adapted parameter processes with and satisfying assumption [ assumption ] , -a.s ., for all .based on these parameter processes , we define the -dimensional -adapted crc factor process , which evolves according to steps ( i)(iv ) of the crc algorithm described below . 
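Before turning to the CRC algorithm itself, the calibration step of theorem [ theo : calibration ] can be sketched as follows; the exact matrix entries are not recoverable from the extracted text, so this only records the structure we rely on. Because the model is Gaussian, the Hull-White extension enters the log zero-coupon bond prices linearly, and a drift adjustment applied at time $s$ cannot affect bonds maturing at or before $s$. Requiring an exact fit of the model yields to the observed yields $y(0,m)$, $m=1,\dots,M$, therefore leads to a linear system of the schematic form
\[
\Lambda\,\boldsymbol\theta = \boldsymbol\gamma, \qquad \Lambda_{m,s} = 0 \ \text{for } s \ge m,
\]
where $\boldsymbol\theta$ collects the values of the Hull-White extension, $\boldsymbol\gamma$ the differences between observed yields and the unextended Vasicek yields, and the lower-triangular, non-degenerate $\Lambda$ allows the extension to be computed by forward substitution, one maturity at a time.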
thus , factor process will define a spot rate model similar to .in the crc algorithm , steps [ subsubsec crc step 1][subsubsec crc step 3 ] below are executed iteratively .assume that the initial yield curve observation at time 0 is given by .let be an -measurable hull white extension , such that condition is satisfied at time for initial factor and parameters , and . by theorem [ theo : calibration ] , the values are given by : this provides hull white extended vasiek yield curve identically equal to for given initial factor and parameters , , .assume factor , parameters and and hull white extension are given .define the hull white extended model by : with starting value , -measurable parameters , and and hull white extension .we update the factor process at time according to the -dynamics , that is , we set : this provides -measurable yield curve at time for maturity dates with and , and recursively for : this is exactly the no - arbitrage price under if the parameters , and and the hull white extension remain constant for all .assume that at time , the parameters are updated to .we may think of this parameter update as a consequence of model selection after we observe a new yield curve at time .this is discussed in more detail in section [ sec : parameters ] below .the no - arbitrage yield curve at time from the model with parameters and hull white extension is given by : the parameter update requires re - calibration of the hull white extension , otherwise arbitrage is introduced into the model . this re - calibration provides -measurable hull white extension at time .the values are given by ( see theorem [ theo : calibration ] ) : and the resulting yield curve under the updated parameters is identically equal to .note that this crc makes the upper index in the yield curve superfluous , because the hull white extension is re - calibrated to the new parameters , such that the resulting yield curve remains unchanged .therefore , we write in the sequel for the crc yield curve with factor , parameters and hull white extension .( end of algorithm . ) for the implementation of the above algorithm , we need to consider the following issue .assume we start the algorithm at time with initial yield curve .at times , for , calibration of requires yields with times to maturity beyond .either yields for these times to maturity are observable , and the length of is reduced in every step of the crc algorithm or an appropriate extrapolation method beyond the latest available maturity date is applied in every step .we analyze the yield curve dynamics obtained by the crc algorithm of section [ re - calibration algorithm ] .due to re - calibration , the yield curve fulfills the following identity for : where the first line is based on the -measurable parameters and hull white extension , and the second line is based on the -measurable parameters and hull white extension after crc step ( iii ) .note that in the re - calibration only can be chosen exogenously , and the hull white extension is used for consistency property .our aim is to express as a function of and .using equations and , we have for : this provides the following theorem ; see appendix [ sec : proofs ] for the proof .[ theorem hjm view ] under equivalent martingale measure , the yield curve dynamics obtained by the crc algorithm of section [ re - calibration algorithm ] has the following hjm representation for : with . 
observe that in theorem [ theorem hjm view ] , a remarkable simplification happens .simulating the crc algorithm and to future time points does not require the calculation of the hull white extensions according to , but the knowledge of the parameter process is sufficient .the hull white extensions are fully encoded in the yield curve process , and we can avoid the inversion of ( potentially ) high dimensional matrices . *crc of the multifactor vasiek spot rate model can be defined directly in the hjm framework assuming a stochastic dynamics for the parameters .however , solely from the hjm representation , one can not see that the yield curve dynamics is obtained , in our case , by combining well - understood hull white extended multifactor vasiek spot rate models using the crc algorithm of section [ sec : crc ] ; that is , the hull white extended multifactor vasiek model gives an explicit functional form to the hjm representation .* the crc algorithm of section [ sec : crc ] does not rely directly on having independent and gaussian components .the crc algorithm is feasible as long as explicit formulas for zcb prices in the hull white extended model are available .therefore , one may replace the gaussian innovations by other distributional assumptions , such as normal variance mixtures .this replacement is possible provided that conditional exponential moments can be calculated under the new innovation assumption . under non - gaussian innovations , it will no longer be the case that the hjm representation does not depend on the hull white extension .* interpretation of the parameter processes will be given in section [ sec : parameters ] , below .all previous derivations were done under an equivalent martingale measure for the bank account numeraire . in order to statistically estimate parameters from market data , we need to specify a girsanov transformation to the real - world measure , which is denoted by .we present a specific change of measure , which provides tractable spot rate dynamics under .assume that and are - and -valued -adapted processes , respectively .let be the factor process obtained by the crc algorithm of section [ re - calibration algorithm ] .then , we assume that the -dimensional -adapted process describes the market price of risk dynamics .we define the following -density process : the real - world probability measure is then defined by the radon nikodym derivative : an immediate consequence is that for : has a standard gaussian distribution under , conditionally on .this implies that under the real - world measure , the factor process is described by : where we define : as in assumption [ assumption ] , we require to be such that the spectrum of is a subset of . 
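The measure change used here is of standard discrete-time Girsanov type; the following sketch spells out one common affine parametrization, written as $\boldsymbol\lambda(t) = \boldsymbol\lambda + \Lambda\,\boldsymbol X(t)$, which is our assumption on the functional form and may differ in detail from the paper's. Taking the density process
\[
\xi(t) = \prod_{s=1}^{t}\exp\Big\{\boldsymbol\lambda(s-1)^\top\boldsymbol\varepsilon^\ast(s) - \tfrac12\,\big\|\boldsymbol\lambda(s-1)\big\|^2\Big\},
\]
the innovations $\boldsymbol\varepsilon(t) = \boldsymbol\varepsilon^\ast(t) - \boldsymbol\lambda(t-1)$ are standard Gaussian under the real-world measure, conditionally on $\mathcal F(t-1)$, and the factor dynamics become
\[
\boldsymbol X(t) = \big(\boldsymbol b + \Sigma^{1/2}\boldsymbol\lambda\big) + \big(\beta + \Sigma^{1/2}\Lambda\big)\,\boldsymbol X(t-1) + \Sigma^{1/2}\boldsymbol\varepsilon(t),
\]
which is again of Vasicek type, with the stationarity requirement now imposed on the spectrum of $\beta + \Sigma^{1/2}\Lambda$.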
formula describes the dynamics of the factor process obtained by the crc algorithm of section [ re - calibration algorithm ] under real - world measure .the following corollary describes the yield curve dynamics obtained by the crc algorithm under , in analogy to theorem [ theorem hjm view ] .[ hjm under the real world measure ] under real - world measure satisfying , the yield curve dynamics obtained by the crc algorithm of section [ re - calibration algorithm ] has the following hjm representation for : with .compared to theorem [ theorem hjm view ] , there are additional drift terms and , which are characterized by the market price of risk parameters and .the yield curve dynamics obtained by the crc algorithm of section [ re - calibration algorithm ] require exogenous specification of the parameter process of the multifactor vasiek models and and the market price of risk process , i.e. , we need to model the process : by equation , the one - step ahead development of the crc factor process under reads as : with -measurable parameters , and and hull white extension .thus , on the one hand , the factor process evolves according to , and on the other hand , parameters evolve according to the financial market conditions .note that the process of hull white extensions is fully determined through crc by . in order to distinguish the evolutions of and , respectively , we assume that process changes at a slower pace than the factor process , and therefore , parameters can be assumed to be constant over a short time window .this assumption motivates the following approach to specifying a model for process . for each time point , we fit multifactor vasiek models and with fixed parameters on observations from a time window of length . for estimation , we assume that we have yield curve observations for times to maturity .since yield curves are not necessarily observed on a regular time to the maturity grid , we introduce the indices to refer to the available times to maturity .varying the time of estimation , we obtain time series for the parameters from historical data . finally , we fit a stochastic model to these time series . in the following ,we discuss the interpretation of the parameters and present two different estimation procedures .the two procedures are combined to obtain a full specification of the model parameters . by equation , we have under for : &=\left(\mathds{1}-\beta\right)^{-1}\left(\mathds{1}-\beta^{m - t}\right)\boldsymbol b+\beta^{m - t}\boldsymbol x(t),\\ \mathbb e^{\ast}\left[r(m)\middle|\mathcal f(t)\right]&=\boldsymbol 1^\top\left(\mathds{1}-\beta\right)^{-1}\left(\mathds{1}-\beta^{m - t}\right)\boldsymbol b+\boldsymbol 1^\top\beta^{m - t}\boldsymbol x(t ) . \end{aligned}\ ] ] thus , determines the speed at which the factor process and the spot rate process return to their long - term means : =\left(\mathds{1}-\beta\right)^{-1}\boldsymbol b\quad\text{and}\quad\lim_{m\to\infty}\mathbb e^{\ast}\left[r(m)|\mathcal f(t)\right]=\boldsymbol 1^\top\left(\mathds{1}-\beta\right)^{-1}\boldsymbol b.\ ] ] a sensible choice of adapts the speed of mean reversion to the prevailing financial market conditions at each time point . by equation , we have under for : =\sigma,\quad\text{and}\quad\mathrm{var}^\ast\left[r(t)\middle|\mathcal f(t-1)\right]=\boldsymbol 1^\top\sigma\boldsymbol 1.\ ] ] thus , matrix plays the role of the instantaneous covariance matrix of , and it describes the instantaneous spot rate volatility . 
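Because the corresponding displays above were garbled in extraction, we record the reading of the conditional moments that motivates the interpretation of $\beta$, $\boldsymbol b$ and $\Sigma$:
\[
\mathbb{E}^{\ast}\big[\boldsymbol X(m)\,\big|\,\mathcal F(t)\big]
= \big(\mathds 1-\beta\big)^{-1}\big(\mathds 1-\beta^{m-t}\big)\boldsymbol b + \beta^{m-t}\boldsymbol X(t),
\qquad
\mathbb{E}^{\ast}\big[r(m)\,\big|\,\mathcal F(t)\big]
= \boldsymbol 1^\top\,\mathbb{E}^{\ast}\big[\boldsymbol X(m)\,\big|\,\mathcal F(t)\big],
\]
so that both converge, as $m\to\infty$, to the long-term means $(\mathds 1-\beta)^{-1}\boldsymbol b$ and $\boldsymbol 1^\top(\mathds 1-\beta)^{-1}\boldsymbol b$, with $\beta$ governing the speed of mean reversion, while
\[
\mathrm{Cov}^{\ast}\big[\boldsymbol X(t)\,\big|\,\mathcal F(t-1)\big] = \Sigma,
\qquad
\mathrm{Var}^{\ast}\big[r(t)\,\big|\,\mathcal F(t-1)\big] = \boldsymbol 1^\top\Sigma\,\boldsymbol 1 .
\]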
on each time window , we want to use yield curve observations to estimate the parameters of time - homogeneous vasiek models and . in general , this model is not able to reproduce the yield curve observations exactly .one reason might be that the data are given in the form of parametrized yield curves , and the parametrization might not be compatible with the vasiek model . for example , this is the case for the widely - used svensson family . another reason might be that yield curve observations do not exactly represent risk - free zero - coupon bonds .the discrepancy between the vasiek model and the yield curve observations can be accounted for by adding a noise term to the vasiek yield curves .this defines a state space model with the factor process as the hidden state variable . in this state space model, the parameters of the factor dynamics can be estimated using kalman filter techniques in conjunction with maximum likelihood estimation ( section 3.6.3 ) .this is explained in detail in sections [ subsubsec : kalman mle transition][subsubsec : kalman mle likelihood ] below .the evolution of the unobservable process under is assumed to be given on time window by : with initial factor and parameters and . the initial factor is updated according to the output of the kalman filter for the previous time window .the initial factor is set to zero for the first time window available .parameters are assumed to be constant over the time window \{,,}. thus , we drop the index compared to equations and . for estimation , we assume that the factor process evolves according to the time - homogeneous multifactor vasiek models and in that time window . the hull white extension is calibrated to the yield curve at time given the estimated parameter values of the time - homogeneous model .we assume that the observations in the state space model are given by : where : with and given by theorem [ theo : arn prices ] and -dimensional -measurable noise term for non - singular .we assume that is independent of and under and that .the error term describes the discrepancy between the yield curve observations and the model . for , we would obtain a yield curve in that corresponds exactly to the multifactor vasiek one . given the parameter and market price of risk value , we estimate the factor using the following iterative procedure . for each fixed value of and fixed time , we consider the -field and describe the estimation procedure in this state space model. fix initial factor , and initialize : =\boldsymbol a+\alpha\boldsymbol x(t - k|t - k-1),\\ \sigma(t - k+1|t - k)&=\mathrm{cov}\left(\boldsymbol x(t - k+1)\middle|\mathcal f^{\widehat{\boldsymbol y}}(t - k)\right)=\sigma . 
\end{aligned}\ ] ] at time , we have : =\boldsymbol d+d\boldsymbol x(k|k-1),\\ f(k)&=\mathrm{cov}\left(\widehat{\boldsymbol y}(k)\middle|\mathcal f^{\widehat{\boldsymbol y}}(k-1)\right)=d\sigma(k|k-1)d^\top+s,\\ \boldsymbol \zeta(k)&=\widehat{\boldsymbol y}(k)-\boldsymbol y(k|k-1 ) .\end{aligned}\ ] ] the prediction error is used to update the unobservable factors .=\boldsymbol x(k|k-1)+k(k)\boldsymbol\zeta(k),\\ \sigma(k|k)&=\mathrm{cov}\left(\boldsymbol x(k)\middle|\mathcal f^{\widehat{\boldsymbol y}}(k)\right)=\left(\mathds{1}-k(k)d\right)\sigma(k|k-1 ) , \end{aligned}\ ] ] where denotes the kalman gain matrix given by : for the unobservable factor process , we have the following forecast : =\boldsymbol a + \alpha\boldsymbol x(k|k),\\ \sigma(k+1|k)&=\mathrm{cov}\left(\boldsymbol x(k+1)\middle|\mathcal f^{\widehat{\boldsymbol y}}(k)\right)=\alpha\sigma(k|k)\alpha^\top+\sigma .\end{aligned}\ ] ] the kalman filter procedure above allows one to infer factors given the parameter and market price of risk values .of course , in this section , we are interested in estimating these values in the first place . for this purpose, the procedure above can be used in conjunction with maximum likelihood estimation . for the underlying parameters , we have the following likelihood function given the observations : the maximum likelihood estimator ( mle ) is found by maximizing the likelihood function over , given the data . as in the em ( expectation maximization ) algorithm , maximization of the likelihood functionis alternated with kalman filtering until convergence of the estimated parameters is achieved .assume factor process is given under by and for : where and .furthermore , assume that is a diagonalizable matrix with for and diagonal matrix .then , the transformed process evolves according to : where and . for , the -step ahead conditional distribution of under given by : where , and .suppose we have estimated , the diagonal matrix and on the time grid with size , for instance , using mle , as explained in section [ sec : kalman mle ] .we are interested in recovering the parameters , and of the dynamics on the refined time grid with size from , and .the diagonal matrix and vector are reconstructed from the diagonal matrix as follows : where logarithmic and power functions applied to diagonal matrices are defined on their diagonal elements .note that for , we have : therefore , we recover from and as follows . where .consider for the increments . from the formulas for , and , we observe that the -conditional mean of : and the -conditional volatility of : live on different scales as ; in fact , volatility dominates for large . 
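To gather the filtering recursion described above in one place (several of the corresponding displays were damaged in extraction), the linear-Gaussian state space model and the Kalman filter used on each estimation window read, in our notation, as follows; indexing conventions may differ slightly from the paper's:
\[
\begin{aligned}
&\text{state:} && \boldsymbol X(k) = \boldsymbol a + \alpha\,\boldsymbol X(k-1) + \boldsymbol w(k), && \boldsymbol w(k)\sim\mathcal N(\boldsymbol 0,\Sigma),\\
&\text{observation:} && \widehat{\boldsymbol Y}(k) = \boldsymbol d + D\,\boldsymbol X(k) + \boldsymbol\epsilon(k), && \boldsymbol\epsilon(k)\sim\mathcal N(\boldsymbol 0,S),\\
&\text{prediction:} && \boldsymbol x(k|k-1) = \boldsymbol a + \alpha\,\boldsymbol x(k-1|k-1), && \Sigma(k|k-1) = \alpha\,\Sigma(k-1|k-1)\,\alpha^\top + \Sigma,\\
&\text{innovation:} && \boldsymbol\zeta(k) = \widehat{\boldsymbol Y}(k) - \boldsymbol d - D\,\boldsymbol x(k|k-1), && F(k) = D\,\Sigma(k|k-1)\,D^\top + S,\\
&\text{update:} && \boldsymbol x(k|k) = \boldsymbol x(k|k-1) + K(k)\,\boldsymbol\zeta(k), && \Sigma(k|k) = \big(\mathds 1 - K(k)\,D\big)\,\Sigma(k|k-1),
\end{aligned}
\]
with Kalman gain $K(k) = \Sigma(k|k-1)\,D^\top F(k)^{-1}$; the log-likelihood maximized over the parameters is then, up to an additive constant, $\ell = -\tfrac12\sum_k\big(\log\det F(k) + \boldsymbol\zeta(k)^\top F(k)^{-1}\boldsymbol\zeta(k)\big)$.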
under for , we have : =\mathrm{cov}\left[\mathcal d_t\boldsymbol z,\mathcal d_t\boldsymbolz\middle|\mathcal f_{t-1}\right]+\mathbb e\left[\mathcal d_t\boldsymbol z\middle|\mathcal f_{t-1}\right]\mathbb e\left[\mathcal d_t\boldsymbol z\middle|\mathcal f_{t-1}\right]^\top\\\quad&=\mathrm{cov}\left[\boldsymbol z(t),\boldsymbol z(t)\middle|\mathcal f_{t-1}\right]+\left(\mathbb e\left[\boldsymbol z(t)\middle|\mathcal f_{t-1}\right]-z(t-1)\right)\left(\mathbb e\left[\boldsymbol z(t)\middle|\mathcal f_{t-1}\right]-z(t-1)\right)^\top\\&=\psi+\left(\boldsymbol c+\left(d-\mathds1\right)\boldsymbol z(t-1)\right)\left(\boldsymbol c+\left(d-\mathds 1\right)\boldsymbol z(t-1)\right)^\top .\end{aligned}\ ] ] therefore , setting , we obtain as : =t\mathbb e\left[\mathcal d_t\boldsymbol z\left(\mathcal d_t\boldsymbolz\right)^\top\middle|\mathcal f_{t-1}\right]t^\top\\&\quad = t\psi t^\top+t\left(\boldsymbol c+\left(d-\mathds 1\right)\boldsymbol z(t-1)\right)\left(\boldsymbol c+\left(d-\mathds 1\right)\boldsymbol z(t-1)\right)^\top t^\top\\&\quad=\frac{1}{d}t\upsilon t^\top+o\left(\frac{1}{d}\right)=t\psi t^\top+o\left(\frac{1}{d}\right)=\sigma+o\left(\frac{1}{d}\right ) , \end{aligned}\ ] ] we consider the yield curve increments within the discrete - time multifactor vasiek models and .the increments of the yield process for fixed time to maturity are given by : where .for times to maturity , we get under : =\frac{1}{\tau_1\tau_2\delta^2}\boldsymbol b(t , t+\tau_1)^\top\mathbb e\left[\mathcal d_t\boldsymbol x\left(\mathcal d_t\boldsymbol x\right)^\top\middle|{\cal f}_{t-1}\right]\boldsymbol b(t , t+\tau_2).\ ] ] by equation for small grid size , we estimate the last expression by : \approx\frac{1}{\tau_1\tau_2}\boldsymbol 1^\top\left(\mathds{1}-\beta^{\tau_1}\right)\left(\mathds{1}-\beta\right)^{-1}\sigma\left(\mathds{1}-\beta^\top\right)^{-1}\left(\mathds{1}-\left(\beta^\top\right)^{\tau_2}\right)\boldsymbol 1.\ ] ] formula is interesting for the following reasons : * it does not depend on the unobservable factors . *it allows for direct cross - sectional estimation of and .that is , and can directly be estimated from market observations without knowing the market - price of risk .* it is helpful to determine the number of factors needed to fit the model to market yield curve increments .this can be analyzed by principal component analysis . *it can also be interpreted as a small - noise approximation for noisy measurement systems of the form .let and be market observations for times to maturity and and at times , also specified in section [ sec : kalman mle ] .then , the expectation on the left hand side of can be estimated by the realized covariation : the quality of this estimator hinges on two crucial assumptions .first , higher order terms in are negligible in comparison to .second , the noise term in leads to a negligible distortion in the sense that observations are reliable indicators for the underlying vasiek yield curves .realized covariation estimator can be used in conjunction with asymptotic relation to estimate parameters and at time in the following way .for given symmetric weights , we solve the least squares problem : ^ 2\bigg\ } , \end{aligned}\ ] ] where we optimize over and satisfying assumption [ assumption ]. finally , we aim at determining parameters and of the change of measure specified in section [ sec : real world dynamics ] . 
for this purpose , we combine mle estimation ( section [ sec : kalman mle ] ) with estimation from realized covariations of yields ( section [ calibration real world 2 ] ) .first , we estimate and by and as in section [ calibration real world 2 ] .second , we estimate , and by maximizing the log - likelihood : for fixed and over , and with spectrum in , i.e. , the constraint on the matrix ensures that the factor process is stationary under the real - world measure . from equation , we have and .this motivates the inference of by : and the inference of by : we stress the importance of estimating as many parameters as possible from the realized covariations of yields prior to using maximum likelihood estimation .the mle procedure of section [ sec : kalman mle ] is computationally intensive and generally does not work well to estimate volatility parameters .we choose , which corresponds to a daily time grid ( assuming that a financial year has 252 business days ) .for the swiss currency ( chf ) , we consider as yield observations the swiss average rate ( sar ) , the london interbank offered rate ( libor ) and the swiss confederation bond ( swcnb ) . see figures [ fig : data start ] and [ fig : libor comment ] .* _ short times to maturity . _the sar is an ongoing volume - weighted average rate calculated by the swiss national bank ( snb ) based on repo transactions between financial institutions .it is used for short times to maturity of at most three months . for sar, we have the over - night saron that corresponds to a time to maturity of ( one business day ) and the sar tomorrow - next ( sartn ) for time to maturity ( two business days ) .the latter is not completely correct , because saron is a collateral over - night rate and tomorrow - next is a call money rate for receiving money tomorrow , which has to be paid back the next business day .moreover , we have the sar for times to maturity of one week ( sar1w ) , two weeks ( sar2w ) , one month ( sar1 m ) and three months ( sar3 m ) ; see also .* _ short to medium times to maturity . _the libor reflects times to maturity , which correspond to one month ( libor1 m ) , three months ( libor3 m ) , six months ( libor6 m ) and 12 months ( libor12 m ) in the london interbank market . *_ medium to long times to maturity . _the swcnb is based on swiss government bonds , and it is used for times to maturity , which correspond to two years ( swcnb2y ) , three years ( swcnb3y ) , four years ( swcnb4y ) , five years ( swcnb5y ) , seven years ( swcnb7y ) , 10 years ( swcnb10y ) , 20 years ( swcnb20y ) and 30 years ( swcnb30y ) .these data are available from 8 december 1999 , and we set 15 september 2014 to be the last observation date .of course , sar , libor and swcnb do not exactly model risk - free zero - coupon bonds , and these different classes of instruments are not completely consistent , because prices are determined slightly differently for each class . in particular , this can be seen during the 20082009 financial crisis .however , these data are in many cases the best approximation to chf risk - free zero - coupon yields that is available . for the longest times to maturity of swcnb , one may also raise issues about the liquidity of these instruments , because insurance companies typically run a buy - and - hold strategy for long - term bonds .in figures [ fig : rcov start][fig : rcov end ] , we compute the realized volatility of yield curves for different times to maturity and window length ; see equation . 
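For reference, a minimal sketch of the cross-sectional estimator behind these realized-volatility figures, under our reading of the damaged displays and with the normalization and weights treated as assumptions: writing $\mathcal D_s Y(\tau)$ for the one-step increment of the yield with fixed time to maturity $\tau$, the conditional covariation is estimated over a window of $K$ observations by
\[
\widehat{\mathrm{RC}}_{\tau_1,\tau_2}(t) = \frac{1}{K}\sum_{s=t-K+1}^{t} \mathcal D_s Y(\tau_1)\,\mathcal D_s Y(\tau_2),
\]
and matched, for small grid size $\delta$, against the model-implied quantity
\[
\frac{1}{\tau_1\tau_2}\,\boldsymbol 1^\top\big(\mathds 1-\beta^{\tau_1}\big)\big(\mathds 1-\beta\big)^{-1}\Sigma\,\big(\mathds 1-\beta^\top\big)^{-1}\big(\mathds 1-(\beta^\top)^{\tau_2}\big)\boldsymbol 1
\]
by weighted least squares over $\beta$ and $\Sigma$ across all observed pairs $(\tau_1,\tau_2)$ of times to maturity.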
in figures [ fig : libor comment ] and [ fig : rcov end ] , we observe that sar fits swcnb better than libor after the financial crisis of 2008 .for this reason , we decide to drop libor and build daily yield curves from sar and swcnb , only .the mismatch between libor , sar and swcnb is attributable to differences in liquidity and the credit risk of the underlying instruments . in this numerical example , we restrict ourselves to multifactor vasiek models with and of diagonal form : where . in the following ,we explain exactly how to perform the delicate task of parameter estimation in the multifactor vasiek models and using the procedure explained in section [ sec : parameters ] .we select short times to maturity ( sar ) to estimate parameters , , , and .this is reasonable because these parameters describe the dynamics of the factor process and , thus , of the spot rate . as we are working on a small ( daily ) time grid , asymptotic formulas andare expected to give good approximations .additionally , it is reasonable to assume that the noise covariance matrix in data - generating model is negligible compared to .therefore , we can estimate the left hand side of by the realized covariation of observed yields ; see estimator .then , we determine the hull white extension in order to match the prevailing yield curve interpolated from sar and swcnb . for , window length ( lhs ) and ( rhs ) .] for , window length ( lhs ) and ( rhs ) . ] for , window length ( lhs ) and ( rhs ) . ] for , window length ( lhs ) and ( rhs ) . ] for , window length ( lhs ) and ( rhs ) . ] for , window length ( lhs ) and ( rhs ) . ] for 1 , 63 , 252 , 504 , window length ( lhs ) and ( rhs ) .note that libor looks rather differently from sar and swcnb after the financial crisis of 2008 . ] for 1 , 63 , 252 , 504 , window length ( lhs ) and ( rhs ) .note that libor looks rather differently from sar and swcnb after the financial crisis of 2008 . ]we need to determine the appropriate number of factors .the more factors we use , the better we can fit the model to the data . however , the dimensionality of the estimation problem increases quadratically in the number of factors , and the model may become over - parametrized .therefore , we look for a trade - off between the accuracy of the model and the number of parameters used . in figure[ fig : number of factors sample dates ] , we determine and by solving optimization numerically for three observation dates and .a three - factor model is able to capture rather accurately the dependence on the time to maturity . in figure[ fig : number of factors all dates start ] , we compare the realized volatility of the numerical solution of to the market realized volatility for all observation dates .we observe that in several periods , the two - factor model is not able to fit the sar realized volatilities accurately for all times to maturities .the three - factor model achieves an accurate fit for most observation dates .the model exhibits small mismatches in 2001 , 20082009 and 20112012. these are periods characterized by a sharp reduction in interest rates in response to financial crises . in september 2011 , following strong appreciation of the swiss franc with respect to the euro , the snb pledged to no longer tolerate euro - franc exchange rates below the minimum rate of , effectively enforcing a currency floor for more than three years . 
as a consequence of the european sovereign debt crisis and the intervention of the snb starting from 2011, we have a long period of very low ( even negative ) interest rates . for , and three observation dates compared to the realized volatility of the two- ( lhs ) and three - factor ( rhs ) vasiek model fitted by optimization for , , , , , , and .the three - factor model achieves an accurate fit . ] for , and three observation dates compared to the realized volatility of the two- ( lhs ) and three - factor ( rhs ) vasiek model fitted by optimization for , , , , , , and .the three - factor model achieves an accurate fit . ]considering the results of figure [ fig : number of factors all dates start ] , we restrict ourselves from now on to three - factor vasiek models with parameters and : where , and . 0.42 for and a selection of times to maturity compared to the realized volatility of the two- and three - factor vasiek models fitted by optimization for , , , , , , and .,title="fig : " ] 0.42 for and a selection of times to maturity compared to the realized volatility of the two- and three - factor vasiek models fitted by optimization for , , , , , , and .,title="fig : " ] 0.42 for and a selection of times to maturity compared to the realized volatility of the two- and three - factor vasiek models fitted by optimization for , , , , , , and .,title="fig : " ] 0.42 for and a selection of times to maturity compared to the realized volatility of the two- and three - factor vasiek models fitted by optimization for , , , , , , and .,title="fig : " ] 0.42 for and a selection of times to maturity compared to the realized volatility of the two- and three - factor vasiek models fitted by optimization for , , , , , , and .,title="fig : " ] 0.42 for and a selection of times to maturity compared to the realized volatility of the two- and three - factor vasiek models fitted by optimization for , , , , , , and .,title="fig : " ] , , , , , , , , and . ] , , , , , , , , and . ] , , , , , , , , and . ] , , , , , , , , and . ] , , , , , , , , and . ] , , , , , , , , and . ] in figure [ fig : parameters start ] , we plot the numerical solutions of optimizations and for all observation dates .the parameters are reasonable for most of the observation dates .we observe that the estimates of are close to one for all observation dates .our values for the speed of mean reversion are reasonable on a daily time grid .note that scales as on a -days time grid ; see section [ calibration real world 2 ] .the speeds of mean reversion of and are higher than that of for most of the observation dates .we also see that the volatility of is lower than that of and . in 2011 ,we observe large spikes in the factor volatilities . starting from 2011, we have a period with strong correlations among the factors .from these results , we conclude that the three - factor vasiek model is reasonable for swiss interest rates . particularly challenging for the estimationis the period 20112014 of low interest rates following the european sovereign debt crisis and the snb intervention . in figure[ fig : parameters start a ] ( rhs ) , we observe that the difference in the speeds of mean - reversion under the risk - neutral and real - world measures is negligible .the difference between and is considerable in certain time periods . from the estimation results ,we conclude that a constant market price of risk assumption is reasonable and set from now on . 
in figure[ fig : loglikelihood ] , we compute the objective function of optimization for and compare it to the numerical solution .we observe that in 20032005 and 20102014 , the parameter configuration is nearly optimal . in these periods, we have very low interest rates , and therefore , estimates of and close to zero are reasonable .given the estimated parameters , we calibrate the hull white extension by equation to the full yield curve interpolated from sar and swcnb ; see figure [ fig : hwe ] .we point out that our fitting method is not a purely statistical procedure ; rather , it is a combination of estimation and calibration in accordance with the paradigm of robust calibration , as explained in .( lhs ) and values of , and given by optimization in the three - factor model for , , , , , , , , and .we compare the value of the objective function for and the numerical solution of the optimization . the configuration is almost optimal in low interest rate times . ] ( lhs ) and values of , and given by optimization in the three - factor model for , , , , , , , , and .we compare the value of the objective function for and the numerical solution of the optimization .the configuration is almost optimal in low interest rate times . ]( rhs ) as of 29 september 2006 .the parameters are estimated as in figure [ fig : parameters start ] .the initial factors are obtained from the kalman filter for the estimated parameters . the calibration of the hull white extension requires yields on a time to maturity grid of size .these are interpolated from sar and swcnb using cubic splines . ]( rhs ) as of 29 september 2006 .the parameters are estimated as in figure [ fig : parameters start ] .the initial factors are obtained from the kalman filter for the estimated parameters .the calibration of the hull white extension requires yields on a time to maturity grid of size .these are interpolated from sar and swcnb using cubic splines . ] in the following , we use the crc approach to construct a modification of the vasiek model with stochastic volatility .we model the process by a heston - like approach .we assume deterministic correlations among the factors and stochastic volatility given by : where , , non - singular , and for each , has a standard gaussian distribution under , conditionally given .moreover , we assume that is multivariate gaussian under , conditionally given .note that and are allowed to be correlated .the matrix valued process is constructed combining this stochastic volatility model with fixed correlation coefficients .this model is able to capture the stylized fact that volatility appears to be more noisy in high volatility periods ; see figure [ fig : sigma ] .we use the volatility time series of figure [ fig : sigma ] to specify , and .we rewrite the equation for the evolution of the volatility as : and use least square regression to estimate , and . from the regression residuals, we estimate the correlations between and .figures [ fig : parameter crc start][fig : parameter crc end ] show the estimates of , and .section [ sec : model selection ] provides a full specification of the three - factor vasiek crc model under the risk - neutral and real - world probability measures .various model quantities of interest in applications can then be calculated by simulation . , and by least square regression ( two different scales ) .we use a time window of 252 observations for the regression . 
] , and by least square regression ( two different scales ) .we use a time window of 252 observations for the regression . ] , and ( lhs ) and , and ( rhs ) by least square regression .we use a time window of 252 observations for the regression . ] , and ( lhs ) and , and ( rhs ) by least square regression .we use a time window of 252 observations for the regression . ] , and ( lhs ) and correlations ] ( rhs ) .we use a time window of 252 observation for the regression .the residuals are calculated using the parameter estimates of figure [ fig : parameters start ] . ]the crc approach has the remarkable property that yield curve increments can be simulated accurately and efficiently using theorem [ theorem hjm view ] and corollary [ hjm under the real world measure ] .in contrast , spot rate models with stochastic volatility without crc have serious computational drawbacks . in such models ,the calculation of the prevailing yield curve for given state variables requires monte carlo simulation .therefore , the simulation of future yield curves requires nested simulations .we backtest properties of the monthly returns of a buy and hold portfolio investing equal proportions of wealth in the zero - coupon bonds with times to maturity of 2 , 3 , 4 , 5 , 6 and 9 months and 1 , 2 , 3 , 5 , 7 and 10 years .we divide the sample into disjoint monthly periods and calculate the monthly return of this portfolio assuming that at the beginning of each period , we invest in the bonds with these times to maturity in equal proportions of wealth .the returns and some summary statistics are shown in figure [ fig : returns ] .we observe that the returns are positively skewed , leptokurtic and have heavier tails than the gaussian distribution .these stylized facts are essential in applications . for each monthly period, we select a three - factor vasiek model and its crc counterpart with stochastic volatility . then, we simulate for each period realizations of the returns of the test portfolio . by construction, the vasiek model generates gaussian log - returns and is unable to reproduce the stylized facts of the sample ; see tables [ tab : stats1 ] and [ tab : stats2 ] and figure [ fig : stats ] .increasing the number of factors does not help much , because the log - returns remain gaussian . on the other hand , crc of the vasiek model with stochastic volatilityprovides additional modeling flexibility . in particular, we can see from the statistics in table [ tab : stats2 ] and the confidence intervals in figure [ fig : stats ] that the model matches the return distribution better than the vasiek model .as explained in figure [ fig : stats ] , statistical tests assuming the independence of disjoint monthly periods show that the difference between the vasiek model and its crc counterpart is statistically significant .we conclude that the three - factor crc vasiek model is a parsimonious and tractable alternative that provides reasonable results .+ simulations of the test portfolio returns in the vasiek model and its crc counterpart with stochastic volatility . for each monthly period , we check if the market return lies in the confidence interval .this is more often the case for the crc than for the standard vasiek model . 
the type of analysis that was performed in the previous section is an integral component of the present regulatory framework for risk management. in the basel framework, the capital charge for the trading book is based on quantile risk measures. under the internal model approach ( , section 2.vi.d ), a bank calculates quantiles for the distribution of possible 10-day losses based on recent market data under the assumption that the trading book portfolio is held fixed over the time period. the approach relies on accurate modeling of the distribution of portfolio returns over holding periods of multiple days. a similar analysis is required by the basel ( , section 2.vi.d ) regulatory framework for model validation and stress testing: model validation is performed by backtesting the historical performance of the model, and stress tests are carried out using the same methodology by calibrating the model to historical periods of significant financial stress. these tasks can be accomplished using the crc approach by selecting suitable classes of affine models and parameter processes. the approach is fairly general, since there are few restrictions on the parameter processes.
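the coverage comparison reported in figure [ fig : stats ] and the backtesting requirement above both reduce to a one - sided binomial test: count the monthly periods in which the realized return falls outside the model's confidence interval and compare the count with the nominal exceedance probability. a minimal sketch follows; the class name, the numbers in main and the use of an exact binomial tail rather than a normal approximation are my choices, not the paper's.

....
/** one-sided binomial test: given n independent monthly periods and x periods
 *  in which the realized return fell outside the model's confidence interval,
 *  test whether the exceedance probability exceeds the nominal level p0. */
public final class CoverageTest {

    /** log(k!) by summing logs; adequate for the sample sizes of interest here. */
    static double logFactorial(int k) {
        double s = 0;
        for (int i = 2; i <= k; i++) s += Math.log(i);
        return s;
    }

    static double logBinomCoeff(int n, int k) {
        return logFactorial(n) - logFactorial(k) - logFactorial(n - k);
    }

    /** p[x >= observed] for x ~ binomial(n, p0); this is the one-sided p-value. */
    static double pValue(int n, int observed, double p0) {
        double p = 0;
        for (int k = observed; k <= n; k++) {
            double logTerm = logBinomCoeff(n, k)
                    + k * Math.log(p0) + (n - k) * Math.log(1 - p0);
            p += Math.exp(logTerm);
        }
        return p;
    }

    public static void main(String[] args) {
        // e.g. 180 monthly periods, nominal 5% exceedance level, 19 exceedances
        System.out.println("p-value: " + pValue(180, 19, 0.05));
    }
}
....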
in particular, the crc approach allows for stochastic volatility and can be used to create realistic non - gaussian distributions of multi - period bond returns ( see section [ subsec : bt ] ). nevertheless, computing these bond return distributions does not require nested simulations. this is crucial for reasons of efficiency. moreover, the flexibility in the specification of the parameter processes makes the crc approach well suited for stress testing, because it allows one to freely select and specify stress scenarios.

* _ flexibility and tractability. _ consistent re - calibration of the multifactor vasicek model provides a tractable extension that allows parameters to follow stochastic processes. the additional flexibility can lead to better fits of yield curve dynamics and return distributions, as we demonstrated in our numerical example. nevertheless, the model remains tractable. in particular, yield curves can be simulated efficiently using theorem [ theorem hjm view ] and corollary [ hjm under the real world measure ]. this allows one to efficiently calculate model quantities of interest in risk management, forecasting and pricing.

* _ model selection. _ crc models are selected from the data in accordance with the robust calibration principle of . first, historical parameters, market prices of risk and hull white extensions are inferred using a combination of volatility estimation, mle and calibration to the prevailing yield curve via formulas ( [ eq : co - var estimate ] - [ eq : lambda mat ], [ eq : re - calibration step 2 ] ). the only choices in this inference procedure are the number of factors of the vasicek model and the window length. then, as a second step, the time series of estimated historical parameters are used to select a model for the parameter evolution. this results in a complete specification of the crc model under the real - world and the pricing measure.

* _ application to modeling of swiss interest rates. _ we fitted a three - factor vasicek crc model with stochastic volatility to swiss interest rate data. the model achieves a reasonably good fit in most time periods. the tractability of crc allowed us to compute several model quantities by simulation. we looked at the historical performance of a representative buy and hold portfolio of swiss bonds and concluded that a multifactor vasicek model is unable to describe the returns of this portfolio accurately. in contrast, the crc version of the model provides the necessary flexibility for a good fit.

[ sec : proofs ] we prove theorem [ theo : arn prices ] by induction as in ( theorem 3.16 ), where zcb prices are derived under the assumption that and are diagonal matrices. note that we have the relation , which proves the claim for . assume that theorem [ theo : arn prices ] holds for . we verify that it also holds for . under the equivalent martingale measure , we have, using the tower property for conditional expectations and the induction assumption: first, observe that the condition imposes conditions only on the values . secondly, note that the vector , such that the condition is satisfied, can be calculated recursively in the following way.

1. _ first component. _ we have , and : see theorem [ theo : arn+ prices ]. solving the last equation for , we have: from and the equation for in theorem [ theo : arn prices ], we obtain: this is equivalent to:

2.
_ recursion. _ assume we have determined for . we want to determine . we have , and iteration of the recursive formula for in theorem [ theo : arn+ prices ] implies: solving the last equation for and using , we have: from and the equation for in theorem [ theo : arn prices ], we obtain: this is equivalent to:

harms, p.; stefanovits, d.; teichmann, j.; wüthrich, m.v. consistent re - calibration of yield curve models. available online: http://arxiv.org/abs/1502.02926 ( may 2016 ).
bank for international settlements ( bis ), basel committee on banking supervision ( bcbs ). basel ii: international convergence of capital measurement and capital standards. a revised framework, comprehensive version. available online: http://www.bis.org/publ/bcbs128.htm ( may 2016 ).
jordan, t.j. saron, an innovation for the financial markets. launch event for swiss reference rates, zurich, 25 august 2009. available online: http://www.snb.ch/en/mmr/speeches/id/ref_20090825_tjn_1 ( may 2016 ).
the discrete - time multifactor vasicek model is a tractable gaussian spot rate model. typically, two - or three - factor versions allow one to capture the dependence structure between yields with different times to maturity in an appropriate way. in practice, re - calibration of the model to the prevailing market conditions leads to model parameters that change over time. therefore, the model parameters should be understood as being time - dependent or even stochastic. following the consistent re - calibration ( crc ) approach, we construct models as concatenations of yield curve increments of hull white extended multifactor vasicek models with different parameters. the crc approach provides attractive tractable models that preserve the no - arbitrage premise. as a numerical example, we fit swiss interest rates using crc multifactor vasicek models.
computing an ensemble of solutions of fluid flow equations for a set of parameters or initial / boundary conditions for , e.g. , quantifying uncertainty or sensitivity analyses or to make predictions , is a common procedure in many engineering and geophysical applications .one common problem faced in these calculations is the excessive cost in terms of both storage and computing time .thanks to recent rapid advances in parallel computing as well as intensive research in ensemble - based data assimilation , it is now possible , in certain settings , to obtain reliable ensemble predictions using only a small set of realizations .successful methods that are currently used to generate perturbations in initial conditions include the bred - vector method , , the singular vector method , , and the ensemble transform kalman filter , . despite all these efforts ,the current level of available computing power is still insufficient to perform high - accuracy ensemble computations for applications that deal with large spatial scales such as numerical weather prediction . in such applications , spatial resolutionis often sacrificed to reduce the total computational time . for these reasonsthe development of efficient methods that allow for fast calculation of flow ensembles at a sufficiently fine spatial resolution is of great practical interest and significance .only recently , a first step was taken in where a new algorithm was proposed for computing an ensemble of solutions of the time - dependent navier - stokes equations ( nse ) with different initial condition and/or body forces . at each time step, the new method employs the same coefficient matrix for all ensemble members .this reduces the problem of solving multiple linear systems to solving one linear system with multiple right - hand sides .there have been many studies devoted to this type of linear algebra problem and efficient iterative methods have been developed to significantly save both storage and computing time , e.g. , block cg , block qmr , and block gmres .even for some direct methods , such as the simple lu factorization , one can save considerable computing cost . because the main goal of the ensemble algorithm is computational efficiency, it is natural to consider using reduced - order modeling ( rom ) techniques to further reduce the computational cost . specifically , we consider the proper orthogonal decomposition ( pod ) method which has been extensively used in the engineering community since it was introduced in to extract energetically coherent structures from turbulent velocity fields .pod provides an optimally ordered , orthonormal basis in the least - squares sense , for given sets of experimental or computational data .the reduced order model is then obtained by truncating the optimal basis .research on pod and its application to the unsteady nse has been and remains a highly active field .recent works improving upon pod have dealt with the combination of galerkin strategies with pod , stabilization techniques , and regularized / large eddy simulation pod models for turbulent flows . 
in this paper, we study a galerkin proper orthogonal decomposition ( pod - g - rom ) based ensemble algorithm for approximating solutions of the nse .accordingly , our aim in this paper is to develop and demonstrate a procedure for the rapid solution of multiple solutions of the nse , requiring only the solution of one reduced linear system with multiple right - hand sides at each time step .the ensemble method given in is first - order accurate in time and requires a cfl - like time step condition to ensure stability and convergence .two ensemble eddy viscosity numerical regularizations are studied in to relax the time step restriction .these two methods utilized the available ensemble data to parametrize the eddy viscosity based on a direct calculation of the kinetic energy in fluctuations without further modeling .they both give the same parametrization for each ensemble member and thus preserve the efficiency of the ensemble algorithm .the extension of the ensemble method to higher - order accurate ensemble time discretization is nontrivial .for instance , the method is not extensible to the most commonly used crank - nicolson scheme . making use of a special combination of a second - order in time backward difference formula and an explicit second - order adams - bashforth treatment of the nonlinear term ,a second - order accurate in time ensemble method was developed in .another second - order ensemble method with improved accuracy is presented in .the ensemble algorithm was further used in to model turbulence . by analyzing the evolution of the model variance, it was proved that the proposed ensemble based turbulence model converges to statistical equilibrium , which is a desired property of turbulence models .let , , denote an open regular domain with boundary and let ] we define the norms }\|v(\cdot , t)\|_{s } .\ ] ] the subspace of consisting of weakly divergence free functions is defined as a weak formulation of ( [ eq : nse ] ) is given as follows : for , find \rightarrow x ] that , for almost all ] . for and , let .then , the is defined , for , by for , let denote approximations , e.g. , interpolants or projections , of the initial conditions .then , the full space - time discretization of ( [ eq : nse ] ) , or more precisely of , we consider is given as follows : _ given , for , and ^d ] , find , for and for , and satisfying _we refer to this discretization as _ en - full - fe _ indicating that we are referring to an ensemble - based discretization of using a high - dimensional finite element space .this ensemble - based discretization of the nse is noteworthy because the system is not only _ linear _ in the unknown functions and , but because of the use of ensembles , we also have that the coefficient matrix associated with is independent of , i.e. , at each time step , all members of the ensemble can be determined from linear algebraic systems all of which have the same coefficient matrix . 
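the computational payoff of sharing one coefficient matrix across the ensemble is that a single matrix factorization per time step can serve every member. the sketch below illustrates the idea with a dense lu factorization with partial pivoting reused for several right - hand sides; in practice one would use the block iterative solvers cited above ( block cg, block qmr, block gmres ) or a sparse direct solver, so this is only a minimal illustration and all names are mine.

....
/** one lu factorization shared by all ensemble members: every member leads to
 *  the same coefficient matrix, so we factor a once per time step and reuse
 *  the factors for each of the j right-hand sides. */
public final class SharedLuSolver {
    private final double[][] lu;   // packed l (below diag) and u (on/above diag)
    private final int[] piv;       // row permutation from partial pivoting

    public SharedLuSolver(double[][] a) {
        int n = a.length;
        lu = new double[n][n];
        for (int i = 0; i < n; i++) lu[i] = a[i].clone();
        piv = new int[n];
        for (int i = 0; i < n; i++) piv[i] = i;
        for (int k = 0; k < n; k++) {
            int p = k;                                   // partial pivoting
            for (int i = k + 1; i < n; i++)
                if (Math.abs(lu[i][k]) > Math.abs(lu[p][k])) p = i;
            double[] tmp = lu[k]; lu[k] = lu[p]; lu[p] = tmp;
            int t = piv[k]; piv[k] = piv[p]; piv[p] = t;
            for (int i = k + 1; i < n; i++) {
                lu[i][k] /= lu[k][k];
                for (int j = k + 1; j < n; j++) lu[i][j] -= lu[i][k] * lu[k][j];
            }
        }
    }

    /** solve a x = b by forward/back substitution; o(n^2) per right-hand side. */
    public double[] solve(double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = 0; i < n; i++) x[i] = b[piv[i]];     // apply permutation
        for (int i = 0; i < n; i++)                       // l y = p b
            for (int j = 0; j < i; j++) x[i] -= lu[i][j] * x[j];
        for (int i = n - 1; i >= 0; i--) {                // u x = y
            for (int j = i + 1; j < n; j++) x[i] -= lu[i][j] * x[j];
            x[i] /= lu[i][i];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = { { 4, 1, 0 }, { 1, 4, 1 }, { 0, 1, 4 } };
        SharedLuSolver lu = new SharedLuSolver(a);
        double[][] ensembleRhs = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        for (double[] b : ensembleRhs)
            System.out.println(java.util.Arrays.toString(lu.solve(b)));
    }
}
....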
on the other hand , the linear system can be very large because in practice can be very large .this observation , in fact , motivates interest in building reduced - order discretizations of the nse .because and are assumed to satisfy the condition , can be more compactly expressed as follows : _ given , for , and ^d ] , find , for and for , satisfying _ note that in general it is a difficult matter to construct a basis for the space so that in practice , one still works with .we introduce the reduced system so as to facilitate the analyses given in later sections .the pod model reduction scheme can be split into two main stages : an offline portion and an online portion . in the offline portion ,one collects into what is known as a snapshot set the solution of a partial differential equation ( pde ) , or more precisely , of a discrete approximation to that solution , for a number of different input functions and/or evaluated at several time instants .the snapshot set is hopefully generated in such a way that it is representative of the behavior of the exact solution .the snapshot set is then used to generate a pod basis , hopefully of much smaller cardinality compared to that of the full finite element space , that provides a good approximation to the data present in the snapshot set itself . in the online stage ,the pod basis is used to generate approximate solutions of the pde for other input functions ; ideally these will be accurate approximations achieved much more cheaply compared to the use of a standard method such as a standard finite element method .in the rest of this section , we delve into further detail about the generation of the snapshot set , the construction of the pod basis in a finite element setting , and how the pod basis can be used to construct a reduced - order model for the nse in the ensemble framework .this section will focus on the framework specific to this paper ; for more detailed presentations about pod , see , e.g. , .the offline portion of the algorithm begins with the construction of the snapshot set which consists of the solution of the pde for a number of different input functions and/or evaluated at several different time instants .given a positive integer , let denote a uniform partition of the time interval ] into intervals , introduced in definition [ def21 ] , which is used to discretize the pde , i.e. , we have .we first define the set of snapshots corresponding to exact solutions of the weak form of the nse . for , we select different initial conditions and denote by the exact velocity field satisfying , evaluated at , , which corresponds to the initial condition . then , the space spanned by the so obtained snapshots is defined as in the same manner , we can construct a set of snapshots , , , of finite element approximations of the velocity solution determined from a standard finite element discretization of .note that one could also determine , at lesser cost but with some loss of accuracy , the snapshots from the ensemble - based discretization .we can then also define the space spanned by the discrete snapshots as note that .the snapshots are finite element solutions so the span of the snapshots is a subset of the finite element space .additionally , it is important to note that by construction , the snapshots satisfy the discrete continuity equation so that the span of the snapshots is indeed a subspace of the discretly divergence free subspace . if we denote by the vector of coefficients corresponding to the finite element function . 
with , we may also define the _ snapshot matrix _ as i.e. , the columns of are the finite element coefficient vectors of the discrete snapshots .to construct a reduced basis that results in accurate approximations , the snapshot set must contain sufficient information about the dynamics of the solution of the pde . in our context , this requires one to not only take a sufficient number of snapshots with respect to time , but also to select a set of initial conditions that generate a set of solutions that is representative of the possible dynamics one may encounter when using other initial conditions . in the pod framework for the nse , the literature on selecting this set is limited .one of the few algorithms which has been explored in the ensemble framework is the previously mentioned bred - vectors algorithm given in .further exploration of this and other approaches for the selection of initial conditions is a subject for future research . using the set of discrete snapshots ,we next construct the pod basis .we define the pod function space as there are a number of equivalent ways in which one may characterize the problem of determining ; for a full discussion see ( * ? ? ?* section 2 ) .for example , the pod basis construction problem can be defined as follows : determine an orthonormal basis for such that for all , solves the following constrained minimization problem where is the kronecker delta and the minimization is with respect to all orthonormal bases for .we note that by defining our basis in this manner we elect to view the snapshots as finite element functions as opposed to finite element coefficient vectors .define the correlation matrix , where denotes the gram matrix corresponding to full finite element space .then , the problem is equivalent to determine the dominant eigenpairs satifying where denotes the euclidean norm of a vector .the finite element coefficient vectors corresponding to the pod basis functions are then given by alternatively , we can let , and define so that and then determine the singular value decomposition of the modified snapshot matrix ; the vectors , are then given as the first left singular vectors of which correspond to the first singular values .we next illustrate how a pod basis is used to construct a reduced - order model for the nse within the ensemble framework .the discretized system that defines the pod approximation mimics that for the full finite element approximation , except that now we seek an approximation in the pod space having the basis .specifically , for , we define the pod approximate initial conditions as and then pose the following problem : _ given , for and for , find satisfying _ we refer to this discretization as _ en - pod _ indicating that we are referring to an ensemble - based discretization of ( 2.2 ) using a low - dimensional pod space .note that because , i.e. , the pod approximation is by construction discretely divergence free , the pressure term in the pod - discretized nse drops out and we are left with a system involving only the pod approximation to the velocity .one further point of emphasis is that the initial conditions used in are different from the initial conditions used to construct the snapshot set , i.e. , we use initial conditions to solve the full finite element system to determine the snapshots , and now solve additional approximations of the nse by solving the much smaller pod system . 
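as a concrete illustration of the construction just described, the following sketch builds pod modes by the method of snapshots. to keep it short, the finite element mass matrix is replaced by the identity ( so the snapshot inner product is euclidean ), the 1/k scaling of the correlation matrix follows one common convention, and the leading eigenpairs are obtained by power iteration with deflation; a production code would use a library eigensolver or a partial svd of the snapshot matrix, and all identifiers are illustrative.

....
/** method-of-snapshots construction of a pod basis (simplified sketch). */
public final class PodBasis {

    /** correlation matrix c[i][j] = (1/k) * (snapshot_i , snapshot_j),
     *  where the k snapshots are the columns of y (each of length n). */
    static double[][] correlation(double[][] y) {
        int n = y.length, k = y[0].length;
        double[][] c = new double[k][k];
        for (int i = 0; i < k; i++)
            for (int j = 0; j < k; j++) {
                double s = 0;
                for (int m = 0; m < n; m++) s += y[m][i] * y[m][j];
                c[i][j] = s / k;
            }
        return c;
    }

    /** first r pod modes as coefficient vectors, phi_j = y * a_j / sqrt(k * lambda_j). */
    static double[][] podModes(double[][] y, int r, int iters) {
        int n = y.length, k = y[0].length;
        double[][] c = correlation(y);
        double[][] modes = new double[r][n];
        for (int e = 0; e < r; e++) {
            double[] v = new double[k];
            v[e % k] = 1.0;                      // crude start vector; a random
                                                 // restart may be needed in practice
            for (int it = 0; it < iters; it++) { // power iteration on c
                double[] w = new double[k];
                for (int i = 0; i < k; i++)
                    for (int j = 0; j < k; j++) w[i] += c[i][j] * v[j];
                double norm = 0;
                for (double x : w) norm += x * x;
                norm = Math.sqrt(norm);
                for (int i = 0; i < k; i++) v[i] = w[i] / norm;
            }
            double lambda = 0;                   // rayleigh quotient
            for (int i = 0; i < k; i++)
                for (int j = 0; j < k; j++) lambda += v[i] * c[i][j] * v[j];
            for (int m = 0; m < n; m++) {        // lift eigenvector to a pod mode
                double s = 0;
                for (int j = 0; j < k; j++) s += y[m][j] * v[j];
                modes[e][m] = s / Math.sqrt(k * lambda);
            }
            for (int i = 0; i < k; i++)          // deflate: c -= lambda v v^T
                for (int j = 0; j < k; j++) c[i][j] -= lambda * v[i] * v[j];
        }
        return modes;
    }
}
....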
as was the case for, the pod system is linear in the unknown and the associated coefficient matrix does not depend on , i.e. , it is the same for all realizations of the initial condition . on the other hand ,is a system of equations in unknowns whereas involves equations in the same number of unknowns , where and denote the total number of pod and finite element degrees of freedom , respectively .thus , if , _ solving _ requires much less cost compared to solving . in this way the offline cost of constructing the pod basis can be amortized over many online solves using the much smaller pod system .we address the _ assembly _ costs related to in section [ numexp ] .we prove the conditional , nonlinear , long - time stability of solutions of ( [ en - pod - weak ] ) .the projection operator : is defined by denote by the spectral norm for symmetric matrices and let denote the pod mass matrix with entries {i , i'}= ( \varphi_i , \varphi_i') ] , .it is shown in that as , we have the following lemma , see of for proof .[ lm : skew ] for any , [ th : en - pod ] [ stability of en - pod ] for and , let satisfy .suppose the time - step condition holds .then , for , the proof is provided in appendix [ app1 ] . in the time - step condition , the constant c is dependent on the shape of the domain and the mesh as a result of the use of inverse inequality in the proof . for a fixed mesh on a fixed domain, c is a generic constant that is independent of the time step , the solution and viscosity .we next provide an error analysis for en - pod solutions .[ lm : l2err ] [ norm of the error between snapshots and their projections onto the pod space ] we have and thus for , the proof of follows exactly the proof of ( * ? ? ?* theorem 3 ) ; is then a direct consequence of .[ lm : h1err][ norm of the error between snapshots and their projections in the pod space ] we have and thus for , the proof of follows exactly the proof of ( * ? ? ?* lemma 3.2 ) ; is then direct consequence of .[ lm : projerr][error in the projection onto the pod space ] consider the partition used in section [ snapshot ] . for any ^d) ] , let .then , the error in the projection onto the pod space satisfies the estimates let be the error between the true solution and the pod approximation , then we have the following error estimates .[ error analysis of en - pod][th : erren - pod ] consider the method ( [ eq : conv ] ) and the partition used in section 3.1 .suppose that for any , the following conditions hold then , for any , there is a positive constant such that the proof is provided in appendix [ app3 ] .we investigate the efficacy of our algorithm via the numerical simulation of a flow between two offset circles . before we discuss the examples and the numerical results , we briefly discuss the computational costs associated with the _ en - pod _ algorithm and how they compare to those of the _ en - full - fe _ algorithm .as stated in section [ podsec ] , we can split the computational cost of our algorithm into offline and online portions . in the offline portion, we generate the snapshot matrix by solving the navier - stokes equations for perturbations . using , we then generate a reduced basis to be used in our online calculations .it is fair to assume that the cost of creating the snapshot matrix will dominate the cost of generating the reduced basis associated with the eigenvalue problem , especially when we consider that there exist very efficient techniques for determining the partial svd of matrices . 
turning to the cost of solving the navier - stokes equation ,the discrete systems that arise from a fem discretization have been studied at great length . whereas it is possible to use a nonlinear solver such as newton s method or a nonlinear multigrid iteration, these methods often suffer from a lack of robustness .instead , it is more popular to linearize the system and then to use the schur complement approach .this allows for the use of a linear multigrid solver or krylov method such as gmres to solve the problem . for full details ,see , e.g. , .unfortunately , there are a number of factors such as the mesh size , the value of the reynolds number , and the choice of pre - conditioner which make it very difficult to precisely estimate how quickly these methods converge . estimating the online cost of the _ en - pod _ method , however , is much easier . because the pod discrete system is small and dense and the ensemble method has right - hand sides , the most efficient way to solve this problem is , at each time step , to do a single lu factorization and a backsolve for each right - hand side . denoting again by the cardinality of the reduced basis , the online cost of the _ en - pod _method is we note that this process is highly parallelizable .for example , if we have access to total processors , then we can remove the factor in the second term .it is important to note that the _ assembly _ of the low - dimensional reduced basis system requires manipulations involving the reduced basis which , as we have seen , are finite element functions so that , in general , that assembly involves computational costs that depend on the dimension of the finite element space .thus , naive implementations of a reduced basis method involve assembly costs that are substantially greater than solving costs and which , given the availability of very efficient solvers , do not result in significant savings compared to that incurred by the full finite element discretization . for linear problems the stiffness matrix is independent of the solution so that one can assemblethe small reduced basis stiffness matrix during the offline stage . for nonlinear problems , the discrete system changes at each time step ( and generally at each interrogation of a nonlinear solver ) so that , in general , it is not an easy matter to avoid the high assembly costs . however , because the nonlinearity in the navier - stokes system is quadratic , the assembly costs can again be shifted to the offline stage during which one assembles a low - dimensional third - order tensor that can be reused throughout the calculations . turning to the computational cost for the fem ensemble method , as mentioned previously ,the most efficient way to solve the resulting systems is a block solver ( e.g. , block gmres ) . in trying to estimate the computational cost, we run into the same problem as we do for estimating the cost of solving the standard fem discretization of the navier - stokes problem ; specifically , it is very difficult to precisely determine how quickly any block solver converges . due to the difficulties outlined above in a priori estimation of the computational costs for both our algorithms we omit any cpu time comparison in the numerical experiments .instead , we focus on the accuracy of our _ en - pod _method , demonstrating that it is possible to achieve similar results as those given by the _ en - full - fe _ method . 
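returning to the offline tensor assembly mentioned above, the sketch below evaluates the reduced quadratic ( convective ) term from a precomputed r x r x r tensor; each online evaluation costs o(r^3) and never touches the finite element dimension. the tensor entries, which would come from evaluating the convection form on triples of pod modes, are assumed to be supplied, and the class name is mine.

....
/** evaluation of the reduced quadratic (convective) term with a tensor that is
 *  assembled once offline:  n_i(a) = sum_{j,k} t[i][j][k] * a[j] * a[k]. */
public final class ReducedNonlinearity {
    private final double[][][] t;   // precomputed offline, size r x r x r

    public ReducedNonlinearity(double[][][] t) { this.t = t; }

    /** apply the reduced nonlinearity to a pod coefficient vector a of length r. */
    public double[] apply(double[] a) {
        int r = a.length;
        double[] out = new double[r];
        for (int i = 0; i < r; i++) {
            double s = 0;
            for (int j = 0; j < r; j++)
                for (int k = 0; k < r; k++) s += t[i][j][k] * a[j] * a[k];
            out[i] = s;
        }
        return out;
    }
}
....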
a more rigorous and thorough analysis comparing the computational cost of the _ en - pod _ and _ en - full - fe _ methods is a subject of future research. for the numerical experiment we examine the two - dimensional flow between two offset circles with viscosity coefficient . specifically, the domain is a disk with a smaller offset disc inside. let , , , and ; then the domain is given by . no - slip, no - penetration boundary conditions are imposed on both circles. all computations are done using the fenics software suite. the deterministic flow, driven by the counterclockwise rotational body force , displays interesting structures interacting with the inner circle. a kármán vortex street is formed which then re - interacts with the inner circle and with itself, generating complex flow patterns. for our test problems, we generate perturbed initial conditions by solving a steady stokes problem with perturbed body forces given by , with different perturbations defined by varying . we discretize in space via the taylor - hood element pair. meshes were generated using the fenics built - in * mshr * package with varying refinement levels. an example mesh is given in figure [ meshex1 ]. in order to generate the pod basis, we use two perturbations of the initial conditions corresponding to and . using a mesh that results in 16,457 total degrees of freedom and a fixed time step , we run a standard full finite element code for each perturbation from to . for the time discretization we use the crank - nicolson method and take snapshots every seconds. in figure [ eigvaldecay ], we illustrate the decay of the singular values of the snapshot matrix. the purpose of this example is to illustrate our theoretical error estimates and to show the efficacy of our method in a `` data mining '' setting, i.e., to show that we can accurately represent the information contained in the _ en - full - fe _ approximation, which requires the specification of 16,457 coefficients, by the _ en - pod _ approximation, which requires the specification of a much smaller number of coefficients; in fact, merely 10 will do. thus, we determine the _ en - pod _ approximation using the same perturbations, mesh, and time step as were used in the generation of the pod basis. we verify at each time step that condition is satisfied. in order to illustrate the accuracy of our approach, we provide, in figure [ noromex1 ], plots of the velocity field of the ensemble average at the final time for both the _ en - full - fe _ and _ en - pod _ approximations. we also provide in figure [ errordiff ] ( left ) the difference between the two ensemble averages at the final time. in addition, in figure [ energy_ex1 ], we plot, for and for both methods, the energy and the enstrophy.
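because the pod modes are orthonormal in the l2 inner product, the kinetic energy of a reduced solution can be read off directly from its coefficient vector, while the enstrophy needs one small r x r matrix of curl inner products assembled offline. the sketch below records these two diagnostics; the 1/2 normalizations are the usual ones and may differ from the paper's plotting conventions, and the class name is mine.

....
/** energy and enstrophy of a pod approximation directly from its coefficient
 *  vector.  the l2 norm of the reduced velocity equals the euclidean norm of
 *  the coefficients (l2-orthonormal modes); the enstrophy uses a precomputed
 *  r x r matrix with entries (curl phi_i, curl phi_j). */
public final class ReducedDiagnostics {

    static double kineticEnergy(double[] a) {
        double s = 0;
        for (double ai : a) s += ai * ai;
        return 0.5 * s;
    }

    static double enstrophy(double[] a, double[][] curlGram) {
        double s = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a.length; j++) s += a[i] * curlGram[i][j] * a[j];
        return 0.5 * s;
    }
}
....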
[ figure [ noromex1 ]: velocity field of the ensemble average at the final time for the en - full - fe approximation ( left ) and the en - pod approximation with reduced basis vectors ( right ). ]
[ figure [ errordiff ]: difference between the ensemble averages of the en - full - fe approximation and the en - pod approximation with reduced basis vectors. ]

we need pod basis functions to reproduce the flow with a reasonable level of accuracy. this is seen in table [ tabex1 ] ( a ), which shows a small discrete error corresponding to 10 basis vectors; as the number of basis vectors increases beyond that, the error appears to decrease monotonically. visual confirmation is given by comparing the two plots in figure [ noromex1 ] as well as figure [ errordiff ] ( left ); at time the _ en - pod _ method appears to produce a flow which is very similar to that for the _ en - full - fe _ method. additionally, in figure [ energy_ex1 ], we plot the energy and enstrophy of _ en - pod _ with varying cardinalities for the pod basis and for the _ en - full - fe _ method. it can be seen that as the number of pod basis vectors increases our approximation improves, with the _ en - pod _ energy and enstrophy becoming indistinguishable from that for the _ en - full - fe _ for or more pod basis functions.

table [ tabex1 ]: discrete error versus the number of pod basis vectors.

      ( a ) example 1              ( b ) example 2
  basis vectors   error        basis vectors   error
       2          0.042157          2          0.042418
       4          0.019224          4          0.019347
       6          0.035701          6          0.035804
       8          0.064799          8          0.064946
      10          0.004741         10          0.004923
      12          0.003565         12          0.003803
      14          0.002979         14          0.003217
      16          0.002490         16          0.0028368
      18          0.001952         18          0.002430
      20          0.001035         20          0.001610

[ figure [ energy_ex1 ]: the energy ( left ) and enstrophy ( right ) of the ensemble determined for the en - full - fe approximation and for the en - pod approximation of several dimensions. ]

of course, the approximation of solutions of pdes using reduced - order models such as pod is used not in the context of section [ datam ], but, in our setting, for values of the perturbation parameter different from those used to generate the reduced - order basis. thus, we consider the problem described in section [ twoc ], except that now we apply the _ en - pod _ method, using the basis generated as described in section [ twoc ], for the two ensemble values and , both of which are different from the values used to generate the snapshots used to construct the pod basis. for comparison purposes, we also determine the _ en - full - fe _ approximation for this ensemble. note that these two values of take us to an _ extrapolatory setting _, i.e.
, these values are outside of the interval bracketed by the values of used to generate the pod basis. using a reduced - order method in an extrapolatory setting is usually a stern test of its efficacy. the results for this ensemble are given in table [ tabex1 ] ( b ) and figures [ noromex2 ], [ errordiff2 ], and [ energy_ex2 ]. the discussion in section [ datam ] corresponding to example 1 carries over to this example, except that the magnitude of the error is slightly larger; compare table [ tabex1 ] ( a ) and table [ tabex1 ] ( b ).

[ figure [ noromex2 ]: velocity field of the ensemble average at the final time for the en - full - fe approximation ( left ) and the en - pod approximation with reduced basis vectors ( right ). ]
[ figure [ errordiff2 ]: difference between the ensemble averages of the en - full - fe approximation and the en - pod approximation with reduced basis vectors. ]
[ figure [ energy_ex2 ]: the energy ( left ) and enstrophy ( right ) of the ensemble determined for the en - full - fe approximation and for the en - pod approximation of several dimensions. ]

in this work, an ensemble - proper orthogonal decomposition method for the nonstationary navier - stokes equations is proposed and analyzed. this method is built on a recently developed ensemble method that allows for the efficient determination of multiple solutions of the nse. by incorporating the proper orthogonal decomposition technique, the ensemble - pod method introduced here significantly reduces the computational cost compared with that for the original ensemble method. the method presented herein only works with low reynolds number flows because the stability condition degrades quickly as the reynolds number increases. to handle high reynolds number flows, one has to consider incorporating regularization techniques. for single navier - stokes solves, there is a vast existing literature in this regard, but, in the ensemble setting, regularization has barely been studied; the only existing works are in . the study of regularization methods in the ensemble and ensemble - pod settings is a focus of our current research. we also note that in certain applications it may be desirable to construct a reduced basis for the pressure. we did not consider this in this work; doing so would require some sort of stabilization, such as the supremizer stabilization introduced in , to compensate for the newly introduced lbb - type condition. the incorporation of this type of method into the framework developed in this paper is also a subject of future research. i. akhtar, a. h. nayfeh, and c. j. ribbens, _ on the stability and extension of reduced - order galerkin models in incompressible flows _, theor. fluid dyn. 23 ( 2009 ), no. 3, 213 - 237. f. ballarin, a. manzoni, a. quarteroni, and g. rozza, _ supremizer stabilization of pod - galerkin approximation of parametrized steady incompressible navier - stokes equations _ ( 2015 ), int. j. numer. meth. engng, 102: 1136 - 1161, doi: 10.1002/nme.4772. j. baiges, r. codina, and s.
idelsohn , _ explicit reduced - order models for the stabilized finite element approximation of the incompressible navier - stokes equations _ , int . j. numer .fluids 72 ( 2013 ) , no .12 , 1219?1243 c.h .bishop , b.j .etherton and s.j .majumdar , _ adaptive sampling with the ensemble transform kalman filter .part i : theoretical aspects _ , month .weath . review , 129 ( 2001 ) , 420 - 436 .r. buizza and t. palmer , _ the singular - vector structure of the atmospheric global circulation _ , journal of the atmospheric sciences , 52 ( 1995 ) , 1434 - 1456 .j. burkardt , m. gunzburger , and h .- c .lee , _ pod and cvt - based reduced - order modeling of navier - stokes flows _ , comput . meth .engrg . , 196 ( 2006 ) , 337 - 355 . a. caiazzo , t. iliescu , v. john , and s. schyschlowa , _ a numerical investigation of velocity - pressure reduced order models for incompressible flows _ , j. comput .( 2014 ) , 598?616 .d. chapelle , a. gariah , p. moireau , and j. sainte - marie , _ a galerkin strategy with proper orthogonal decomposition for parameter - dependent problems ?analysis , assessments and applications to parameter estimation _ , esaim math .modelling numer .47 ( 2013 ) , no . 6 , 1821?1843 .d. chapelle , a. gariah and j. sainte - marie , _galerkin approximation with proper orthogonal decomposition : new error estimates and illustrative examples _ ,esaim : math . model .anal . , 46 ( 2012 ) , 731 - 757 .y. feng , d. owen , and d. peric , _ a block conjugate gradient method applied to linear systems with multiple right hand sides _ ,comp . meth .mech . & engng . 127( 1995 ) , 203 - 215 .r. freund and m. malhotra , _ a block qmr algorithm for non - hermitian linear systems with multiple right - hand sides _, linear algebra and its applications , 254 ( 1997 ) , 119 - 157 .e. gallopulos and v. simoncini , _ convergence of block gmres and matrix polynomials _ , lin .appl . , 247 ( 1996 ) , 97 - 119 .v. girault and p. raviart , _ finite element approximation of the navier - stokes equations _ , lecture notes in mathematics , vol . 749 , 1979 .m. gunzburger , _ finite element methods for viscous incompressible flows - a guide to theory , practices , and algorithms _ , academic press , london , 1989 .m. gunzburger , j. peterson , and j. shadid , _ reduced - order modeling of time - dependent pdes with multiple parameters in the boundary data _ , comput ., 196 ( 2007 ) , 1030 - 1047 .p. holmes , j. lumley and g. berkooz , _ turbulence , coherent structures , dynamical systems and symmetry _, cambridge university press , cambridge , uk , 1996 .s. huffel , _ partial singular value decomposition algorithm _ , j. comput .33 ( 1990 ) 105 - 112 .t. iliescu and z. wang , _ variational multiscale proper orthogonal decomposition : navier - stokes equations _ ,pdes , 30 ( 2014 ) , 641 - 663 .n. jiang , _ a higher order ensemble simulation algorithm for fluid flows _ , journal of scientific computing , 64 ( 2015 ) , 264 - 288 .n. jiang , _ a second - order ensemble method based on a blended backward differentiation formula timestepping scheme for time - dependent navier - stokes equations _, numerical methods for partial differential equations , 2016 , in press , doi : 10.1002/num.22070 . n. jiang , s. kaya , and w. layton , _ analysis of model variance for ensemble based turbulence modeling _ , comput, 15 ( 2015 ) , 173 - 188 .n. jiang and w. layton , _ an algorithm for fast calculation of flow ensembles _ , international journal for uncertainty quantification , 4 ( 2014 ) , 273 - 301 .n. jiang and w. 
layton , _ numerical analysis of two ensemble eddy viscosity numerical regularizations of fluid motion _ , numer .meth . part .equations , 31 ( 2015 ) , 630 - 651 . k. kunisch and s. volkwein , _galerkin proper orthogonal decomposition methods for parabolic problems _ , numer math , 90 ( 2001 ) , 117 - 148 .w. layton , _ introduction to the numerical analysis of incompressible viscous flows _ , society for industrial and applied mathematics ( siam ) , 2008 .a. logg , k .- a .mardal , g. wells , et al ._ automated solution of differential equations by the finite element method _ , springer .[ doi:10.1007/978 - 3 - 642 - 23099 - 8 ] j. l. lumley , _ the structure of inhomogeneous turbulent flows _ , in atmospheric turbulence and wave propagation , edited by a. m. yaglom and v. i. tatarski ( nauka , moscow , 1967 ) , pp .167 - 178 .r. rannacher , _ finite element methods for the incompressible navier - stokes equations _ , fundamental directions in mathematical fluid dynamics ( birkhauser 2000 ) , pp .191 - 293 .s. sirisup and g.e .karniadakis , _ stability and accuracy of periodic flow solutions obtained by a pod - penalty method _ , j. phys .d 202 ( 2005 ) , no .3 , 218?237 .z. toth and e. kalnay , _ ensemble forecasting at nmc : the generation of perturbations _ , bull .74 ( 1993 ) , 2317 - 2330 .s. volkwein , _ optimal control of a phase - field model using proper orthogonal decomposition _ , z. angew .mech . , 81 ( 2001 ) , 83 - 97 .wang , z. , akhtar , i. , borggaard j. and iliescu , t. 2011 _ two - level discretizations of nonlinear closure models for proper orthogonal decomposition_. j. comput . phys .230 , 126?146 .wang , z. , akhtar , i. , borggaard j. and iliescu , t. 2012 _ proper orthogonal decomposition closure models for turbulent flows : a numerical comparison_. comput . meth .engng 237?240 , 10?26 .as the only difference is the choice of basis functions , we follow closely the proof of ( * ? ? ?* theorem 1 ( stability of befe - ensemble ) ) . setting in ( [ en - pod - weak ] ) and applying the cauchy - schwarz and young inequalities to the right - hand side yields next , we bound the trilinear term using the poincar inequality , lemma [ lm : skew ] and the inverse inequality : by construction , the pod basis functions are orthonormal with respect to the inner product so that .then , reduces to using young s inequality again results in combining with ( [ ineq : tri ] ) and then adding and subtracting results in assuming that the restriction ( [ ineq : cfl - h ] ) holds , we have combining the last two results then yields summing up the above inequality results in .by and the cauchy - schwarz inequality , we have so that we rewrite for all .setting and using the triangle inequality as well as lemma [ lm : l2err ] , we have , for , from which easily follows .similarly , by using lemmas [ lm : inverse ] and [ lm : h1err ] , we have from which easily follows .for , the true solutions of the nse satisfies where is defined as let where is the projection of in subtracting ( [ en - pod - weak ] ) from ( [ eq : convtrue ] ) gives set and rearrange the nonlinear terms . by the definition of the projection, we have . thus becomes we rewrite the first two nonlinear terms on the right hand side of as follows using the same techniques as in the proof of theorem 5 of , with the assumption that , we have the following estimates on the nonlinear terms and using young s inequality , and the result from the stability analysis , i.e. 
, , we have using the inequality , young s inequality and , we have we next rewrite the third nonlinear term on the right - hand side of : following the same steps as in the proof of theorem 5 of , we have the following estimates on the above nonlinear terms by skew symmetry , lemma [ lm : skew ] , inequality and the inverse inequality , we have for the last nonlinear term we have next , consider the pressure term . since we have for the other terms , are bounded as finally , combining , we now have the following inequality : by the timestep condition .take the sum of ( [ eq : err2 ] ) from to and multiply through by . since , we have and . using the result from the stability analysis , i.e. , and assumption , we have now applying lemma [ lm : projerr ] gives the next step will be the application of the discrete gronwall inequality ( girault and raviart , p. 176 ) . recall that . to simplify formulas, we drop the second and third term on the left hand side of .then by the triangle inequality and lemma [ lm : projerr ] , absorbing constants , we have .
the definition of partial differential equation ( pde ) models usually involves a set of parameters whose values may vary over a wide range . the solution of even a single set of parameter values may be quite expensive . in many cases , e.g. , optimization , control , uncertainty quantification , and other settings , solutions are needed for many sets of parameter values . we consider the case of the time - dependent navier - stokes equations for which a recently developed ensemble - based method allows for the efficient determination of the multiple solutions corresponding to many parameter sets . the method uses the average of the multiple solutions at any time step to define a linear set of equations that determines the solutions at the next time step . to significantly further reduce the costs of determining multiple solutions of the navier - stokes equations , we incorporate a proper orthogonal decomposition ( pod ) reduced - order model into the ensemble - based method . the stability and convergence results for the ensemble - based method are extended to the ensemble - pod approach . numerical experiments are provided that illustrate the accuracy and efficiency of computations determined using the new approach . ensemble methods , proper orthogonal decomposition , reduced - order models , navier - stokes equations .
our group has designed and implemented a unified accelerator application programming interface ( api ) called xal. xal is designed to aid in the development of science control applications for beam physics. accordingly, the xal api is a _ physics - centric _ software programming interface. the physics applications interact with a model of an accelerator that resides in computer memory. xal also contains the software infrastructure that creates the accelerator model. xal loads a text - based ( xml ) description of an accelerator and assembles software objects such that an accurate model of the accelerator exists in computer memory. xal is based on ual, the unified accelerator library. the original motivation for xal was to provide an accelerator - independent interface for applications to interact with i / o from a live accelerator. this allows physicists to write beam physics control applications ( orbit correctors, beam profile monitors, rf tuners, etc. ) to the xal api so that they can run on any accelerator. some pseudo - code illustrating the principles of an xal - based orbit correction application may illustrate the essence of the concept.

....
accelerator theaccel = xalfactory.newaccel("sns.xml")
bpm[] thebpms = theaccel.getnodesoftype(bpm)
horzdipole[] thecorrectors = theaccel.getnodesoftype(dch)
for each bpm in thebpms
    read bpm.avgpos() and set a corrector magnet accordingly
....

to aid in writing applications that take into account design values, the accelerator description file contains all design information for the accelerator. this condition allows, for example, a physics application to compare the design field of a quadrupole with its read - back ( runtime ) field. with all design information incorporated into a software model of an accelerator, we have discovered an excellent simulation engine. as long as the software accelerator has a convenient means for traversing beam - line devices in a spatially sequential manner, we can use design values along the way to simulate beam - dynamics. this scenario allows for a drastic departure from traditional accelerator simulation codes. traditionally, simulators have been isolated software products. they load some type of lattice description of an accelerator and apply predefined beam - dynamics to an initial beam. ultimately this design has led to huge codes ( to account for various beam - line element types ). further, these codes typically operate with only one type of simulation ( multi - particle or rms envelope, but not both ). the architecture presented here contains a novel approach to the simulation domain. it is our conjecture that the method presented here better captures reality, in that there is some sort of _ software beam _ actually traversing a software model of a real accelerator.
our approach is based upon the element - algorithm - probe design pattern. the core concept of this design pattern is the separation of beam - dynamics code from the actual beam - line elements. it is desirable to keep the code that corresponds to beam - line elements as simple as possible so that the application writer has a clean interface to a beam - line element. the element - algorithm - probe pattern enforces this concept by requiring beam - dynamics code to exist in a separate entity, called an ialgorithm. deferred until runtime is the binding of beam - dynamics to actual beam - line elements. this deployment strategy allows for conceptually correct simulations. first, it is truly modular: the three concepts, beam - line elements, beam - dynamics, and the beam, are compartmentalized into separate code. second, it is truly maintainable: supporting a new beam type or new beam - line element type does not cause code bloat. finally, it is truly extensible: via the mechanism of a java interface, various beam - dynamics algorithms can be written for the same type of beam - line element and switched at will at runtime. modularity, maintainability, and extensibility provide true power and flexibility to our architecture. it may help to understand the facets of java that we exploit in order to implement the element - algorithm - probe design pattern. at the center of the element - algorithm - probe pattern is the concept of a java _ interface _. essentially, an interface is a contract between a user and an implementor. the contract says that the implementor of an interface is required to provide an implementation of the methods defined in the interface. for example, consider the interface

....
public interface thermometer {
    public double gettemperature();
}
....

using this interface, a programmer can assume it is possible to perform operations on a thermometer no matter how the thermometer actually obtains the temperature. this is desirable because a thermometer implementor can change how the temperature is actually obtained ( if, say, a new sensor system was installed ) without requiring all thermometer users to recompile their code. we use the same idea with beam - dynamics code. beam - dynamics reside in files that _ implement _ ( the computer science term for acknowledging involvement in the contract from the implementor's point of view ) the ialgorithm interface. since the simulation engine knows how to do beam - dynamics calculations solely by interacting with ialgorithms, it is trivial to swap beam - dynamics algorithms at will. the ialgorithm interface looks like this.

....
public interface ialgorithm {
    public void propagate(ielement elem, iprobe probe);
    public Class legalelementtype();
    public Class legalprobetype();
}
....

conceptually, an ialgorithm implementor is required to provide an implementation of the method propagate() to modify the beam ( iprobe ) according to the beam - dynamics of the beam - line element ( ielement ).
in essence, all that the simulation engine knows about are the three data types ( all defined in interfaces ) ialgorithm, iprobe, and ielement. the beauty of this design is that there are separate code locations for beam - line elements, beam - dynamics, and the beam itself. the iprobe interface should ideally contain the bare minimum information to fully represent a beam. such beam information consists of beam current, beam charge, particle charge, particle rest energy, particle kinetic energy, etc. further, since a probe represents the state of the beam at a position in the beam - line, a probe also contains a beam - line - position attribute. the current iprobe specification serves the purpose of representing a beam for single particle, particle ensemble, and envelope simulations ( in both two and three dimensions ). figure 1 represents a suitable inheritance hierarchy of probe types to handle the aforementioned simulation types. it is important to note that there are various approaches toward simulating beam - dynamics. for example, accurate approaches may involve slicing nodes up into small pieces. an aggregate of approximations done on sufficiently small elements is typically more accurate than one overall approximation. however, normally this is only practical in elements that have special behavior. so the question arises: _ how are probes propagated through elements? _ it is the responsibility of the ialgorithm implementor to handle all beam - dynamics, including the propagation mechanism. sample propagation mechanisms will be presented later in this paper. however, keep in mind the most appropriate propagation mechanism when implementing algorithms for the particular problem at hand. we have already introduced the concept of the ialgorithm interface. now let us pursue a few more details regarding its implementation. the ialgorithm interface provides a generic way of assembling algorithms in a simulation engine. in practice, any particular ialgorithm implementation only makes sense in the context of a particular beam - line element type and probe type. for example, a hypothetical ialgorithm called quadparticlemapper would expect a quadrupole as its ielement and a particle as its iprobe. providing such specificity is the job of the legalelementtype() and legalprobetype() methods. an implementation of the quadparticlemapper could look like this.

....
public class quadparticlemapper implements ialgorithm {
    public Class legalelementtype() { return quadrupole.class; }
    public Class legalprobetype()   { return particle.class; }
    public void propagate(ielement elem, iprobe probe) {
        // quadrupole / particle beam dynamics
    }
}
....

by providing these methods, the simulation engine can do type checking upon algorithm binding, as sketched below.
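the following fragment is a minimal sketch, not part of the published xal api, of how an engine could use these methods to reject an ill - typed binding; the binder class, the exception choice and the repeated interface stubs are mine and are included only so the sketch is self - contained.

....
// illustrative binder (not part of xal); the interfaces are repeated as stubs.
interface ielement { }
interface ialgorithm {
    Class legalelementtype();
    Class legalprobetype();
}

public final class AlgorithmBinder {
    /** throws if the algorithm does not declare support for the node's type. */
    public static void bind(ielement node, ialgorithm alg) {
        if (!alg.legalelementtype().isInstance(node)) {
            throw new IllegalArgumentException(
                "algorithm " + alg.getClass().getName()
                + " cannot be bound to element of type "
                + node.getClass().getName());
        }
        // ... store the association; the exact bookkeeping (for example a map
        // from node to algorithm) is left open in this sketch
    }
}
....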
it would not make sense to bind this algorithm to a wirescanner; providing these methods, together with a check such as the one above, helps to avoid that condition. designing an actual simulation merely involves putting together elements, algorithms, and probes in a semantically meaningful way. it turns out that, to first order, the beam dynamics through a particular node type can be captured by a transfer matrix. this property allows for a straightforward means of simulating a particle traveling down a beam - line. an object - oriented approach would be to create a particlemapper class that transforms the particle probe by the simple vector - matrix multiplication , where is the coordinate vector of the particle at the start of the node and is the transfer matrix of the node ( a concrete drift - matrix example is sketched at the end of this passage ). further, can be obtained by the particlemapper via the use of an abstract method that is implemented by beam - dynamics algorithms for individual nodes ( quadrupoleparticlemapper, rfcavityparticlemapper, etc. ). a suitable class design can be seen in figure 2. to further illustrate some of these concepts, the basic layout of the particlemapper class looks like this.

....
abstract public matrix computetransfermatrix();

public void propagate(ielement pelem, iprobe pprobe) {
    // type-cast the probe and element to what we expect
    particle theprobe = (particle) pprobe;
    acceleratornode thenode = (acceleratornode) pelem;
    // do the vector-matrix multiplication
    theprobe.setcoords(computetransfermatrix().times(theprobe.getcoords()));
    // advance the probe the length of the node
    theprobe.advanceposition(thenode.getlength());
    return;
}
....

once the computetransfermatrix() operations are implemented for the node - specific dynamics, all that remains is writing a driver program. a driver program binds algorithms to nodes and injects the probe. here is a pseudo - code driver.

....
// instantiate the xal accelerator model
accelerator theaccel = xalfactory.newaccel("sns.xml")

// bind the algorithms
quadrupole[] thequads = theaccel.getnodesoftype(quad)
rfcavity[] thecavities = theaccel.getnodesoftype(rfc)
for each quad in thequads
    bind a quadparticlemapper instance to quad
for each rfcav in thecavities
    bind a rfcavityparticlemapper instance to rfcav

// instantiate a probe
particle p1 = new particle(initial conditions ...)

// run the probe down the beam-line
acceleratornode[] thenodes = theaccel.getallnodes()
for each node in thenodes
    node.propagate(p1)
....

and that is it! the particle probe will be transformed by each beam - line element according to the bound algorithm. note that the pseudo - code is a basic proof of concept and does not contain the code necessary to broadcast the probe's intermediate data at each increment to produce, for example, a plot. the single particle simulation can be applied to a two - dimensional case by only considering the first four elements of . further, the single particle simulation can be extended to a multi - particle simulation ( in two and three dimensions ) by constructing a container of particle probes and writing beam - dynamics algorithms that properly transform the collection. the only matter that complicates ( and complicate it does! ) a multi - particle simulation is the concept of space - charge.
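before turning to space charge, here is a small, self - contained illustration of the transfer - matrix propagation just described, using the standard 6 x 6 matrix of a field - free drift. the class is mine and is not the xal matrix class; the coordinate ordering ( x, x', y, y', z, dp/p ) is one common convention, and the longitudinal block is left as the identity for simplicity.

....
/** first-order single-particle propagation: multiply the 6-vector of phase
 *  space coordinates (x, x', y, y', z, dp/p) by the element transfer matrix.
 *  matrices for other element types would come from their respective
 *  computetransfermatrix() implementations. */
public final class TransferMaps {

    /** 6x6 transfer matrix of a field-free drift of length l (meters);
     *  the longitudinal 2x2 block is left as the identity here. */
    static double[][] drift(double l) {
        double[][] m = new double[6][6];
        for (int i = 0; i < 6; i++) m[i][i] = 1.0;
        m[0][1] = l;   // x += l * x'
        m[2][3] = l;   // y += l * y'
        return m;
    }

    /** coordsOut = m * coordsIn. */
    static double[] apply(double[][] m, double[] coords) {
        double[] out = new double[6];
        for (int i = 0; i < 6; i++)
            for (int j = 0; j < 6; j++) out[i] += m[i][j] * coords[j];
        return out;
    }

    public static void main(String[] args) {
        double[] particle = { 1e-3, 2e-3, 0, 0, 0, 0 };   // 1 mm offset, 2 mrad slope
        double[] after = apply(drift(0.5), particle);      // 0.5 m drift
        System.out.println(java.util.Arrays.toString(after)); // x grows to 2 mm
    }
}
....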
before biting off this task ,however , a presentation of another type of simulation that accounts for space - charge is warranted .the rms qualities of a beam can be represented by the 6x6 symmetric matrix that statistically expresses the boundaries of a beam in transverse , longitudinal , and phase space by using moments of the beam distribution .rms envelopes are convenient because applying beam - dynamics involves a simple matrix operation .namely , the same transfer matrix used in single particle simulations can propagate rms envelopes according to the conjugation the other important concept in this simulation is space charge .a matrix is a statistical representation of a beam , which is a multi - particle entity .therefore , each particle in the beam is aware ( electromagnetically ) of all other particles in the beam .it turns out that to the first order the effects of space charge can be captured in a matrix .while it may not be mathematically trivial to calculate the matrix , having the calculation in such a form makes the integration into our simulation engine simple .however it should not be overlooked that this quantity is very important to the correctness of simulation .the envelope simulation is more complex than a single - particle simulation in that we will propagate envelopes through elements using more than one propagation mechanism .specifically , we may be able to compute a better approximation of behavior through quadrupoles than rf cavities .this condition is due to the fact that the matrix for a quadrupole adheres to the semi - group property . or where is the length of the quadrupole being considered .to more accurately consider space charge , we take advantage of the semi - group property of the transfer matrices . in the propagate ( )method of the semigroupenvelopemapper ( see figure 3 ) we subsection the node ( e.g. , a quadrupole ) into slices of length where is the length of the quadrupole .then we run the probe through these slices , applying space charge kicks after every subsection ( see figure 4 ) .since rf cavity transfer ( ) matrices do not in general adhere to a semi - group property , we are forced to take a more simplistic approach toward transforming the envelope .we will slice the node in two , treating each half as a drift - space ( to account for space charge ) and hit the envelope in the middle of the node with the numerically approximated matrix ( see figure 5 ) .as a final exercise it will be useful to consider the design of a multi - particle simulation .the true complication of designing a multiple - particle ( ensemble ) simulation is the computation of space - charge effects .unfortunately , to model multiple particles , space - charge effects can not be accurately captured by a transfer matrix . on the other hand ,the architecture outlined in this paper keeps the details of the space - charge calculations from interfering with code cleanliness . .... .... 
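before turning to the ensemble case , the slice - and - kick procedure described above for the semigroupenvelopemapper can be sketched in code . the reduction to a 2x2 sigma matrix , the drift transfer matrix , and the linear thin - lens form of the space - charge kick are deliberate simplifications ( the real calculation uses the full matrices and a properly computed space - charge matrix ) , so treat this purely as an illustration of the propagation structure .

....
// sketch of rms-envelope propagation sigma' = m sigma m^T with a thin space-charge
// kick applied after each slice ; the 2x2 reduction and the kick model are assumptions .
public class EnvelopeSliceSketch {

    static double[][] multiply(double[][] a, double[][] b) {
        double[][] r = new double[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++) r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static double[][] transpose(double[][] a) {
        return new double[][]{ { a[0][0], a[1][0] }, { a[0][1], a[1][1] } };
    }

    // the conjugation used for envelope transport
    static double[][] conjugate(double[][] m, double[][] sigma) {
        return multiply(multiply(m, sigma), transpose(m));
    }

    static double[][] drift(double dl) {                        // drift of length dl
        return new double[][]{ { 1.0, dl }, { 0.0, 1.0 } };
    }

    // thin defocusing kick standing in for the linearized space-charge force over dl ;
    // ksc is a perveance-like strength parameter, not the actual xal formula
    static double[][] spaceChargeKick(double ksc, double dl) {
        return new double[][]{ { 1.0, 0.0 }, { ksc * dl, 1.0 } };
    }

    public static void main(String[] args) {
        double[][] sigma = { { 1.0e-6, 0.0 }, { 0.0, 1.0e-6 } };  // initial rms envelope
        double nodeLength = 0.4, ksc = 0.05;
        int nSlices = 40;
        double dl = nodeLength / nSlices;
        for (int i = 0; i < nSlices; i++) {                     // subsection, transport, kick
            sigma = conjugate(drift(dl), sigma);
            sigma = conjugate(spaceChargeKick(ksc, dl), sigma);
        }
        System.out.printf("final <x^2> = %.3e , <x x'> = %.3e , <x'^2> = %.3e%n",
                          sigma[0][0], sigma[0][1], sigma[1][1]);
    }
}
....

the rf - cavity variant described above differs only in the slicing policy : two drift half - slices around a single numerically approximated kick in the middle of the node . with the envelope machinery in this form , we can now turn to the ensemble simulation .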
the two core concepts of a multi - particle ( ensemble ) simulation are * calculation of the electric self - fields of the ensemble * using the calculated fields to update the particle coordinates .there are various approaches that can be taken for both tasks .all that we attempt to show here is that by correctly isolating these concepts , a clean software architecture can be maintained .namely , an ensemble probe should encapsulate the logic necessary to obtain the electric self fields of the ensemble .that being the case , various ensemble probe implementations could be swapped at will to employ different field calculation techniques .for example , many electric field calculation techniques involve solving poisson s equation to obtain the electric potential of the ensemble . by hiding this code from the simulation engine (it is contained within the ensemble probe implementation ) , the implementor could exploit parallel processing facilities ( see figure 6 ) . by moving the calculation of electric fields out of the beam - dynamics code ,the beam - dynamics algorithm developer is free to choose space - charge consideration techniques with minimal impact to code clarity .one may decide to take the `` thin lens kick '' approach that has been used previously in this paper .one may alternatively decide to apply a `` trajectory integration '' based approach .the key point here is that by separating codes into their logical components allows for a high degree of flexibility in simulation technique .we are enthusiastic to report that the results obtained in the rms envelope simulation have been validated against trace3d figure 7 shows agreement between simulation results of the sns medium energy beam transport ( mebt ) using both trace3d and the xal simulation engine .it is encouraging that a problem domain with so many interdependencies(particle physics ) can be simulated with a clean architecture .as we move toward the future , we are anticipating the ability to implement model reference control techniques .that is , within the xal model there is access to a live accelerator and a simulated accelerator .having both at hand allows the comparison of live behavior with simulated behavior to develop control strategies .the key to effectively implementing an environment conducive to model reference control is architectural discipline when designing both the i / o and simulation aspects of xal . as long as the interface to the two are respectively clean , hybridization of the two will be a straightforward extension . 2 j. galambos , c.m .chu , t.a .pelaia , a. shishlo , c.k .allen , n. pattengale .`` sns application programming environment '' , epac 2002 n. malitsky and r. talman .`` unified accelerator libraries '' , aip 391(1996 ) n. malitsky and r. talman . `` the framework of unified accelerator libraries '' , icap 1998 c. k. allen and n. d. pattengale .`` simulation of bunched beams with ellipsoidal symmetry and linear space charge effects '' , lanl technical report 2002 k. crandall , d. p. rusthoi , `` trace 3-d documentation , '' los alamos national laboratory report la - ur-97 - 886 , may 1997 .
a modular , maintainable and extensible particle beam simulation architecture is presented . design considerations for single particle , multi particle , and rms envelope simulations ( in two and three dimensions ) are outlined . envelope simulation results have been validated against trace3d . hybridization with a physics - centric control - system abstraction provides a convenient environment for rapid deployment of applications employing model - reference control strategies .
quantum filtering theory was pioneered by belavkin in remarkable papers and was more lucidly reconsidered by bouten _ et al .this theory is now recognized as a very important basis for the development of various engineering applications of the quantum theory such as quantum feedback control , quantum dynamical parameter estimation , and quantum information processing .we here provide a brief summary of the quantum filtering theory by using the same notations as those in .let us consider an open system in contact with a field , particularly a vacuum electromagnetic field .this interaction is completely described by a unitary operator that obeys the following quantum stochastic differential equation ( qsde ) termed the hudson - parthasarathy equation : \hat{u}_t,~~ \hat{u}_0=\hat{i},\ ] ] where and are the system operator and hamiltonian , respectively .the quantum wiener process , which is a field operator , satisfies the following quantum ito rule : the time evolution of any system observable under the interaction ( [ hp - eq ] ) is described by the unitary transformation .the infinitesimal change in this transformation is calculated as )d\hat{b}_t + j_t([\hat{x } , \hat{c}])d\hat{b}_t{^{\dagger}}.\ ] ] here , we have defined + \hat{c}{^{\dagger}}\hat{x}\hat{c}-{\frac{1}{2}}\hat{c}{^{\dagger}}\hat{c}\hat{x } -{\frac{1}{2}}\hat{x}\hat{c}{^{\dagger}}\hat{c} ] for all and .this implies that the observation is a classical stochastic process .( for this reason , we omit the hat " on , but note that it itself is not a -number . )it is also noteworthy that satisfies the _ quantum nondemolition ( qnd ) _condition , =0~\forall s\leq t ] , where and represent the system quantum state and the field vacuum state , respectively .it should be noted that the following two conditions must hold in order for the above quantum conditional expectation to be defined : first , is a commutative algebra , and second , is included in the commutant of . but these conditions are actually satisfied as shown above .consequently , the optimal filter for the system dynamics ( [ qsde ] ) is given by the change in as follows : \big[dy_t-\pi_t(\hat{c}+\hat{c}{^{\dagger}})dt\big ] .\end{aligned}\ ] ] we can further incorporate some control terms into the above equation . typically , a bounded real scalar control input , which should be a function of the observations up to time , is included in the coefficients of the hamiltonian .we lastly remark that the conditional system state is associated with the system observable by the relation , which leads to the dynamics of termed the stochastic master equation .a key assumption in the filtering theory is that perfect knowledge about the system dynamics model ( [ qsde ] ) is required in order that the filter ( [ filter - eq ] ) provides the best estimate of the ( controlled ) system observable .however , this assumption is generally unrealistic , and we depend on only an approximate model of the system .this not only violates the optimality of the estimation but also possibly leads to the instability of the estimation error dynamics .this problem is well recognized in the classical filtering theory and various alternative estimators for uncertain systems , which are not necessarily optimal but robust to the uncertainty , have been proposed .( we use the term filter " to refer to only the optimal estimator . 
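several of the displayed equations in the passage above were lost in extraction . as far as they can be reconstructed from the surviving latex fragments , they correspond to the standard forms of the hudson - parthasarathy equation , the vacuum ito table , the homodyne observation process , and the belavkin filter ; the following is that ( hedged ) reconstruction :

d\hat{U}_t=\Big[\big(-{\rm i}\hat{H}-\tfrac{1}{2}\hat{C}^{\dagger}\hat{C}\big)dt+\hat{C}\,d\hat{B}_t^{\dagger}-\hat{C}^{\dagger}\,d\hat{B}_t\Big]\hat{U}_t,\qquad \hat{U}_0=\hat{I},

d\hat{B}_t\,d\hat{B}_t^{\dagger}=dt,\qquad d\hat{B}_t^{\dagger}\,d\hat{B}_t=d\hat{B}_t\,d\hat{B}_t=d\hat{B}_t^{\dagger}\,d\hat{B}_t^{\dagger}=0,

dj_t(\hat{X})=j_t\big(\mathcal{L}(\hat{X})\big)dt+j_t\big([\hat{C}^{\dagger},\hat{X}]\big)d\hat{B}_t+j_t\big([\hat{X},\hat{C}]\big)d\hat{B}_t^{\dagger},\qquad
\mathcal{L}(\hat{X})={\rm i}[\hat{H},\hat{X}]+\hat{C}^{\dagger}\hat{X}\hat{C}-\tfrac{1}{2}\hat{C}^{\dagger}\hat{C}\hat{X}-\tfrac{1}{2}\hat{X}\hat{C}^{\dagger}\hat{C},

dy_t=j_t(\hat{C}+\hat{C}^{\dagger})dt+d\hat{B}_t+d\hat{B}_t^{\dagger},

d\pi_t(\hat{X})=\pi_t\big(\mathcal{L}(\hat{X})\big)dt
+\big[\pi_t(\hat{C}^{\dagger}\hat{X}+\hat{X}\hat{C})-\pi_t(\hat{C}+\hat{C}^{\dagger})\pi_t(\hat{X})\big]
\big[dy_t-\pi_t(\hat{C}+\hat{C}^{\dagger})dt\big].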
) for example , in a _ risk - sensitive _ control problem in which an exponential - of - integral cost function is minimized with respect to the control input , it is known that the corresponding risk - sensitive observer enjoys enhanced robustness property to a certain type of system uncertainty .moreover , by focusing on specific uncertain systems , it is possible to design a _ robust observer _ such that the variance of the estimation error is guaranteed to be within a certain bound for all admissible uncertainties .it is considered that the above mentioned robust estimation methods are very useful in the quantum case since it is difficult to specify the exact parameters of a quantum system in any realistic situation : for instance , the total spin number of a spin ensemble . with this background , james has developed a quantum version of the risk - sensitive observer for both continuous and discrete cases and applied it to design an optimal risk - sensitive controller for a single - spin system .we should remark that , however , the above papers did not provide an example of a physical system such that the quantum risk - sensitive observer is actually more robust than the nominal optimal filter .therefore , in this paper , we focus on the robust observer and develop its quantum version .more specifically , we consider a class of quantum linear systems subjected to time - varying norm - bounded parametric uncertainties and obtain a quantum robust observer that guarantees a fixed upper bound on the variance of the estimation error .although in the linear case much of classical control theory applies to quantum systems , the robust observer obtained in this paper does not have a classical analogue in the following sense .first , unlike the classical case , the error covariance matrix must be symmetrized because of the noncommutativity of the measured system observables .second , due to the unitarity of quantum evolution , the uncertainties are included in the system representation in a different and more complicated way than those in the classical system considered previously ; as a result , both the structure of the quantum robust observer and the proof to derive it differ substantially from those found in .the other contribution of this paper is that it actually provides a quantum system such that both the robust observer and the risk - sensitive observer show better performance in the estimation error than the nominal optimal filter .this paper is organized as follows .section ii provides a basic description of general linear quantum systems , in which case the optimal filter ( [ filter - eq ] ) is termed the _quantum kalman filter_. in addition , we derive a linear risk - sensitive observer . in both cases , an explicit form of the optimal control input is provided . the quantum version of the robust observer is provided in section iii .section iv discusses robustness properties of the proposed robust observer and the risk - sensitive observer by considering a typical quantum control problem feedback cooling of particle motion .section v concludes the paper .we use the following notations : for a matrix , the symbols and represent its transpose and elementwise complex conjugate of , i.e. 
, and , respectively ; these rules can be applied to any rectangular matrix including column and row vectors .a hermitian matrix is positive semidefinite if for any vector ; the inequality represents the positive semidefiniteness of .in this paper , we consider a single one - dimensional particle interacting with a vacuum electromagnetic field . the extension to the multi - particle case is straightforward .in particular , we focus on the particle position and momentum .the system hamiltonian and operator are respectively given by where ^{{\mathsf t}} ] and noting the commutation relation ={{\rm i}}\hbar ] .the output equation ( [ output ] ) becomes it follows from eq .( [ filter - eq ] ) that the best estimate of the system observable , ^{{\mathsf t}}\in{\mathbb r}^2 ] is given by where satisfies the following riccati differential equation \big[\frac{1}{\hbar}f v^{\mu}_t + { \rm im}(\tilde{c})\sigma\big ] k^{\mu}_t \nonumber \\ & & \hspace*{-0.8em } \mbox { } + \mu(k^{\mu}_t v^{\mu}_t m + m v^{\mu}_t k^{\mu}_t)=o , \nonumber \\ & & \hspace*{-1em } k_t^{\mu}={\frac{1}{2}}\big [ ( i-\mu nv^{\mu}_t)^{-1}n+n(i-\mu v^{\mu}_t n)^{-1 } \big ] .\end{aligned}\ ] ] therefore , is a separated controller composed of the solutions to the observer equation ( [ risk - filter ] ) and the two coupled riccati equations ( [ risk - riccati-1 ] ) and ( [ risk - riccati-2 ] ) . it is notable that these set of equations are identical to those in the quantum lqg optimal control problem when the risk parameter is zero . in this sense ,the lqg optimal controller is sometimes referred to as the linear _ risk - neutral _ controller .this paper deals with a linear quantum system such that specific uncertainties are included in the system hamiltonian and the system operator as follows : where the real symmetric matrix and the complex row vector represent time - varying parametric uncertainties that satisfy the following bounds : here , the nonnegative scalar constants , and are known ( denotes the identity matrix ) . by defining ,\ ] ] the dynamics of the system observable ^{{\mathsf t } }= [ j_t(\hat{q})~j_t(\hat{p})]^{{\mathsf t}} ] represents the estimate of the system observable .note that , as in the case of the risk - sensitive observer , is not necessarily the optimal estimate of .furthermore , we here assume that the control input is fixed to a linear function of the observer state , , where is a row vector with the size .then , an explicit form of that enjoys a guaranteed estimation error bound is provided in the following theorem .we remark again that the theorem can be easily generalized to the multi - particle case .+ + * theorem 1 .* suppose there exist positive scalars and such that the following two coupled riccati equations have positive definite solutions and : where the matrices and and the vector are defined by the definition of the matrices are given in appendix a : eqs .( [ q1 ] ) , ( [ q2 ] ) , and ( [ q3 ] ) .the scalars and are given by and , respectively .then , the observer ( dy_t - f'x_t dt)\end{aligned}\ ] ] generates the estimate ^{{\mathsf t}} ] , where and satisfy eqs .( [ uncertain - qsde ] ) and ( [ notdetermined ] ) , respectively . 
then , obeys the following linear qsde : where , \delta\bar{a}_t = \left [ \begin{array}{cc } \delta a_t & o \\ \delta a_t - k\delta f_t & o \\\end{array } \right ] , \nonumber \\ & & \hspace*{-1.2em } \bar{b}_\delta = \left [ \begin{array}{c } { { \rm i}}\sigma(\tilde{c}+\delta\tilde{c}_t)^{{\mathsf t } } \\{ { \rm i}}\sigma(\tilde{c}+\delta\tilde{c}_t)^{{\mathsf t}}-k \\ \end{array } \right ] . \nonumber\end{aligned}\ ] ] let us now consider the symmetrized covariance matrix of ; .this satisfies the following generalized uncertainty relation : .\ ] ] noting and the quantum ito rule , the time evolution of is calculated as \nonumber \\ & & \hspace*{2em } \mbox{}+\big[\bar{v}_t+\frac{{{\rm i}}\hbar}{2}\bar{\sigma}\big ] ( \bar{a}+\delta\bar{a}_t)^{{\mathsf t } } + \hbar\bar{b}_\delta^*\bar{b}_\delta^{{\mathsf t } } \nonumber \\ & & \hspace*{0.8em } = ( \bar{a}+\delta\bar{a}_t)\bar{v}_t + \bar{v}_t(\bar{a}+\delta\bar{a}_t)^{{\mathsf t } } + \bar{d}+\delta\bar{d}_t .\nonumber\end{aligned}\ ] ] the matrices and are given by -\hbar\left [ \begin{array}{cc } o & mk^{{\mathsf t } } \\km^{{\mathsf t } } & km^{{\mathsf t}}+mk^{{\mathsf t}}-kk^{{\mathsf t } } \\\end{array } \right ] , \nonumber \\ & & \hspace*{-1.1em } \delta\bar{d}_t = \left [ \begin{array}{cc } \delta d_t & \delta d_t \\ \delta d_t & \delta d_t \\ \end{array } \right ] -\hbar\left [ \begin{array}{cc } o & \delta m_t k^{{\mathsf t } } \\k\delta m_t^{{\mathsf t } } & k\delta m_t^{{\mathsf t}}+\delta m_t k^{{\mathsf t } } \\\end{array } \right ] , \nonumber\end{aligned}\ ] ] where \sigma^{{\mathsf t } } , \nonumber \\ & & \hspace*{-1em } m:=\sigma^{{\mathsf t}}{\rm im}(\tilde{c})^{{\mathsf t}},~~ \delta m_t:=\sigma^{{\mathsf t}}{\rm im}(\delta\tilde{c}_t)^{{\mathsf t}}. \nonumber\end{aligned}\ ] ] our goal is to design and such that the condition is satisfied for all admissible uncertainties ; in this case , it follows from the lemma shown in appendix b that the relation is satisfied . for this purpose , we utilize the following matrix inequalities : for all and the uncertain matrices satisfying eqs .( [ uncertain - bound-1 ] ) and ( [ uncertain - bound-2 ] ) , we have the proof of the above inequalities and the definition of the matrices are given in appendix a. therefore , the condition ( [ proof - ineq ] ) holds for all admissible uncertainties if there exists a positive definite matrix such that the following riccati inequality holds : especially we here aim to find a solution of the form with and denoting positive definite matrices .then , partitioning the matrix into with matrices , we obtain let us now assume that the riccati equation ( [ riccati1 ] ) , which is equal to , has a solution .then , the equality yields .moreover , is then calculated as \big[k-\frac{1}{\mu_2}p_2 f'\mbox{}^{{\mathsf t } } -\frac{\mu_1}{\mu_2}m \big]^{{\mathsf t } } \nonumber \\ & & \hspace*{-1em } \mbox { } -\frac{1}{\mu_2}(p_2f'\mbox{}^{{\mathsf t}}+\mu_1 m ) ( p_2f'\mbox{}^{{\mathsf t}}+\mu_1 m)^{{\mathsf t } } \nonumber \\ & & \hspace*{-1em } \mbox { } -p_2(l^{{\mathsf t}}b^{{\mathsf t}}p_1^{-1}+p_1^{-1}bl)p_2 .\nonumber\end{aligned}\ ] ] hence , the optimal that minimizes the maximum eigenvalue of is given by .\ ] ] then , the existence of a solution in eq .( [ riccati2 ] ) directly implies . as a result, we obtain , which leads to the objective condition ( [ proof - ineq ] ) .therefore , according to the lemma in appendix b , we have . 
then , as the third and fourth diagonal elements of the matrix are respectively given by and , we obtain eq .( [ upper - bound ] ) . + + the basic idea to determine the form of the quantum robust observer ( [ robust - filter ] ) is found in several papers that deal with uncertain linear classical systems .however , the structure of the quantum robust observer differs substantially from that of the classical robust observer derived in .the reason for this is as follows .first , unlike the classical case , the covariance matrix of the augmented system ( [ augmented ] ) , which is used to express the performance of the robust observer , must be symmetrized in order for to be a physical observable .second , the uncertainty appears both in the drift matrix and the diffusion matrix in complicated ways ; this is because , as has been previously mentioned , the system matrices are strongly connected with each other due to the unitarity of quantum evolution .the classical correspondence to the uncertain quantum system ( [ uncertain - qsde ] ) and ( [ uncertain - output ] ) has not been studied . for this reason , the resulting robust observer ( [ robust - filter ] ) and the proof to derive it do not have classical analogues .actually , for standard classical systems whose system matrices can be specified independently of one another , the process shown in appendix a is unnecessary .we now present an important property that the quantum robust observer should satisfy : when the uncertainties are small or zero , the robust observer should be close or identical to the optimal quantum kalman filter , respectively .this natural property is proved as follows . + + * proposition 2 .* consider the case where the uncertainties converge to zero : and .then , there exist parameters and such that the robust observer ( [ robust - filter ] ) converges to the stationary kalman filter ( [ linear - filter ] ) with satisfying the riccati equation in eq .( [ riccati ] ) . + + * proof .* let us consider the positive parameters as follows : in this case , for example , the matrix is calculated as which becomes zero as , and .similarly , in these limits , we have , and .then , since eq .( [ riccati1 ] ) is equivalently written as the limit implies that the solution of the above equation satisfies .we then obtain , and .therefore , in this case , eq .( [ riccati2 ] ) with is identical to the riccati equation in eq .( [ riccati ] ) .the robust observer ( [ robust - filter ] ) then converges to the stationary kalman filter ( [ linear - filter ] ) with . + + the above proposition also states that we can find the parameters and such that the robust observer ( [ robust - filter ] ) approximates the stationary kalman filter when the uncertainties are small , because the solutions of the riccati equations ( [ riccati1 ] ) and ( [ riccati2 ] ) are continuous with respect to the above parameters .we lastly remark on the controller design . 
in theorem 1 ,we have assumed that the control input is a linear function .this is a reasonable assumption in view of the case of the lqg and risk - sensitive optimal controllers .hence , it is significant to study the optimization problems of the vector such that some additional specifications are further achieved .for example , that minimizes the upper bound of the estimation error , , is highly desirable .however , it is difficult to solve this problem , since the observer dynamics depends on in a rather complicated manner .therefore , the solution to this problem is beyond the scope of this paper .the main purpose of this section is to show that there actually exists an uncertain quantum system such that both the robust observer and the risk - sensitive observer perform more effectively than the kalman filter , which is no longer optimum for uncertain systems .moreover , we will carry out a detailed comparison of the above three observers by considering each estimation error .this is certainly significant from a practical viewpoint .first , let us describe the system .the control objective is to stabilize the particle position at the origin by continuous monitoring and control . in other words, we aim to achieve with a small error variance .the system observable is thus given by .\ ] ] for the hamiltonian part , , we assume the following : the control hamiltonian is proportional to the position operator : ,\ ] ] where is the input , and the free hamiltonian is of the form , where denotes the potential energy of the particle . in general , the potential energy can assume a complicated structure .for example , doherty __ have considered a nonlinear feedback control problem of a particle in a double - well potential . since the present paper deals with only linear quantum systems , we approximate to the second order around the origin and consider a spatially local control of the particle . in particular , we examine the following two approximated free hamiltonians : former is sometimes referred to as an anti - harmonic oscillator , while the latter is a standard harmonic oscillator approximation .the system matrices corresponding to are respectively given by ,~~ g_2=\left [ \begin{array}{cc } 0.05 & 0 \\ 0 & 2 \\\end{array } \right].\ ] ] in the case of the harmonic oscillator hamiltonian , the system is autonomously stable at the origin .in contrast , in the case of the anti - harmonic oscillator , the system becomes unstable when we do not invoke any control .however , it is observed that the control hamiltonian ( [ fb - hamiltonian ] ) with an appropriate control input can stabilize the system .an example is the lqg optimal controller with the following tuning parameters of the cost function ( [ lqg ] ) : ,~~ r=\frac{1}{5},~~ n=\left [ \begin{array}{cc } 2 & 0 \\ 0 & 0 \\ \end{array } \right].\ ] ] figure i illustrates an estimate of the particle position in both the unstable autonomous trajectory and the controlled stable trajectory ; in the latter case , the control objective is actually satisfied .an example of the unstable autonomous trajectory ( dot line ) and the controlled stable trajectory ( solid line ) shown by . ] second , we describe the uncertainty included in the system .in particular , we consider two situations in which uncertain hamiltonians and are added to and , respectively .the unknown time - varying parameter is bounded by the known constant , i.e. 
, ] and ,~~ \bar{\theta}_2 = \left [ \begin{array}{cc } \tilde{c}_2 & 0^{{\mathsf t } } \\-\tilde{c}_1 & 0^{{\mathsf t } } \\ \end{array } \right ] , \nonumber \\ & & \hspace*{-1em } \delta\bar{j}_1 = \left [ \begin{array}{c } \delta\tilde{c}_2 \\\delta\tilde{c}_1 \\\end{array } \right],~~ \delta\bar{j}_2 = [ \delta\tilde{c}_1^{{\mathsf t}}~~ \delta\tilde{c}_2^{{\mathsf t } } ] .\nonumber\end{aligned}\ ] ] we here denoted ] and ] satisfies , where , \nonumber \\ & & \hspace*{-0.66em } \bar{b}_o = \left [ \begin{array}{c } { { \rm i}}\sigma\tilde{c}^{{\mathsf t } } \\{ { \rm i}}\sigma\tilde{c}^{{\mathsf t}}-b_o \\ \end{array } \right ] .\nonumber\end{aligned}\ ] ] let be the symmetrized covariance matrix of . as mentioned in the proof of theorem 1 , this matrix satisfies . by using this relation ,we obtain , where is given by \nonumber \\ & & \hspace*{0em } -\hbar\left [ \begin{array}{c|c } o & [ b_o{\rm im}(\tilde{c})\sigma]^{{\mathsf t } } \\\hline b_o{\rm im}(\tilde{c})\sigma & b_o{\rm im}(\tilde{c})\sigma + [ b_o{\rm im}(\tilde{c})\sigma]^{{\mathsf t}}-b_ob_o^{{\mathsf t } } \\\end{array } \right ] .\nonumber\end{aligned}\ ] ] as a result , the variance of the estimation error is given by , where is the stationary solution of the following lyapunov equation : the estimation error between the true system and the kalman filter designed for the nominal system is immediately evaluated by setting in the above discussion .
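the stationary lyapunov equation quoted at the end of the appendix can be solved numerically in several standard ways . a minimal sketch is given below ; it simply integrates dv / dt = a v + v a^t + d forward in time until it settles , which converges whenever a is stable , and the 2x2 matrices used are placeholders for the augmented ( system plus observer ) matrices of the text .

....
// sketch : steady-state solution of a v + v a^T + d = 0 obtained by integrating
// dv/dt = a v + v a^T + d forward in time ; assumes a is hurwitz (stable) .
public class LyapunovSketch {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] r = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++) r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static double[][] transpose(double[][] a) {
        int n = a.length;
        double[][] r = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) r[j][i] = a[i][j];
        return r;
    }

    static double[][] steadyState(double[][] a, double[][] d, double dt, int steps) {
        int n = a.length;
        double[][] v = new double[n][n];                 // start from v = 0
        double[][] at = transpose(a);
        for (int s = 0; s < steps; s++) {
            double[][] av = multiply(a, v);
            double[][] vat = multiply(v, at);
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    v[i][j] += dt * (av[i][j] + vat[i][j] + d[i][j]);
        }
        return v;
    }

    public static void main(String[] args) {
        // toy 2x2 matrices standing in for the augmented (system + observer) matrices
        double[][] a = { { -1.0, 0.5 }, { 0.0, -2.0 } };
        double[][] d = { { 0.2, 0.0 }, { 0.0, 0.1 } };
        double[][] v = steadyState(a, d, 1.0e-3, 200000);
        // the estimation-error variance corresponds to a diagonal block of the solution
        System.out.printf("v = [[%.4f, %.4f], [%.4f, %.4f]]%n",
                          v[0][0], v[0][1], v[1][0], v[1][1]);
    }
}
....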
in the theory of quantum dynamical filtering , one of the biggest issues is that the underlying system dynamics represented by a quantum stochastic differential equation must be known exactly in order for the corresponding filter to provide optimal performance ; however , this assumption is generally unrealistic . therefore , in this paper , we consider a class of linear quantum systems subjected to time - varying norm - bounded parametric uncertainties and then propose a robust observer such that the variance of the estimation error is guaranteed to be within a certain bound . although in the linear case much of classical control theory can be applied to quantum systems , the quantum robust observer obtained in this paper does not have a classical analogue due to the system s specific structure with respect to the uncertainties . moreover , by considering a typical quantum control problem , we show that the proposed robust observer is fairly robust against a parametric uncertainty of the system even when the other estimators , namely the optimal kalman filter and the risk - sensitive observer , fail in the estimation .
among the main differences between quantum and classical information is the fundamental impossibility to exactly duplicate an unknown quantum state .this was established by the no - cloning theorem of dieks and wootters and zurek ; for a review , see ref . .while _ exact _cloning is thus impossible , it remains an important goal to _ approximately _ clone quantum states .this possibility , which is particularly important for quantum communication and cryptography , was first discussed by buek and hillery , who showed that it is possible to create copies ( approximate clones ) of unknown quantum states with a quality that does not depend on the initial state .approximate cloning can be optimized in different ways . in so - called asymmetric cloning , the amount of information transferred from the input state to the copy is an adjustable parameter .the quality of the copy and the distortion that the cloning process causes on the original system both depend on this parameter : if the quality of the copy increases , the distortion of the original necessarily increases simultaneously .this is quantified by the fidelity of the two output systems , which is defined as the overlap of these states with the input state .this tradeoff relates , e.g. , the amount of information that an eavesdropper can extract from a quantum communication channel to the error rate of the transmitted information .asymmetric quantum cloning was first proposed for copying a single qubit to a single copy qubit , and subsequently extended to arbitrary dimensions ( including the continuous case ) .an implementation of universal asymmetric cloning in an optical experiment was proposed locally and at a distance .an experimental realization of asymmetric cloning was reported by zhao _et al_. using two entangled photon pairs .the quality of the cloning can be improved if the initial state is restricted to part of the full hilbert space .an example of this state - dependent cloning is phase - covariant cloning , where the input state is an equal - weight superposition of two basis states .the goal is then to optimally clone the state in such a way that the phase information is conserved . in this paper , we construct a two - qubit quantum logic circuit that implements the optimal asymmetric phase - covariant cloning for arbitrary input phase .our cloning machine does not require any ancilla qubits and uses only two gate operations .the cloning process is implemented experimentally in an nmr system , using nuclear - spin qubits . forthe gate operations we use controlled geometrical phase gates and demonstrate the trade - off in fidelity for the two output qubits .in the following , we consider phase - covariant cloning : the original qubit to be cloned is in an equal - weight superposition of the two basis states , with an unknown phase difference .we clone this state onto a second qubit that is originally in state .the cloning is acomplished by a unitary operation acting on the initial product state .the operation can be specified by its effect on the two orthogonal initial states and : or by its matrix representation this operation is equivalent ( up to local operation and phases ) to , and the rotation angle can be used to adjust how much information is transferred to the second qubit . 
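the displayed definitions of the cloning transformation were lost in extraction . one common parametrization of the asymmetric phase - covariant cloner , which is consistent with the fidelity values and the quarter - circle trade - off quoted later in this text but should be read as an assumed reconstruction rather than the authors ' exact convention , is

U|0\rangle_a|0\rangle_b=|00\rangle,\qquad
U|1\rangle_a|0\rangle_b=\cos\tfrac{\theta}{2}\,|10\rangle+\sin\tfrac{\theta}{2}\,|01\rangle .

for an equatorial input |\psi\rangle=\big(|0\rangle+e^{{\rm i}\phi}|1\rangle\big)/\sqrt{2} this gives , independently of \phi ,

F_a=\langle\psi|\rho_a|\psi\rangle=\tfrac{1}{2}\big(1+\cos\tfrac{\theta}{2}\big),\qquad
F_b=\langle\psi|\rho_b|\psi\rangle=\tfrac{1}{2}\big(1+\sin\tfrac{\theta}{2}\big),

so that \theta=0 leaves the original untouched , \theta=\pi/2 gives the symmetric point F_a=F_b=\tfrac{1}{2}+\tfrac{1}{2\sqrt{2}}\approx 0.854 , \theta=\pi swaps the roles of the two qubits , and the pairs ( F_a , F_b ) lie on the quarter circle (F_a-\tfrac{1}{2})^2+(F_b-\tfrac{1}{2})^2=\tfrac{1}{4} discussed below .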
, while qubit b is the blank one which initially in state .the unitary operation denotes and denotes .,width=7 ] after the cloning operation , the partial density operators for the two qubits are the state - overlap between the original and the two output qubits are the choice of the angle thus determines how much information is transferred from qubit a to qubit b : for , the overlap of qubit a with the initial state is 1 , while the overlap of the copy qubit is just the random value .when , we obtain the case of optical symmetric cloning , with , which has been shown to be the optimal value for symmetric phase - covariant cloning . for ,the information is transferred completely to the copy qubit , with .compared to the logic circuit of the symmetric cloning machine proposed in ref , this scheme needs fewer logic gates .we therefore expect it to perform better in practice , being less affected by experimental imperfections , such as errors in rotation angles of radio - frequency pulses .geometric quantum phases have the remarkable property that they depend only on global parameters ( e.g. the area of a circuit ) and are therefore not sensitive to some local variations of the trajectory .it was therefore suggested , that quantum gate operations using geometric gates may be less susceptible to experimental imperfections and therefore yield higher fidelity we therefore used geometric phase gates to implement the two controlled gate operations required for the cloning operation ( [ e : un ] ) ( see fig .1 ) . we first discuss the relevant operation for a single qubit and then extend the procedure to the controlled operations . within the two - level system , we consider two orthogonal states and , which undergo a cyclic evolution described by the operator : .the parameter is thus the total phase difference of the two states acquired during this circuit . in the computational basis ( , ), the cyclic states can be written as where are the spherical coordinates of the state vector on the bloch sphere .for an arbitrary input state with , after the cyclic evolution for the state , the output state is ,where this operation becomes a purely geometric gate operation if the dynamic contribution to the total phase vanishes .the cyclic trajectories used in the experiment . is transported along the path a - b - c - a , and transported along the path a-b-c-a.,width=6 ] in the experiment , we use a nonadiabatic geometric phase and transform the input states along geodesic circuits on the bloch sphere , as shown in fig .2 . here , the cyclic states are . for the circuit shown in fig .2 , the solid angle subtended by the circuit is equal to , the rotation angle during the second part of the circuit .the geometric phase becomes thus . for the circuit of fig .2 , we may substitute the above values for and .the propagator becomes then we now apply this geometrical gate to controlled operations in a two - qubit system where qubit a is the control qubit , while qubit b undergoes the geometric circuit if qubit a is in state but remains invariant if the control qubit is in state .the hamiltonian of the 2-qubit system is ( in angular frequency units ) for the subsystem of qubit b , we can write the reduced hamiltonian i^b_z , \ ] ] where is the eigenvalue of ( ) and the corresponding computational value ( ) .if we use a rotating frame with a frequency of , the hamiltonian vanishes for , , while it becomes for .this hamiltonian generates controlled rotations around the z - axis . 
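the explicit propagator above was also garbled ; the underlying aharonov - anandan construction , written here in a hedged , convention - independent way , is the following . if the two cyclic states return to themselves with total phases e^{{\rm i}\gamma_\pm} , then

U=e^{{\rm i}\gamma_+}|\psi_+\rangle\langle\psi_+|+e^{{\rm i}\gamma_-}|\psi_-\rangle\langle\psi_-| ,

i.e. , up to a global phase , a rotation by \gamma_+-\gamma_- about the bloch - sphere axis defined by the cyclic states . when the dynamical contributions cancel , \gamma_\pm are purely geometric ; for a spin - 1/2 traversing a closed loop of solid angle \Omega they equal \mp\Omega/2 ( sign depending on convention ) , so the geodesic circuit of fig . 2 implements a rotation whose angle is set entirely by the enclosed solid angle .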
to generate the trajectories of fig .2 , we rotate the rotation axis using radio - frequency pulses . to generate a rotation around the -axis , e.g., we use the sequence where the notation for the rotations is , and stand for free evolution under the control hamiltonian for the duration . the circuit of fig . 2is thus generated by the sequence this represents the first gate operation of fig .1 . for the second operation, we have to reverse the roles of control and target qubit and apply the following sequence to qubit a : now setting the rf frequency to .for the experimental implementation , we used the two nuclear spins of - labelled chloroform as qubits .the system hamiltonian corresponds to eq .( [ e : ham ] ) with a spin - spin coupling constant hz .experiments were performed at room temperature on a bruker av-400 spectrometer .the system was first prepared in a pseudopure state using the method of spatial averaging with the pulse sequence which is read from left to right ( as the following sequences ) .the rotations are implemented by radio - frequency pulses . is a pulsed field gradient which destroys all coherences ( x and y magnetizations ) and retains longitudinal magnetization ( z magnetization component ) only . represents a free precession period of the specified duration under the coupling hamiltonian ( no resonance offsets ) . from the state , we prepared the initial state by rotating qubit a ( the nuclear spin ) into the -plane .experiments were done for . for each value of , we performed the cloning operation , using the geometric gate operations ( [ e : u1 ] ) and ( [ e : u2 ] ) for different asymmetry parameters . to experimentally determine the fidelities ( 6 ) , we need the density operators of the initial state of qubit a and the final states of both qubits . for this purpose , we parametrize the density operators as where is a bloch vector . the fidelities ( 6 ) are then for the initial state , we have and the fidelities become the transverse components can be measured as the transverse magnetisation components of the free induction decay . to perform the trace over the other spin, we can either apply a decoupling field to the other spin or integrate over the two lines in the spectrum . 
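the bloch - vector expressions referred to above did not survive extraction ; in the standard form they read ( a reconstruction , assuming the usual single - qubit parametrization )

\rho=\tfrac{1}{2}\big(\hat{1}+\vec{r}\cdot\vec{\sigma}\big),\qquad
F=\langle\psi|\rho|\psi\rangle=\tfrac{1}{2}\big(1+\vec{n}\cdot\vec{r}\big),

where \vec{n}=(\cos\phi,\sin\phi,0) is the bloch vector of the equatorial input state , so that F=\tfrac{1}{2}\big(1+r_x\cos\phi+r_y\sin\phi\big) and only the transverse components r_x , r_y measured from the free induction decay are needed ; the trace over the partner spin is what the decoupling - or - integration choice above refers to .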
for the present experiment, we chose the second possibility .experimentally observed nmr spectra of - chloroform before and after a quantum cloning operation .the original state was an equal - weight superposition with equal phases ( ) , shown in the top row .the middle row shows the resulting spectra for a symmetric cloning operation ( ) , and the bottom row the result of an asymmetric cloning operation ( ) .the left hand column holds the input qubit , the right hand column the copy qubit ., width=340 ] figure [ f.spectra ] shows a typical example of a cloning operation .the input qubit ( qubit a ) was initialized into a pseudo - pure state , as described in section [ s : nmr ] , using the phase angle , and the target qubit was set to .this state corresponds to transverse magnetisation of spin and is therefore directly observable in the nmr spectrometer .the upper row of figure [ f.spectra ] shows the fourier transform of the measured free induction decay ( fid ) of the signal .only one of the two resonance lines is observable , indicating that the target qubit is in the state .the middle row of figure [ f.spectra ] shows the corresponding spectra after a symmetrical cloning operation , with the propagator of eq .( [ uc ] ) integrating the signal for each spin species , we find for the x - components and , in good agreement with the theoretical values of .the corresponding fidelities are and ( theoretical values : 0.854 ) .the bottom row shows the same result for an asymmetric cloning operation . here , the rotation angle was set to . as a result , the target qubit has the higher fidelity : , again in good agreement with the theoretical values of 0.750 and 0.933 .the experimental fidelities versus the different parameter of the asymmetric cloning machine .the theoretical values of fidelities are plotted as solid lines . andthe different symbols are corresponding to the experimental fidelities of two qubits with different angles of the initial state .,width=302 ] figure [ nf1 ] shows a more systematic check of the effect of the rotation angle on the two fidelities .we compare the fidelities of both qubits with the theoretical value as a function of the asymmetry parameter .the theoretical curve is independent of the phase of the initial state .experimental data were measured for 4 different initial phases as a function of the rotation angle .all four data sets are in good agreement with the expectation .for vanishing rotation , the input qubit is not disturbed ( ) , while the target qubit bears no information ( ) . for a -rotation ,the roles of original and target qubit are reversed , and at , both qubits share the information equally .trade - off diagrams in the asymmetric cloning machine respectively for different phase angles of the initial state .the full line shows the theoretical expectation for phase - covariant cloning , while the dashed line represents the limiting value for a universal cloning machine .the different symbols refer to experimental data points for different initial conditions . , width=302 ] this apparent complementarity of the two fidelities can be quantified . according to eqs.([fb ] ) , the points are located on a quarter - circle whose origin is at and whose radius is 0.5 .figure [ nf2 ] verifies this relation . 
here, the experimental fidelities are plotted against each other for different rotation angles and different initial phases , represented by the different symbols .all experimental points are close to the circle representing the theoretical expectation .the dashed curve in figure [ nf2 ] represents the theoretical prediction for a universal cloning machine . evidently , the theoretical curve for phase - covariant cloning as well as the experimental data are outside of this range , except for angles close to 0 or , where the information is located on a single qubit .in summary , we have experimentally realized an optimal asymmetric phase - covariant cloning machine . as a function of a continuous angle variable in the cloning operation ,the phase information of the input state is transferred to the two output states such that either the original qubit is only slightly disturbed ( ) or that most of the phase information is transferred to the second qubit ( ) .the case of symmetric cloning is recovered for . in the case of quantum cryptography, this tradeoff determines how much information the eavesdropper can gain for a given disturbance of the transmitted information .the fidelities found for this phase - covariant cloning were higher than the upper bound for universal cloning .the cloning was implemented experimentally on an nmr quantum information processor . for the cloning operations, we used cyclic rotations of the qubits in such a way that the system acquired a geometrical phase .this procedure has been proposed for shielding the gate operation from such perturbations that leave the area of the quantum mechanical trajectory invariant and thereby improve the overall fidelity .we thank prof .z. d. wang and dr .q. chen for helpful discussions .this project is supported by the national fundamental research program ( grant no .2001 cb 309300 ) , nsfc under grant no .10425524 and no . 10429401 , the national science fund of china , and the european commission under contract no .007065 ( marie curie ) .99 d. dieks , phys .a * 92 * , 271(1982 ) . w. k. wootters and w. h. zurek , nature(london ) * 299 * , 802(1982 ) .v. scarani , s. iblisdir , n. gisin and a. acin , rev .mod . phys . * 77 * , 1225 ( 2005 ) .v. buek and m. hillery , phys .a * 54 * , 1844(1996 ) .n. j. cerf , acta phys .48 * , 115(1998 ) .n. j. cerf , phys .lett . * 84 * , 4497(2000 ) . n. j. cerf , mod . opt . * 47 * , 187(2000 ) . c. s. niu and r. b. griffiths , phys .a * 58 * , 4377(1998 ) .s. iblisdir , a. and n. j. cerf _ et al .a * 72 * , 042328(2005 ) .s. l. braunstein , v. buek and m. hillery , phys .a * 63 * , 052313(2001 ) .a. e. rastegin , phys .a * 66 * , 042304(2002 ) .r. filip , phys . rev .a * 69 * , 032309(2004 ) .r. filip , phys .a * 69 * , 052301(2004 ) .z. zhao , a. n. zhang and x. q. zhou _et al . _ ,lett . * 95 * , 030502(2005 ) .d. bruss , m. cinchetti , g. m. dariano and c. macchiavello , phys .a. * 62 * , 012302(2000 ) . w. h. zhang , l. b. yu and l. ye , phys .a * 356 * , 195(2006 ) .j. f. du , t. durt and p. zou _et al . _ ,lett . * 94 * , 040505(2005 ) .d. g. cory , m. d. price and t. f. havel , phys .d * 120 * , 82(1998 ) .s. l. zhu and p. zanardi , phys .a * 72 * , 020301(2005 ) .m. v. berry , proc .london , ser a * 392 * , 45(1984 ) .y. aharonov and j. anandan , phys .* 58 * , 1593(1987 ) .s. l. zhu , z. d. wang and y. d. zhang , phys . rev .b * 61*,1142(2000 ) .s. l. zhu and z. d. wang , phys .lett . * 85 * , 1076(2000 ) .p. zanardi and m. rasetti , phys .a * 264 * , 94(1999 ) .g. falci , r. 
fazio and g. m. palma _ et al ._ , nature(london ) * 407 * , 355(2000 ) . l. m. duan , j. i. cirac and p. zoller , science * 292*,1695 ( 2001 ) .j. a. jones , v. vedral , a. ekert and g. castagnoli , nature(london ) * 403 * , 869(2000 ) .x. b. wang and m. keiji , phys .* 87 * , 097901(2001 ) .s. l. zhu and z. d. wang , phys .* 89 * , 097902(2002 ) .s. l. zhu and z. d. wang , phys .a * 67 * , 022319(2003 ) .s. l. zhu and z. d. wang , phys .a * 66 * , 042322(2002 ) .x. d. zhang , s. l. zhu , l. hu and z. d. wang , phys .a * 71 * , 014302(2005 ) .j. f. du , p. zou and z. d. wang , phys .a * 74 * , 020302(2006 ) .giacomo mauro dariano and chiara macchiavello , phys .a. * 67 * , 042306(2003 ) .
while exact cloning of an unknown quantum state is prohibited by the linearity of quantum mechanics , approximate cloning is possible and has been used , e.g. , to derive limits on the security of quantum communication protocols . in the case of asymmetric cloning , the information from the input state is distributed asymmetrically between the different output states . here , we consider asymmetric phase - covariant cloning , where the goal is to optimally transfer the phase information from a single input qubit to different output qubits . we construct an optimal quantum cloning machine for two qubits that does not require ancilla qubits and implement it on an nmr quantum information processor .
computational physics has emerged as a third branch of physics , grafted onto the traditional pair of theoretical and experimental physics . at first , computer use seemed to be a straightforward off - shoot of theoretical physics , providing solutions to sets of differential equations too complicated to solve by hand .but soon the enormous quantitative improvement in speed yielded a qualitative shift in the nature of these computations .rather than asking particular detailed questions about a model system , we now use computers more often to model the whole system directly . answers to relevant questions are then extracted only after a full simulation has been completed .the data analysis following such a virtual lab experiment is carried out by the computational physicist in much the same way as it would be done by an experimenter of observer analyzing data from a real experiment or observation . with this shift from theory to experimentation, computers have become important laboratory tools in all branches of science .there is one striking difference , though , between the use of a computer and that of other types of lab equipment . whereas laboratory tools are typically designed for a particular purpose , computers are usually bought off the shelf , and used as is , without any attempt to customize them to the particular usage at hand . in contrast , it would be unthinkable for a astronomy consortium to build a new observatory around a huge pair of binoculars , as a simple scaled - up version of commercial bird - watching equipment. the reason for this difference in buying pattern has nothing to do with an inherent difference between the activities of computing , experimenting , or observing . building a special - purpose computer is not more difficult than building a telescope , or any other major type of customized laboratory equipment .rather , the difference in attitude has everything to do with the fact that our computational ability has gone through an extraordinary period of sustained rapid exponential growth in speed .imagine that binoculars would grow twice as powerful every one or two years .if that were the case , astronomers might as well simply buy the latest model binoculars , and use those for their observations .planning to build a big telescope would be self - defeating : in the ten or so years it would take to design and build the thing , technology would have progressed so much that commercial binoculars would out - perform the special - purpose telescope . over the last forty years , computer speed has exponentially increased . as a result, there has never been a particularly great need for physicists to design and build their own computer . 
as with all cases of exponential growth ,this tendency will necessarily flatten off .how and when this flattening will occur is difficult to predict .this will depend on technological and economic factors that are as yet uncertain .but it is already the case that increase in computer speed is significantly more modest than what could be expected purely from the ongoing miniaturization of computer chips .this trend , in the case of general purpose computers , will be discussed briefly in [ gpc ] various alternatives , in the form of special - purpose computing equipment , are mentioned in [ spc ] one such alternative , the grape family of special - purpose computer hardware , is reviewed in [ gr ] some astrophysical applications of these grape machines are discussed in [ aa ] a preview of coming grape attractions is presented in [ fut ]after mainframes and minicomputers turned out to be no longer cost - effective , some time around the early - to - mid eighties , the only general - purpose computers used in physics were workstations and supercomputers .at first , there was an enormous gap in performance between the two types of machines , but over the last fifteen years this gap has narrowed steadily .for example , during the eighties , supercomputers increased in speed by about a factor of , while microprocessors saw an increase of a factor of .the main reason was that workstations at first were rather inefficient , requiring many machine cycles for a single floating point operation . with increased chip size, this situation improved rapidly .in contrast , the first supercomputers , built in the mid seventies , were designed specifically to deliver at least one new floating point result for each clock cycle , through the use of pipelines .although the speed of the floating point components for supercomputers has continued to increase over the years , most of the increase in their peak speed has been realized through increasing the number of processing units .this increase in parallelism has made the sharing of memory by different processors increasingly cumbersome , involving significant hardware overhead : a full interconnect between processors and a central memory bank requires an amount of additional hardware that scales as .in contrast , the much faster speed - up of microprocessor - based workstations has been possible exactly because there was ( as yet ) no need for parallelism . throughout the eighties, chips did not contain enough transistors to allow floating point operations to be performed on a single chip in one cycle .therefore , personal computers used to have a special floating point accelerator chip , in addition to the central processor chip , and even this accelerator typically needed several cycles even for the simplest operations of addition and multiplication . as a result ,increase in the number of transistors per chip translated linearly into an increase in speed .however , this situation changed as soon as it became possible to put a complete computer on a single chip , including a floating point unit with the capability of producing a new output every cycle . while in itself a great achievement, this capability also creates new trouble . from this point on, the scaling of general - purpose computers , based on microprocessors , will become less favorable , for the following reasons . 
with further miniaturization, a single chip will soon contain several floating point units , with an extremely fast on - chip interconnect .these interconnections , however , require a significant ` real estate ' overhead on the chip : many extra components have to be added to the chip in order to implement the administrative side of this fast communication efficiently .in addition , the off - chip communication with the main memory is far slower , and tends to form a bottleneck . as a result of both factors ,a shrinking in feature size by a factor two no longer guarantees a speed - up of a factor , but rather . in the eighties , when the feature width would become a factor two smaller , four times as many transistors would fit on one chip , and in addition the shrinking of the size of the transistors by a factor two would allow a clock speed nearly twice as high as before .however , this gain of a factor of eight from now on will be offset by a communication penalty of a factor .the conclusion is that microprocessors are now facing the same problem of increasing ` internal administrative bureaucracy ' that supercomputer processors have had to deal with for the last twenty years .until the late seventies , almost all scientific calculations were carried out on general - purpose computers . around that time , microprocessors began to offer a better price - performance ratio than supercomputers . by itself , this was not very helpful to a physicist , given the fact that a single microprocessor could only offer a speed of 10 kflops or so , peanuts compared to the supercomputers of those days , with peak speeds above 100 mflops .the key to success was to find a way to combine the speed of a large number of those cheap microprocessors .this was exactly what several groups of physicists did , in the eighties .they took large numbers of off - the - shelf microprocessors , and hooked them up together . building these machines was not too hard , and indeed raw speeds at low prices were reached relatively easily .the main problem was that of software development . to get a special - purpose machine to do a relevant physics calculation , and to report the results in understandable form , provided formidable challenges .for example , writing a reasonably efficient compiler for such a machine was a tedious and error - prone job .in addition , developing application programs was no simple task either .an interesting and somewhat unexpected development has been the commercialization of these machines , originally built by and for physicists .the design of most of the current highly - parallel general - purpose computers has been directly or indirectly influenced by the early special - purpose computers .this blurring of the distinction between special - purpose and general - purpose computers may continue in the future , when demand for higher peak speeds will force increasing parallelization to occur .this development reflects the fact that the so - called special - purpose machines in physics actually attacked a general type of problem : how to let many individual processors cooperate on a single computational task .the fact that the applications have been rather specialized in many cases ( to particle physics , astrophysics , or hydrodynamics ) is less important than the fact that each application required a carefully balanced strategy at dynamic inter - processor communication . 
as a result ,the experience gained from the development of both hardware and software for special - purpose computers has turned out to be very helpful for the development of their general - purpose counterparts as well . in the late eighties ,an alternative model was developed .following the example of some special - purpose components designed as back - end processors in radio telescopes , the idea was advanced to design special hardware components to speed up critical stages within large - scale simulations , most of which would still be delegated to general - purpose workstations .a similar idea had already been employed for general - purpose computers as well . in the early eighties, personal computers would come with a central processor that could handle floating - point calculations only in software , at rather low efficiency .significant speed - up , of an order of magnitude , could be obtained by including a so - called floating - point accelerator , at only a fraction of the cost of the original computer .another example is the use of graphics accelerators in most modern personal computers .building a special hardware accelerator for a critical segment of a physics simulation is another example of this general approach . in this way, the good cost - performance ratio of special - purpose hardware can be combined with the flexibility of existing workstations , without much of a need for special software development .this approach can be compared to using hand - coded assembly - language or machine - code for an inner loop in an algorithm that otherwise is programmed in a higher - level language the difference being that this inner loop is now realized directly in silicon .in 1984 a group of astrophysicists and computer scientists built the digital orrery , a 10 mflops special - purpose computer designed to follow the long - term evolution of the orbits of the planets ( applegate 1985 ) .for that purpose , ten processors were connected in a ring , one for each planet ( or test particle ) .the processors were designed around an experimental 64-bit floating - point chip set developed by hp .each chip could perform one floating point operation in 1.25 .a central controller send instructions to all processors at each machine cycle .a few years later , results from the orrery lead to the important discovery of the existence of a positive lyapunov coefficient for the evolution of the orbit of pluto , which was interpreted as a sign of chaos ( sussman & wisdom 1988 ) .besides the question of the long - term evolution of planetary orbits , there were many other problems in gravitational dynamics that required far more than the typical speed available to astrophysicists in the mid - eighties .while significant speed - up was obtained with the introduction of more efficient algorithms ( barnes & hut 1986 , 1989 ) , many problems in stellar dynamics could not be effectively tackled with the hardware available at that time . 
among those problems ,the most compute - intensive was the long - term simulation of star clusters past core collapse .the record in that area in the late eighties was held by makino & sugimoto ( 1987 ) and makino ( 1989 ) , for -body calculations with and , respectively .unfortunately , the computational costs for these types of calculations scales roughly with , which meant that realistic simulations of globular clusters , with in the range , were still a long way off .the only hope to make significant progress in this area was to make use of the fastest supercomputers available , in the most efficient way possible .therefore , the next step we took was a detailed analysis of the algorithms available for the study of dense stellar systems ( makino & hut 1988 , 1990 ) , following the earlier analysis given by makino ( 1986 ) .our analysis showed that the best integration schemes available , in the form of aarseth s individual - timestep predictor - corrector codes ( aarseth 1985 ) , were close to the theoretical performance limit .based on these results , we predicted that a speed of order 1 teraflops would be required to model globular star clusters , and to verify the occurrence of gravothermal oscillations in such models ( hut 1988 ) .unfortunately , such speeds were not commercially available in those days , and it was clear that they would not be available for another ten years or so .the fastest machine that we could lay our hands on was the connection machine cm-2 , which was first being shipped by thinking machines in 1987 . in the fall of that year ,jun makino and i spent a few months at thinking machines , to perform an in - depth analysis of the efficiency of various algorithms for stellar dynamics simulations on the cm-2 .the results were somewhat disappointing ( makino and hut 1989 ) , in that most large - scale simulations could utilize only % of the peak - speed of the cm-2 .as a result , even with a formidable peak speed of tens of gigaflops , most of our simulations only obtained a speed of a few hundred megaflops , when scaled up to a full cm-2 configuration .the main reason for its poor performance was the slowness of the communication speed compared to the speed of the floating point calculations .since we needed a teraflops in order to study gravothermal oscillations and other phenomena in dense stellar systems , it was rather disheartening that we could not even reach an effective gigaflops . and given the typical increase in speed of supercomputers , by a factor of every five years , it seemed clear that we would have to wait till well after the year 2000 , before being able to compute at an effective teraflops speed . in reaction to our experiences ,sugimoto took up the challenge and formed a small team at tokyo university to explore the feasibility of building special - purpose hardware for stellar dynamics simulations .this group started their project in the spring of 1989 , resulting in the completion of their first machine in the fall of that same year ( ito 1990 ) .the name grape stands for gravity pipe , and indicates a family of pipeline processors that contain chips specially designed to calculate the newtonian gravitational force between particles .a grape processor operates in cooperation with a general - purpose host computer , typically a normal workstation . the force integration and particle pushingare all done on the host computer , and only the inter - particle force calculations are done on the grape . 
since the latter require a computer processing power that scales with , while the former only require computer power , load balance can always be achieved by choosing values large enough .the development history of the grape series of special - purpose architectures shows a record of rapid performance improvements ( see table 1 ) .the limited - precision grape-1 achieved 240 mflops in 1989 ; its successor , the grape-3 , reached 15 gflops in 1991 .over 30 grape-3 systems are currently in use worldwide in applications ( such as tree codes and sph applications ) where high numerical precision is not a critical factor .a prototype board of the full - precision grape-2 achieved 40 mflops in 1990 .the full grape-4 system reached 1.1 teraflops ( peak ) in 1995 .individual grape-4 boards , delivering from 3 to 30 gflops depending on configuration , are currently in use at 5 institutions around the world .a third development track is represented by the grape-2a and md - grape machines , which include a user - loadable force look - up table that can be used for arbitrary central force laws ( targeted at molecular dynamics applications ) .overall , the pace of development has been impressive : 10 special - purpose machines with a broadening range of applications and a factor of 4000 speed increase in just over 6 years .the grape-4 developers have won the gordon bell prize for high - performance computing in each of the past two years . in 1995 ,the prize was awarded to junichiro makino and makoto taiji for a sustained speed of 112 gflops , achieved using one - sixth of the full machine on a 128k particle simulation of the evolution of a double black - hole system in the core of a galaxy .the 1996 prize was awarded to toshiyuki fukushige and junichiro makino for a 332 gflops simulation of the formation of a cold dark matter halo around a galaxy , modeled using 768k particles on three - quarters of the full machine .modifying an existing program to use the grape hardware is straightforward , and entails minimal changes .subroutine and function calls ( written in c or fortran ) to the grape hardware replace the force - evaluation functions already found in existing -body codes .communication between host and grape is accomplished through a collection of about a dozen interface routines .the force evaluation code which is replaced typically consists of only a few dozen lines at the lowest level of an algorithm .thus , using the grape calls only for small , localized changes which in no way inhibit future large - scale algorithm development .the grape interface has been successfully incorporated into the barnes - hut tree algorithm ( barnes & hut 1986 ; makino 1991 ) and the p m scheme ( hockney & eastwood 1988 ; brieu , summers , & ostriker 1995 ) . 
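The load balance argument above (an O(N^2) pair-force sum on the GRAPE against O(N) bookkeeping on the host) can be made concrete with a toy timing model. The following sketch, in Python, is purely illustrative: every constant in it is an assumption chosen only to show the trend, not a measured GRAPE or host figure.

    # Illustrative timing model for the host/GRAPE division of labour described
    # above.  All constants are assumed values, not measured hardware figures.

    def effective_speed(n_body,
                        grape_peak=1.1e12,            # assumed accelerator peak, flop/s
                        host_flops=1.0e8,             # assumed host speed, flop/s
                        ops_per_pair=60,              # assumed flops per pairwise force
                        host_ops_per_particle=200):   # assumed O(N) host work per particle
        """Sustained speed of the combined host + accelerator system."""
        pair_ops = ops_per_pair * n_body * (n_body - 1) / 2   # O(N^2), done on the GRAPE
        host_ops = host_ops_per_particle * n_body             # O(N), done on the host
        t_total = pair_ops / grape_peak + host_ops / host_flops
        return pair_ops / t_total

    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"N = {n:>9,d}: sustained {effective_speed(n)/1e9:8.1f} Gflop/s")

With these made-up numbers the sustained speed rises from roughly one percent of the accelerator peak at N = 1,000 to over ninety percent of peak for N of order a million, which is the sense in which load balance can always be achieved by choosing N large enough.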
Here is a typical code fragment for the Newtonian force calculation on a workstation:

      subroutine accel_workstation
c     direct-summation Newtonian accelerations with Plummer softening eps2
      do 10 k = 1, ndim
         do 20 i = 1, nbody
            accnew(k,i) = 0.0
 20      continue
 10   continue
      do 30 i = 1, nbody-1
         do 40 j = i+1, nbody
            do 50 k = 1, 3
               dx(k) = pos(k,j) - pos(k,i)
 50         continue
            r2inv = 1.0/(dx(1)**2 + dx(2)**2 + dx(3)**2 + eps2)
            r3inv = r2inv*sqrt(r2inv)
            do 60 k = 1, 3
               accnew(k,i) = accnew(k,i) + r3inv*mass(j)*dx(k)
               accnew(k,j) = accnew(k,j) - r3inv*mass(i)*dx(k)
 60         continue
 40      continue
 30   continue
      end

To use the GRAPE, all that has to be done is to replace the inner loop of the force calculation by a few special function calls, in order to offload the bulk of the computation onto the GRAPE hardware:

      subroutine accel_grape
c     same calculation, with the pair sum offloaded to the GRAPE board
      call g3init ( )
      xscale = 1.0d0/1024
      call g3setscales(xscale, mass(1))
      call g3seteps2(eps2)
      call g3setn(nbody)
c     load particle positions and masses into the GRAPE memory
      do 20 i = 1, nbody
         call g3setxj(i-1, pos(1,i))
         call g3setmj(i-1, mass(i))
 20   continue
c     retrieve accelerations and potentials, nchips particles at a time
      nchips = g3nchips ( )
      do 30 i = 1, nbody, nchips
         ii = min(nchips, nbody - i + 1)
         call g3frc(pos(1,i), accnew(1,i), pot(i), ii)
 30   continue
      call g3free
      end

In this brief review there is no room for an exhaustive survey of the scientific results that have been obtained with the few dozen GRAPE machines installed at research institutes around the world. In addition to the four fields listed below, the GRAPEs have been used in a variety of other areas, for example to study the role of exponential divergence of neighboring light trajectories in gravitational lensing, the formation of large-scale structure in the universe, the role of violent relaxation in galaxy formation, and the effectiveness of hierarchical merging in galaxy clusters.

Ida & Makino (1992a, b) used the GRAPE-2 to investigate the evolution of the velocity distribution of a swarm of planetesimals with an embedded protoplanet. They confirmed that equipartition is achieved and that therefore runaway growth should take place, along the lines suggested by Stewart & Wetherill (1988). Kokubo & Ida (1995) used the HARP-2 (a smaller prototype of the GRAPE-4) to simulate a system of two protoplanets and many planetesimals. They found that the separation between the two planets tends to grow to roughly 5 Hill radii. They coined the term 'orbital repulsion' for this phenomenon and provided a qualitative explanation for its occurrence. Kokubo & Ida (1996a) used the GRAPE-4 to simulate planetary growth assuming perfect accretion, where any physical collision leads to coalescence. They started with 3000 equal-mass planetesimals. After 20,000 orbits, they found that the most massive particle had become 300 times heavier, while the average mass of the particles had increased by only a factor of two. Kokubo & Ida (1996b) extended these calculations. They showed that several protoplanets are formed and grow while keeping their mutual separations within the range of 5 to 10 Hill radii. Their results strongly suggest that orbital repulsion has determined the present separation between the outer planets.

The first scientific result obtained with the GRAPE-4 was the demonstration of the existence of gravothermal oscillations in N-body simulations. Predicted more than ten years earlier by Sugimoto & Bettwieser (1983), they were found by Makino (1996a) and presented by him at IAU Symposium 174 in Tokyo, in August 1995 (Makino 1996b).
using more than 32,000 particles, he was also able to confirm the semi - analytical predictions made by goodman ( 1987 ) .the calculation took about two months , using only one quarter of a full grape-4 , running at a speed of 50 gflops .we are currently exploring ways to couple stellar dynamics and stellar evolution in one code , in order to perform more realistic simulations of star cluster evolution .based on steller evolution recipes implemented by portegies zwart & verbunt ( 1996 ) , we have carried out a series of increasingly realistic approximations ( portegies zwart 1997a , b , c ) ; see our web site with a movie that shows a star cluster , as an evolving -body system side - to - side with its correspondingly evolving h - r diagram , at http://casc.physics.drexel.edu ebisuzaki ( 1991 ) used the grape-2 to simulate the merging of two galaxies , each with a central black hole , using up to particles .they found an increase in core radius , as a result of the heating of the central regions caused by the spiral - in of the two black holes .makino and ebisuzaki ( 1996 ) used the grape-4 to study hierarchical merging , in which the merger product of one pair of galaxies was used as a template for constructing progenitors for the next simulation of merging galaxies .they used more than 32,000 particles .they found the ratio between the core radius and the effective radius to converge to a value depending on the mass of the black holes .however , it turned out that 32k particles were not enough .makino ( 1997 ) performed a similar type of calculation with 256k particles , and found a core structure which was rather different from that obtained in the previous 32k runs .in particular , he found the volume density of stars to decrease in the vicinity of the black hole binary in the 256k runs , and ascribed this to the ` loss cone ' effect predicted by begelman ( 1980 ) .fukushige & makino ( 1997 ) used the grape-4 to simulate hierarchical clustering , using an order of magnitude more particles than in previous studies .they found that the central density profiles are always steeper than .they interpreted the observed shallower cusps as the result of the spiral - in of the central black holes from the progenitor galaxies , involved in the merging process .okumura ( 1991 ) used the grape-1 to investigate the structure of merger remnants formed from encounters between two plummer models on parabolic orbits , using 16,000 particles .they determined the non - dimensional rotation velocity , where denotes the maximum rotation velocity and is the velocity dispersion at the center .they found typical values of for merging at large initial periastron separations .their result is in good agreement with the observation of large ellipticals , which show a rather sharp cutoff in the distribution of around 0.6 .makino & hut ( 1997 ) used the grape-3a to simulate more than 500 galaxy encounters , in order to determine their merger rate as a function of incoming velocity , for a variety of galaxy models .they characterized the overall merger rate in a galaxy cluster by a single number , derived from their cross sections by an integration over encounter velocities in the limit of a constant density in velocity space .in addition , they provided detailed information concerning the reduction of the overall encounter rate through tidal effects from the cluster potential as well as from neighboring galaxies .in the grape-4 , once all pipelines are filled , each chip produces one new inter - particle interaction 
( corresponding to floating - point operations ) every three clock cycles . for a clock speed of 30 mhz ,a peak chip speed of gflops is achieved .the grape-4 chips represent 1992 technology ( 1 m fabrication line width ) .even if no changes were made in the basic design , advances in fabrication technology would permit more transistors per chip and increased clock speed , enabling a 50100 mhz , 1030 gflops chip with 1996 ( 0.35 m line width ) technology , and a 100200 mhz , 50200 gflops chip with 1998 ( 0.25 m ) technology . based on these projected performance improvements , a total of grape-6 chips of 100 gflops each could be combined to achieve petaflops speeds by the year 2000 , for a total budget of 10 million dollars .we have recently completed an initial ` point design study ' of the feasibility of constructing such a system ( mcmillan 1996 ) .this study was funded by the nsf , in conjunction with nasa and darpa , as part of a program aimed at paving the way towards petaflops computing .while planning to build a hardwired petaflops - class computational engine , we are also investigating complementary avenues , based on the use of reconfigurable logic , in the form of field - programmable gate array ( fpga ) chips .the merging of custom lsi and reconfigurable logic will result in a unique capability in performance and generality , combining the extremely high throughput of special - purpose devices with the flexibility of reconfigurable systems .in many applications , gravity requires less than 99% of the computing power .although the remainder of the cpu time is typically dominated by just one secondary bottleneck , its nature varies greatly from problem to problem .it is not cost - effective to attempt to design custom chips for each new problem that arises . in these circumstances ,a fpga - based system can restore the balance , and guarantee scalability from the teraflops to the petaflops domain , while still retaining significant flexibility .astrophysical applications could include , for example , various forms of smooth particle hydrodynamics ( sph ) , for applications ranging from colliding stars to the formation of large - scale structure in the universe. an additional benefit of the construction of petaflops - class machines will be the availability of individual chips at reasonable prices , once the main machine has been designed and constructed .a typical grape-6 chip will run at gflops . a single board with 10 or more chips will already deliver a speed of 1 teraflops or more , for a total price that is likely to lie in the range of 10,000 20,000 dollars . hooking such a board up to a workstation will instantly change it into a top - of - the - line supercomputer .
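To make the scale of such a machine explicit, the arithmetic implied by the figures quoted above can be spelled out. In the short sketch below, only the 100 Gflops per-chip speed, the petaflops target and the 10-chips-per-board figure are taken from the text, and 10 chips per board is treated as exact where the text says "10 or more"; everything else follows by arithmetic.

    # Back-of-the-envelope check of the figures quoted above.
    chip_gflops = 100          # projected GRAPE-6 chip speed quoted in the text
    target_flops = 1e15        # one petaflops
    chips_per_board = 10       # "a single board with 10 or more chips"

    chips_needed = target_flops / (chip_gflops * 1e9)
    boards_needed = chips_needed / chips_per_board
    board_tflops = chips_per_board * chip_gflops / 1000

    print(f"chips for a petaflops machine : {chips_needed:,.0f}")
    print(f"boards (10 chips each)        : {boards_needed:,.0f}")
    print(f"speed of one such board       : {board_tflops:.0f} Tflops")

Under those assumptions a petaflops machine amounts to roughly ten thousand chips spread over a thousand teraflops-class boards.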
Recently, special-purpose computers have surpassed general-purpose computers in the speed with which large-scale stellar dynamics simulations can be performed. Speeds up to a teraflops are now available for simulations in a variety of fields, such as planetary formation, star cluster dynamics, galactic nuclei, galaxy interactions, galaxy formation, large-scale structure, and gravitational lensing. Future speed increases for special-purpose computers will be even more dramatic: a petaflops version, tentatively named the GRAPE-6, could be built within a few years, whereas general-purpose computers are expected to reach this speed somewhere in the 2010-2015 time frame. Boards with a handful of chips from such a machine could be made available to individual astronomers. Such a board, attached to a fast workstation, would then deliver teraflops speeds on a desktop, around the year 2000.
gestures are naturally performed by humans , produced as part of deliberate actions , signs or signals , or subconsciously revealing intentions or attitude . while they may involve the motion of all parts of the body , the studies of gestures usually focus on arms and hands which are essential in gesture communication .recognition of gestures has recently attracted increasing attention due to its indubitable importance in many applications such as human computer interaction ( hci ) , human robot interaction ( hri ) and assistive technologies for the handicapped and the elderly .gestures are one type of actions and many action recognition methods can be applied to gesture recognition .recognition of human actions from depth / skeleton data is one of the most active research topics in multimedia signal processing in recent years due to the advantages of depth information over conventional rgb video , e.g. being insensitive to illumination changes . since the first work of such a type reported in 2010 ,many methods have been proposed based on specifical hand - crafted feature descriptors extracted from depth / skeleton . with the recent development of deep learning ,a few methods have been developed based on convolutional neural networks ( convnets ) and recurrent neural networks ( rnns ) .however , it remains unclear how video could be effectively represented and fed to deep neural networks for classification .for example , one can conventionally consider a video as a sequence of still images with some form of temporal smoothness , or as a subspace of images or image features , or as the output of a neural network encoder .which one among these and other possibilities would result in the best representation in the context of gesture recognition is not well understood .inspired by the recent work in , this paper proposes for gesture recognition three simple , compact and effective representations of depth sequences which effectively decribe a short depth sequence with images .such representations make it possible to use a standard convnet architecture to learn suitable dynamic " features from the sequences by utilizing the convnet models trained from image data .consequently , it avoids training millions of parameters from scratch and is especially valuable in the cases that lack sufficient annotated training video data .for instance , the large - scale isolated gesture recognition challenge has on average only 144 video clips per class compared to 1200 images per class in imagenet .the proposed three representations are dynamic depth image ( ddi ) , dynamic depth normal image ( ddni ) and dynamic depth motion normal image ( ddmni ) .they are all constructed from a sequence of depth maps based on bidirectional rank pooling to encode the spatial ( i.e. posture ) and temporal ( i.e. motion ) information at different levels and are complementary to each other .experimental results have shown that the three representations can improve the recognition accuracy substantially .the rest of this paper is organized as follows .section ii briefly reviews the related works on gesture / action recognition based on depth and deep learning .details of the proposed method are described in section iii .experimental results are presented in section iv .section v concludes the paper .with microsoft kinect sensors researchers have developed methods for depth map - based action recognition .li et al . 
sampled points from a depth map to obtain a bag of 3d points to encode spatial information and employ an expandable graphical model to encode temporal information .yang et al . stacked differences between projected depth maps as a depth motion map ( dmm ) and then used hog to extract relevant features from the dmm .this method transforms the problem of action recognition from spatio - temporal space to spatial space . in ,a feature called histogram of oriented 4d normals ( hon4d ) was proposed ; surface normal is extended to 4d space and quantized by regular polychorons . following this method , yang and tian cluster hypersurface normals and form the polynormal which can be used to jointly capture the local motion and geometry information .super normal vector ( snv ) is generated by aggregating the low - level polynormals . in ,a fast binary range - sample feature was proposed based on a test statistic by carefully designing the sampling scheme to exclude most pixels that fall into the background and to incorporate spatio - temporal cues .exiting deep learning approach can be generally divided into four categories based on how the video is represented and fed to a deep neural network .the first category views a video either as a set of still images or as a short and smooth transition between similar frames , and each color channel of the images is fed to one channel of a convnet .although obviously suboptimal , considering the video as a bag of static frames performs reasonably well . the second category is to represent a video as a volume and extends convnets to a third , temporal dimension replacing 2d filters with 3d ones .so far , this approach has produced little benefits , probably due to the lack of annotated training data .the third category is to treat a video as a sequence of images and feed the sequence to a rnn .a rnn is typically considered as memory cells , which are sensitive to both short as well as long term patterns .it parses the video frames sequentially and encode the frame - level information in their memory .however , using rnns did not give an improvement over temporal pooling of convolutional features or over hand - crafted features .the last category is to represent a video in one or multiple compact images and adopt available trained convnet architectures for fine - tuning .this category has achieved state - of - the - art results of action recognition on many rgb and depth / skeleton datasets .the proposed method in this paper falls into the last category .the proposed method consists of three stages : construction of the three sets of dynamic images , convnets training and score fusion for classification , as illustrated in fig . [ fig : framework ] .details are presented in the rest of this section .the three sets of dynamic images , dynamic depth images ( ddis ) , dynamic depth normal images ( ddnis ) and dynamic depth motion normal images ( ddmnis ) are constructed from a sequence of depth maps through rank pooling .they aim to capture both posture and motion information for gesture recognition .let denote the frames in a sequence of depth maps , and be a representation or feature vector extracted from each individual frame .let be time average of these features up to time .the ranking function associates to each time a score , where is a vector of parameters .the function parameters are learned so that the scores reflect the rank of the frames in the video . 
in general , later times are associated with larger scores , .learning is formulated as a convex optimization problem using ranksvm : the first term in this objective function is the usual quadratic regular term used in svms .the second term is a hinge - loss soft - counting how many pairs are incorrectly ranked by the scoring function .note in particular that a pair is considered correctly ranked only if scores are separated by at least a unit margin , .optimizing the above equation defines a function that maps a sequence of depth video frames to a single vector . since this vector contains enough information to rank all the frames in the video , it aggregates information from all of them and can be used as a video descriptor .this process is called rank pooling . given a sequence of depth maps , the ranking pooling method described above is employed to generate a dynamic depth image ( ddi ) .the ddi is fed to the three channel of a convnet .different from the rank pooling is applied in a bidiretional way to convert one depth map sequence into two ddis . as shown in fig .[ fig : dis ] , ddis effectively capture the posture information , similar to key poses . in order to simultaneously exploit the posture and motion information in depth sequences , it is proposed to extract normals from depth maps and construct the so called ddnis ( dynamic depth normal images ) . for each depth map , the surface normal at each location is calculated .thus , three channels , referred to as a depth normal image ( dni ) , are generated from the calculated normals , where represents normal images for the three components respectively .the sequence of dnis goes through bidirectional rank pooling to generate two ddnis , one being from forward ranking pooling and the other from backward rank pooling . to minimise the interference of the background, it is assumed that the background in the histogram of depth maps occupies the last peak representing far distances .specifically , pixels whose depth values are greater than a threshold defined by the last peak of the depth histogram minus a fixed tolerance ( 0.1 was set in our experiments ) are considered as background and removed from the calculation of ddnis by re - setting their depth values to zero . through this simple process ,most of the background can be removed and has much contribution to the ddnis .samples of ddnis can be seen in fig .[ fig : dis ] .the purpose of construction of a ddmni is to further exploit the motion in depth maps .gaussian mixture models ( gmm ) is applied to depth sequences to detect moving foreground .the same process as the construction of a ddni ( but without using histogram - based foreground extraction ) is employed to the moving foreground .this process generates two ddmnis , which specifically capture the motion information as illustrated in fig .[ fig : dis ] . after the construction of ddis , ddnis and ddmnis , there are six dynamic images , as illustrated in fig .[ fig : dis ] , for each depth map sequence .six convnets were trained on the six channels individually .different layer configurations were used for the validation and testing sets provided by the challenge . 
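Before turning to the network configurations, here is a minimal sketch of the bidirectional rank pooling step described above (Python with NumPy and scikit-learn). It uses the standard pairwise reduction of RankSVM rather than a dedicated ranking solver, operates on raw flattened depth frames, and omits the normal extraction and GMM foreground masking used for DDNIs and DDMNIs; it is meant only to make the construction concrete, not to reproduce the authors' implementation.

    import numpy as np
    from sklearn.svm import LinearSVC

    def rank_pool(frames):
        """Rank-pool a (T, H, W) depth sequence into a single dynamic image.

        Each frame is flattened, replaced by the running time average of the
        flattened features, and a linear scoring function is fitted so that
        later frames receive higher scores (pairwise hinge loss with unit
        margin).  The learned weight vector, reshaped to (H, W), is the
        dynamic image.  Written for clarity, not efficiency.
        """
        T, H, W = frames.shape
        feats = frames.reshape(T, -1).astype(np.float64)
        V = np.cumsum(feats, axis=0) / np.arange(1, T + 1)[:, None]

        # pairwise reduction of RankSVM: for every pair s < t the difference
        # V[t] - V[s] should score positive, and its negation negative
        diffs, labels = [], []
        for s in range(T):
            for t in range(s + 1, T):
                diffs.append(V[t] - V[s]); labels.append(+1)
                diffs.append(V[s] - V[t]); labels.append(-1)

        svm = LinearSVC(C=1.0, fit_intercept=False, max_iter=10000)
        svm.fit(np.asarray(diffs), np.asarray(labels))
        return svm.coef_.reshape(H, W)

    def bidirectional_dynamic_images(frames):
        """Forward and backward dynamic images, as used for DDI/DDNI/DDMNI."""
        return rank_pool(frames), rank_pool(frames[::-1])

For DDNIs and DDMNIs the same pooling would simply be applied to the three normal-component images and to the GMM-masked foreground depth maps, respectively. The network configurations used for the challenge are described next.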
For validation, the layer configuration of the six ConvNets follows the one used in previous work. For testing, VGG-16 was adopted for fine-tuning. The implementation is derived from the publicly available Caffe toolbox and ran on three NVIDIA Tesla K40 GPU cards for both validation and testing.

The training procedure for validation is similar to the one used in previous work. The network weights were learned using mini-batch stochastic gradient descent, with the momentum set to 0.9 and the weight decay set to 0.0005. All hidden weight layers use the rectified linear unit (ReLU) activation function. At each iteration a mini-batch of 256 samples was constructed from shuffled training samples, and all images were resized to 256 x 256. The learning rate for fine-tuning the models pre-trained on ILSVRC-2012 was decreased according to a fixed schedule, kept the same for all training sets; each ConvNet was trained for 20k iterations, with the learning rate decreased every 5k iterations. For all experiments the dropout regularisation ratio was set to 0.5 in order to reduce complex co-adaptations of neurons in the nets.

For testing, the training procedure was the same except for the following settings: the mini-batch size was 32, images were resized to 224 x 224, each ConvNet was trained for 50k iterations with the learning rate decreased every 20k iterations, and the dropout ratio was set to 0.9.

Given a testing depth video sequence (sample), three pairs of dynamic images (DDIs, DDNIs, DDMNIs) are generated and fed into the six trained ConvNets. For each image pair, multiply-score fusion was used: the score vectors output by the two ConvNets of the pair are multiplied element-wise and the resulting score vector is normalized. The three normalized score vectors are then multiplied element-wise, and the maximum score in the resulting vector is taken as the probability of the test sequence belonging to the recognized class; the index of this maximum score gives the recognized class label.

This paper presented three simple, compact yet effective representations of depth sequences for gesture recognition using convolutional neural networks. They are all based on the bidirectional rank pooling method, which converts a depth sequence into images. Such representations enable the use of existing ConvNet models directly on video data through fine-tuning, without introducing a large number of parameters to learn. The three representations capture posture and motion at different levels, are complementary to each other, and substantially improve recognition accuracy. Experimental results on the ChaLearn LAP IsoGD dataset verified the effectiveness of the proposed method.

The authors would like to thank NVIDIA Corporation for the donation of a Tesla K40 GPU card used in this challenge.

w. li, z.
zhang , and z. liu , `` action recognition based on a bag of 3d points , '' in _ proc .ieee computer society conference on computer vision and pattern recognition workshops ( cvprw ) _ , 2010 , pp .j. wang , z. liu , y. wu , and j. yuan , `` mining actionlet ensemble for action recognition with depth cameras , '' in _ proc .ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2012 , pp .12901297 .x. yang , c. zhang , and y. tian , `` recognizing actions using depth motion maps - based histograms of oriented gradients , '' in _ proc .acm international conference on multimedia ( acm mm ) _ , 2012 , pp .10571060 .o. oreifej and z. liu , `` hon4d : histogram of oriented 4d normals for activity recognition from depth sequences , '' in _ proc .ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2013 , pp .716723 .m. a. gowayyed , m. torki , m. e. hussein , and m. el - saban , `` histogram of oriented displacements ( hod ) : describing trajectories of human joints for action recognition , '' in _ proc . international joint conference on artificial intelligence ( ijcai ) _, 2013 , pp . 13511357 .x. yang and y. tian , `` super normal vector for activity recognition using depth sequences , '' in _ proc .ieee international conference on computer vision and pattern recognition ( cvpr ) _, 2014 , pp . 804811 .h. rahmani , a. mahmood , d. q. huynh , and a. mian , `` hopc : histogram of oriented principal components of 3d pointclouds for action recognition , '' in _ proc .european conference on computer vision ( eccv ) _ , 2014 , pp .742757 .wang , w. li , p. ogunbona , z. gao , and h. zhang , `` mining mid - level features for action recognition based on effective skeleton representation , '' in _ proc . international conference on digital image computing : techniques and applications ( dicta ) _ , 2014 , pp .18 .p. wang , w. li , z. gao , c. tang , j. zhang , and p. o. ogunbona , `` convnets - based action recognition from depth maps through virtual cameras and pseudocoloring , '' in _ proc .acm international conference on multimedia ( acm mm ) _ , 2015 , pp .11191122 . p.wang , w. li , z. gao , j. zhang , c. tang , and p. ogunbona , `` action recognition from depth maps using deep convolutional neural networks , '' _ human - machine systems , ieee transactions on _ , vol .46 , no . 4 ,pp . 498509 , 2016 .p. wang , z. li , y. hou , and w. li , `` action recognition based on joint trajectory maps using convolutional neural networks , '' in _ proc .acm international conference on multimedia ( acm mm ) _ , 2016 , pp. 15 .y. hou , z. li , p. wang , and w. li , `` skeleton optical spectra based action recognition using convolutional neural networks , '' in _ circuits and systems for video technology , ieee transactions on _ , 2016 , pp .y. du , w. wang , and l. wang , `` hierarchical recurrent neural network for skeleton based action recognition , '' in _ proc .ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2015 , pp .11101118 .w. zhu , c. lan , j. xing , w. zeng , y. li , l. shen , and x. xie , `` co - occurrence feature learning for skeleton based action recognition using regularized deep lstm networks , '' in _ the 30th aaai conference on artificial intelligence ( aaai ) _ , 2016 .j. wan , s. z. li , y. zhao , s. zhou , i. guyon , and s. 
escalera , `` chalearn looking at people rgb - d isolated and continuous datasets for gesture recognition , '' in _ proc .ieee computer society conference on computer vision and pattern recognition workshops ( cvprw ) _ , 2016 , pp . 19 .w. li , z. zhang , and z. liu , `` expandable data - driven graphical modeling of human actions based on salient postures , '' _ circuits and systems for video technology , ieee transactions on _ , vol .18 , no . 11 , pp . 14991510 , 2008 .j. yue - hei ng , m. hausknecht , s. vijayanarasimhan , o. vinyals , r. monga , and g. toderici , `` beyond short snippets : deep networks for video classification , '' in _ proc .ieee conference on computer vision and pattern recognition ( cvpr ) _, 2015 , pp . 46944702 .s. ji , w. xu , m. yang , and k. yu , `` 3d convolutional neural networks for human action recognition , '' _ pattern analysis and machine intelligence , ieee transactions on _ , vol .35 , no . 1 ,pp . 221231 , 2013 .d. tran , l. bourdev , r. fergus , l. torresani , and m. paluri , `` learning spatiotemporal features with 3d convolutional networks , '' in _ proc .ieee international conference on computer vision ( iccv ) _ , 2015 , pp .44894497 .j. donahue , l. anne hendricks , s. guadarrama , m. rohrbach , s. venugopalan , k. saenko , and t. darrell , `` long - term recurrent convolutional networks for visual recognition and description , '' in _ proc .ieee conference on computer vision and pattern recognition ( cvpr ) _ , 2015 , pp .26252634 .a. krizhevsky , i. sutskever , and g. e. hinton , `` imagenet classification with deep convolutional neural networks , '' in _ proc .annual conference on neural information processing systems ( nips ) _ , 2012 , pp .11061114 .y. jia , e. shelhamer , j. donahue , s. karayev , j. long , r. b. girshick , s. guadarrama , and t. darrell , `` caffe : convolutional architecture for fast feature embedding . '' in _ proc .acm international conference on multimedia ( acm mm ) _ , 2014 , pp . 675678 .h. j. escalante , v. ponce - lpez , j. wan , m. a. riegler , b. chen , a. claps , s. escalera , i. guyon , x. bar , p. halvorsen , h. mller , and m. larson , `` chalearn joint contest on multimedia challenges beyond visual analysis : an overview , '' in _ proceedings of icprw _ , 2016 .j. wan , g. guo , and s. z. li , `` explore efficient local features from rgb - d data for one - shot learning gesture recognition , '' _ ieee transactions on pattern analysis and machine intelligence _38 , no . 8 , pp .16261639 , aug 2016 .
This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as dynamic depth images (DDIs), dynamic depth normal images (DDNIs) and dynamic depth motion normal images (DDMNIs). These dynamic images are constructed from a sequence of depth maps using bidirectional rank pooling, so as to effectively capture the spatio-temporal information. Such image-based representations enable us to fine-tune existing ConvNet models trained on image data for the classification of depth sequences, without introducing a large number of parameters to learn. Building on the proposed representations, a convolutional neural network (ConvNet) based method is developed for gesture recognition and evaluated on the large-scale isolated gesture recognition task of the ChaLearn Looking at People (LAP) challenge 2016. The method achieved a classification accuracy of 55.57% in this challenge, very close to the best performance, even though only depth data was used.

Gesture recognition; depth map sequences; convolutional neural networks
the general goal of this paper is to examine broader ramifications of the nonlinear maxwell equations ( nm ) as introduced by me in 1992/93 and further developed in , , . to this end ,i first point out that the theory is considerably richer than that of the classical linear electromagnetism .in particular , i describe here several distinct types of both static and dynamic solutions on a spacetime of the form . on the technical side ,i have essentially avoided heavier analysis as the solutions are either obtained by means of elementary calculation , or are otherwise based on deeper analytic work described in .one should be aware that the possibilities opening in consequence of the introduction of these new structures have not been fully exploited in this paper , thus postponing many potential developments into the future .more precisely , in the ` dynamic ' part of the paper i display a solution in the form of a charge - carrying electromagnetic wave .it is a soliton type wave that transports charge with constant speed and without resistance .in addition , one notes existence of a specific to dimension four nonlinear fourier type transform an interesting structure whose role within the theory is twofold . on one hand, it can be used to find and analyze new solutions of the nonlinear maxwell equations . on the other hand, the transform defines an exotic duality a ( quadratic ) generalization of the ( linear ) hodge duality .consequences of this new duality for the four - geometry will be exploited in the future .the second set of results in this paper is focused around the question of existence and properties of static solutions . to this end ,i first examine the situation on the euclidean three - space . in particular , one takes note of the occurrence of global structures in the form of magnetic flux tubes as well as the so - called charge stripes .it is interesting from the point of view of geometry that these objects exist in general only on three - manifolds whose fundamental group is not finite .this is tied to the geometric fact that the nonlinear gauge theory at hand induces an additional structure on a taut codimension one foliation .these global aspects of static solutions prompt an assumption of topological point of view .accordingly , i sketch the possibility of constructing ` nonlinear cohomology ' that would account for a sort of ` flux tube ' invariant of a three manifold .the discussion here is based on two particular examples that i feel provide an optimal illustration of the underlying concept . the nonlinear maxwell equations , cf .( [ syst0]-[syst2 ] ) below , involve a vector potential that encodes the electric and magnetic fields in the usual way as well as an additional scalar . the function contains information , extractable in a certain simple canonical way , about the local value of the _ filling factor _ ( also known as the filling fraction ) .( the filling factor is defined as the number of quanta of the magnetic field per electron charge in the first landau level .it is then natural and effective to think of the electrons as forming in conjunction with the corresponding magnetic flux quanta composite particles either bosons or fermions , or laughlin particles depending on the actual value of the filling factor . 
)it is thus postulated that the filling fraction typically an input of a microscopic theory that is always assumed constant microlocally is allowed to slowly vary in the coarser scale .in fact , it was shown in that nm predict occurrence of phase changes that lead to formation of vortices in , and _ a fortiori _ in the magnetic field .this picture conforms with the well known analogy between the quantum hall effects and the high- superconductivity .an inquisitive reader might now point at the following seeming conundrum .the physical interpretation of as a filling factor requires the presence of two - dimensional geometric structures that endow us with a possibility of including the lowest landau level in the basic dictionary .thus , it may appear a priori puzzling , how we are going to retain this interpretation of in three or four spatial dimensions ?the answer is provided by the intrinsic structure of the nm themselves .on one hand , it is shown below that the filling factor variable may be completely factored out of the equations when viewed in the complete four dimensions of spacetime .needless to say , if one attempted to analyze such -free form in two dimensions the variable would reemerge without change as it is there encoded in the magnetic field , . on the other hand ,one notes that a remnant or a generalization of the filling factor interpretation carries over to three dimensions .namely , the nm in three dimensions imply existence of a codimension one foliation of the three - space associated with the static solutions . moreover , one notes here that the nm do not a priori introduce any restrictions as to the type of the resulting foliation in fact any regular foliation and even foliations with singularities introduced by degenerating leaves are admitted by the equations. however , as already mentioned above the existence of solutions of a special type , namely the flux - tube type , implies geometrical restrictions on the foliation and topological restrictions on the three - manifold . indeed , in this case foliation must be taut. it also seems reasonable to expect that the composite particle interpretation remains valid in this setting and the number of participating electrons in each leaf is again determined by , virtually leading to the notion of an _ effective _ landau level . in the last words of this sectioni would like to admit that , the subject matter at hand being both new and inherently interdisciplinary as well as by way of my own background and limitations , it is not always easy to pick the optimal terminology . realizing i will unavoidably fail to satisfy in this respect one group of readers or another, i can only ask the readership to be as tolerant as they can afford and hope that in the end substance will triumph over form .in what follows , in order to get around rather tedious algebra while not compromising our understanding of what is essentially involved , i present a shortcut style exposition of the necessary calculations .i believe that readers who are well familiar with differential geometry will find it easy to reinterpret this calculation in its natural invariant setting , while those who are less familiar with the abstract setting may in fact appreciate its absence here .consider the following system of equations the nonlinear maxwell equations ( nm ) in the form in which they have appeared in my previous papers . where is a real valued function and is the electromagnetic vector potential , so that the corresponding electromagnetic field is . 
here , is a physical constant with unit ] if the unit of length on is to be ] .let us say the corresponding laplace - beltrami operator on forms is then .calculation shows that the condition is equivalent to the system of equations ( [ syst0][syst2 ] ) , cf . .for the sake of discussion in this section , consider the nm on either a lorentzian or a riemannian four - manifold as the metric signature plays a secondary role .in particular , it is preferable to replace the -notation with the -notation .assume for the sake of simplicity that the second cohomology group of the manifold is trivial .omitting the constant , write the system one more time in the form since ( [ a1 ] ) implies , one has so that and the new form satisfies a _ dual _ system of equations this is a functional transform reminiscent of the fourier or the backlund transforms , notwithstanding the fact that all transforms are somewhat reminiscent of one another . in particular , the resulting dualistic perspective has the expected property that trivial solutions of one of the systems lead to more complex solutions of the dual system .to illustrate the idea , let me now present a few examples of dual solutions on with either the euclidean or the minkowski metric as specified in the discussion ._ example 1 ._ let the metric be euclidean and take , , and .equation ( [ a1 ] ) is automatically satisfied and ( [ a2 ] ) assumes the form so that for solves the problem .now and it staisfies equations ( [ tilda1]-[tilda2 ] ) ._ example 2 ._ departing for a while from the assumption of vanishing second cohomology , let us reinterpret the previous example on a four - torus assuming periodicity of coordinates with period .note that the first bundle is necessarily nontrivial as the cohomology class \neq 0 $ ] .let us allow the function drop its dependence on so that , say , , provided the ` right choice ' of has been made .now , is an exact form so the second bundle is topologically trivial ._ example 3 ._ consider and so that ( [ a1 ] ) is satisfied .let us now look at the metric with signature so that ( [ a2 ] ) means .the general solution of this equation is a standing wave with variable amplitude .this pattern is inherited by ( up to the sign again ) which satisfies ( [ tilda1]-[tilda2 ] ) ._ example 4 ._ let us for a change begin on the other side and take , say , and .again , the first equation ( [ a1 ] ) is automatically satisfied while ( [ a2 ] ) becomes .as explained in ( see also remarks at the end of section [ static ] below ) apart from the trivial constant solution , this problem also has a solution in the form of a vortex lattice . in the latter case satisfies ( [ a1]-[a2 ] ) and represents static magnetic flux tubes .i emphasize that only the vector potential and the filling fraction variable that appear in the first set of equations have physical interpretation .reassuringly , the presence of a nontrivial in examples _ 1 _ and _ 3 _ did not contribute anything unexpectedly strange to the constant electric and magnetic fields in these examples , while it ` introduced ' flux tubes in example _4_. although one could consider similar interpretation of the transformed vector potential , just as one can for any -connection , i feel this is uncalled for and would probably be unjustifiable at this point .nevertheless , the existence of the transform is a remarkable fact whose possible applications to four - manifolds will be explored more thoroughly in the future . 
in a way , this new duality is a generalization of the regular hodge - star duality that may be compared to the projective generalization of the euclidean reflection .this analogy may be justified in the following way .projective duality is induced by a fixed quadratic form . what is the nm analog of that object ?introduce notation .a direct calculation shows that ( [ a1]-[a2 ] ) may be written in the form of a system of quadratic equations this form of the equation has one other advantage .suppose one has found a solution of ( [ aa1]-[aa2 ] ) .one can now use gauge invariance of the equations in the following way .let be a solution of the equation the existence of a solution follows from the fredholm alternative when the metric is positive definite , and it amounts to solving a linear wave equation in a lorentzian metric .one can now replace with ( and denote the resulting form by again ) .in the new gauge , so that in fact satisfies the system that consists of ( [ aaa1 ] ) and ( [ aa2 ] ) is either quasilinear elliptic or hyperbolic , depending on the metric . solving the latter system may not be helpful at all in finding solutions of the original ( [ aa1]-[aa2 ] ) , since one can not guarantee that a solution satisfies the lorentz gauge condition .however , solutions of ( [ aa1]-[aa2 ] ) _ a fortiori _ satisfy ( [ aaa1 ] ) and ( [ aa2 ] ) so that in particular they will obey all a priori estimates on the solutions of , say , quasilinear hyperbolic systems .in particular , this point of view may justify the claim that the phenomena described in this paper shed some light on the complex nature of quasilinear systems of pde of certain types in general .i will now take full advantage of the ( [ first]-[last ] ) form of the nm .in analogy to the electromagnetic wave in vacuum , that one recalls is counted among the solutions of this system , one wants to look for a solution with . in the end i will check that the new solution of ( [ first]-[last ] ) in fact satisfies ( [ syst0]-[syst2 ] ) which is not a priori guarantied .make an ansatz where , and are a priori functions of that are smooth a.e .and neither one of them vanishes identically . as an immediate consequence, one obtains that ( [ first ] ) and ( [ second ] ) are equivalent to which implies on the other hand , ( [ third ] ) and ( [ fourth ] ) are equivalent to equations ( [ helper1 ] ) , ( [ helper2 ] ) and ( [ longy ] ) imply that is in fact constant using ( [ helper1 ] ) and ( [ helper2 ] ) again , one obtains in particular , and are not compactly supported . at this point ,the only condition left a priori unfulfilled is the vanishing divergence condition .thus , all equations ( [ helper1]-[longy ] ) above are satisfied iff there is a function such that defining the electric and magnetic fields by ( [ eb ] ) with , so that in particular , and choosing that satisfies the linear wave equation ( [ last ] ) , one obtains a solution of ( [ first]-[last ] ) .however , physical solutions must in addition satisfy the _ a priori _ more restrictive system ( [ syst0]-[syst2 ] ) .consider as given in ( [ fa ] ) .equation ( [ syst0 ] ) is satisfied automatically since it is equivalent to ( [ first]-[second ] ) . 
on the other hand , ( [ syst1 ] )becomes now , ( [ syst1_2 ] ) and ( [ syst1_3 ] ) imply via ( [ thebs ] ) that in particular , .thus , ( [ syst0]-[syst2 ] ) has been reduced to the following system of two equations : the first equation above admits three types of classical solutions .namely , observe that each solution effectively depends on one harmonic variable in the -domain either , or .thus , equation ( [ smallsyst2 ] ) is satisfied if for an arbitrary function of one variable .therefore , in view of ( [ thebs ] ) one obtains three types of solutions ( redefining ) = \left\{\begin{array}{lll } c(t+ez)/(r^2\ln{r^2})[-y , x ] \\c(t+ez)\sec{(k_1x+k_2y + \alpha)}[-k_2,k_1]\\ c(t+ez)\exp{(-k_1x - k_2y ) } [ -k_2,k_1 ] \end{array}\right.\ ] ] in correspondence with ( [ thef ] ) .since one is looking for strong solutions , one has the freedom to cut off pieces of the classical solutions ( by restricting the domain ) and to put them back together . in this way, one obtains solutions that are either continuous or have jump discontinuities but may be guarantied to remain bounded .last but not least , it is physically correct to interpret the divergence of the electric field as charge and as the electric current .one checks that for solutions as above the -component of current vanishes while the -component is equal to .more precisely , one obtains that piecewise in correspondence with ( [ thef ] ) and ( [ thebs ] ) .in addition to the piecewise smooth distribution of charge , one should include charge concentrated on singular surfaces where the electric field has jump discontinuities as indicated by the distributional derivative .therefore , charge is transported along the z - axis with the speed and without resistance as the vector of current is perpendicular to the electric field .charge is mostly concentrated along _charge stripes _ where the electric and magnetic fields have singularities .the net current depends on the particular choice of a ( strong ) solution . of course, the theory does not tell us how to solve the practical problem of electronics namely , how to create conditions for a particular function , constant and a desired mosaic of singularities to actually occur in a physical system .the classical maxwell equations admit static solutions of two types only : the uniform field solutions , and the unit charge or monolpole - type solutions , as well as superpositions of these fundamental types of solutions .as we will see below , the nonlinear theory encompasses a larger realm including the magnetic - flux - tube type and the charge - stripe type solutions .these additional configurations require nonlinearity and can not be superposed , which gives them more rigidity . in the next sectionwe will see what can be said about the variety of such solutions , while in this section i will only display a single example of this type .apart from the applicable goal , the idea is to present an example that possesses all the essential features of the general class of solutions yet the required calculation is free of more subtle geometric technicalities .time - independent solutions of the nm posses physical interpretation only if they satisfy the equations in the classical sense almost everywhere . 
assuming that all fields are independent of time ( [ first]-[last ] ) takes on the form under the assuption that a.e .adopt an ansatz that the integral surfaces of the planes perpendicular to the field are flat , say , one easily checks that equations ( [ second_st ] ) and ( [ fourth_st ] ) are satisfied .assume in addition that the electric field is potential , i.e. so that ( [ first_st ] ) is satisfied .remembering notation , one calculates directly that while and thus , equation ( [ third_st ] ) is equivalent to the following system of three equations and since one obtains from the first two equations while the third equation assumes the form at this point the nm have been reduced to the system of just two scalar equations ( [ last_st ] ) and ( [ voltage ] ) . denote and and assume in addition so that it now follows from ( [ last_st ] ) and ( [ voltage ] ) that the triplet and is a solution of the nm if only and satisfy a decoupled system of semi - linear elliptic equations at this point , i would like to emphasize one more time that in a field theory one looks for _ strong _ solutions , i.e. solutions that satisfy equations in the classical sense almost everywhere . typically , such solutions are smooth except for singularities supported on a union of closed submanifolds . furthermore , geometrically invariant derivatives of the resulting fields in the distributional sense signify charges . with this understood ,let us briefly turn attention to equation ( [ charge_stripe ] ) .one wants to avoid holding the reader hostage to the formal analysis of this elementary equation which might be somewhat distracting .thus , i have chosen to briefly describe the solutions qualitatively leaving aside technical details that can be easily reconstructed aside by the reader .first , one notes that if then a solution is concave , while for it will be convex for large values where . assuming formally that is a function of ( piecewise ) , one reduces ( [ charge_stripe ] ) to the first order equation thus , there are essentially two types of positive solutions , depending on the actual values of constants .the first type includes solutions that assume value at a certain point and increase monotonously to infinity as as well as the symmetric solutions defined between and some point , say again , where they reach .these solutions require and they asymptotically look like one can use both branches in order to put together a strong solution that forms a cusp or a jump discontinuity at .the second type consists of solutions that are concave , rise to the highest peak at , when , and fall off to on both sides in finite time while being differentiable in - between . selecting the constants and combining both types of solutions piecewise segment - by - segment one obtains strong solutions that in turn provide electric fields according to formula ( [ mag_and_el ] ) .since , with the exception of the trivial constant solution , there are no global smooth solutions , one concludes that either is constant or there exist charge stripes located at planes where has singularities .the distributional derivative is in each case equal to the dirac measure concentrated at as above and scaled by the size of the jump , and classical derivatives on both sides of the singularity . even in absence of a jump ,the charge will switch from negative to positive thus forming what can be amenably called a charge - stripe .an example of this is shown in _fig.1_. 
it is much more difficult to figure out solutions of the second equation .i refer the reader to for a more thorough analysis , while here i will just briefly summarize my previous findings .solutions of equation ( [ vortex ] ) correspond to critical points of the functional which is neither bounded below nor above , so that one is looking at the problem of existence of _ local _ extrema .the equation always admits a trivial constant solution .but , as it is shown in , it also possesses nontrivial vortex lattice solutions .more precisely , if is larger than a certain critical value then there is a nonconstant doubly periodic function which satisfies the finite difference version of ( [ vortex ] ) everywhere except at a periodic lattice of isolated points , one point per each cell . in this way , a lattice of flux tubes , cf ._ fig.2 _ , emerges as a solution of the nm . for the time being , the proof of this fact relies on finite - dimensionality essentially , and does not admit a direct generalization to the continuous - domain case. however , physical parameters , like and , are asymptotically independent of the density of discretization .thus , i conjecture existence of the continuous domain solutions that satisfy the equation a.e . in the classical sense and retain the particular vortex morphology .presently , the essential obstacle to proving this conjecture is lack of a regularity theory for the discrete vortex solutions .the proof in is carried out in the ( discretized ) torus setting .one believes that vortex type solutions exist on any closed ( orientable ) surface .every gauge theory comes equipped with an associated set of topological invariants usually characteristic classes of the bundles used to introduce the gauge field .articles , , teach us how such topological invariants may be manifested in an electronic system as observable quantum numbers .the nonlinear maxwell theory is naturally equipped with two kinds of topological invariants .on one hand , one has the first chern class of the original -bundle .additionally , we will see below that in the case of static solutions the nm give us an additional set of invariants defined directly by the foliated structure of the underlying three - manifold .( in the discussion below , i generally assume for the sake of simplicity that is a closed orientable manifold unless stated otherwise . ) in this section i will make an effort only to identify rather than exploit to the fullest the geometric and topological ramifications of this nonlinear theory of electromagnetism . to gain some initial impetus , let us be guided by the following question a question of this type is typical in algebraic topology where one is asking about global obstructions to the presence of certain algebraic factorization properties of analytic objects , like linear differential equations as it is the case for , say , the de - rham cohomology groups . 
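Before turning to the topological side of the problem, here is a minimal relaxation skeleton for the finite-difference vortex construction described above (a doubly periodic problem on a discretized torus). The specific nonlinearity and parameters of equation ([vortex]) are not recoverable from this extract, so the nonlinearity enters as a user-supplied callable and the one used in the usage line is only a placeholder; moreover, plain descent of this kind will normally find only the trivial constant solution, whereas locating the nontrivial critical points requires the constrained search of the cited work. The sketch only shows the periodic five-point Laplacian and a damped pseudo-time loop.

```python
# Skeleton of a relaxation scheme for laplacian(u) = g(u) on an N x N
# discretized torus (periodic boundary conditions in both directions).
import numpy as np

def periodic_laplacian(u, h):
    """Five-point Laplacian with periodic wrap-around."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2

def relax(g, n=64, h=1.0, dt=0.05, steps=4000, seed=0):
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal((n, n))          # doubly periodic random seed
    for _ in range(steps):
        u += dt * (periodic_laplacian(u, h) - g(u)) # pseudo-time descent
    return u

# placeholder nonlinearity (NOT the one of the paper); report the residual
u = relax(lambda v: np.sinh(v))
print(np.abs(periodic_laplacian(u, 1.0) - np.sinh(u)).max())
```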
in our case ,the equations are nonlinear , but the principle remains the same .the importance of these questions for practical issues of electromagnetism is twofold .first , one wants to know how big is the set of possible configurations especially in the absence of the superposition principle .secondly , i believe the topological invariants displayed below are directly on target in an effort to explain and describe the nature of certain rigid structures , like the quantum hall effects , that physically occur in electronic systems .first , it needs to be emphasized that the static field equations i want to consider , i.e. the equations that descend from the four - dimensional spacetime via time - freezing coefficients , are distinct from the equations ( [ syst0]-[syst2 ] ) considered directly on a three manifold .secondly , the equations ( [ first_st]-[last_st ] ) are only valid on a euclidean space .the geometry behind these equations is easier to identify when they are rewritten in an invariant form that can be considered on any three - manifold in a coordinate independent setting .fix a riemannian metric on with scalar product extended to include measuring differential forms .denote by and the forms dual to the magnetic and electric field vectors ; recall notation and put .the static nm assume the form equation ( [ dwedge ] ) is the familiar frobenious condition on integrability of the distribution of planes given by .one always assumes is nonsingular a.e . so that the distribution is _ a priori _ also defined a.e . for convenience , it is assumed throughout this section that the foliation determined by is smooth .( it is quite clear that for the flux - tube type solutions the distribution extends through the singular points and is defined everywhere . at this stage, however , it is hard to make a formal argument to this effect , hence the _ a priori _ assumption . )the condition of smoothness implies that the three - manifold must have vanishing euler characteristic . in particular , singular foliations ,some of which may be associated with other types of solutions of the nm , are excluded from the discussion below .it follows that there is a -form , known as the godbillon - vey form , such that this form is not defined uniquely .however , as is well known , and the godbillon - vey ( gv ) cohomology class {h^3(m)}\ ] ] is uniquely defined . 
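For the Frobenius condition ([dwedge]) mentioned above, recall that for a 1-form beta on a three-manifold with (Euclidean) dual vector field B, the obstruction beta ^ d(beta) is proportional to B . curl(B). A small symbolic check on two sample plane fields of my own choosing — horizontal planes (integrable) and the standard contact planes (not integrable) — illustrates the criterion; neither example is taken from the paper.

```python
# Frobenius integrability test for a plane field ker(beta) in R^3:
# beta ^ d(beta) = 0  <=>  B . curl(B) = 0 for the dual vector field B.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

def curl(B):
    bx, by, bz = B
    return (sp.diff(bz, y) - sp.diff(by, z),
            sp.diff(bx, z) - sp.diff(bz, x),
            sp.diff(by, x) - sp.diff(bx, y))

def frobenius_obstruction(B):
    """Proportional to beta ^ d(beta) for beta dual to B."""
    return sp.simplify(sum(bi * ci for bi, ci in zip(B, curl(B))))

print(frobenius_obstruction((0, 0, 1)))    # 0 -> integrable (horizontal planes)
print(frobenius_obstruction((-y, 0, 1)))   # 1 -> contact planes, not integrable
```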
on a three manifoldthis class can be evaluated by integration resulting in a godbillon - vey number this invariant poses many interesting questions that have not been fully resolved by geometers yet .below , i will justify two observations .first , the condition of existence of the magnetic flux - tube solutions imposes both local and global restrictions on the foliation .second , magnetic flux - tube solutions exist in topologically nontrivial situations with nonzero gv - number .this is formally summarized in the two propositions that follow .they are far from the most general statements that can be anticipated in this direction , but are also nontrivial enough to suggest a conjecture regarding _ quantization _ of the gv - number that i will formulate following proposition [ p1 ] .consider _ a priori _ a foliation given by locally .first , one introduces a local coordinate patch such that the foliation is given by the -planes and .in particular let denote the volume element on a leaf .one has .equation ( [ delta ] ) becomes .thus , there is a function such that a calculation analogous to that in the previous section shows that the whole system ( [ delta]-[phi_long ] ) is reduced to observe that in order to obtain factorization it is necessary and sufficient that i.e. a priori dependence of on is dropped . if that holds , the equations ( [ dele ] ) and ( [ philap ] ) can be decoupled with an additional ansatz .one also has that for a constant and conversely , if ( [ flux ] ) and ( [ split ] ) hold , then so must ( [ zindep ] ) and the mean curvature of a leaf vanishes .indeed , by definition this implies [ p1 ] for the existence of flux - tube type solutions in the sense of existence of factorization ( [ split ] ) and decoupling of equation ( [ flux])it is necessary that the foliation given by be taut , i.e. the mean curvature of leaves must vanish. in particular must be infinite ._ the first part has been shown above .the second part follows from a result of d. sullivan that he deduced from the result of novikov on the existence of a closed leaf that is a torus ( cf . , and for additional general material and references). in particular , there are no flux - tube type solutions of the nm that would conform with the reeb foliation .this is a practical issue since the reeb foliation exists on a solid torus , so that in principle it might be observed experimentally which would be inconsistent with the theory at hand .this fact is also interesting for another reason .namely , according to the celebrated theorem by thurston in each real number may be realized as the godbillon - vey number for a certain codimension one foliation on the three - sphere .the known proof of this result uses the reeb foliation in an essential way .i do not know if this fact is canonical , i.e. 
if the presence of the reeb foliation is necessary for the result to hold , but if it turns out to be so then excluding the reeb foliation from the game should result in a reduction of the range of the g - v number , possibly to a discrete subset of the real line .in such a case , the resulting set of the g - v numbers accompanying flux - tube type solutions of the nm would also be discrete .this is consistent with my expectation that these invariants must be related with ( both the integer and the fractional ) quantum hall effects .future research should bring a resolution of this problem .another observation is that the factorization given by ( [ split ] ) and ( [ flux ] ) does exist in topologically nontrivial situations .more precisely , i want to consider solutions of the nm on and its compact factors .these three - manifolds are equipped with interesting codimension one foliations known as the roussarie foliation .let the lie algebra be given by = y^+,\quad [ x , y^-]=-y^-,\quad [ y^+,y^-]=2x.\ ] ] pick a metric on in which the corresponding left - invariant vector fields , , and are orthonormal and let , , and be the corresponding dual -forms .one checks directly that so that the distribution is integrable and is the gv - form of the resulting foliation .in particular , one can introduce local coordinates such that , , .this foliation descends to compact factors of that can each be identified with total space of the unit tangent bundle of the hyperbolic riemann surface of genus that depends on our choice of the co - compact subgroup acting on by isometries . moreover , the gv - integrand is proportional to the natural volume form on the three - manifold . as a result of this ,the corresponding gv - numbers assume values in a discrete set .i want to look for solutions of the nm that satisfy the ansatz in particular , ( the frobenious ) equation ( [ dwedge ] ) is satisfied automatically .moreover , since , equation ( [ delta ] ) implies as before , one checks that ( [ stard ] ) implies ( [ dele ] ) as well as and . in consequence ,one again has ( [ split ] ) and assuming as before one obtains ( [ flux ] ) . in consequence, the following holds true .the roussarie folitions on and its compact factors satisfy the factorization condition for the existence of magnetic flux - tube type solutions in the sense that the tangent distribution can be expressed as a.e .and one can reduce the nm to the form ( [ split]-[flux ] ) . in a similar way one can obtain factorization ( [ split ] ) and ( [ flux ] ) for other foliations , like the natural foliation on say .it is natural to ask if the nm descend from a lagrangian functional depending on the two variables and , say , via the euler - lagrange calculus of variations .the answer is negative as one can easily see considering that in general a gradient must pass the second derivative test : a condition that can not be satisfied by the expressions in ( [ syst0]-[syst2 ] ) viewed as the gradient , say , of an unknown functional .this suggests that the nm may constitute just a part of a broader theory that would encompass additional physical fields . in other words , the equations ( [ syst0]-[syst2 ] )would have to be coupled to some other equations via additional fields .in addition , such coupling would have to induce only a very small perturbation of the present picture that one believes is essentially accurate .such possibilities may become more accessible in the future . 
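The bracket relations quoted above are exactly those of sl(2,R) in the basis x = h/2, y+ = e, y- = f. A short check with the standard 2x2 matrix realization (the choice of representation is mine, not the paper's) confirms the structure constants used for the Roussarie foliation.

```python
# Verify [x, y+] = y+, [x, y-] = -y-, [y+, y-] = 2x in the 2x2 realization
# X = diag(1/2, -1/2), Y+ = E_12, Y- = E_21 of sl(2,R).
import numpy as np

X  = np.array([[0.5, 0.0], [0.0, -0.5]])
Yp = np.array([[0.0, 1.0], [0.0,  0.0]])
Ym = np.array([[0.0, 0.0], [1.0,  0.0]])

bracket = lambda a, b: a @ b - b @ a

assert np.allclose(bracket(X, Yp),  Yp)     # [x, y+] =  y+
assert np.allclose(bracket(X, Ym), -Ym)     # [x, y-] = -y-
assert np.allclose(bracket(Yp, Ym), 2 * X)  # [y+, y-] = 2x
print("sl(2,R) structure relations verified")
```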
among other ,perhaps related goals is that of deriving the nm equation directly from the microscopic principles .the well - known analogy between the quantum hall effects and high- superconductivity suggests that there should exist vortex lattices involving the so - called _ filling factor _( microlocally a constant scalar ) that plays a major role in the description of _ composite particles_. the nm describe exactly this type of a vortex - lattice .simulation and theory show that this system conforms with the experimentally observed physical facts .it stretches the domain of applicability of the maxwell theory to encompass phenomena such as the _ magnetic oscillations _ , _ magnetic vortices _ , _ charge stripes _ that occur in low - temperature electronic systems exposed to high magnetic fields .there are other systems of pde that admit vortex - lattice solutions and are conceptually connected with electromagnetism , like the well known ginzburg - landau equations valid within the framework of low type - ii superconductivity , or the chern - simons extension of these equations which , some researchers have suggested , may be more relevant to the fractional quantum hall effect and/or high- superconductivity , cf .the free variables of these equations are the so - called _ order parameter _ ( a section of a complex line - bundle ) and a -principal connection , both of them containing topological information . in the case of nm ,all the topological information is contained in one of the variables , i.e. the principal connection , while the other is a scalar function .an additional advantage of the nm is in that it remains meaningful in three - plus - one dimensions just as well as in the two - dimensional setting .i would also like to mention that recently other researchers have introduced nonlinear maxwell equations of another type in the context of the quantum hall effects , cf .the nm theory presented in this and the preceding articles of mine is of a different nature .finally , although this is far from my areas of expertise and the remark should be received as completely _ ad hoc _ , i would also like to mention that yet another context in which foliations come in touch with the quantum hall effect is that of noncommutative geometry , cf . .let me conclude with a question that may suggest yet another point of view .namely , is there a coalescence between the nonlinear pdes ( in the form of the nm ) and the ( quantum ) information theory ? as it was pointed out , construction of error correcting codes may unavoidably require manipulating quantum information at the topological level .anyhow , this is how i have understood the essential thought in . adoptingthis paradigm would strongly suggest that the effective language of quantum computation should be costructed at many levels , including that of the mesoscopic field theory in parallel with the language derrived from the basic principles as it is done now .future research will likely better clarify these issues .29 a. connes , noncommutative geometry , academic press , 1994 m. h. freedman , plenary talk at the _ mathematical challenges of the 21st century _ conference , los angeles , august , 2000 j. frhlich and b. pedrini , in : a. fokas , a. grigorian , t. kibble , b. zegarlinski , eds . , mathematical physics 2000 , imperial college press r. b. laughlin , phys .b 23 ( 1981 ) , 5632 - 5633 r. b. laughlin , phys . rev .b 27 ( 1983 ) , 3383 - 3389 r. b. laughlin , phys50 ( 1983 ) , 1395 - 1398 r. b. 
laughlin , science 242 ( 1988 ) , 525 - 533 s. novikov , trudy moskov .( 1965 ) , 248 - 278 , ams translation , trans .moscow math .( 1967 ) , 268 - 304 g. reeb , actualit sci .1183 , hermann , paris ( 1952 ) r. roussarie , ann .fourier 21 ( 1971 ) , 13 - 82 a. sowa , j. reine angew . math ., 514 ( 1999 ) , 1 - 8 a. sowa , physics letters a 228 ( 1997 ) , 347 - 350 a. sowa , cond - mat/9904204 d. sullivan , comment .54 ( 1979 ) , 218 - 223 w. thurston , bull .78 ( 1972 ) , 511 - 514 ph .tondeur , geometry of foliations , birkhuser verlag , 1997 r. e. prange , s. m. girvin , eds . , the quantum hall effect , springer - verlag , 1990 s. c. zhang , int . j. modb 6 , no . 1 ( 1992 ) , 25 - 58 _ fig.1 _ an example of a strong solution of ( [ charge_stripe ] ) . is a positive function , the electric field is given by formula ( [ mag_and_el ] ) .the resulting charge distribution is obtained by evaluating .( in general , is understood in the distributional sense ) .charge is concentrated along certain plains .this is the basic appearance of charge stripes intertwining concentrations of positive and negative charges .( one should compare this static picture with the description of moving charge stripes in section [ charge_wave ] . )
the goal of this paper is to sketch a broader outline of the mathematical structures present in the nonlinear maxwell theory in continuation of work previously presented in , and . in particular , i display new types of both dynamic and static solutions of the nonlinear maxwell equations ( nm ) . i point out how the resulting theory ties to the quantum mechanics of correlated electrons inasmuch as it provides a mesoscopic description of phenomena like nonresistive charge transport , static magnetic flux tubes , and charge stripes in a way consistent with both the phenomenology and the microscopic principles . in addition , i point out a number of geometric structures intrinsic to the theory . on the one hand , the presence of these structures indicates that the equations at hand can be used as ` probing tools ' for purely geometric exploration of low - dimensional manifolds . on the other hand , global aspects of these structures are in my view prerequisite to incorporating ( quantum ) informational features of correlated electron systems within the framework of the nonlinear maxwell theory .
in a previous study , we established the theoretical background needed for the comparison of the output of measurement systems used in capturing data for the analysis of human motion .we developed our methodology for a direct application in case of the two versions of the microsoft kinect ( hereafter , simply ` kinect ' ) sensor , a low - cost , portable motion - sensing hardware device , developed by the microsoft corporation ( microsoft , usa ) as an accessory to the xbox video - game console ( 2010 ) .the sensor is a webcamera - type , add - on peripheral device , enabling the operation of xbox via gestures and spoken commands .the first upgrade of the sensor ( ` kinect for windows v2 ' ) , both hardware- and software - wise , tailored to the needs of xbox one , became available for general development and use in july 2014 . in ref . , we applied our methodology to a comparative study of the two kinect sensors and drew attention to significant differences in their output .previous attempts to validate the kinect sensor for various medical / health - relating applications have been discussed in ref .the present study is part of our research programme , investigating the possibility of involving ( either of ) the kinect sensors in the analysis of the motion of subjects walking or running ` in place ' , e.g. , on commercially - available treadmills .if successful , kinect could become an interesting alternative to marker - based systems ( mbss ) in capturing data for motion analysis , one with an incontestably high benefit - to - cost ratio .studied herein is the dependence of the evaluated lengths of eight bones of the subject s extremities on the kinematical variables pertaining to the viewing of these bones by the sensor .the bones are : humerus ( upper arm ) , ulna ( lower arm , forearm ) , femur ( upper leg ) , and tibia ( lower leg , shank ) .the evaluated lengths of the subject s left and the right extremities are separately analysed . ideally , the lengths of the bones of the subject s extremities should come out constant , irrespective of the orientation of these bones in space and of the viewing angle by the sensor . in reality ,a departure from constancy is inevitable , given that the kinect nodes represent centroids in the probability maps obtained from the captured data in each frame separately ; as such , the node - extraction process is subject to statistical fluctuations and is affected by different systematic effects in the three spatial directions . in the present work ,we first examine the variability of the evaluated bone lengths with the variation of two angles describing the orientation of the bones in space , to be denoted hereafter as and ; is the angle of the bone with the vertical , whereas is the angle between the bone projection on the ( anatomical ) transverse plane and the axis ( the direction associated with the depth in the images obtained with the sensors ) .subsequently , we investigate the dependence of the evaluated lengths of these bones on the inclination angle with respect to kinect s viewing direction .we pursue the determination of systematic effects in the kinect - captured data , placing emphasis on establishing the similarities and the differences in the behaviour of the two sensors ; in this respect , the present paper constitutes another comparative study of the two kinect sensors , albeit from a perspective different to the one of refs . .the material in the present paper has been organised in four sections . 
in section [sec : method ] , we provide the details needed in the evaluation and in the analysis of the bone lengths .the results of the study are contained in section [ sec : results ] .we discuss the implications of our findings in the last section .in the original sensor , the skeletal data ( ` stick figure ' ) of the output comprises time series of three - dimensional ( 3d ) vectors of spatial coordinates , i.e. , estimates of the ( ,, ) coordinates of the nodes which the sensor associates with the axial and appendicular parts of the human skeleton .while walking or running , the subject faces the kinect sensor . in coronal ( frontal )view of the subject ( sensor view ) , the kinect coordinate system is defined with the axis ( medial - lateral ) pointing to the left ( i.e. , to the right part of the body of the subject being viewed ) , the axis ( vertical ) upwards , and the axis ( anterior - posterior ) away from the sensor , see fig . [fig : cs ] . in the upgraded sensor ,five new nodes have been appended at the end of the list : one is a body node , whereas the remaining nodes pertain to the subject s hands . in both versions , parallel to the video image , kinect captures an infrared image , which enables the extraction of information on the depth .the sampling rate in the kinect output ( for the video and the skeletal data , in both versions of the sensor ) is hz . as the kinect output has already been detailed twice , in sections 2.1 of refs . , there is no need to further describe it here .the ` upper ' endpoint ( upper or proximal extremity ) of each bone will be identified by the subscript , the ` lower ' endpoint ( lower or distal extremity ) by .the upper and lower endpoints of the bones refer to the upright ( erect ) standing ( rest ) position of the subject ( standard anatomical position ) ; in this position , in all cases .the four bones pertaining to the upper extremities ( arms ) are defined on the basis of the kinect nodes shoulder_left , elbow_left , and wrist_left ( left side ) ; shoulder_right , elbow_right , and wrist_right ( right side ) .the four bones pertaining to the lower extremities ( legs ) are defined using the kinect nodes hip_left , knee_left , and ankle_left ( left side ) ; hip_right , knee_right , and ankle_right ( right side ) . denoting the bone length as , the angle is estimated via the expression evidently , ] domain . of relevance in the context of the present workis the angle at which the kinect sensor views a bone .the inclination angle is obtained from the data as follows . referring to fig .[ fig : cs ] , we define the position vectors pertaining to the bone endpoints as and .the unit vector , normal to the kse plane , is obtained via the expression : ( we have not found instances in the data involving parallel or antiparallel vectors and . 
) the position vector of the midpoint m of the se segment is given by and the unit vector along that direction by the unit vector , on the kse plane , orthogonal to , is obtained by the expression the inclination is the angle between and ; evidently , as we are interested in the inclination of each bone , not in its orientation with respect to the unit vector , we will use as free variable .it is expected that the reliability of the kinect - evaluated bone lengths should increase with increasing .the data acquisition involved one male adult ( zhaw employee ) , with no known motion problems , walking and running on a commercially - available treadmill ( horizon laufband adventure 5 plus , johnson health tech .gmbh , germany ) .the placement of the treadmill in the laboratory of the institute of mechatronic systems ( school of engineering , zhaw ) , where the data acquisition took place , was such that the subject s motion be neither hindered nor influenced in any way by close - by objects .prior to the data - acquisition sessions , the kinect sensors were properly centred and aligned .the sensors were then left at the same position , untouched throughout the data acquisition .it is worth mentioning that , as we are interested in capturing the motion of the subject s lower - leg parts ( i.e. , of the ankle and foot nodes ) , the kinect sensors must be placed at such a height that the number of lost lower - leg signals be kept reasonably small .our past experience dictated that the kinect sensor should be placed close to the minimal height recommended by the manufacturer , namely around ft off the ( treadmill - belt ) floor . placing the sensor higher ( e.g. , around the midpoint of the recommended range of values , namely at ft off the treadmill - belt floor ) leads to many lost lower - leg signals (the ankle and foot nodes are not properly tracked ) , as the lower leg is not visible by the sensor during a sizeable fraction of the gait cycle , shortly after toe - off .the kinect sensor may lose track of the lower parts of the subject s extremities ( wrists , hands , ankles , and feet ) for two reasons : either due to the particularity of the motion of the extremity in relation to the position of the sensor ( e.g. , the identification of the elbows , wrists , and hands becomes problematic in some postures , where the kinect viewing angle of the ulnar bone is small ) or due to the obstruction of the extremities of the human body ( behind the subject ) for a fraction of the gait cycle . assuming that these instances remain rare ( e.g. , below about of the available data in each time series , i.e., no more than one lost frame in ) , the missing values may be reliably obtained ( interpolated ) from the well - determined ( tracked ) data .although , when normalised to the total number of available values , the untracked signals usually appear ` harmless ' , particular attention was paid in order to ensure that no node be significantly affected .five velocities were used in the data acquisition : walking - motion data were acquired at km / h ; running - motion data at , , , and km / h . 
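A minimal sketch of the per-frame geometry described in this section: given the Kinect coordinates of the two endpoints of a bone (the sensor K sitting at the origin of the Kinect frame), it returns the bone length, the two orientation angles, and the inclination relative to the viewing direction. The sign conventions, and which of the two complementary inclination angles is the one reported in the paper, are not recoverable from this extract, so both complements are returned and the conventions below are assumptions; the coordinates in the usage line are purely illustrative.

```python
import numpy as np

def bone_geometry(upper, lower):
    """upper, lower: (x, y, z) Kinect coordinates of the bone endpoints."""
    s, e = np.asarray(upper, float), np.asarray(lower, float)
    d = e - s
    length = np.linalg.norm(d)

    # orientation angles (assumed conventions: y vertical, z depth)
    theta = np.degrees(np.arccos(np.clip(-d[1] / length, -1.0, 1.0)))  # from the vertical
    phi = np.degrees(np.arctan2(d[0], d[2]))   # transverse projection vs the z axis

    # inclination with respect to the sensor's viewing direction
    n = np.cross(s, e)
    n /= np.linalg.norm(n)                     # unit normal to the K-S-E plane
    u = (s + e) / 2.0
    u /= np.linalg.norm(u)                     # unit vector from K to the midpoint M
    w = np.cross(n, u)                         # in the K-S-E plane, orthogonal to u
    ang_w = np.degrees(np.arccos(np.clip(abs(np.dot(d / length, w)), 0.0, 1.0)))
    return length, theta, phi, (ang_w, 90.0 - ang_w)

# illustrative coordinates (metres), e.g. HIP_LEFT -> KNEE_LEFT in one frame
print(bone_geometry(upper=(-0.12, 0.02, 2.31), lower=(-0.14, -0.38, 2.25)))
```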
at each velocity setting , the subject was given min to adjust his movements comfortably to the velocity of the treadmill belt .the kinect output spanned slightly over min at each velocity .the variation of the distance between the subject and the kinect sensors was monitored during the data acquisition ; it ranged between about and m , well within the limits of use of the sensors set by the manufacturer .the recording on the two measurement systems started simultaneously .at each sampled frame , the bone lengths were calculated from the kinect output and were histogrammed in ( , ) cells ; the same values were also histogrammed in bins of .twenty bins per angular direction were used in the former case , forty in the latter .averages , as well as the standard errors of the means , were calculated in all cells / bins containing at least ten entries ; cells / bins with fewer entries were ignored . techniques yielding accurate results for in - vivo measurements of the human long bones include conventional ( planar ) radiography , ct scanning ( e.g. , see ref . ) , raman spectroscopy , and ultrasonic scanning . unwilling or unable to use any of these techniques , we obtained static measurements of the subject s bone lengths with an mbs ( aicon 3d systems gmbh , germany ) .our system features two digital cameras , a control unit , and a high - end personal computer ( where the visualisation software is installed ) .the moveinspect technology hf reconstructs the 3d coordinates of the centres of markers ( adhesive targets , reflective balls ) which are simultaneously viewed by the two cameras ; the typical uncertainty in the determination of the 3d coordinates of each marker centre is below m , i.e. , negligible when compared to other uncertainties , namely to those pertaining to the placement of the markers and to skin motion .the bone lengths , not to be identified herein with the suprema of the distances of any two points belonging to the bones regarded as 3d objects , were defined as follows .* humerus : from the centre of the humeral greater tubercle ( ` coinciding ' with the kinect shoulder node ) to the humeral lateral epicondyle . as it is not straightforward to identify the centre of the humeral greater tubercles , flat markers were placed on the greater tuberosities and the resulting humeral - bone lengths were reduced ) of mm ( see end of section 3 of ref .we will assume that the same correction is also applied to the infrared image in the vertical ( ) direction ( and , albeit not relevant herein , also in the medial - lateral ( ) direction ) , in order to yield the ( and ) coordinates of the shoulder joints .therefore , when comparing it with the kinect - evaluated humeral - bone length , the mbs - evaluated length will be reduced by . ] by mm . *ulna : from just below the humeral medial epicondyle , on top of the humeral trochlea , to the ulnar styloid process .* femur : from the centre of the femoral head to just below the femoral medial condyle , on top of the medial meniscus . as the identification of the centres of the femoral heads , using a non - invasive anthropometric technique , is neither easy nor accurate , we obtained the hip positions by following the indirect approach of ref . , featuring four flat markers on the surface of the subject s body ( see subsection 2.2 of ref .the knee positions were identified in two ways : either using -mm reflective balls on top of the medial menisci and applying a correction ( lateral shift ) similar to the one applied in ref . 
or using plat markers on the patellae ( knee caps ) and applying a correction in depth ( half of the width of the subject s leg , at the knee level , in sagittal view ) ; the two results were found almost identical . *tibia : from just below the femoral medial condyle , on top of the medial meniscus , to the medial malleolus .all measurements were acquired in the upright position .the extracted values of the subject s bone lengths are listed in table [ tab : aicon ] .the average values , along with the rms ( root - mean - square ) values of the corresponding distributions , of the kinect - evaluated bone lengths are shown in table [ tab : lengths ] , separately for the two kinect sensors .a noticeable difference between the two sets of values occurs in the femoral - bone lengths , the values of which come out , in all cases , significantly smaller when using the data of the upgraded sensor ; this underestimation is equivalent to an effect between and standard deviations in the normal distribution .the quoted uncertainties in table [ tab : lengths ] are large , indicating that systematic effects come into play , thus suggesting further analysis of the bone lengths , in terms of the kinematical variables , , and .walking and running motions have different signatures ; the differences are both quantitative and qualitative : the ranges of motion in running are larger , generally expected to increase with increasing velocity ; qualitatively , the walking motion is characterised by extended elbow joints , the running motion by flexed ones .this last dissimilarity affects the detection of the elbows and of the wrists at several postures in running motion , inevitably introducing systematic effects in the evaluation of the humeral- and ulnar - bone lengths .our first study of the systematic effects in the evaluation of the bone lengths involved profile scatter plots in ( , ) cells . for convenience , the following definitions will be used in the short description of the results obtained in this part of the analysis .* forward ( ventral ) placement of a bone corresponds to a position of its lower - endpoint joint more proximal to the kinect sensor than the mid - coronal plane ; ` very forward ' placement refers to .* backward ( dorsal ) placement of a bone corresponds to a position of its lower - endpoint joint more distal to the kinect sensor than the mid - coronal plane ; ` very backward ' placement refers to . in general ,the walking motion is restricted to small values ; only in the case of the ulna does the motion extend to .the humeral - bone lengths came out -dependent ; systematically smaller values were extracted in forward placement , larger in backward . regarding the ulna ,the domain is somewhat enlarged in very forward placement , when the elbow joint is ( usually ) flexed . regarding the tibia ,a significant dependence of the evaluated length on was seen in very backward placement ; the effect was maximal around the most distal position of the ankle ( with respect to the kinect sensor ) , where the shank is not viewed sufficiently well by the sensor . 
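The per-bin averaging described at the start of this results section (means and standard errors of the mean in angular bins, with bins containing fewer than ten entries discarded) can be sketched as follows; the bin range and the synthetic data in the usage lines are placeholders, not the paper's.

```python
import numpy as np

def binned_profile(angle, length, nbins=40, lo=0.0, hi=90.0, min_entries=10):
    """Mean and standard error of the evaluated bone length per angular bin."""
    edges = np.linspace(lo, hi, nbins + 1)
    idx = np.digitize(angle, edges) - 1
    centres, means, sems = [], [], []
    for b in range(nbins):
        vals = np.asarray(length)[idx == b]
        if vals.size >= min_entries:          # ignore poorly populated bins
            centres.append(0.5 * (edges[b] + edges[b + 1]))
            means.append(vals.mean())
            sems.append(vals.std(ddof=1) / np.sqrt(vals.size))
    return np.array(centres), np.array(means), np.array(sems)

# usage on synthetic data: a weak angular trend plus noise
rng = np.random.default_rng(1)
a = rng.uniform(0.0, 90.0, 5000)
l = 0.30 + 0.02 * np.cos(np.radians(a)) + 0.01 * rng.standard_normal(5000)
centres, means, sems = binned_profile(a, l)
```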
compared to walking motion ,the domain of the values is significantly enhanced in running ( as expected ) .the evaluated humeral - bone lengths were ( again ) found -dependent .regarding the ulna , the values were large ( restriction of the motion to very forward placement ) .for the tibial bones , the evaluated lengths in very backward placement were found to be significantly larger than those obtained elsewhere .again , the largest bone lengths were extracted for large and values ( large dorsal elevation of the lower leg ) , where the tibial bone is not viewed sufficiently well by the kinect sensor . in all cases ,the relative minimax variation ( where and stand for corresponding maximal and minimal values , respectively ) of the extracted values of the bone lengths was large .the ranges ( over all velocities ) were : for the humerus , - ( original sensor ) and - ( upgraded sensor ) ; for the ulna , - ( original sensor ) and - ( upgraded sensor ) ; for the femur , - ( original sensor ) and - ( upgraded sensor ) ; and for the tibia , - ( original sensor ) and - ( upgraded sensor ) .overall , the matching of the results between the two sensors in the case of running motion was not satisfactory ; the values of pearson s correlation coefficient ( on the values of the eight bone lengths at fixed velocity ) showed a pronounced velocity dependence , ranging from at km / h to at km / h .the analysis described in the previous subsection is suggestive of the direction which the investigation must next turn to .in short , it appears plausible to assume that the goodness of the evaluation of the bone lengths does not depend separately on the angles and , but on another quantity , namely on the viewing angle of the particular bone by the kinect sensor .it is reasonable to expect that the accuracy in the evaluation of each bone length depends on the inclination of that bone with respect to the kinect viewing direction ; when a bone is viewed by the kinect sensor at almost right angle , its length should be evaluated more reliably .the appropriate free variable in this investigation , , is obtained via eq .( [ eq : eq07 ] ) .the results , obtained from the data analysis , are shown in figs .[ fig : lengthhumerusprofv1 ] , [ fig : lengthulnaprofv1 ] , [ fig : lengthfemurprofv1 ] , and [ fig : lengthtibiaprofv1 ] for the original sensor ; in figs . [ fig : lengthhumerusprofv2 ] , [ fig : lengthulnaprofv2 ] , [ fig : lengthfemurprofv2 ] , and [ fig : lengthtibiaprofv2 ] for the upgraded sensor .we now discuss these plots . * * humerus*. there can be little doubt that both kinect sensors systematically underestimate the humeral - bone length regardless of the type of the motion ( walking or running ) . only in the case in the walking - motion dataare the estimates close to the results of the static measurements .all humeral - bone lengths obtained for in the case of the running - motion data agree well , and are about - mm short of the results of the static measurements .this discrepancy is due to the misplacement of the arm joints during running , in particular of the elbows , which are kept in flexed position throughout the gait cycle .compared to the original sensor , the upgraded sensor generally yields slightly shorter humeral - bone lengths . * * ulna*. in the data obtained with the original sensor , some left / right asymmetry in the results is visible . the range of variation of the ulnar - bone length was found maximal ( about cm ) in the case of the original sensor , for the right ulna . 
regarding the walking - motion data for , the extracted values do not disagree with the results of the static measurements .* * femur*. the femoral - bone length , obtained with the original sensor , is seriously overestimated in all cases .the hip positions are mostly responsible for this discrepancy .( in ref . , we reported that the waveforms for the hips , obtained from the two versions of the sensor , do not match well . )* * tibia*. in all cases , both sensors seriously overestimate the tibial - bone lengths for small viewing angles and underestimate them for .the range of variation of the tibial - bone length was found sizeable , namely between and cm . on the other hand , a nearly monotonic behaviour is observed in figs .[ fig : lengthtibiaprofv1 ] and [ fig : lengthtibiaprofv2 ] , and the results from the walking- and the running - motion data generally appear to be consistent between the two sensors .it must be mentioned that the dependence of the bone lengths on is expected to be monotonic .visual inspection of figs .[ fig : lengthhumerusprofv1]-[fig : lengthtibiaprofv2 ] reveals that this is not always the case .one reason for the observed departure from a monotonic behaviour ( yet not the only one ) is that the inclination is estimated from the kinect - captured data ; systematic effects also affect this estimation .the results , reported in the present section , have also been checked against influences from ` cross - talk ' , which might be present in the output of the two kinect sensors .both sensors use reflected infrared light in order to yield information on the depth in the captured images ; one might thus argue that , in case of a simultaneous data acquisition , they distort one another s recording . to clarify this issue , data ( using the same subject and velocity settings )were acquired with the two measurement systems ` serially ' , first with the original sensor , subsequently with the upgraded one ; in both cases , the second sensor was switched off ( but was not removed from the mount ) .the differences to the results reported herein were inessential .our conclusion is that the aforementioned discrepancies can not be due to an interaction between the two sensors .the present paper addressed one use of the output of the two microsoft kinect ( hereafter , simply ` kinect ' ) sensors , namely the evaluation of the lengths of eight bones pertaining to the extremities of one subject walking and running on a treadmill : of the humerus ( upper arm ) , ulna ( lower arm , forearm ) , femur ( upper leg ) , and tibia ( lower leg , shank ) . for comparison ,static measurements of these bone lengths were obtained with a marker - based system .the evaluated lengths of the left and right parts of the subject s extremities have been separately analysed .the constancy of the lengths of these eight bones in terms of the variation of the two angles involved in the viewing , of eq .( [ eq : eq01 ] ) and of eq .( [ eq : eq04 ] ) , was examined .we have also investigated the dependence of the bone lengths on the inclination angle of eq .( [ eq : eq07 ] ) with respect to kinect s viewing direction .we pursued the analysis of systematic effects in the output , emphasising on the similarities and the differences between the two sensors ; in this respect , the present study is another comparative study of the two kinect sensors , albeit from a perspective different to that of refs . 
.the walking motion is characterised by extended elbow joints and small values in the case of the humerus and of the femur ; in the case of the ulna , the values reached about .the running motion is characterised by flexed elbow joints and ( compared to the walking motion ) extends more in .the motion of the ulnar bones is different in walking and running ; in the latter case , the ulnar motion extends in , rather than in .a major overestimation of the tibial - bone lengths occurs in very backward placement , where these bones can not be viewed sufficiently well by the kinect sensors .the analysis of the arm - bone lengths in terms of the inclination angle demonstrated that the results obtained with both sensors do not agree well with those of the static measurements ; agreement occurs only in the walking - motion data for , where the bones are viewed at almost right angle by the sensor .we advanced an argument providing an explanation of this effect : it is mainly due to the systematic misplacement of the elbow nodes in the running motion , where this joint is kept in flexed position throughout the gait cycle .regarding the leg bones , there is no doubt that the original kinect sensor seriously overestimates the femoral - bone length .both sensors overestimate the tibial - bone lengths for small viewing angles and underestimate them for large ; discrepancies in the evaluated length of these bones lie in the vicinity of - cm , see figs .[ fig : lengthtibiaprofv1 ] and [ fig : lengthtibiaprofv2 ] .the present work corroborates earlier results , obtained with the original sensor ( the upgraded sensor was not available at the time that study was conducted ) , that kinect can not be easily employed in medical / health - relating applications requiring high accuracy . in most cases ,the results obtained with the two sensors disagree with the static measurements and show a large range of variation within the gait cycle .the determination and application of corrections , needed in order to suppress these artefacts , comprises an interesting research subject .finally , we would like to comment on one misconception in the field of medical physics , namely that the results of studies using one subject , or only a few subjects , are not reliable . in our opinion, there are studies which call for statistics and studies in which statistics is superfluous .when comparing two measurement systems and the results ( for a few subjects ) come out sufficiently close , it does make sense to ` pursue statistics ' and obtain reliable estimates of averages , standard deviations , and ranges .on the contrary , there are occasions ( in particular , in the validation of measurement systems ) where serious discrepancies are found in the output already obtained from the first subject .unless one explains why the comparison of the output of the two measurement systems failed for that one subject , one subject is sufficient in invalidating the application ! even in case that the tested measurement system failed for the specific subject ( and that it does not fail for the general subject ) , its validation must be performed for every future subject separately , to guarantee the validity of the output _ on a case - by - case basis _ ; of course , this is not the essence of validations of measurement systems .the original idea of investigating the subject of the present study belongs to s. roth and m. regniet , who ( along with r. 
jain ) had conducted a similar analysis of data obtained with the original kinect sensor .malinowski , e. matsinos , s. roth , on using the microsoft kinect sensors in the analysis of human motion , arxiv:1412.2032 [ physics.med-ph ] .http://www.xbox.com/en-gb/kinect/ , http://www.microsoft.com/en-us/kinectforwindows/ m.j .malinowski , e. matsinos , comparative study of the two versions of the microsoft kinect sensor in regard to the analysis of human motion , arxiv:1504.00221 [ physics.med-ph ] . j. shotton , real - time human pose recognition in parts from single depth images , ieee conference on computer vision and pattern recognition ( cvpr ) , june 20 - 25 , 2011 .h. sievnen , peripheral quantitative computed tomography in human long bones : evaluation of in vitro and in vivo precision , j. bone miner .res . 13 ( 1998 ) 871 - 882 .p. matousek , noninvasive raman spectroscopy of human tissue in vivo , appl .spectrosc .60 ( 2006 ) 758 - 763 .j. litniewski , ultrasonic scanner for in vivo measurement of cancellous bone properties from backscattered data , ieee transactions on ultrasonics , ferroelectrics , and frequency control 59 ( 2012 ) 1470 - 1477 .http://www.aicon3d.de , http://aicon3d.com/products/moveinspect-technology/moveinspect-hf/at-a-glance.html r.b .davis , s. unpuu , d. tyburski , j.r .gage , a gait analysis data collection and reduction technique , hum .movement sci . 10 ( 1991 ) 575 - 587 .a. pfister , a.m. west , s. bronner , j.a .noah , comparative abilities of microsoft kinect and vicon 3d motion capture for gait analysis , j. med .technol . 38 ( 2014 ) 274 - 280 .the values of the subject s bone lengths ( in mm ) , obtained from a marker - based system ( see section [ sec : results ] ) . to account for incorrect placement of the markers , an uncertainty of mmis assumed in all cases , save for the femoral - bone lengths , where an overall uncertainty of mm is applicable ( linear combination of the placement uncertainty of mm and of a -mm uncertainty representing the systematic effects of ref . , as discussed in section 2.2 of ref .these values have been verified with a non - stretchable tape measure . km / h & km / h & km / h + + left humerus & & & & & + right humerus & & & & & + left ulna & & & & & + right ulna & & & & & + left femur & & & & & + right femur & & & & & + left tibia & & & & & + right tibia & & & & & + + left humerus & & & & & + right humerus & & & & & + left ulna & & & & & + right ulna & & & & & + left femur & & & & & + right femur & & & & & + left tibia & & & & & + right tibia & & & & & + the coordinate system of the kinect sensor .the endpoints of the specific bone are identified as s and e. the angles and define the orientation of the bone in space , whereas the angle pertains to the kinect view of the bone ; when kinect views the bone at right angle , .the ( focal point of the ) camera of the sensor appears in the figure as point k.,width=585 ] the profile histogram of the humeral - bone length ( separately for the left- and right - side bones ) in bins ; the data has been captured with the original kinect sensor . in walking motion , the swinging of the right arm of the subject used in our data acquisition ,is very small.,width=585 ] the profile histogram of the humeral - bone length ( separately for the left- and right - side bones ) in bins ; the data has been captured with the upgraded kinect sensor . 
in walking motion , the swinging of the right arm of the subject used in our data acquisition is very small .
the profile histogram of the ulnar - bone length ( separately for the left- and right - side bones ) in bins ; the data has been captured with the original kinect sensor . in walking motion , the swinging of the right arm of the subject used in our data acquisition is very small .
the profile histogram of the ulnar - bone length ( separately for the left- and right - side bones ) in bins ; the data has been captured with the upgraded kinect sensor . in walking motion , the swinging of the right arm of the subject used in our data acquisition is very small .
the present study is part of a broader programme , exploring the possibility of involving the microsoft kinect sensor in the analysis of human motion . we examine the output obtained from the two available versions of this sensor in relation to the variability of the estimates of the lengths of eight bones belonging to the subject s extremities : of the humerus ( upper arm ) , ulna ( lower arm , forearm ) , femur ( upper leg ) , and tibia ( lower leg , shank ) . large systematic effects in the output of the two sensors have been observed .
_ pacs : _ 87.85.gj ; 07.07.df
_ keywords : _ biomechanics , motion analysis , treadmill , kinect
_ e - mail : _ evangelos[dot]matsinos[at]zhaw[dot]ch , evangelos[dot]matsinos[at]sunrise[dot]ch
plasma is a typical complex medium exhibiting a wide variety of nonlinear phenomena such as self oscillations , chaos , intermittency , etc .the fluctuations in the edge region of magnetically confined fusion devices have also been associated to nonlinear processes like self - organization and chaotic behaviour .interestingly it has also been shown that it is possible to have a coexistence of low dimensional chaos and stochastic behaviour .the goal of this paper is to present the analysis of floating potential fluctuations at the edge region of the sinp tokamak , wherein these discharges showed an enhanced emission of hard x rays signifying the loss of high energy runaway electrons .we have deployed several techniques , both statistical and spectral to determine the nature of stochasticity , and observe the presence of low dimensional chaos along with stochastic fractal processes similar to bak et al .wtmm technique which has been successfully used in other fields , has been used to estimate the multifractal spectrum , and the presence of chaos , probably for the first time , in a magnetically confined plasma .we have cross checked these results with other known techniques like nonlinear analysis , probability distribution function etc .section [ pap - sec - sinptokamak ] states briefly about the s.i.n.p .tokamak and in section [ pap - sec - exptresults ] we have presented the experimental results and and discussion and in section [ section : conclusion ] conclusion .the experiments were performed in the sinp tokamak ( ) which is a small iron core machine having a circular cross - section .in addition to the vertical magnetic field coils it also has an aluminium conducting shell ( and thickness ) , with four cuts in the toroidal direction and two in the poloidal direction respectively .detail of the sinp tokamak will be found in ref .the penetration time of the conducting shell with cuts ( ) , keeping constant the toroidal magnetic field at , toroidal electric field ( ) at , filling pressure at and at , and was varied from about to .the edge plasma fluctuations are measured using a set of langmuir and magnetic probes and the data were been acquired using nicolet data acquisition system with a sampling rate of 1 mhz . in the present work , we report on the analysis of the floating potential signals from an electrostatic langmuir probe , mounted from the bottom port of the toroidal chamber , at .an interesting behaviour of the plasma discharges was observed in the discharge duration as the equilibrium vertical magnetic field , was lowered .fig.[pap - fig - sch8sch14taudischvsbv ] shows that the discharge duration is almost constant upto and then increases for .this extension is also clear from the discharge current at [ [ figs [ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](e ) ] as compare to current duration for [ figs [ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](a ) ] .the extension in plasma current duration was observed after an initial fall to about half its peak value .the instant at which the current extension is observed to begin is denoted as point b ( pt .b ) [ fig [ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1 ] ] . the horizontal shift in the plasma position ( ) has been shown in fig . [ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](c ) and ( g ) where , implies an outward shift . ) vs. 
plot.,width=3 ] ) evolution ; ( b ) and ( f ) 3 by 3 nai(tl ) limiter bremsstrahlung bursts ; ( c ) and ( g)is the horizontal plasma positioning ; ( d ) and ( h ) electrostatic probe floating potential ( ) signals ; for and respectively.,width=3 ] fig [ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](b ) shows that a few hard x - ray burst were observed at when no extension in the discharge current was observed . on the other hand , the extension of the discharge after pt .b is observed to be accompanied by enhanced hard x - ray ( hx ) bursts [ fig [ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](f ) ] which is indicative of loss of highly energetic particles from the edge .a characteristic feature of the period of extension in these range of discharges is the reduction in the electrostatic langmuir probe floating potential fluctuations ( ) [ fig .[ pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](h ) ] .this correlation between the reduction of fluctuations levels with the enhancement of bursts of runaway electrons , was a motivation to study these electrostatic floating potential fluctuations from a time - resolved statistical analysis point of view .it is generally accepted that if the power spectrum [ of a signal , obtained from fast fourier transform ( fft ) , decays as where , and are the the spectral index and frequency respectively , then the signal shows a stochastic fractal or self - similar behaviour .a typical plot of the power spectrum in log - log scale is shown in fig.[pap - fig - befaftlgfft512pts - grph ] , for the discharge at and it is clear that for the fluctuations before [ fig .[ pap - fig - befaftlgfft512pts - grph](a ) ] and after pt .b [ fig .[ pap - fig - befaftlgfft512pts - grph](a ) ] follow the power law behavior , which indicates the presence of stochastic fractal processes . as the power spectrum can not extract the information regarding the time - frequency simultaneously , the presence of sharp transitions and small scale features contained in the signal , we introduce more advanced techniques like wavelet analysis etc . *wavelet analysis : * wavelet analysis , provides a way of analyzing the local behaviour of functions and correct characterization of time series in the presence of non - stationarity like global or local trends or biases .one of the main aspects of the wavelet analysis is of great advantage is the ability to reveal the hierarchy of ( singular ) features , including the scaling behaviour .the wavelet transform of a function is then given by : where , is the signal and is an oscillating functions that decays rapidly with time and are termed as wavelets .s and are the scale and time respectively .[ pap - fig - lowbv - cwtanalysis - befaftptb - sch14 ] represents the time - frequency contour plot of the power spectrum ( (s,) ) obtained from the wavelet analysis [ eqn .[ eqn : wt ] ] , for the discharge at . for simplicity the scales in the y - axishave been converted to psuedo - frequency ( ) which has been estimated from the relation , where is center frequency of the analyzing wavelet and is the sampling period of the signal .the presence of chaos or periodicity can be studied using ridge plots obtained from the wavelet transform spectrum , which has been discussed in detail by chandre et al .[ pap - fig - lowbv - cwtanalysis - befaftptb - sch14](a ) shows the typical ridge plot at before pt b. it shows that the most of the power is concentrated almost at a constant time scale a. 
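A sketch of the two spectral estimates used so far: the spectral index from a log-log fit to the Fourier power spectrum, and the continuous wavelet scalogram with scales converted to pseudo-frequencies via F = Fc/(s dt). The 1 MHz sampling rate is the one quoted above; the mother wavelet, scale range and power-law fitting band are my own placeholders, and the Brownian-like test trace only serves as a check that the fit recovers a known index.

```python
import numpy as np
import pywt
from scipy.signal import welch

FS = 1.0e6                      # 1 MHz sampling of the floating-potential signal

def spectral_index(x, fmin=5.0e3, fmax=2.0e5):
    """Slope of log S(f) vs log f, i.e. alpha in S(f) ~ f**(-alpha)."""
    f, p = welch(x, fs=FS, nperseg=512)
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(p[band]), 1)
    return -slope

def scalogram(x, wavelet='cmor1.5-1.0', num_scales=64):
    """|W(s,t)|**2 with scales mapped to pseudo-frequencies (in Hz)."""
    scales = np.geomspace(2, 256, num_scales)
    coeffs, freqs = pywt.cwt(x, scales, wavelet, sampling_period=1.0 / FS)
    return np.abs(coeffs) ** 2, freqs

# usage on a synthetic Brownian-like test trace (alpha close to 2)
x = np.cumsum(np.random.default_rng(0).standard_normal(16384))
print(spectral_index(x))
power, freqs = scalogram(x[:4096])
print(power.shape, freqs[0], freqs[-1])
```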
the connected horizontal ridges suggest that the electrostatic floating potential signals are quasi periodic with resonance transitions occurring at regular intervals . for the signal after pt .b , the power is concentrated in two or three modes simultaneously at any given instant of time [ [ pap - fig - lowbv - cwtanalysis - befaftptb - sch14](b ) ] indicating the presence of chaos . the singularity spectrum , vs. , where is the distribution of the singularity strength , has been estimated using wavelet transform modulus maxima ( wtmm ) method using the following canonical equations , where , the singularity spectrum for before and after pt b , for discharge at is shown in fig.[pap - fig - befaft - dhsel - grph ] .the spectrum seems to be slightly asymmetric before pt .b , whereas it is almost symmetric after pt .b. the symmetry gives one an indication of multiplicative process and hence the fluctuations in the extended phase is associated to some avalanche phenomena . )pts before [ _ solid square _ ] and after pt .b [ _ open circle _ ] for the various discharges of .,width=3 ] before [ _ black , solid square _ ] and after pt .b [ _ black , open circle _ ] and ( b ) before [ _ black , solid square _ ] and after pt .b [ _ orange , open circle _ ] for the various . , width=3 ] the characteristic of the signals can also be described by degree of the multifractality ( ) which is defined by the difference between the maximum singularity strength ( ) and the minimum singularity strength ( ) .[ pap - fig - befaftbeta - grph ] shows the range of for is for datasets after pt .b and it is for datasets before pt .b. the decreasing trend in for the extended phases indicates that the system has a tendency to go towards a stochastic state .we estimated the fractal dimension ( ) and correlation dimension ( ) from the singularity spectrum .fig.[pap - fig - df - dcorr - befaft - grph](a ) and fig.[pap - fig - df - dcorr - befaft - grph](b ) show that in the extended discharge and are in the range of and respectively , indicating the presence of complex nature in the signal .a crosscheck of the above results for the presence of chaos or complexity , can be made by estimating the correlation dimension ( ) and lyapunov exponent ( ) . and been estimated using the grassberger - procaccia techniques and the wolf algorithm respectively . and before and after pt .b have been presented in fig.[pap - fig - dcorr - lyap - befaft - grph](a ) and [ pap - fig - dcorr - lyap - befaft - grph](b ) respectively . from fig . [pap - fig - dcorr - lyap - befaft - grph](a ) , it is clear that correlation dimension obtained using multifractal analysis and grassberger - procaccia techniques are of same order . fig .[ pap - fig - dcorr - lyap - befaft - grph](b ) shows is more positive for after pt .b indicating chaos .though we have estimated these exponent using insufficient less of data points , the results agrees well with wavelet analysis . 
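For the correlation dimension, a compact sketch in the spirit of the Grassberger-Procaccia estimate mentioned above: time-delay embedding, correlation sum C(r), and the slope of log C versus log r. The embedding parameters, Theiler window and fitting range are analysis choices of mine, not values taken from the paper; white noise is used only as a sanity check (its estimate grows with the embedding dimension instead of saturating at a low value).

```python
import numpy as np

def correlation_dimension(x, m=5, tau=10, n_r=20, theiler=20):
    """Slope of log C(r) vs log r for an m-dimensional delay embedding."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    i, j = np.triu_indices(n, k=theiler)        # drop temporally close pairs
    d = np.linalg.norm(emb[i] - emb[j], axis=1)
    r = np.geomspace(np.percentile(d, 1), np.percentile(d, 50), n_r)
    c = np.array([np.mean(d < ri) for ri in r])  # correlation sum
    return np.polyfit(np.log(r), np.log(c), 1)[0]

print(correlation_dimension(np.random.default_rng(2).standard_normal(1500)))
```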
before [ _black , solid square _ ] and after pt .b [ _ orange , open circle _ ] and ( b ) before [ _ black , solid square _ ] and after pt .b [ _ orange , open circle _ ] for the various discharges of .,width=3 ] in order to validate our nonlinear analysis we did a surrogate analysis of the extended discharge regime of fig.[pap - maxhighlowbv - iphxlpdbdt - sch14 - 1](i ) .the surrogate data has been generated by phase shuffled surrogate method , in which the phases are randomized by shuffling the fourier phases , and hence the power spectrum ( linear structure ) is preserved , but the nonlinear structures are destroyed . has been estimated for both the original [ fig [ pap - fig - shuffle - dcorr - aft - grph](b)i ] and the corresponding surrogate data [ fig [ pap - fig - shuffle - dcorr - aft - grph](b)ii ] , shown in fig [ pap - fig - shuffle - dcorr - aft - grph](a ) by _ solid circle _ and _ open circle _ respectively .the for the original data saturates at higher m , whereas in the case of the surrogate data one finds keeps on increasing with m. hence the estimated and are from nonlinear effects in the system .probabilistic descriptions such as robability istribution unction [ pdf ] are at the heart of the characterization of turbulence .[ pap - fig - sch14_blp1_pdf_twobv](a ) and [ pap - fig - sch14_blp1_pdf_twobv](b)show the pdf at before and after pt .b respectively .the corresponding gaussian fitting is shown by dashed line .both plots show that the pdfs are non - gaussian in nature .skewness ( s ) and kurtosis ( k ) which are measure of nongaussianity are shown in fig .[ pap - fig - bef - skwkrt - grph ] , for the extended discharge which also indicate deviation from gaussianity . from above analysisit is clear that during the extended discharge phase neither the system is purely stochastic in nature nor chaotic , rather a mixture of both is present . andb [ ( b ) ) ] at . corresponding gaussian fittingis shown by dashed line.,width=3 ] and ( k-3 ) [ black , solid square ] ( a ) before and ( b ) after pt .b for various values of .the dotted vertical line demarcates either side of ,width=3 ] the observations of the enhanced energy levels in the hx bursts in the extended phase could be a result of loss of the high energy particles which are probably generated in this phase .the observations of hx bursts can be correlated to some growing modes in the db / dt signals , as during the time instants of , and ( fig.[pap - fig - sch14-iphxdbdt ] ) .subsequently , one could infer that some instability could be triggering the deconfinement of the particles , which are thereafter lost , through some stochastic process at the edge . in ka , ( b ) hx bursts in mev , ( c ) db / dt signals.,width=3 ] to cross - check whether the discharge could sustain any such beam - plasma interactions , we considered the conditions that need to be satisfied : , , and , where , , , , and are the electron cyclotron frequency , electron plasma frequency , electron collision frequency , beam velocity and the critical velocity respectively and , being the primary runaway flux generation factor and , being the critical electric field for runaway generation and . is the loop voltage .using the experimental results , for beam energy , , , and , we have ghz and ghz and khz and mhz . hence first and second condition are satisfied in the extended discharge phase . 
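The phase-shuffled surrogate test described above preserves the power spectrum while destroying nonlinear structure, and the non-Gaussianity measures are simply the skewness and kurtosis of the PDF. The sketch below builds such a surrogate with numpy and compares the two moments for a toy signal; it is a generic illustration, not the exact implementation used for the measurements.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def phase_shuffled_surrogate(x, rng=None):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    x = np.asarray(x, float)
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.size)
    phases[0] = 0.0                             # keep the zero-frequency term real
    if x.size % 2 == 0:
        phases[-1] = 0.0                        # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.cumsum(rng.normal(size=4096))        # toy signal standing in for the probe data
    s = phase_shuffled_surrogate(x, rng)
    print("skewness  original/surrogate:", skew(x), skew(s))
    print("kurtosis  original/surrogate:", kurtosis(x), kurtosis(s))
```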
for the same experimental conditions we obtain approximately $3\times10^{7}$ m/s (the corresponding beam energy is 30 kev). thus, in the current-extension phase the more energetic electrons will satisfy the third condition. hence the loss of runaway electrons observed in the extended phase could result from the participation of the higher-energy electrons in beam-plasma instabilities within the plasma column, with the edge stochastic behaviour possibly leading to the ejection of these runaway electrons. in the extended discharges of the sinp tokamak, where enhanced hx emission was observed, we have shown the presence of a combination of stochasticity and low-dimensional chaos using wavelet analysis together with other nonlinear techniques such as the estimation of the correlation dimension, the lyapunov exponent and the pdf. one still needs to examine other edge fluctuation behaviour, such as the magnetic, density and temperature fluctuations, in order to understand the role of the stochastic behaviour in the discharge extension, using the wavelet transform and especially the wtmm method. we would like to thank prof. b. sinha, director, sinp, for his support in carrying out this work. we also thank the members of the plasma physics division, sinp, for their help during the experiments. rn acknowledges useful discussions with prof. budaev, prof. m. rajkovic and prof. finken, and thanks the organizers for providing financial support to attend the fusion plasmas-2007 workshop, julich, germany. mn acknowledges discussions with prof. j. c. sprott on nonlinear analysis techniques. d. sigeti and w. horsthemke, _ physical review a _, * 35 *, 2276 (1987). i. daubechies, _ ten lectures on wavelets _, s.i.a.m. (1992). m. holschneider, _ wavelets: an analysis tool _, oxford science publications (1995). a. arneodo, e. bacry and j. f. muzy, _ physica a _, * 213 *, 232 (1995).
stochasticity is one of the most extensively studied topics in laboratory and space plasmas, since it has been successful in explaining various anomalous processes such as transport, particle heating and particle loss. the growing need for a better understanding of this nonlinear process has led to the development of new and more advanced data-analysis techniques. in this paper we present an analysis of the floating potential fluctuations which shows the coexistence of a stochastic multifractal process and low-dimensional chaos. this is demonstrated primarily by wavelet analysis and cross-checked using other nonlinear techniques.
rna hairpins are elementary structures found in many macromolecular assemblies .it is generally accepted that a deeper understanding of their dynamics is a critical step towards the elucidation of many biological processes , like the regulation of gene expression ; the catalytic activity in many reactions ; the ligand - binding specificity ; or the rna folding problem .dna and rna hairpins are also appealing model systems for their simplicity as they are amenable to exhaustive studies using a more physically - oriented approach , where theoretical models can be rigorously tested using simulations and experiments .many different and complementary biophysical methods have been used to study these structures .for example , using time - resolved nuclear magnetic resonance ( nmr ) spectroscopy and thermal denaturation experiments , kinetics and thermodynamics of bistable rna molecules were studied .recently , a photolabile caged rna was designed to stabilize one ground - state conformation and study the folding kinetics by nmr and cd spectroscopy under different conditions , including mg .laser temperature - jump experiments have also been used to characterize the folding kinetics of small rna hairpins at the ns and timescales . using coarse - grained go - like models , it was predicted that hairpins unfold in an all - or - none process in mechanical experiments , in agreement with experimental results . within the cell ,many dynamical processes involving transient melting events of dna and rna double strands are driven by the application of localized forces by molecular motors .therefore , single - molecule experiments are ideal to understand the thermodynamics and kinetics of macromolecules inside cells . as pointed out by hyeon _ , force - denaturation using single - molecule experiments are intrinsically different from thermally - induced denaturation : in bulk experiments where the unfolded state is accessed by raising the temperature or lowering the concentration of ions , the unfolded state is a high - entropy state while in mechanical pulling experiments the unfolding process is a transition from a low - entropy state to another low - entropy state .regions of the free energy landscape normally inaccessible by conventional methods are probed using mechanical experiments .consequently , pathways and rates of thermally - induced and mechanical unfolding processes are expected to be different . 
in a previous work we pulled an rna hairpin using optical tweezers to study the base - pairing thermodynamics , kinetics and mechanical properties at a fixed monovalent condition .a kinetic analysis was introduced to determine the location of the force - dependent kinetic barrier , the attempt rate , and the free energy of formation of the molecule .here we performed a systematic study by mechanically pulling the same rna hairpin at different monovalent cation concentrations and also at mixed ionic conditions containing different concentrations of mg cations .this is important because rnas are highly charged polyanions whose stability strongly depends on solvent ionic conditions .despite its biological significance , we have limited information about rna helix stability in mixed monovalent / multivalent ionic conditions .in fact , the thermodynamic parameters for secondary structural elements of rnas have only been derived at the fixed standard salt condition of 1 m [ na .here we derived numbers such as the persistence length describing the elastic response of ssrna and also the free energy of formation of an rna hairpin at different monovalent and mixed monovalent / mg conditions .our results are compatible with predictions obtained using the tightly bound ion ( tbi ) model for mixed ion solutions , that treats monovalent ions as ionic background and multivalent ions as responsible from ion - ion correlation effects , and which takes into account only non - sequence - specific electrostatic effects of ions on rna .our findings demonstrate the validity of the approximate rule by which the non - specific binding affinity of divalent cations is equal to that of monovalent cations taken around 100 fold concentration for small molecular constructs .the rna molecule was prepared as previously described . oligonucleotides cd4f( 5-aattcacacg cgagccataa tctcatctgg aaacagatgag atta tggctcgc acaca-3 ) and cd4r ( 5-agcttgtgt gcgagccata atctcatc tgtttccagat gagattatggc tcgcgtgtg-3 ) were annealed and cloned into the pbr322 dna plasmid ( genbank j01749 ) digested with ecori ( position 4360 ) and hindiii ( position 30 ) .the annealed oligonucleotides contain the sequence that codes for a modified version of cd4 - 42f class i hairpin that targets the mrna of the cd4 receptor of the human immunodeficiency virus . 
oligonucleotides t7_forward ( 5-taatacgactca ctatagg gactggtga gtactca accaagtc-3 ) and t7_reverse ( 5-ta ggaagc agcccagt agtagg-3 ) were used as primers to amplify by pcr a product of 1201 bp from the recombinant clone containing the cd4 insert .this amplicon contains the t7 rna polymerase promoter at one end , and was used as a template to synthesize an rna containing the rna hairpin ( 20bp stem sequence and tetraloop gaaa ) and the rna components of handles a ( 527 bp ) and b ( 599 bp ) .the dna components of handles a and b were obtained by pcr from the pbr322 vector ( positions 3836 - 1 for handle a and positions 31 - 629 for handle b ) .handle a was 3 biotinylated while handle b was tagged with a 5 digoxigenin .hybridization reactions were performed in a formamide - based buffer with a step - cool temperature program : denaturation at 85 for 10 min , followed by 1.5 h incubation at 62 , 1.5 h incubation at 52 , and finished with a cooling to 10 within 10 min .all experiments were performed using a dual - beam force measuring optical trap at 25 in buffers containing 100 mm tris.hcl ( ph 8.1 ) , 1 mm edta , and nacl concentrations of 0 , 100 , 500 , and 1000 mm , or in buffers containing 100 mm tris.hcl ( ph 8.1 ) and mgcl concentrations of 0.01 , 0.1 , 0.5 , 1 , 4 , and 10 mm . the monovalent cation concentration [ mon includes the contributions from [ na ions and dissociated [ tris ions . at 25 and ph 8.1 , about half of the tris molecules are protonated , therefore 100 mm tris buffer adds 50 mm to the total monovalent ion concentration .anti - digoxigenin polyclonal antibody - coated polystyrene microspheres ( ad beads ) of 3.0 - 3.4 m ( spherotech , libertyville , il ) were incubated at room temperature with the molecular construct for 20 min .the second attachment was achieved inside the microfluidics chamber using a single optically trapped ad bead previously incubated with the rna hairpin and a streptavidin - coated polystyrene microsphere ( sa bead ) of 2.0 - 2.9 m ( g. kisker gbr , products for biotechnologie ) positioned at the tip of a micropipette by suction ( fig .1a and 1b ) .tethered molecules were repeatedly pulled at two constant loading rates of 1.8 pn / s or 12.5 pn / s by moving up and down the optical trap along the vertical axis between fixed force limits and the resulting force - distance curves ( fdcs ) were recorded ( fig .2a ) . a pulling cycle consists of an unfolding process and a folding process . in the unfolding process, the tethered molecule is stretched from the minimum value of force , typically in the range of 5 - 10 pn , where it is always at its native folded state , up to the maximum value of force , typically in the range of 25 - 30 pn , where the molecule is always unfolded . in the folding processthe molecule is released from the higher force limit ( unfolded state ) up to the lower force limit ( native folded state ) .a minimum of two molecules ( different bead pairs ) were tested at each ionic condition , and a minimum of 100 cycles were recorded in each case ( detailed statistics are given in the supporting material , section s1 ) under applied force it is feasible to reduce the configurational space of an rna hairpin containing base pairs ( bps ) to a minimum set of partially unzipped rna structures . 
each configuration in this set contains adjacent opened bps in the beginning of the fork followed by closed bps , with .the folded state ( f ) is defined as the configuration in which ( all bps are formed ) , and the unfolded state ( u ) is the hairpin configuration in which ( all bps are dissociated ) .based on a simple calculation ( see supporting material , section s2 ) we conclude that fraying plays a rather minor role ( if any ) on the folding / unfolding kinetics of the sequence under study ( fig .1a ) and we do not include it in our analysis .the stability of each configuration with respect to the f conformation is given by , the free energy difference at a given force between the duplex containing closed bps and the completely closed configuration ( f state ) , in eq .[ eq : free_energy ] is the free energy difference at zero force between a hairpin in the partially unzipped configuration and a hairpin in the completely closed configuration ; is equal to the reversible work needed to stretch the ssrna strands of the hairpin in configuration ( opened bases ) from a random coiled state to a force - dependent end - to - end distance ; and is the contribution related to hairpin stem orientation . an estimation of at 1 m [ can be obtained by using the nearest - neighbor ( nn ) energy parameters widely employed to predict the stability of rna secondary structures .it is given by the sum of the stacking contributions of the duplex region , containing bps .the elastic term is given by the molecular extension of ssrna , , can be estimated using polymer theory ( see section 2.4 ) .finally , the last term in eq .[ eq : free_energy ] , , is equal to the free energy of orientation of a monomer of length along the force axis : \ ] ] where is the applied force , is the boltzmann constant , is the bath temperature , and is the diameter of a double stranded chain , taken equal to 2 nm . to model the elastic response of ssrna we employed both the interpolation formula for the inextensible worm like chain ( wlc ) model and the freely jointed chain ( fjc ) model , which give the equilibrium end - to - end distance of a polymer of contour length stretched at a given force .these models have been mainly tested for long polymers .however , several studies indicate that they are generally applicable when the contour length is larger than the persistence length .the inextensible wlc is given by : \ ] ] where is the boltzmann constant , is the bath temperature and is the persistence length .the fjc model is given by \ ] ] where is the kuhn length .there are other models , such as the thick chain , that are more general than the wlc or the fjc and that have been used to fit the elastic response of biopolymers . despite of their greater complexity, we do not expect a qualitative improvement of our results by using them .we applied kramers rate theory to study the kinetics of the transition between states f and u. the framework for understanding the effect of an external force on rupture rates was first introduced in and extended to the case where the loading force increases with time . the assumption that the transition state does not move under an applied force be relieved by considering that the effective barrier that must be crossed by a brownian particle is force - dependent , .the unfolding and folding rates can be obtained as the first passage rates over the effective barrier , [ kinetic_rates ] in eq . 
[ kinetic_rates ] , f was selected as the reference state and has been defined in eq .[ eq : free_energy ] . is the attempt rate for activated kinetics .the effective barrier can be obtained analytically from kramers rate theory ( kt ) ( detailed derivation provided in the supporting material , section s3 ) as \label{eq : theor_effbarrier}\ ] ] with .importantly , the location of the barrier along the reaction coordinate can be obtained from the first derivatives of with respect to force , [ eq : location_of_bef ] }{df}\label{eq : location_of_bef_subeq2}\ ] ] where and are the distances from the effective barrier to the f and u states , respectively .the force - dependent fragility parameter , lies in the range [ -1:1 ] and is a measure of the compliance of a molecule under the effect of tension .compliant structures deform considerably before the transition event and are characterized by positive values of , i.e. . in contrast , brittle structures are defined by negative values of , .a given sequence can display different fragilities at different force regimes , due to changes in the location of the transition state ( ts ) with force . from the measured transition rates ( see section 2.6 ) we can get estimators for the effective barrier for unfolding and folding using the expressions in eq . [ kinetic_rates ] : [ eq : barrier_method ] by comparing the experimental estimators of the kinetic barrier with the effective barrier as predicted by kramers rate theory ( eq .[ eq : theor_effbarrier ] ) we can extract the free energy of formation of the hairpin , the attempt rate and the parameters that characterize the elastic response of the ssrna . while always can be determined by doing this comparison ,there is a trade - off between the contributions of the elastic response of the ssrna and the free energy of formation of the hairpin .although this is not strictly true ( the stretching contribution term is force dependent whereas the free energy of formation term is not ) it holds to a very good degree .therefore , if only the free energy of formation of the hairpin is known _ a priori _ , then we can extract the elastic properties of the ssrna by matching eqs .[ eq : barrier_method]a and [ eq : barrier_method]b with eq .[ eq : theor_effbarrier ] . on the contrary, if we only know the elastic properties of the ssrna , then we can extract the free energy of formation of the hairpin ( see supporting material , section s4 ) .the molecular transitions during unfolding and folding can be identified as force rips in a force - distance curve ( fdc ) . in order to extract the unfolding and folding rates ( eq . [ eq : kinetic_rates_subeq1 ] and [ eq : kinetic_rates_subeq2 ] ) from experimentswe have collected the first rupture forces associated with the unfolding and folding parts of each pulling cycle ( fig .2a and 2b ) . 
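The elastic response entering the free-energy landscape above (section 2.4) is modelled with the inextensible worm-like chain, which gives force as a function of fractional extension, and the freely-jointed chain, which gives extension as a function of force. The sketch below evaluates both expressions; the persistence length, Kuhn length, and the assumed 0.59 nm of contour length per nucleotide are placeholders for illustration only.

```python
import numpy as np

KB = 1.380649e-23 * 1e21  # Boltzmann constant in pN*nm/K
T = 298.0                 # bath temperature in K

def wlc_force(x_frac, persistence_nm):
    """Inextensible WLC interpolation formula: force (pN) versus x/L."""
    kt = KB * T
    return (kt / persistence_nm) * (1.0 / (4.0 * (1.0 - x_frac) ** 2)
                                    - 0.25 + x_frac)

def fjc_extension(force_pn, kuhn_nm, contour_nm):
    """FJC model: end-to-end extension (nm) of a chain at a given force (pN)."""
    kt = KB * T
    u = force_pn * kuhn_nm / kt
    return contour_nm * (1.0 / np.tanh(u) - 1.0 / u)   # Langevin function

if __name__ == "__main__":
    # Hypothetical ssRNA released by unzipping n = 20 base pairs of the stem
    # (2*n nucleotides), with an assumed 0.59 nm contour length per nucleotide.
    n_open = 20
    contour = 2 * n_open * 0.59
    print("WLC force at 80% extension, P = 1 nm:", wlc_force(0.8, 1.0), "pN")
    print("FJC extension at 15 pN, b = 1.5 nm:", fjc_extension(15.0, 1.5, contour), "nm")
```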
by plotting the number of trajectories in which the molecule remained at the initial configuration ( f state during the stretching part and u state in the releasing part of the cycle ) as a function of force , divided by the total number of trajectories , we obtained experimental estimators for survival probabilities of the u and f states .moreover , we obtained an experimental estimator for the probability densities of unfolding and folding first rupture forces by doing normalized histograms of both datasets ( , where is the number of events in the range between and ) .the survival probabilities are related to by the following equations , [ eq : surv_probabilities ] if we assume a two - state transition , the time - evolution of the survival probabilities is described by the following master equations : [ eq : master eqs . ] with this assumption and the experimental estimators for survival probabilities and densities , it is possible to extract the transition rates from rupture force measurements using , with the pulling speed .it is interesting to experimentally measure the effect of salt on the free energy of formation of nucleic acid hairpins .however , uv absorbance experiments can not be carried out for this particular sequence because its melting temperature is too high to obtain reliable results ( see supporting material , section s5 ) .therefore , as mentioned in section 2.3 , the estimation of the free energy of formation of the rna hairpin at 1 m [ mon is obtained using the nn energy parameters proposed by . to introduce the effect of monovalent salt concentration [ mon we assume a sequence - independent correction ) ] that captures the effect of mg ions on the hairpin free energy of formation : ,[\mathrm{mg^{2+}}]}(0)=\delta g_{\rm n}^{1m}(0)-ng_{1}([\text{mon}^{+}])-ng_{2}([\mathrm{mg^{2+}}]).\label{eq : mixed_cond_correction}\ ] ] in what follows , unless stated otherwise , all monovalent and divalent salt concentrations ,[\mathrm{mg^{2+}}] ] , where [ mon is expressed in mm units .as we will see , there are experimental and theoretical evidences that support the logarithmic effect of monovalent ions to the stability of nucleic acid hairpins . using this correction ,the variation of with monovalent salt concentration depends strictly on the value of the constant . in order to derive from our data, we compared the estimators of obtained experimentally ( and in eq . [ eq : barrier_method ] ) with the theoretical prediction ( in eq .[ eq : theor_effbarrier ] ) at different values of . in fig .4a - d , we see the correspondence between theory and experiments at each monovalent ion concentration . for all salt concentrations , we found the best agreement at kcal / mol .this value agrees with the sequence - independent salt correction reported for dna duplex oligomers in melting experiments , kcal / mol , and in unzipping experiments of polymeric dna , kcal / mol .3d summarizes all the results . at a given forcewe see that the height of the kinetic barrier increases with salt concentration , which again indicates that salt increases kinetically the stability of the rna structure . in fig .5 we show the dependence of the measured of the rna hairpin on the monovalent ion concentration . 
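Putting the last two steps together, the sketch below estimates the force-dependent unfolding rate from a list of first-rupture forces via k(f) = r ρ(f)/P(f), and then converts it into the experimental barrier estimator by inverting k(f) = k0 exp(-B_eff(f)/kT). The bin width, loading rate, attempt rate, and the toy forces are assumptions, and the sign conventions should be checked against the equations in the text.

```python
import numpy as np

KT = 4.114  # thermal energy at 25 C, in pN*nm (approximate)

def rates_from_rupture_forces(rupture_forces, loading_rate, n_bins=20):
    """Estimate k(f) = r * rho(f) / P(f) from first-rupture forces.

    rupture_forces : first-rupture forces (pN) from many pulling cycles
    loading_rate   : pulling rate r (pN/s)
    Returns bin centres (pN) and rates (1/s) where the estimator is defined.
    """
    f = np.sort(np.asarray(rupture_forces, float))
    counts, edges = np.histogram(f, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    rho = counts / (counts.sum() * width)              # normalized density rho(f)
    # Survival probability P(f): fraction of cycles still folded at force f.
    survival = 1.0 - np.searchsorted(f, centres, side="right") / f.size
    ok = (survival > 0) & (rho > 0)
    return centres[ok], loading_rate * rho[ok] / survival[ok]

def barrier_estimator(rates, attempt_rate, kt=KT):
    """Effective kinetic barrier (pN*nm) from k(f) = k0 * exp(-B_eff/kT)."""
    return kt * np.log(attempt_rate / np.asarray(rates, float))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    toy_forces = rng.normal(16.0, 1.2, size=300)       # toy unfolding forces (pN)
    f_mid, k_u = rates_from_rupture_forces(toy_forces, loading_rate=1.8)
    b_eff = barrier_estimator(k_u, attempt_rate=1.0e5) # assumed attempt rate (1/s)
    for fm, ku, b in zip(f_mid, k_u, b_eff):
        print(f"f = {fm:5.2f} pN   k_u = {ku:9.4f} 1/s   B_eff = {b/KT:5.1f} kT")
```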
as expected from earlier observations on dna and from the application of counterion condensation theory to interpret polyelectrolyte effects on equilibrium involvinghighly charged , locally rod - like polyelectrolytes , we observe an approximately linear dependence of rna duplex stability on the logarithm of monovalent salt concentration .interestingly , our data can also be well - described by the empirical expressions derived in , where the tbi model is used to predict the hairpin free energies at different ionic conditions ( see supporting material , section s7 ) . by deriving the effective barrier as a function of force we can measure the distance of the ts to the f and u states , and ( eqs . [ eq : location_of_bef_subeq1 ] and [ eq : location_of_bef_subeq2 ] ) , and the fragility of the molecule as a function of the applied force ( eq . [ eq : fragility ] ) .6 shows the two extreme cases with 50 and 1050 mm [ mon ( continuous and dashed lines respectively ) . in panela we observe that the location of the ts changes as a function of force .the same trend is observed for the fragility in panel b , where the experimentally measured points , the predicted force - dependent fragility ( black curves ) , and the expected values of the fragility for all possible locations of the ts along the stem of the hairpin are represented ( horizontal grid , right scale ) . at low forces , the ts is located near the loop , ( fig .6a , dark gray curves ) . at intermediate forces that depend on salt concentration the ts moves to the stem region ( fig .6a , gray curves ) . at large forcesthe ts has disappeared .these results are in agreement with previous findings using the same hairpin sequence .moreover , we see that at higher monovalent salt concentrations the locations of the different ts mediating unfolding and refolding are the same ( 18 , 6 or 0 for low , intermediate and high forces respectively ) but shifted to larger forces .these results agree with the hammond s postulate : at increasing [ mon the f state is increasingly stabilized while the ts is shifted towards the u state ; simultaneously , as force increases the ts approaches the f state .we have also performed pulling experiments in mixed monovalent / mg buffers , containing a fixed concentration of tris ions ( 50 mm ) and varying concentrations of mg(see materials and methods , section 2.2 ) .the rupture force distributions for all mixed monovalent / mg conditions can be found as supporting material ( section s6 ) .we found two regimes in the behavior of the average rupture forces for unfolding and folding processes along the range of [ mg ] experimentally explored .below 0.1 mm [ mg ] , there is no significant difference between control ( no mg added ) and magnesium - containing conditions ( figs . 7a and 7b ) .however , at higher magnesium concentrations , we found a linear dependence of average rupture forces with the logarithm of [ mg ] ( fig .7a and 7b ) .interestingly , owczarzy _ et al ._ have made a similar observation in dna melting experiments done in mixed monovalent / mg conditions .they found that the ratio }/[\text{mon}^{+}] ] at any mixed salt condition using eq .[ eq : mixed_cond_correction ] : )=\frac{1}{n}\left(\delta g_{\rm n}^{{\rm mfold}}(0)-\delta g_{\rm n}^{{\rm tbi}}(0)-nm\log([\text{mon}^{+}/1000])\right ) .\label{eq : g2}\ ] ] from this expression , we can extract the value of ,[\mathrm{mg^{2+}}]}(0) ] using eq .[ eq : barrier_method ] .estimators of obtained experimentally ( eqs . 
[ eq : barrier_method ] ) were compared with the expected ( eq . [ eq : theor_effbarrier ] ) profiles for different values of ( 0 , 0.1 , 0.2 , 0.3 , 0.4 and 0.5 from top to bottom ) .red ( green ) points are the experimental estimators at a pulling rate of 12.5 ( 1.8 ) pn / s .blue ( magenta ) points are the experimental estimators of at a pulling rate of 12.5 ( 1.8 ) pn / s .light blue lines are the profiles for values of not matched , and black lines are the experimental estimators of that match with experiments .application of the method to experiments done at 1050 mm ] ( * b * ) , 150 mm ] ( * d * ) .* figure 5 .free energy of formation of the rna hairpin as a function of [ mon . * main panel : free energy obtained experimentally ( squares ) , using the logarithmic dependence with salt concentration given by )$ ] ( dashed line ) and using the tbi model ( continuous line ) inset : persistence lengths obtained from the application of the thick chain model to published experimental data for poly - u rna stretching in buffers containing 5 , 10 , 50 , 100 , 300 , and 500 mm of [ na ( squares ) .we have included the value of the persistence length that we obtained in this study at 1050 mm [ mon ( empty circle ) .two different fits to data were done from eq .[ eq : persistence_length_debye ] : a fixed value ( red ) and as free parameter ( blue ) .* figure 6 .barrier location and mechanical fragility at 50 mm and 1050 mm [ mon . *( * a * ) force - dependence of the barrier position measured with respect to the f state , .continuous gray line is the wlc prediction of the molecular extension when or bps are unzipped at 50 mm [ mon , and dashed gray line corresponds to the wlc prediction when or bps are unzipped at 1050 mm [ mon .as seen , at an intermediate value of forces coincides with the ts for both ionic conditions .( * b * ) dependence of fragility with force .gray lines indicate the value of the fragility for different locations of the ts along the stem .continuous black lines are the theoretical prediction using kramers rate theory for data at 50 mm [ mon , and dashed black lines for data at 1050 mm [ mon .blue and green points are the experimental evaluation of and for folding and unfolding data collected at 50 mm [ mon .red and purple points are the experimental evaluation for folding and unfolding at 1050 mm [ mon .* figure 7 . kinetic analysis of experiments at varying [ mg ] . 
*( * a * ) experimental distribution of the unfolding rupture forces in buffers containing 0.00 mm ( red ) , 0.01 mm ( green ) , 0.1 mm ( blue ) , 0.5 mm ( magenta ) , 1 mm ( cyan ) , 4 mm ( orange ) , and 10 mm ( black ) of [ mg ] and 50 mm of monovalent cations .these experiments were done at a loading rate of 1.8 pn / s .( * b * ) average rupture forces and standard deviations obtained in experiments done at different [ mg ] and at loading rates of 1.8 pn / s ( red ) and 12.5 pn / s ( blue ) .full symbols refer to unfolding and empty symbols to folding .( * c * ) log - linear plot of the transition rates versus force .experiments were done at 0.00 mm [ mg ] for loading rates of 1.8 pn / s ( dark red ) and 12.5 pn / s ( red ) , at 0.01 mm [ mg ] for loading rates of 1.8 pn / s ( dark green ) and 12.5 pn / s ( green ) , at 0.1 mm [ mg ] for loading rates of 1.8 pn / s ( dark blue ) and 12.5 pn / s ( blue ) , at 0.5 mm [ mg ] for loading rates of 1.8 pn / s ( dark violet ) and 12.5 pn / s ( magenta ) , at 1 mm [ mg ] for loading rates of 1.8 pn / s ( dark cyan ) and 12.5 pn / s ( cyan ) , at 4 mm [ mg ] for loading rates of 1.8 pn / s ( dark orange ) and 12.5 pn / s ( orange ) , and at 10 mm [ mg ] for loading rates of 1.8 pn / s ( gray ) and 12.5 pn / s ( black ) .( * d * ) dependence of the effective barrier on force at different [ mg ] .color code as in ( * c * ) .* figure 8 .determination of the persistence length of ssrna at varying [ mg ] .* estimators of obtained experimentally were compared with the expected profiles for different values of ( 0.5 , 0.6 , 0.7 , 0.8 , 0.9 , 1.0 , 1.1 , 1.2 , 1.3 , 1.4 and 1.5 nm from top to bottom ) using eq .[ eq : theor_effbarrier ] and eqs .[ eq : barrier_method ] .red ( green ) points are the experimental estimators at a pulling rate of 12.5 ( 1.8 ) pn / s .blue ( magenta ) points are the experimental estimators of at a pulling rate of 12.5 ( 1.8 ) pn / s .light blue lines are the profiles for values of not matched , and black lines are the experimental estimators of that match the experiments .application of the method for experiments done at 0.01 mm [ mg ] ( * a * ) , 0.1 mm [ mg ] ( * b * ) , 0.5 mm [ mg ] ( * c * ) , 1 mm [ mg ] ( * d * ) , 4 mm [ mg ] ( * e * ) , and 10 mm [ mg ] ( * f * ) .* figure 9 .dependence of the persistence length on [ mg ] . * main panel : experimental persistence length versus [ mg ] .inset : dependence of the free energy of formation of the rna hairpin on [ mg ] with fixed 50 mm [ mon obtained using the tbi model .black points are the values of the free energy of formation that we used for our analysis .* figure 10 .barrier location and mechanical fragility at 0.01 mm and 10 mm [ mg ] . *( * a * ) force - dependence of the barrier position measured with respect to the f state , .continuous gray line is the wlc prediction of the molecular extension when or bps are unzipped at 0.01 mm [ mg ] , and dashed gray line corresponds to the wlc prediction when or bps are unzipped at 10 mm [ mg ] . 
at an intermediate range of force the ts coincides with for both ionic conditions .( * b * ) dependence of fragility at 0.01 mm and 10 mm [ mg ] .gray lines indicate the value of the fragility for different locations of the ts along the stem .continuous black lines are the theoretical prediction using kramers rate theory for data at 0.01 mm [ mg ] , and dashed black lines for data at 10 mm [ mg ] .blue and green points are the experimental evaluation of and for folding and unfolding data collected at 0.01 mm [ mg ] .red and purple points are the experimental evaluation for folding and unfolding at 10 mm [ mg ] .* figure 11 . comparison between [ mon and [ mg ] results . *( * a * ) free energy of formation of the rna hairpin at different salt conditions .magnesium concentrations have been multiplied by 100 along the horizontal axis .( * b * ) persistence length values for the ssrna hairpin at different salt conditions .magnesium concentrations have been multiplied by 100 along the horizontal axis .* parameters obtained from experiments at different [ mon .* ssrna persistence length , , and free energy of formation ( ) for the rna hairpin at different monovalent ion concentrations . [ cols="^,>,<,>,<,>,<",options="header " , ] in what follows is the concentration of monovalent salt and is the concentration of magnesium ions .both parameters are given in units of m. temperature is given in celsius .the empirical set of equations are : \\ x_1^h(x , y)=\frac{x}{x+(8.1 - 32.4/n ) ( 5.2-\log(x ) ) y } \\g_{1,2}(x , y)=-0.6 x_1^h(x , y ) ( 1-x_1^h(x , y ) ) \log(x ) \log((1/x_1^h(x , y)-1 ) x)/n\end{aligned}\ ] ] where is the free energy at any temperature and at any monovalent and magnesium ion concentration .there are two successful theories to account for the energetic interactions between ions in solution and nucleic acids : the poisson - boltzmann theory and the counterion condensation theory derived by manning .these theories are based on different mean field approaches and neglect any kind of correlations between the ions in the solution .more recently a new theory known as the tightly bound ion ( tbi ) model has been introduced , which accounts for the different modes of correlations between counterions . in fig .[ fig : manning ] we see the prediction provided by the manning theory and the tbi model to the free energy of formation of our rna hairpin as a function of the salt concentration .because correlations between monovalent ions are negligible , we see that both the manning theory and the tbi model give similar results under this condition ( fig .[ fig : manning]a ) .however , correlations between mg are important and the tbi model gives an improved prediction in this case ( fig .[ fig : manning]b ) .
rna duplex stability depends strongly on ionic conditions, and inside cells rnas are exposed to both monovalent and multivalent ions. despite recent advances, general methods to quantitatively account for the effects of monovalent and multivalent ions on rna stability are still lacking, and the thermodynamic parameters for secondary structure prediction have only been derived at 1 m [na+]. here, by mechanically unfolding and folding a 20 bp rna hairpin using optical tweezers, we study its thermodynamics and kinetics at different monovalent and mixed monovalent/mg salt conditions. we measure the unfolding and folding rupture forces and apply kramers theory to extract accurate information about the hairpin free energy landscape under tension over a wide range of ionic conditions. we obtain non-specific corrections for the free energy of formation of the rna hairpin and measure how the distance from the transition state to the folded state changes with force and ionic strength. we experimentally validate the tightly bound ion model and obtain values for the persistence length of ssrna. finally, we test the approximate rule by which, for small molecular constructs, the non-specific binding affinity of divalent cations at a given concentration is equivalent to that of monovalent cations taken at 100-fold that concentration.
turbulent convection is ubiquitous in stars and planets . in intermediate - mass stars like the sun, convection acts with radiation to transport the energy generated by fusion in the core to the surface where it is radiated into space . in short , convection enables the sun to shine. it also redistributes angular momentum , establishing differential rotation ( equator spinning about 30% faster than the polar regions ) and meridional circulations ( with poleward flow near the surface ) . furthermore , turbulent solar convection and the mean flows it establishes act to amplify and organize magnetic fields , giving rise to patterns of magnetic activity such as the 11-year sunspot cycle .other stars similarly exhibit magnetic activity that is highly correlated with the presence of surface convection and differential rotation .stars are hydromagnetic dynamos , generating vibrant , sometimes cyclic , magnetic activity from the kinetic energy of plasma motions .perhaps the biggest challenge in modeling solar and stellar convection is the vast range of spatial and temporal scales involved .solar observations reveal a network of convection cells on the surface of the sun known as granulation .each cell has a characteristic size of about 1000 km and a lifetime of 10 - 15 min .however , in order to account for the differential rotation and cyclic magnetic activity of the sun , larger - scale convective motions must also be present , occupying the bulk of the convection zone that extends from the surface down to 0.7 where is the solar radius .these so - called `` giant cells '' have characteristic length and time scales of order 100,000 and several weeks respectively .below the convective zone lies the convectively stable radiative zone where energy is transported by radiative diffusion .the interface between the convective and radiative zones is a thin internal boundary layer that poses its own modeling challenges .here convection overshoots into the stable interior , exciting internal gravity waves and establishing a layer of strong radial shear in the differential rotation that is known as the _ solar tachocline _ .another formidable modeling challenge is the geometry .high - mass stars ( , where is the solar mass ) are inverted suns , with convectively stable ( radiative ) envelopes surrounding convectively unstable cores .these are also expected to possess vigorous dynamo action but much of it is likely hidden from us , occurring deep below the surface where it can not be observed with present techniques .stellar lifetimes are anticorrelated with mass , so all high - mass stars are significantly younger than the sun .furthermore , stars spin down as they age due to torques exerted by magnetized stellar winds . thus , most high - mass stars spin much faster that the sun , as much as one to two orders of magnitude. 
the fastest rotators are significantly oblate .for example , the star regulus in the constellation of leo has an equatorial diameter that is more than 30% larger than its polar diameter .other types of stars and planets pose their own set of challenges .low - mass main sequence stars ( ) are convective throughout , from their core to their surface , so they require modeling strategies that can gracefully handle the coordinate singularity at the origin ( ) as well as the the small - scale convection established by the steep density stratification and strong radiative cooling in the surface layers .red giants have deep convective envelopes and dense , rapidly - rotating cores .jovian planets have both deep convective dynamos and shallow , electrically neutral atmospheric dynamics that drive strong zonal winds .all of these systems have the common property that they are highly turbulent .in other words , the reynolds number is very large , where and are velocity and length scales and is the kinematic viscosity of the plasma .for example , in the solar convection zone . furthermore , though all of these systems are strongly stratified in density ( compressible ) , most possess convective motions that are much slower than the sound speed .thus , the mach number .exceptions include surface convection in solar - like stars and low - mass stars and deep convection in the relatively cool red giants , where can approach unity .meeting this considerable list of challenges requires a flexible , accurate , robust , and efficient computational algorithm .it is within this context that we here introduce the compressible high - order unstructured spectral difference ( chorus ) code .chorus is the first numerical model of global solar , stellar , and planetary convection that uses an unstructured grid .this valuable feature allows chorus to avoid the spherical coordinate singularities at the origin ( ) and at the poles ( colatitude , ) that plague codes based on structured grids in spherical coordinates .such coordinate singularities compromise computational efficiency as well as accuracy , due to the grid convergence that can place a disproportionate number of points near the singularities and that can severely limit the time step through the courant - freidrichs - lewy ( cfl ) condition .thus , chorus can handle a wide range of global geometries , from the convective envelopes of solar - like stars and red giants to the convective cores of high - mass stars .the flexibility of the unstructured grid promotes maximal computational efficiency for capturing multi - scale nature of solar and stellar convection .as noted above , boundary layers play an essential role in the internal dynamics of stars and planets . 
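As a quick back-of-the-envelope check of the statement that the Reynolds number is very large while the Mach number is small in the bulk of a solar-like convection zone, the snippet below evaluates Re = UL/ν and Ma = U/c_s; the input numbers are rough order-of-magnitude placeholders, not measured solar values.

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu."""
    return velocity * length / kinematic_viscosity

def mach_number(velocity, sound_speed):
    """Ma = U / c_s."""
    return velocity / sound_speed

if __name__ == "__main__":
    # Order-of-magnitude placeholders for giant-cell convection (cgs units):
    u = 1.0e4        # ~100 m/s convective velocity (assumed)
    length = 1.0e10  # ~100,000 km giant-cell scale
    nu = 1.0         # ~1 cm^2/s microscopic kinematic viscosity (assumed)
    cs = 2.0e7       # ~200 km/s sound speed in the deep convection zone (assumed)
    print("Re ~ %.1e" % reynolds_number(u, length, nu))
    print("Ma ~ %.1e" % mach_number(u, cs))
```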
in solar - like stars with convective envelopes ,much of the convective driving occurs in the surface layers , producing granulation that transitions to giant cells through a hierarchical merging of downflow plumes .meanwhile , the tachocline and overshoot region at the base of the convection zone play a crucial role in the solar dynamo .the unstructured grid of chorus will enable us to locally enhance the resolution in these regions in order to capture the essential dynamics .similarly , optimal placing of grid points will allow chorus to efficiently model other phenomena such as core - envelope coupling in red giants ( see above ) .furthermore , the unstructured grid is deformable .so , it can handle the oblateness of rapidly - rotating stars as well as other steady and time - dependent distortions arising from radial pulsation modes or tidal forcing by stellar or planetary companions .the spectral difference method ( sdm ) employed for chorus achieves high accuracy for a wide range of spatial scales , which is necessary to capture the highly turbulent nature of stellar and planetary convection .furthermore , its low intrinsic numerical dissipation enables it to be run in an inviscid mode ( no explicit viscosity ) , thus maximizing the effective for a given spatial resolution .one feature of chorus that is not optimal for modeling stars is the fully compressible nature of the governing equations .the dynamics of stellar and planetary interiors typically operates at low mach number such that acoustic time scales are orders of magnitude smaller than the time scales for convection or other large - scale instabilities .therefore , a fully compressible solver such as chorus is limited by the cfl constraint imposed by acoustic waves .many codes circumvent this problem by adopting anelastic or pseudo - compressible approximations that filter out sound waves .we choose instead to define idealized problems by scaling up the luminosity to achieve higher mach numbers while leaving other important dynamical measures such as the rossby number unchanged .this allows us to take advantage of the hyperbolic nature of the compressible equations , which is well suited for the sdm method and which promotes excellent scalability on massively parallel computing platforms .furthermore , the compressible nature of the equations will enable chorus to address problems that are inaccessible or at best challenging with anelastic and pseudo - compressible codes .these include high - mach number convection in red giants , the coupling of photospheric and deep convection , and the excitation of the radial and non - radial acoustic oscillations ( p - modes ) that form the basis of helio- and asteroseismology .currently chorus employs an explicit time - stepping method , which is not optimal for low mach number flow , especially on non - uniform grids .however , the sdm is well suited for split - timestepping and implicit time - stepping methods which we intend to implement in the future .the purpose of this paper is to introduce the chorus code , to describe its numerical algorithm , and to verify it by comparing it to the well - established anelastic spherical harmonic ( ash ) code .we begin with a discussion of the mathematical formulation and numerical algorithms of chorus in sections 2 and 3 respectively . 
in section 4we generalize the anelastic benchmark simulations of to provide initial conditions for compressible models like chorus .we then address boundary conditions in section 5 and the conservation of angular momentum in section 6 , which can be a challenge for global convection codes . in section 7we verify the chorus code by comparing its output to analogous ash simulations for two illustrative benchmark cases , representative of jovian planets and solar - like stars .we summarize our results and future plans in section 8 .we consider a spherical shell of ideal gas , bounded by an inner spherical surface at and an outer surface at where is the radius .we assume that the bulk of the mass is concentrated between both surfaces , and the gravity satisfies where is the gravitational constant , is the interior mass and is the radial unit vector . consider a reference frame that is uniformly rotating about the axis with angular speed where is the unit vector in direction . in this rotating frame, the effect of coriolis force is added to the momentum conservation equations .the centrifugal forces are negligible as they have much less contribution in comparison with the gravity .however , the chorus code can handle oblate spheroids in rapidly - rotating objects and we will consider centrifugal force effect in the future .the resulting system of hydrodynamic equations is where , , , and are time , pressure , temperature , density and velocity vector respectively . is the total energy per unit volume and is defined as where is the ratio of the specific heats . is the viscous stress tensor for a newtonian fluid .the term in eq.([eqn : energy ] ) represents the viscous heating .the diffusive flux is generally treated in the form of where is the entropy diffusion coefficient , is the specific entropy , is the thermal diffusivity ( thermal conductivity and radiative conductivity ) , and is the specific heat at constant pressure . the entropy diffusion is a popular way of parameterizing the energy flux due to unresolved , subgrid - scale convective motions which tend to mix entropy . at the bottom of the convection zone , is generally prescribed with a constant value which acts as the energy source of convection . the last term in eq.([eqn : energy ] ) is the work done by buoyancy .+ the governing equations for fully compressible model can be written in a conservative form as where is the vector of conserved variables , is the combination of the coriolis force term and the gravitational force term , and , , are the total fluxes including both inviscid and viscous flux vectors in local cartesian coordinates ( transforming to an arbitrary geometry will be discussed in next section ) . we write these as , , and , where + in eq.([eqn : ql])-([eqn : fgh_vis_2 ] ) , , , and are the velocity components in the , , and directions respectively . 
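To make the conservative form above concrete, the sketch below assembles the conserved state Q and the three inviscid flux vectors F, G, H for an ideal gas from primitive variables. The viscous stresses, diffusive flux, and Coriolis/gravity source terms are omitted, and γ = 5/3 is an assumed value.

```python
import numpy as np

GAMMA = 5.0 / 3.0  # ratio of specific heats (monatomic ideal gas assumed here)

def conservative_state(rho, u, v, w, p, gamma=GAMMA):
    """Conserved variables Q = (rho, rho*u, rho*v, rho*w, E)."""
    e_total = p / (gamma - 1.0) + 0.5 * rho * (u * u + v * v + w * w)
    return np.array([rho, rho * u, rho * v, rho * w, e_total])

def inviscid_fluxes(q, gamma=GAMMA):
    """Inviscid flux vectors (F, G, H) in the x, y, z directions for one state."""
    rho, mx, my, mz, e = q
    u, v, w = mx / rho, my / rho, mz / rho
    p = (gamma - 1.0) * (e - 0.5 * rho * (u * u + v * v + w * w))
    f = np.array([mx, mx * u + p, my * u,     mz * u,     (e + p) * u])
    g = np.array([my, mx * v,     my * v + p, mz * v,     (e + p) * v])
    h = np.array([mz, mx * w,     my * w,     mz * w + p, (e + p) * w])
    return f, g, h

if __name__ == "__main__":
    q = conservative_state(rho=1.0, u=0.1, v=0.0, w=0.0, p=1.0)
    for name, flux in zip("FGH", inviscid_fluxes(q)):
        print(name, flux)
```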
the viscous stress tensor components in eq.([eqn : fgh_vis_1 ] ) and ( [ eqn : fgh_vis_2 ] ) can be written in the following form where is the dynamic viscosity and is the kinematic viscosity .the equations of motion ( 4)-(9 ) are solved using a spectral difference method ( sdm ) .the sdm is similar to the multi - domain staggered method originally proposed by kopriva and his colleagues .liu et al first formulated the sdm for wave equations by extending the multi - domain staggered method to triangular elements .the sdm was then employed by wang et al for inviscid compressible euler equations on simplex elements and viscous compressible flow on unstructured grids .the sdm is simpler than the traditional discontinuous galerkin ( dg ) method since dg deals with the weak form of the equations and involves volume and surface integrals .the weak form is developed by integrating the product of a test function with the compressible navier - stokes equations .for the discontinuous galerkin method , the integration is performed over the spatial coordinates of each finite element of the computational domain .the sdm is similar to the quadrature - free nodal discontinuous galerkin method .the sdm of chorus is designed for unstructured meshes of all hexahedral elements .this spectral difference approach employs high - order polynomials within each hexahedral element locally .in particular , we employ the roots of legendre polynomials plus two end points for locating flux points .the stability for linear advection of this type of sdm has been proven by jameson .overall , high - order sdm is simple to formulate and chorus is very suitable for massively parallel processing .the computational domain is divided into a collection of non - overlapping hexahedral elements as illustrated in fig.[fig : mesh ] .these elements share similarities with the control volumes in the finite volume method . to achieve an efficient implementation , all hexahedral elements in the physical domain transformed to a standard cube .this mapping is achieved through a jacobian matrix .\ ] ] the governing equations in conservative form in the physical domain as described by eq.([eqn : govn_conserve ] ) are then transformed into the computational domain .the transformed equations are written as where and using the determinant of .the transformed flux components can be written as a combination of the physical flux components as in the two - dimensional standard element as illustrated in fig.[fig : thirdorder ] , two sets of points are defined , namely the solution and flux points for the sdm .a total of 9 solution points and 24 flux points are employed for the third - order sdm in 2d .a more detailed description of the sdm for quadrilateral elements can be found in .the unknown conserved variables are stored at the solution points , while flux components , , and are stored at the flux points in corresponding directions . . in order to construct a degree ( n-1 ) polynomial in each coordinate direction ,n solution points are required ( thus n is defined as the order the scheme ) .the solution points in 1d are chosen to be chebyshev - gauss - quadrature points defined by , \quad s=1,2,\cdots , n.\ ] ] the flux points are selected to be the legendre - gauss - quadrature points plus the two end points , 0 and 1 .choosing and , we can determine the higher - degree legendre polynomials as the locations of these legendre - gauss quadrature points for the n - th order sdm are the roots of equation plus two end points . 
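The solution- and flux-point sets described above are easy to generate. The sketch below assumes the standard spectral-difference choice of Chebyshev-Gauss solution points on [0, 1] (the explicit formula is garbled in the extracted text, so the usual expression is used here), and takes the interior flux points to be the roots of the Legendre polynomial of degree N-1, mapped to [0, 1], plus the two end points, so that an N-th order scheme has N solution points and N+1 flux points per direction as stated.

```python
import numpy as np

def solution_points(n):
    """Chebyshev-Gauss solution points on [0, 1] for an n-th order scheme
    (standard spectral-difference choice, assumed here)."""
    s = np.arange(1, n + 1)
    return 0.5 * (1.0 - np.cos((2.0 * s - 1.0) / (2.0 * n) * np.pi))

def flux_points(n):
    """Flux points on [0, 1]: roots of the Legendre polynomial of degree n-1,
    mapped from [-1, 1] to [0, 1], plus the two end points 0 and 1."""
    interior, _ = np.polynomial.legendre.leggauss(n - 1)   # Gauss-Legendre nodes
    return np.concatenate(([0.0], 0.5 * (interior + 1.0), [1.0]))

if __name__ == "__main__":
    n = 4  # 4th-order scheme: 4 solution points and 5 flux points per direction
    print("solution points:", solution_points(n))
    print("flux points:    ", flux_points(n))
```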
using the solutions at n solution points , a degree ( n-1 )polynomial can be built using the following lagrange basis defined as similarly , using the ( n+1 ) fluxes at the flux points , a degree n polynomial can be built for the flux using a similar lagrange basis defined as the reconstructed solution for the conserved variables in the standard element is just the tensor products of the three one - dimensional polynomials , i.e. , similarly , the reconstructed flux polynomials take the following forms : the flux polynomials are element - wise continuous , but discontinuous across element interfaces . for computing the inviscid fluxes , an approximate riemann solver employed to compute a common flux at interfaces and to ensure conservation and stability . here the rusanov flux treatment for the direction is formulated as , where is the interface normal direction , is the fluid velocity normal to the interface and is the speed of sound .if the normal direction of the cell interface is mapped to either the or direction , the riemann fluxes can be formulated similarly . for calculating the viscous fluxes ,a simple averaging procedure is used for evaluating fluxes at interfaces .this procedure is similar to the br1 scheme . for future implementation of implicit time stepping methods, we can extend the chorus code to use br2 scheme .the number of degrees of freedom ( dofs ) for chorus simulations in this paper is computed as where is the total number of elements in the spherical shell and is the order of the scheme , which is equal to the number of solution points in each direction within one standard element . the chorus code is written in fortran 90 and efficient parallel performance is achieved by using the message passing interface ( mpi ) for interprocessor communication .the parmetis package is used to partition the unstructured mesh by means of a graph partitioning method .the parallel scalability of the chorus code is shown in fig.[fig : scalability ] using the yellowstone high - performance computing cluster at the national center for atmospheric research ( ncar ) , which is built on ibm s idataplex architecture with intel sandy bridge processors .these numerical experiments demonstrate strong scaling , with denoting the execution time of the sequential chorus code and denoting the execution time of the parallel chorus code with processors .two sets of simulations using the 4th order sdm are shown .the total numbers of elements in physical domain for test1 and test2 are 294,912 and 1,105,920 respectively , which correspond to 18,874,368 and 70,778,880 dofs in the 4th - order sdm simulations . 
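The Rusanov treatment referred to above replaces the two discontinuous interface fluxes by their average minus a dissipation term proportional to the largest normal wave speed |u_n| + c. A one-dimensional Euler sketch is given below; the mapping to the ξ, η, ζ directions of the standard element is omitted and γ is an assumed value.

```python
import numpy as np

GAMMA = 5.0 / 3.0

def euler_flux_1d(q, gamma=GAMMA):
    """Inviscid 1D Euler flux for the state q = (rho, rho*u, E)."""
    rho, m, e = q
    u = m / rho
    p = (gamma - 1.0) * (e - 0.5 * rho * u * u)
    return np.array([m, m * u + p, (e + p) * u])

def rusanov_flux(ql, qr, gamma=GAMMA):
    """Common interface flux: average of the one-sided fluxes minus a
    dissipation term scaled by the maximum wave speed |u| + c."""
    def wave_speed(q):
        rho, m, e = q
        u = m / rho
        p = (gamma - 1.0) * (e - 0.5 * rho * u * u)
        return abs(u) + np.sqrt(gamma * p / rho)
    lam = max(wave_speed(ql), wave_speed(qr))
    return 0.5 * (euler_flux_1d(ql) + euler_flux_1d(qr)) - 0.5 * lam * (qr - ql)

if __name__ == "__main__":
    # Sod-like left/right states meeting at an element interface.
    ql = np.array([1.0,   0.0, 1.0 / (GAMMA - 1.0)])
    qr = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
    print("interface flux:", rusanov_flux(ql, qr))
```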
.though the equations that chorus solves are general , we are intersted in simulating global - scale convection in stellar and planetary interiors for reasons discussed in section 1 .thermal convection is a classical fluid instability in the sense that it can develop from a static equilibrium state that satisfies certain instability criteria .the first is the schwarzschild criterion which requires a negative ( superadiabatic ) radial entropy gradient .the second is that the buoyancy force must be sufficiently strong to overcome viscous and thermal diffusion .this is typically quantified in terms of the rayleigh number , which must exceed a critical value in order for convection to ensue .though linear theory is generally concerned with static , equilibrium initial conditions , numerical simulations can tolerate initial conditions that are not in equilibrium .however , it is still desirable to initiate nonlinear simulations with states that are close to equilibrium in order to mitigate violent initial transients and minimize nonlinear ( dynamic ) equilibration times . in the sections that followwe describe how we set up the initial conditions for spherical shells of convection .chorus can also handle convective cores and fully convective geometries but we defer these applications to future papers .note that these initial conditions are static relative to the uniform rotation of the coordinate system ( ) .the differential rotation , meridional circulation , and convective motions are in no way imposed ; they are zero initially .after specifying the static initial conditions , chorus automatically introduces random thermal perturbations through the non - axisymmetric distribution of unstructured grid points to excite the convection which in turn establishes the mean flows . note also that the stratification in a stellar or planetary convection zone is nearly hydrostatic .convection will modify this but only slightly .so , the initial conditions not only excite the convection but they also establish the basic background stratification including crucial simulation properties such as the density contrast across the convection zone . to establish the static initial conditions we first consider a steady state in the absence of motions ( , ) .then the governing equations ( [ eqn : mass_convervation ] ) - ( [ eqn : energy ] ) reduce to the equation of hydrostatic balance , and the equation of thermal energy balance , where the subscript denotes the initial state .these are supplemented with the ideal gas law where is the specific gas constant , and the equation for specific entropy , though we solve the equations in dimensional form , it is useful to define several nondimensional numbers that characterize the parameter regime of the solution : here is the rayleigh number , is the fluid prandtl number , is the ekman number , is the density ratio across the layer , with and as the the densities at the inner and outer boundaries , , is the aspect ratio , and is the entropy difference across the layer , averaged over latitude and longitude .the hydrostatic balance equation ( [ eqn : hydrostatic ] ) , along with the constitutive equations ( [ eqn : gas_relation ] ) and ( [ eqn : entropy ] ) , can be satisfied by introducing a polytropic stratification as described by jones et al . 
: where and the subscripts , , and refer to the bottom , top , and middle of the layer respectively .once the dimensionless numbers together with other physical input values are determined , the profiles of , , , and can be evaluated .it follows from ( [ eqn : entropy ] ) that setting yields an adiabatic stratification ( ) .we do so here so that the subscript denotes a polytropic , adiabatic stratification .this provides an excellent first approximation to a stellar or planetary convection zone which is very nearly adiabatic due to the high efficiency of the convection . though they can not be strictly adiabatic because they must satisfy the schwarzschild criterion ( ) ,they are _ nearly adiabatic _ in the sense that the bar denotes an average over latitude and longitude . equation ( [ eqn : epsilon ] ) is the basis of the so - called _anelastic approximation _ in which the equations of motion are derived as perturbations about a static , often ( but not necessarily ) adiabatic , reference state .although we do not employ the anelastic approximation here , an adiabatic , polytropic reference provides a useful starting point for setting up the initial conditions .however , the process can not end here because this polytropic solution does not in general satisfy eq.([eqn : thermal_equilibrium ] ) and , since it does not satisfy the schwarzschild criterion , it will not excite convection . in anelastic systems , the superadiabatic component of the entropy gradient ( ) is assumed to be small so it can be specified indepedently of the adiabatic reference state .this amounts to setting , , and , and then solving equation ( [ eqn : thermal_equilibrium ] ) for : where is the luminosity .this procedure breaks down in fully compressible systems because equation ( [ eqn : entropy ] ) will only be satisfied to lowest order in .this is a high price to pay merely to satisfy equation ( [ eqn : thermal_equilibrium ] ) which should have little bearing on the final dynamical equilibrium achieved after the onset of convection .it is more essential to satisfy the constitutive equations ( [ eqn : gas_relation ] ) and ( [ eqn : entropy ] ) precisely , together with the hydrostatic balance equation ( [ eqn : hydrostatic ] ) to avoid a rapid initial restratification .we achieve this by introducing an extra step in the initialization procedure .as in anelastic systems , we compute the polytropic , adiabatic stratification as described in section [ sec : polytrope ] and we calculate as defined in eq.([eqn : target_entropy ] ) . however , unlike anelastic systems , we treat as a target entropy gradient and then solve equations ( [ eqn : hydrostatic ] ) , ( [ eqn : gas_relation ] ) , and ( [ eqn : entropy ] ) precisely using a separate finite difference code . in particular , we solve the following two equations for and using and as an initial guess the temperature is then given by ( [ eqn : gas_relation ] ) .this process produces a superadiabatic , hydrostatic , spherically - symmetric initial state that satisfies equations ( [ eqn : hydrostatic ] ) , ( [ eqn : gas_relation ] ) , and ( [ eqn : entropy ] ) .the entropy gradient will be equal to but the thermal energy balance equation ( [ eqn : thermal_equilibrium ] ) will only be satisfied to lowest order in . 
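a minimal sketch of the extra initialization step described above . the paper solves two coupled equations with a separate finite - difference code ; the version below only reproduces the spirit of that step , assuming an ideal gas with constant cp and gas constant rgas , a prescribed gravity profile g(r) , a target entropy gradient dsdr_target , and simple first - order integration in radius ( none of these numerical choices are taken from the paper ) .

```python
import numpy as np

def hydrostatic_state(r, dsdr_target, g, t_bot, p_bot, cp, rgas):
    """integrate dT/dr = (T/cp)*ds/dr - g/cp and dP/dr = -P*g/(rgas*T) outward from r[0]."""
    t = np.empty_like(r)
    p = np.empty_like(r)
    t[0], p[0] = t_bot, p_bot
    for i in range(len(r) - 1):
        dr = r[i + 1] - r[i]
        t[i + 1] = t[i] + dr * (t[i] / cp * dsdr_target(r[i]) - g(r[i]) / cp)
        p[i + 1] = p[i] + dr * (-p[i] * g(r[i]) / (rgas * t[i]))
    rho = p / (rgas * t)   # the ideal gas law closes the system
    return t, p, rho
```

by construction the state is hydrostatic and carries exactly the prescribed ( slightly superadiabatic ) entropy gradient , while thermal energy balance is satisfied only approximately , as in the text .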
an alternative approach would be to solve all four equations ( [ eqn : hydrostatic])-([eqn : entropy ] ) simultaneously for the four unknowns , , , and .our _ almost flux balance _ approach is much easier to implement and provides an effective way to initiate convection .the inner and outer boundaries are assumed to be impenetrable and free of viscous stresses where are the velocity components in spherical coordinates . in addition , a constant heat flux is imposed at the bottom boundary and the temperature is fixed at the top boundary .the present chorus code employs 20 nodes including 8 corner points and 12 mid - edge points for each hexahedral element in the physical domain . a precise treatment of the curved top and bottom boundaries of the spherical shells is then assured by the iso - parametric mapping procedure mentioned in section 3 . for calculating inviscid fluxes on element interfaces , an approximate riemann solver is generally used .however , exact inviscid fluxes are employed on top and bottom boundaries by using the fact that is precisely zero on spherical shell boundaries .a careful transformation between cartesian and spherical coordinate systems is conducted for computing viscous fluxes on two boundaries in order to ensure the stress - free conditions .the equations of motion ( 1)-(3 ) express the conservation of mass , energy , and linear momentum .the conservation of angular momentum follows from these equations and the impenetrable , stress - free boundary conditions discussed in section [ sec : bcs ] .these are hyperbolic equations and we express them in conservative form when implementing the numerical algorithm , as discussed in section [ sec : algorithm ] .this , together with the spectral accuracy within the elements and the approximate riemann solver employed at cell edges , ensures that the mass , energy , and linear momentum are well conserved as the simulation proceeds .however , we do not explicitly solve a conservation equation for angular momentum .numerical errors including both truncation and round - off errors can result in a small change in angular momentum over each time step .though these changes may be small , even a highly accurate algorithm can accumulate errors over thousands or millions of time steps that can compromise the validity of a simulation .this can be an issue in particular for unstructured grid codes like ours that solve the conservation equations in cartesian geometries that are mapped to conform to the spherical boundaries .however , conservation of angular momentum can be violated even in highly accurate pseudo - spectral simulations , as reported by jones et al . .for this reason , we introduce an angular momentum correction scheme similar to one of the schemes described by jones et al .two correction steps are taken in the chorus code to maintain constant angular momentum over long simulation intervals : 1 .calculate the three cartesian components of angular momentum explicitly , namely .2 . add a commensurate rigid - body rotation with rate to remove the angular momentum discrepancy .
in step 1 , three cartesian components are evaluated over the full spherical shell as note that all three components are initially zero relative to the rotating coordinate system .the introduced rigid - body rotation rate in step 2 is determined by where are the moments of inertia of the spherical shell in the , and directions respectively , and , , and .once is obtained , the cartesian velocity components , , and will be updated at each solution point using the correction formula .we note that this correction procedure applies equally well in the case of an oblate , rapidly - rotating star .the total moment of inertia in each direction is numerically calculated by summing up the moment of inertia in each element of the unstructured grid so it works regardless of the shape of the star . this correction procedure is computationally expensive if performed at every time step .thus , in all chorus simulations , we only perform the correction every 5000 time steps .fig.[fig : omega ] shows the time evolution of , and from a representative chorus simulation .this demonstrates the high accuracy of the numerical algorithm since the relative angular momentum error never exceeds even at this 5000-step correction interval .furthermore , it demonstrates that the cumulative long - term errors in the angular momentum components are well controlled .we verify the chorus code by comparing its results to the well - established anelastic spherical harmonic ( ash ) code .however , we acknowledge that this comparison is not ideal since the two codes solve different equations . as mentioned in section [ sec : polytrope ] , the equations of motion in the anelastic approximation are obtained by linearizing the fully compressible equations ( 1)-(3 ) about a hydrostatic , spherically - symmetric reference state .so we would only expect chorus and ash to agree in the limit , where is the normalized radial entropy gradient defined in eq . ( [ eqn : epsilon ] ) .a thorough comparison between the two systems would involve a linear and nonlinear analysis demonstrating convergence as .this lies outside the scope of the present paper . here we focus on defining two benchmark simulations , patterned after the gaseous atmosphere of jupiter and the convective envelope of the sun , and compare the results from chorus and ash .
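the two correction steps above can be written compactly . the sketch below assumes that the volume integrals are approximated by sums over solution points with mass weights dm = rho * w ( w being quadrature weights ) and that the correction subtracts a rigid - body rotation with per - axis rate l_i / i_i ; the exact quadrature and sign conventions used in chorus are not spelled out in the text .

```python
import numpy as np

def angular_momentum(x, v, dm):
    """step 1: cartesian components of the total angular momentum, sum(dm * x cross v)."""
    return np.sum(dm[:, None] * np.cross(x, v), axis=0)

def moments_of_inertia(x, dm):
    """moments of inertia of the shell about the x, y and z axes."""
    x2, y2, z2 = x[:, 0] ** 2, x[:, 1] ** 2, x[:, 2] ** 2
    return np.array([np.sum(dm * (y2 + z2)),
                     np.sum(dm * (x2 + z2)),
                     np.sum(dm * (x2 + y2))])

def correct_velocity(x, v, dm):
    """step 2: remove the spurious angular momentum by subtracting a rigid-body rotation."""
    omega = angular_momentum(x, v, dm) / moments_of_inertia(x, dm)
    return v - np.cross(omega, x)   # applied every 5000 time steps, as in the text
```

because the moments of inertia are summed element by element , the same routine works for an oblate shell , which is the property emphasized above .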
despite subtle differences in the model equations and substantial differences in the numerical method, we demonstrate good agreement between the two codes .this serves to verify chorus and to pave the way for future applications .ash is a multi - purpose code designed to simulate the hydrodynamics ( hd ) and magnetohydrodynamics ( mhd ) of solar and stellar interiors in global spherical geometries .it was first developed over 15 years ago and has remained at the leading edge of the field ever since , continually improving in its physical sophistication and parallel efficiency on high - performance computing platforms .ash results have appeared in over 100 publications with applications ranging from convection and dynamo action in solar - like stars and fully - convective low - mass stars , to core convection and dynamo action in massive stars , to mhd instabilities and stably - stratified turbulence , to the generation of and transport by internal gravity waves , to tachocline confinement , to flux emergence , to the hd and mhd of red giants .ash is based on the anelastic approximation ( sec .[ sec : polytrope ] ) and uses a poloidal - toroidal decomposition of the mass flux to ensure that the anelastic form of the mass continuity equation is satisfied identically ( ) .it is a pseudo - spectral code that uses triangularly - truncated spherical harmonic basis functions in the horizontal dimensions .although earlier versions of ash employed chebyshev basis functions in the radial dimension , the version presented here uses a centered , fourth - order finite difference scheme that was introduced to improve parallel efficiency . the radial grid is uniformly spaced for the simulations presented here and the boundary conditions are as specified in section [ sec : bcs ] .time stepping is accomplished using an explicit adams - bashforth scheme for the nonlinear terms and a semi - implicit crank - nicolson scheme for linear terms , both second order .ash is one of four global anelastic codes that were validated using a series of three carefully defined benchmark simulations presented by jones et al. .all three benchmarks had shell geometries and dimensional parameters that were chosen to represent deep convection in jupiter s extended atmosphere but they differed in their degree of magnetism ( one was non - magnetic ) and turbulent intensity ( spanning laminar and turbulent dynamo solutions ) . in order to concisely represent the effective parameter space ,the benchmark simulations were specified through a series of non - dimensional parameters as defined in eq.([eqn : numbers ] ) , namely , , , and .these benchmarks made use of a hydrostatic , adiabatic , polytropic reference state as described in sec .[ sec : polytrope ] , specified through the additional nondimensional parameters and ( the polytropic index ) .all four codes used similar numerical methods ( pseudospectral , spherical harmonic ) and agreed to within a few percent for a variety of different metrics of physical quantities .the benchmarks we define here are inspired by the anelastic convection - driven dynamo bechmarks of jones et al .however , some modifications to the anelastic benchmarks are necessary in order to ensure that they are consistent with the fully compressible equations solved by chorus .we already discussed one example of this when defining the initial conditions in sec . 
[sec : almost ] .another significant modification of the anelastic benchmarks that we introduce here concerns the mach number .an implicit requirement of the anelastic approximation is that the mach number of the flow is much less than unity , where is the sound speed .this is well justified in stellar convection zones where is typically less than . in a fully compressible code this places a severe constraint on the allowable time step permitted by the courant - friedrichs - lewy ( cfl ) condition where is some measure of the minimum grid spacing .anelastic codes are not subject to this constraint . if the mach number is low , the cfl constraint imposed by the sound speed is much more stringent than that imposed by the flow field . in the future we will mitigate this constraint through the use of implicit and local time stepping . here we address it by defining benchmark problems for which the mach number is low ( justifying the anelastic approximation in ash ) but not too low ( mitigating the cfl constraints of chorus ) .the cfl constraint arising from the sound speed can be a major issue for global , rotating convection simulations where the equilibration time scale is much longer than the dynamical time scale .if one neglects structural stellar evolution , the longest time scale in the system is the thermal relaxation timescale , which can exceed years in stars ; by comparison the dynamical time scale is of order one month .however , a more relevant time scale for equilibration of the convection is the thermal diffusion time scale , which is days for the jupiter benchmark and days for the solar benchmark ( days and days respectively ) . before proceeding to the simulation results , we first define several important metrics that provide a means to compare chorus and ash . these include the mean kinetic energy relative to the rotating reference frame and its mean - flow components , namely the differential rotation ( drke ) and the meridional circulation ( mcke ) : where denotes the volume of the computational domain and angular brackets denote averages over longitude .the growth rate of the kinetic energy is defined as from eq.([eqn : energy ] ) , four components of the energy flux are involved in transporting energy in the radial direction , namely the enthalpy flux , kinetic energy flux , radiative flux and entropy flux . in a statistically steady state , these four fluxes together must account for the full luminosity imposed at the bottom boundary : where and , , and denote the mean density , temperature and entropy , averaged over horizontal surfaces . on each horizontal surface , the mean mach number is defined as where and the mean sound speed is . the deep , extended outer atmosphere of jupiter is thought to be convectively unstable .this has motivated substantial work on the internal dynamics of giant planets and inspired the parameter regimes chosen for the anelastic benchmarks of jones et al .our first benchmark for comparing chorus and ash is similar to the hydrodynamic benchmark of jones et al . apart from the thermal boundary conditions . whereas we impose a fixed heat flux on the lower boundary and a fixed temperature on the upper ( sec .[ sec : bcs ] ) , jones et al .fix the specific entropy on both boundaries . note that this means that the rayleigh number , defined in terms of in eq . ( [ eqn : numbers ] ) , does change somewhat in our simulations as the convection modifies the entropy stratification .this is in contrast to jones et al .
where it was held fixed .the parameters for this case are specified in table [ tab : parameter_jupiter ] .the value of listed is the initial value , before convection ensues .the number of dofs used for the chorus simulation is about a factor of five larger than that used for the ash simulation , as indicated in table [ tab : computational_effort ] .both simulations use the same boundary conditions and initial conditions , apart from the random perturbations needed to excite convection which are generated independently by each code .l dimensionless parameters + , , , , , + + defining physical input values + = 7 , = 1.76 , = 1.9 , = 1.1 + = 3.503 , = 6.67 , + + derived physical input values + = 2.45 , = 4.55 , = 3.64364 , + = 3.64364 , = 1.5 + + other thermodynamic quantities + = 7.014464 , = 1.0509 + .the evolution of the kinetic energy densities for both chorus and ash simulations are illustrated in fig.[fig : ke_density_jupiter ] .as mentioned above , both simulations start with the same background stratification but with different random perturbations .the small amplitude of the initial perturbations ensures that each simulation begins in the linear regime . for each simulationthere is an initial adjustment period before the flow locks on to the fastest - growing eigenmode which then grows exponentially .the initial establishment period is different for each simulation , lasting roughly 8 days for the chorus simulation and 2 days for the ash simulation .however , this is to be expected from the different mix of random perturbations .a meaningful comparison can only be made between the two codes after they reach the exponential growth phase , which is followed by nonlinear saturation and a subsequent equilibrium .in other words , the two features of fig . [fig : ke_density_jupiter ] that should be compared are the slope of the line in the linear growth phase and the value of the kinetic energy density after each simulation has saturated and equilibrated . this first point of comparison , namely the growth rate ,is highlighted in fig.[fig : growth_jupiter ] .the peak values achieved by each simulation in the linear regime reflect the growth rate of the preferred linear eigenmode and agree to within about 2% ; for chorus and for ash .we define the nonlinear saturation time as the time at which the growth rate first crosses zero after the exponential growth phase .again , the saturation time is different between the two simualtions because of the random nature of the initial conditions .thus , in order to compare the two cases in the nonlinear regime we define a sampling time to be 10 days after the saturation time , .this is days for the chorus simulation and days for the ash simulation .averaged between ( ) days and days , for the chorus simulation and for the ash simulation .the difference is about .another point of comparison is the relative magnitude of the mean flows , as quantified by the drke and mcke defined in sec .[ sec : metrics ] . at the sampling time ,drke / ke and mcke / ke for the chorus simulation while drke / ke and mcke / ke for the ash simulation .this corresponds to a difference of about 16% and 5% for the dr and mc respectively .small differences between the two codes of order several percent are to be expected due to differences between the compressible and anelastic equations . 
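the metrics of sec . [ sec : metrics ] and the saturation / sampling times used in this comparison reduce to a short post - processing routine . the sketch below assumes the fields have been interpolated onto a structured ( r , theta , phi ) grid with cell volumes dv , that angular brackets denote a longitudinal average , and that the growth rate is the logarithmic derivative of the kinetic energy ; the elided definitions in the text may differ by normalization factors .

```python
import numpy as np

def kinetic_energies(rho, vr, vth, vph, dv):
    """volume-averaged ke and its mean-flow parts (drke, mcke) on an (r, theta, phi) grid."""
    vol = np.sum(dv)
    ke = 0.5 * np.sum(rho * (vr**2 + vth**2 + vph**2) * dv) / vol
    vr_m, vth_m, vph_m = (f.mean(axis=-1, keepdims=True) for f in (vr, vth, vph))
    drke = 0.5 * np.sum(rho * vph_m**2 * dv) / vol              # differential rotation
    mcke = 0.5 * np.sum(rho * (vr_m**2 + vth_m**2) * dv) / vol  # meridional circulation
    return ke, drke, mcke

def growth_rate(t, ke):
    """logarithmic growth rate of the kinetic energy time series."""
    return np.gradient(np.log(ke), t)

def sampling_time(t, sigma, t_linear, delay=10.0):
    """first zero crossing of the growth rate after the exponential phase, plus 10 days."""
    i0 = np.searchsorted(t, t_linear)                       # skip the initial adjustment period
    down = np.where((sigma[i0:-1] > 0) & (sigma[i0 + 1:] <= 0))[0]
    return t[i0 + down[0] + 1] + delay                      # t is assumed to be in days
```

the drke / ke and mcke / ke ratios quoted above come directly from these quantities .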
furthermore , the relatively large difference in the drke / ke likely comes about because the simulations are not fully equilibrated .we address these issue further in the following section ( sec .[ sec : mach ] ) . as mentioned in secs .[ sec : polytrope ] and [ sec : ash ] , the comparisons between ash and chorus are only meaningful if the stratification is nearly adiabatic ( ) and the mach number is small . these conditions are required for the validity of the anelastic approximation . fig .[ fig : ma_jupiter ] demonstrates that these conditions are met , but that departures are significant .in particular , we expect that the anelastic and compressible equations are only equivalent to lowest order in , which reaches a value as high as 0.07 in the upper convection zone ( fig.[fig : epsilon_jupiter ] ) . in fig.[fig :ma_jupiter ] , the mean mach number is minimum at the bottom and increases with radius and reaches the maximum ( about 0.012 ) at the top in both the chorus and ash simulations . as defined in eq.([eqn : epsilon ] ) , is proportional to the mean entropy gradient .identical initial entropy gradients for the chorus and ash simulations implies that they have the same initial degrees of adiabaticity .when convection is present , the associated energy flux leads to a redistribution of entropy , tending to smooth out the entropy gradient .this is the reason that in fig.[fig : epsilon_jupiter ] becomes smaller than the initial near .the location corresponds to where the efficiency of the convection peaks .this is demonstrated in fig.[fig : flux_jupiter ] which shows the components of the energy flux defined in sec .[ sec : metrics ] .the _ almost flux balance _ initialization described in sec .[ sec : almost ] establishes an entropy stratification that carries most of the energy flux through entropy diffusion , with a slight over - luminosity of about 7% in the upper convection zone .once convection is established , it carries roughly 15% of this flux , flattening the entropy gradient and reducing .the kinetic energy flux for this case is negligible .the maximum values of at for the chorus and ash simulations are 0.1536 and 0.1484 respectively , showing the difference of . .in a nonlinear equilibrium state , the sum of the normalized fluxes in fig . [fig : flux_jupiter ] should be unity .this is clearly not the case ; as mentioned above both simulations are over - luminous by about 7% due to the initial entropy stratification .this will eventually subside but the process is slow , occurring gradually over a time scale that is longer than the thermal diffusion time scale of days but shorter than the thermal relaxation time scale of days ( see sec .[ sec : ash ] ) .this is demonstrated in fig.[fig : flux_1800 ] which shows the flux balance in the ash simulation after 1800 days .given the greater computational cost of chorus ( sec .[ sec : performance ] ) , and the satisfactory agreement at the sampling time ( within the expected order ) , we choose not to run the chorus simulation to full equilibration . 
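the flux balance just discussed can be diagnosed with a standard decomposition of the radial energy flux . the exact expressions are elided in the text , so the sketch below uses conventional forms ( an enthalpy flux rho*cp*vr*t' , a kinetic energy flux 0.5*rho*|v|**2*vr , an entropy diffusion flux -kappa*rho*t*d<s>/dr and a radiative flux -k_rad*dT/dr ) , averaged over horizontal surfaces with uniform weights for brevity and normalized by the imposed luminosity ; treat the names and signs as assumptions rather than chorus definitions .

```python
import numpy as np

def radial_fluxes(r, rho, t, s, vr, vth, vph, cp, kappa, k_rad, lum):
    """shell-averaged energy fluxes, normalized so that their sum approaches 1 in equilibrium."""
    area = 4.0 * np.pi * r**2
    shell = lambda f: f.mean(axis=(1, 2))                 # average over theta and phi
    t_fluc = t - shell(t)[:, None, None]                  # thermal fluctuation about the shell mean
    f_enth = shell(rho * cp * vr * t_fluc)                            # enthalpy flux
    f_ke = shell(0.5 * rho * (vr**2 + vth**2 + vph**2) * vr)          # kinetic energy flux
    f_ent = -kappa * shell(rho * t) * np.gradient(shell(s), r)        # entropy diffusion flux
    f_rad = -k_rad * np.gradient(shell(t), r)                         # radiative flux
    return tuple(area * f / lum for f in (f_enth, f_ke, f_ent, f_rad))
```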
.the structure of the convection at the sampling time is illustrated in fig .[ fig : vr_jupiter ] .though this is well into the nonlinear regime , both simulations are dominated by a series of columnar convective rolls approximately aligned with the rotation axis but sheared slightly in the prograde direction at low latitudes by the differential rotation .these are the well - known ` banana cells ' characteristics of convection in rotating spherical shells and most apparent for laminar parameter regimes .though this is well into the nonlinear regime , they reflect the preferred linear eigenmodes and are well described by a single sectoral spherical harmonic mode with , where and are the spherical harmonic degree and order . the degree and order and also be interpreted as the total wavenumber and the longitudinal wavenumber respectively .close scrutiny of fig .[ fig : vr_jupiter ] reveals that the two simulations exhibit a slightly different mode structure , with chorus selecting an mode and ash selecting an mode .this is demonstrated more quantitatively in fig.[fig : spectrum_jupiter ] which shows the spherical harmonic spectra of the velocity field on the horizontal surface as a function of spherical harmonic degree ( summed over ) at the sampling time . in the chorus simulation , the spectra of radial velocity , meridional velocity , and zonal velocity peak at and its higher harmonics , , and . in comparison , the velocity spectra in the ash simulation peak at , , and .this level of agreement is consistent with the expected accuracy of the anelastic and compressible systems . as discussed by jones et al . , the linear growth rates of the and modes in this hydrodynamic benchmark are the same to order and even a single anelastic code may choose one or the other depending on the random initial perturbations .the laminar nature of this benchmark highlights these small differences ; in more turbulent parameter regimes where the rayleigh number far exceeds the critical value , a broad spectrum of modes is excited and the results are less sensitive to the details of the initial conditions and the linear eigenmodes .this is demonstrated by the solar benchmark described in sec .[ sec : solarbm ] , which is in a more turbulent parameter regime and which exhibits closer agreement between the velocity spectra . .the mean ( averaged ) flows for the jupiter benchmark are shown in fig .[ fig : mcdr_jupiter ] .all averages span 2 days ( about 4.8 rotation periods ) , starting at the sampling time .the meridional circulation is expressed in terms of a stream function , , defined as and the differential rotation is expressed in terms of the angular velocity we define thermal variations and by averaging over longitude and time and then subtracting the spherically - symmetric component ( ) in order to highlight variations relative to the mean stratification . the chorus and ash results in fig .[ fig : mcdr_jupiter ] correspond closely with a few notable exceptions . 
near the equator in the upper convection zone , in fig.[fig : mcdr_jupiter](_d _ ) is somewhat smaller than that in fig.[fig : mcdr_jupiter](_h _ ) .this is also reflected by the lower drke / ke noted in sec .[ sec : growth ] .as mentioned there , this discrepancy may in part be because the simulations are not strictly in equilibrium at the sampling time ( fig .[ fig : flux_jupiter ] ) .we would expect the correspondence to improve if we were to run chorus for several thousand days , giving the mean flows ample time to equilibrate along with the stratification .the wiggles in the plot for chorus ( fig .[ fig : mcdr_jupiter](_b _ ) ) can be attributed to two factors .first , unlike ash , the relevant state variable in chorus is the total energy .the specific entropy must be obtained from by subtracting out contributions from the kinetic energy and the mean stratification to obtain the pressure variations , from which is computed ( using also the density variation ) .this residual nature of along with the post - processing step of interpolating the chorus results onto a structured , spherical grid both contribute numerical errors . in ash , by contrast , is a state variable .the correspondence of figs .[ fig : mcdr_jupiter](_b _ ) and ( _ f _ ) despite these numerical errors is a testament to the accuracy of chorus . .having gained confidence in simulating the jupiter benchmark , we now look into defining a benchmark that has more in common with the sun .this serves two purposes .first , it helps verify the chorus code by probing a different region in parameter space , this one more turbulent .second , it makes explicit contact with solar and stellar convection which is the primary application we had in mind when developing chorus . the most significant differences between the solar and the jupiter benchmarks include the rayleigh number , the density stratification , and the radiative heat flux .the value of is about four times larger in the solar benchmark .this , combined with the larger value of implies that the flow is more turbulent and less rotationally constrained ( larger rossby number ) . in the middle of the convection zone , the rossby number for the sun benchmark and it is larger than the jupiter benchmark . as in the sun , the radiative flux ( which was set to zero in the jupiter benchmark ) carries energy through the bottom boundary , extending into the lower convection zone ( see fig .[ fig : flux_sun ] ) .this allows us to set the radial entropy gradient to zero at the lower boundary , which is what marks the base of the convection zone in the sun .another significant difference is the aspect ratio , for which we use a solar - like value of . as mentioned in secs .[ sec : intro ] and [ sec : ash ] , a challenge in simulating solar convection with a compressible code like chorus is the low value of the mach number .we address this challenge by scaling up the luminosity by a factor of 1000 relative to the actual luminosity of the sun . according to the mixing - length theory of convection ,the rms velocity should scale as .thus , to preserve a solar - like value of the rossby number ( important for achieving reasonable mean flows ) , this suggests that we would need to scale up the rotation rate by a factor of approximately 10 relative to the actual sun .we then chose values of of and to give practical values for and .table [ tab : parameter_sun ] gives the parameters for the solar benchmark . 
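the luminosity - scaling argument above amounts to a one - line calculation : with the mixing - length estimate v_rms ~ l**(1/3) , boosting the luminosity by 1000 raises the convective velocities ( and hence the mach number ) by a factor of 10 , and holding the rossby number , which scales as v / omega up to its precise definition , at a solar - like value then requires boosting the rotation rate by the same factor .

```python
boost_lum = 1000.0                      # luminosity enhancement used for the solar benchmark
boost_v = boost_lum ** (1.0 / 3.0)      # mixing-length scaling: convective velocities grow by 10
boost_omega = boost_v                   # keep the rossby number (~ v / omega) solar-like
print(boost_v, boost_omega)             # 10.0 10.0
```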
the radiative flux is parameterized by expressing the radiative diffusion as , where , , , and .the parameter is chosen so that on the bottom boundary .as for the jupiter benchmark , we define a sampling time that is 10 days after the nonlinear saturation time .the sampling times are days and days for chorus and ash respectively .the number of dofs used for the chorus simulation is about a factor of 6.8 larger than that used for the ash simulation , as indicated in table [ tab : computational_effort ] .l dimensionless parameters + = 2.447 , = 3 , = 0.736762 , = 1,428,567 , = 1 , = 1.5 + + defining physical input values + = 6.61 , = 8.1 , = 1.98891 , = 0.21 + = 1.4 , = 6.67 + + derived physical input values + = 4.87 , = 1.74 ,=6.0 , + = 6.0 , = 5/3 + + other thermodynamic quantities + = 3.846 , = 3.5 + the volume - averaged kinetic energy densities for both the chorus and ash simulations are plotted in fig.[fig : sun](a ) . as in the jupiter benchmark , the random initial conditions lead to differences in the initial transients but once the preferred linear eigenmode begins to grow the two simulations exhibit similar growth rates and nonlinear saturation levels .averaged between ( ) and days , for the chorus simulation and for the ash simulation .the difference is about . in the linear regime, they grow very fast and the ranges of a well - defined linear unstable regime are narrow as shown in fig.[fig : sun](b ) for both chorus and ash simulations .thus , only their maximum growth rates in that regime are compared .a difference of only is present between of the chorus simulation and of the ash simulation . at the sampling time , drke / ke and mcke / ke for the chorus simulation while drke / ke and mcke / ke for the ash simulation ( a discrepancy of about 9% and 19% respectively ) . as in the jupiter benchmark , this relatively large difference in mean flows is likely attributed to the immature state of the simulations , which has not yet achieved flux balance by the sampling time ( sec .[ sec : machsun ] ) . .as in the jupiter benchmark , a low mach number is also achieved in this solar benchmark , with the maximum mach number around 0.017 and 0.015 for the chorus simulation and the ash simulation , respectively , as shown in fig.[fig : ma_sun ] .the value of also peaks in the upper convection zone ( fig .[ fig : epsilon_sun ] ) . reflecting the flatter entropy gradient at the sampling time , becomes smaller where the convective efficiency is largest , near .the chorus and ash simulations also have similar flux balances as shown in fig.[fig : flux_sun ] .both are over - luminous ( ) due mainly to the large diffusive and entropy fluxes and , which have not yet equilibrated .this imbalance subsides by days as verified by an extended ash simulation .however , as with the jupiter benchmark , we have not run chorus to full equilibration because of the computational expense . increasing the luminosityfurther will mitigate this relaxation time and we plan to exploit this for future production runs . the radiative flux carries energy through the bottom boundary and dominates the heat transport in the lower convection zone while the entropy flux carries energy through the top boundary and the upper convection zone .the enthalpy flux gradually increases towards the top until peaks near , and then drops down to zero rapidly as it approaches the impenetrable top boundary . at , and from the chorus simulation while and from the ash simulation . 
.the structure of the convection in the solar benchmark is illustrated in fig.[fig : vr_sun ] .one would not expect the instantaneous flow field in a highly nonlinear simulation to correspond exactly between two independent realizations but the qualitative agreement is promising .this qualitative agreement is confirmed quantitatively by comparing the velocity spectra in fig.[fig : spectrum_sun ] .the radial velocity spectrum peaks at ( fig.[fig : spectrum_sun](_a _ ) ) and the meridional velocity spectrum exhibits substantial power in two wavenumber bands , namely and ( fig.[fig : spectrum_sun](_b _ ) ) . for the zonal velocity spectra ( fig.[fig : spectrum_sun](_c _ ) ) , three modes , and are prominent in the low ( ) range . after peaking at , the high- power decreases exponentially in amplitude .some of the small discrepancies between the two curves in each plot can likely be attributed to random temporal variations that would cancel out with some temporal averaging . the meridional circulation from the chorus simulation ( see fig.[fig : mcdr_sun](a ) ) and the ash simulation ( see fig.[fig : mcdr_sun](e ) ) exhibit similar flow patterns with most circulations concentrating between the low latitudes and middle latitudes , outside the so - called _ tangent cylinder _ , namely the cylindrical surface aligned with the rotation axis and tangent to the base of the convection zone .the specific entropy perturbation are shown in fig.[fig : mcdr_sun](b ) for the chorus simulation and in fig.[fig : mcdr_sun](f ) for the ash simulation . in both simulations , the contours of symmetric about the equator . by comparing the contours of in fig.[fig : mcdr_sun](c ) for the chorus simulation and fig.[fig : mcdr_sun](g ) for the ash simulation , a good agreement is achieved . for both simulations, they also have similar differential rotation profiles as shown in fig.[fig : mcdr_sun](d ) and fig.[fig : mcdr_sun](h ) .some of the small discrepancies can likely be attributed to the flux imbalances shown in fig.[fig : flux_sun ] , causing mean flows to vary slowly as the simulations equilibrate from different random initial conditions and nonlinear saturation states .residual random temporal fluctuations may also be present despite the ( short ) time average , particularly for the meridional circulation which is a relatively weak flow with large fluctuations . .as demonstrated in sec .[ sec : algorithm ] , the chorus code achieves excellent scalability out to 12k cores .this is strong scaling for intermediate - resolution simulations .we expect higher - resolution simulations to scale even better to tens of thousands of cores . in this sectionwe consider chorus s performance relative to the anelastic code ash .the computational efforts are summarized in table [ tab : computational_effort ] . the resolution in ashis expressed as where , , and are the number of grid points in the radial , latitudinal , and longitudinal directions respectively . however , due to pseudo - spectral de - aliasing and the symmtery of the spherical harmonics , the effective number of dofs for ash is , where is the maximum spherical harmonic mode . 
for both benchmarks , and .the total number of core hours needed to run 10 days is much larger for chorus than it is for ash ; by more than three orders of magnitude for the jupiter benchmark and by a factor of 75 for the solar benchmark .much of this is due to the smaller time step required by the compressible scheme and the higher number of degrees of freedom used in running the chorus benchmarks .furthermore , the ash simulations were run on only 71 cores whereas the chorus runs typically employed several thousand . thus , imperfect scaling is a factor , as is a difference in the computational platform .the ash simulations were run with the intel xeon e5 - 2680v2 ( ivy bridge ) cores on nasa s pleiades machine ( 2.8 ghz clock speed , 3.2 gb / core memory ) whereas the chorus simulations were run with the intel xeon e5 - 2670 ( sandy bridge ) cores on ncar s yellowstone machine ( 2.6 ghz clock speed , 2 gb / core memory ) .furthermore , chorus uses a fourth - order accurate five - stage explicit runge - kutta method whereas ash uses a simpler second - order mixed adams - bashforth / crank - nicolson time stepping .this also contributes to the larger number of core hours per time step used by chorus ( table [ tab : computational_effort ] ) .though ash out - performs chorus for these simple benchmark problems , it must be remembered that such problems are ideal for pseudo - spectral codes ; relatively low resolution , laminar runs dominated by a limited number of spherical harmonic modes .the real potential of chorus will be realized for high - resolution , turbulent , multi - scale convection where its superior scalability and variable mesh refinement will prove invaluable .it can also be used for studying physical phenomena such as core convection and oblate stars that are challenging or even inaccessible to codes that use structured , spherical grids .furthermore , there is much potential for improvement in the efficiency of chorus ; we intend to implement an implicit time marching scheme and a p - multigrid method as well as local time stepping in the future , and to optimize the numerical algorithm for higher performance on heterogeneous ( cpu / gpu ) architectures that are very suitable for data structures of the sdm .

table [ tab : computational_effort ] . computational effort for the two benchmarks :

                                        jupiter benchmark           solar benchmark
  code                                  chorus        ash           chorus        ash
  resolution                            294,912 elements   -        307,200 elements   -
  dofs                                  18,874,368    3,750,030     19,660,800    2,907,000
  time step ( s )                       1.5           533           4             20
  core hours per time step              -             -             -             -
  iterations required to run 10 days    576,000       1,621         216,000       43,200
  core hours needed to run 10 days      43,380        11.4          17,352        230

we have developed a novel high - order spectral difference code , chorus , to simulate stellar and planetary convection in global spherical geometries . to our knowledge , the chorus code is the first stellar convection code that employs an unstructured grid , giving it unique potential to simulate challenging physical phenomena such as core convection in high and low - mass stars , oblate distortions of rapidly - rotating stars , and multi - scale , hierarchical convection in solar - like stars .the chorus code is fully compressible , which gives it advantages and disadvantages over codes that employ the anelastic approximation .
on the one hand , the hyperbolic nature of the compressible equations promotes more efficient parallel scalability than the ( elliptic ) anelastic equations .indeed , we demonstrated that the chorus code does achieve excellent strong scalability for intermediate - size problems extending to 12,000 cores .we expect even better scalability for higher - resolution problems .furthermore , the fully compressible equations are required to accurately capture the small - scale surface convection in solar - like and less massive stars where mach numbers approach unity and where the anelastic approximation breaks down . on the other hand , the cfl constraint imposed by acoustic waves places strict limits on the allowable time step for simulating deep convection in most stars and planets , where the mach number is much less than unity .we intend to address this constraint in the future by implementing implicit and local time stepping schemes .to verify the chorus code , we defined two benchmark simulations designed to bridge the gap between fully compressible and anelastic systems .this allowed us to compare chorus results to the well - established ash code which employs the anelastic approximation .the two benchmark cases were formulated to simulate convection in jupiter and the sun .metrics of physical quantities sensitive to the convective driving and structure such as the linear growth rate and the total ke agree to lowest order in , the stratification parameter upon which the validity of the anelastic approximation is based .mean flows exhibit larger variations ( 5 - 20% ) that may be attributed to flux imbalances at the sampling time .better agreement between the two codes could likely be achieved by running the simulations longer but full equilibration would require at least an order of magnitude more computing time .we did not believe this computational expense was warranted given the good agreement at the sampling time .the level of agreement is remarkable considering that the chorus and ash codes not only solve different equations ( compressible versus anelastic ) but also employ dramatically different numerical algorithms .thus , we consider the chorus code verified .future applications of chorus will focus on the applications discussed in section 1 , namely the interaction of convection , differential rotation , and oblateness in rapidly - rotating stars , core convection in high and low - mass stars , hierarchical convection in solar - like stars , and the excitation of radial and non - radial acoustic pulsations within the context of asteroseismology .we have already implemented a deformable grid algorithm which we are now using to model oblate stars .the next steps also include further development of chorus as a large - eddy simulation ( les ) code , with modeling of subgrid - scale ( sgs ) motions based either on explicit turbulence models or on the implicit numerical dissipation intrinsic to the sdm method .junfeng wang is funded by a newkirk graduate fellowship from the national center for atmospheric research ( ncar ) .chunlei liang would like to acknowledge the faculty start - up grant from the george washington university .ncar is sponsored by the national science foundation .s. h. saar , the activity cycles and surface differential rotation of single dwarfs , in : m. dikpati , t. arentoft , i. g. hernandez , c. lindsey , f. hill ( eds . ) , solar - stellar dynamos as revealed by helio- and asteroseismology : gong 2008/soho 21 , asp conference series , vol .
416 , asp , san francisco , ca , 2010 , pp . 375 - 384 .d. r. soderblom , b. f. jones , d. fischer , rotational studies of late - type stars . m34 ( ngc 1039 ) and the evolution of angular momentum and activity in young solar - type stars , astrophys .j. 563 ( 2001 ) 334 - 340 .h. a. mcalister , t. a. ten brummelaar , d. r. gies , w. huang , w. g. bagnuolo jr . , m. a. shure , j. sturmann , l. sturmann , n. h. turner , s. f. taylor , d. h. berger , e. k. baines , e. grundstrom , c. ogden , first results from the chara array .i. an interferometric and spectroscopic study of the fast rotator leonis ( regulus ) , astrophys .j. 628 ( 2005 ) 439 - 452 .m. s. miesch , j. r. elliott , j. toomre , t. c. clune , g. a. glatzmaier , p. a. gilman , three - dimensional spherical simulations of solar convection : differential rotation and pattern evolution achieved with laminar and turbulent states , astrophys .j. 532 ( 2000 ) 593 - 615 .z. j. wang , y. sun , c. liang , y. liu , extension of the sd method to viscous flow on unstructured grids , in : h. deconinck , e. dick ( eds . ) , computational fluid dynamics 2006 , springer berlin heidelberg , 2006 , pp . 119 - 124 .f. bassi , s. rebay , a high - order accurate discontinuous finite element method for the numerical solution of the compressible navier - stokes equations , journal of computational physics 131 ( 1997 ) 267 - 279 . m. s. miesch , p. a. gilman , m. dikpati , nonlinear evolution of global magneto - shear instabilities in a three - dimensional thin - shell model of the solar tachocline , astrophys .j. suppl .( 2007 ) 337 - 361 .
we present a novel and powerful compressible high - order unstructured spectral - difference ( chorus ) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets . the computational geometries are treated as rotating spherical shells filled with stratified gas . the hydrodynamic equations are discretized by a robust and efficient high - order spectral difference method ( sdm ) on unstructured meshes . the computational stencil of the spectral difference method is compact and advantageous for parallel processing . chorus demonstrates excellent parallel performance for all test cases reported in this paper , scaling up to 12,000 cores on the yellowstone high - performance computing cluster at ncar . the code is verified by defining two benchmark cases for global convection in jupiter and the sun . chorus results are compared with results from the ash code and good agreement is found . the chorus code creates new opportunities for simulating such varied phenomena as multi - scale solar convection , core convection , and convection in rapidly - rotating , oblate stars . spectral difference method , high - order , unstructured grid , astrophysical fluid dynamics
many social and natural systems have been well described by complex networks .complex networks with excitable local dynamics have attracted particularly great attention for their wide applications , such as epidemic spreads , chemical reactions , and biological tissues , among which neural networks are typical examples .complexity of network structures and excitability of local dynamics are two major characteristics of neural networks .oscillations in these networks determine rich and important physiological functions , such as visual perception , olfaction , cognitive processes , sleep and arousal .therefore , oscillations in neural networks and other excitable networks have been studied extensively .problems of pattern formation in these excitable systems call for further investigation , because early works on pattern formation focused on patterns in regular media .it is natural to ask what pattern formation looks like in complex networks , and whether there are some common rules in different types of networks .only very recently have turing patterns in large random networks been discussed by hiroya nakao et al . in the present paper we study another type of pattern , self - sustained oscillatory patterns in complex networks consisting of excitable nodes , which are important in physics , chemistry and biology .since each excitable node can not oscillate individually , there must exist some delicate structures supporting the self - sustained oscillations .so far , some concepts , such as recurrent excitation and central pattern generators , have been proposed to describe these structures . however , if networks consist of large numbers of nodes and random interactions , it is difficult to detect these structures . in previous papers , we proposed a method of dominant phase advanced driving ( dpad ) to study the phase relationship between different nodes based on oscillatory data .oscillation sources for self - sustained oscillations are identified successfully .however , the topological effects on the dynamics are not fully understood .the interplay between the topological connectivity and the network dynamics has become one of the central topics under investigation .the present paper aims to explore the mechanism of pattern formation in oscillatory excitable networks and unveil the topological dependence of the oscillations .this paper is organized as follows .section ii introduces the excitable networks of the br model .simulation results are provided in section iii , where center nodes and target waves are identified . in section iv , the skeletons of different oscillations are displayed to unveil the topological effects on network dynamics .
in section v results in previous sections are extended to networks with different sizes and degrees .section vi gives extensions to excitable scale - free networks .networks with fitzhugh - nagumo model as local dynamics are also discussed .the conclusions are given in the last section vii .we consider complex networks consisting of excitable nodes .the network dynamics is described as follows : br model is adopted as local dynamics , where parameters are properly set so that each node possesses excitable local dynamics .the adjacency matrix is defined by if node is connected with node and otherwise .coupling represents the total interaction on a given node from all its neighbor nodes .this form of coupling is used to ensure that any excited node can excite its rest neighbor nodes with proper values of and .other forms of coupling , which have similar effects , are also feasible , such as diffusive coupling .this type of interaction has been widely used in neural models and other excitable networks . during the simulations , different types of networksare generated and the connections between different nodes are bidirectional and symmetric .for simplicity , we study at first homogeneous random networks with an identical degree , i.e. each node interacts with an equal number of nodes randomly chosen .meanwhile , we assume that all nodes have identical parameters so that any heterogeneity in network patterns is not due to the topological inhomogeneity , but results from the self - organization in nonlinear dynamics . in the present paper we focus on the self - sustained periodic oscillations .the homogeneous random network studied is displayed in fig.1(a ) . with the parameters given ,the system has a large probability ( about ) to approach periodic oscillations from random initial conditions .moreover , different initial conditions approach different oscillations in most cases . for instance , we observed different oscillations within tests , and the other samples reached the rest state .the evolution of average signals for three different oscillations a , b , c is displayed in fig .these three oscillations have different periods ( , , with being periods of oscillations a , b , c , respectively ) . in figs .1(c ) , ( d ) and ( e ) spatial snapshots of oscillations a , b and c are plotted , respectively .all these patterns have seemingly random phase distributions , in which the structures supporting the oscillations are deeply hidden .we start our analysis from local dynamics of excitable networks .because each node is an excitable system , the individual node will stay at the rest state forever without perturbation .since there is no external pacemaker in the network , there must be some loops to support the self - sustained oscillations , where nodes can be repeatedly excited in sequence .therefore , it is natural to conclude that the topological loop structure of complex networks is crucial for the network oscillations .however , in complex networks there are extremely large numbers of loop sets ( for the network in fig .1(a ) with nodes and interactions , there are loop sets ) .a crucial question is which loop set plays the essential role for a given oscillation . 
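as a concrete illustration of the model just described , the sketch below builds a homogeneous random network ( every node with the identical degree k ) and integrates excitable local dynamics driven by the summed input from its neighbours . the baer - model parameters are elided in the extracted text , so fitzhugh - nagumo local dynamics ( also used later in the paper ) and a diffusive coupling ( mentioned above as a feasible alternative ) stand in ; all numerical values are illustrative , not the paper's .

```python
import numpy as np
import networkx as nx

def simulate(n=100, k=3, d=0.5, a=0.7, b=0.8, tau=12.5, tmax=500.0, dt=0.01, seed=0):
    """euler integration of excitable nodes coupled on a random regular graph."""
    rng = np.random.default_rng(seed)
    adj = nx.to_numpy_array(nx.random_regular_graph(k, n, seed=seed))  # identical degree k
    u = rng.uniform(-2.0, 2.0, n)            # random initial conditions, as in the paper
    v = rng.uniform(-1.0, 1.0, n)
    mean_signal = []
    for _ in range(int(tmax / dt)):
        coupling = d * (adj @ u - k * u)     # diffusive input from all neighbours of each node
        du = u - u**3 / 3.0 - v + coupling   # fast (activator) variable
        dv = (u + a - b * v) / tau           # slow recovery variable
        u, v = u + dt * du, v + dt * dv
        mean_signal.append(u.mean())         # average signal, cf. fig. 1(b)
    return np.array(mean_signal), adj
```

self - sustained oscillations show up as a persistent oscillation of the returned average signal , while runs that fall back to the rest state correspond to the decaying samples mentioned above .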
because nodes in the network are excited in sequence , all waves propagate forward along the shortest paths .the loops dominating the oscillations must obey this " shortest path " rule , which means the source loops should be as short as possible .furthermore , due to the existence of the refractory period , these loops must also be sufficiently large to maintain the recurrent excitation . here , the remaining problem is how to reveal these shortest loops .we study the above loop problem by making perturbations to each oscillation and observing the system s response .a few nodes randomly chosen are removed from the network at each test .( here removing a node means discarding all interactions of this node . ) in most cases the oscillation is robust .however , we find , surprisingly , that the oscillation is crucially sensitive to some specific nodes .these specific nodes for a given oscillation are defined as key nodes , among which a minimum number of nodes can be removed to suppress the oscillation . in figs.1(c ) , ( d ) and ( e ) different key nodes for oscillations a , b and c are displayed with large squares , respectively . both oscillations a and b can be suppressed by just removing one key node , as shown in fig.1(f ) .however , we can never suppress oscillation c by removing any single node .there are two pairs of key nodes displayed in fig .in order to terminate oscillation c ( see also fig.1(f ) ) , we have to remove two key nodes simultaneously , one from the pair ( ) and the other from the pair ( ) .the diverse behavior displayed in figs .1(c ) , ( d ) and ( e ) indicates that even though the parameter distributions and the node degrees are homogeneous in the network , the dynamical patterns have delicate and heterogeneous self - organized structures where different nodes play significantly different roles in the oscillations .we find further that all key nodes for these oscillations appear in directly interacting pairs . in each pair one node drives the other , i.e. , , and , .( the bidirectional link between nodes and is denoted by an arrowed link , if the interaction from node is favorable for exciting node from the rest state . ) considering the crucial influence of key nodes on the oscillations , we suggest that the function of the driven nodes is to excite the whole network , while the function of the driving ones is to keep their partners oscillating .thus these driven nodes ( for a , for b , and for c ) are regarded as center nodes for the oscillations while the driving ones are regarded as the drivers of the center nodes .an oscillation with centers is called an -center oscillation . both oscillations a and b are one - center oscillations , while oscillation c is a two - center oscillation . [ table1 ] number of center nodes for different oscillations in homogeneous random networks ( hrns ) .parameters ( ) are set the same as in fig . 1 , except constant ( for hrns with and for other networks ) .one thousand different networks are investigated with random initial conditions for the statistics in each column . the existence of key nodes and center nodes is general for periodic oscillations in excitable complex networks .we investigated eq .( 1 ) with random initial conditions for different networks and sampled stable periodic oscillations .the transient time for each oscillation depends on the network size . when the network size increases , the transient will be prolonged . moreover , the transient time is also affected by the type of the pattern .
generally speaking ,the more center nodes the pattern has , the longer the transient needs to be .when the oscillation reached stability , center nodes were identified .numbers of center nodes for most oscillations are listed in table i. for other oscillations remained , we did not make a further search , because identifying more than four center nodes is very computationally consuming .anyway , we find that most oscillations have self - organized structures with an extremely small number of center nodes .thus the features of oscillations a , b , and c can be identified as the typical behavior of self - sustained oscillations in excitable complex networks .because of the significant effects of center nodes on oscillations , we expected that the source loops of the oscillations must be around the center nodes .further study confirmed the expectation .we identified that there are just some well - organized loop structures around the center nodes to maintain the self - sustained oscillations .two principles are proposed for pattern formation in a given network oscillation .\(i ) waves propagate forward from center nodes to the whole network along the shortest paths .\(ii ) the shortest loops passing through both center nodes and their drivers play the role of oscillation sources and dominate the oscillation behavior . with these two principles we can clearly reveal oscillation sources ,illustrate wave propagation paths and unveil the topological effects on the oscillations . based on the first principle, we can demonstrate the oscillatory pattern for each oscillation according to a simple placing rule as follows . at first , place each center node at a certain position .second , if there is only one center node , locate all the other nodes around this center according to the distances ( shortest paths ) from it .however , if there are two synchronous centers , two clusters of nodes will exist , each around a center .the other nodes should select the cluster with the nearest " center node before the rearrangement . during the cluster selectionif a node has the same distances from both centers , it can be included to either cluster .this simple placing rule transforms all random patterns into well - behaved target waves. similar operation can be applied to oscillations with more centers .snapshots of oscillation a , b , and c in new order are displayed in figs .2(a ) , ( b ) and ( c ) , respectively , which are exactly the same as those in figs.1(c ) , ( d ) and ( e ) . in these figures surprisingly well - ordered target waves are observed , one - center target waves for oscillations a and b , and two - center target waves for oscillation c , which are in sharp contrast with the random phase distributions in figs .1(c ) , ( d ) and ( e ) .all nodes are driven by waves emitting from center nodes and the importance of the center nodes are demonstrated clearly .the recurrent excitation of the center nodes via the driving key nodes is the reason why the center nodes can keep oscillating to excite the whole network .it is instructive to observe these self - sustained target waves in oscillatory random networks and demonstrate how these waves self - organize . on the basis of these target patterns ,different oscillation sensitivities observed in fig . 
1 can be understood .first , due to the one - center target structure of figs .2(a ) and 2(b ) , we can definitely terminate the oscillation by removing a center node ( for a , for b ) , since the center node is the only wave source .the oscillation can also be suppressed by removing the driving node ( for a , for b ) because the driving node is the only driver of the center node , without which the center can no longer oscillate sustainedly .second , since oscillation c has a structure of two - center target waves , removing any single key node can not destroy the oscillation sources completely . both target centers ( or their drivers ) should be removed simultaneously to suppress this two - center oscillation .similarly , in order to terminate an oscillation with centers , centers ( drivers ) should be removed simultaneously . in fig .1(f ) effective suppression of given oscillations is displayed .however , how to create a given oscillation with high efficiency is still not clear .excitable networks , such as that in fig .1(a ) , have a huge number of attractors , each of which has a small basin of attraction .if we try to reach a given oscillation by random initial conditions we may need thousands or even millions of tests which are computationally consuming and practically unreasonable .however , when the center nodes and their drivers are identified we can recover a given oscillation with high efficiency by manipulating only a very few nodes . to create oscillation a ( b ) from the all - rest state we only need to initially stimulate single center node ( ) while the interaction from the center node to its driving key node ( ) is blocked during the initial excitation period of the center nodes .we find that the excitation activities propagate away from the center node , and then come back via the driving key node to reexcite the center node .then the system evolves autonomously to target pattern a ( b ) via the self - organized excitation propagation in the network .generally speaking , we can recover any given -center target pattern by initially stimulating centers with the interactions from these centers to their drivers blocked during the initial excitation periods of center nodes . in the following paper , this excitation procedure is briefly called -center node excitation , without additional remarks on the interaction modulations . in fig .2(d ) we present the evolution generated by one - node - excitation with the solid ( dash ) curve , which recovers oscillation a ( b ) asymptotically . in order to recover oscillation c , both center nodes and should be excited simultaneously .the creation of oscillation c is shown by the dotted curve in fig.2(d ) .based on principle ( ii ) , we can construct a skeleton and reveal the oscillation source for each oscillation by analyzing the network topology .the skeleton of a given oscillation means a subnetwork consisting of some short topological loops passing through both the center nodes and their drivers .topological effects on a network oscillation can be well unveiled based on the skeleton . in figs.3(a ) , ( b ) and ( c ) skeletons of oscillations a , b and c are displayed , respectively . in fig .3(a ) we display all topological loops with length , passing through the pair of key nodes ( , ) . 
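principle (ii) suggests a purely topological check: enumerate the short cycles that pass through a center node and its driver. a minimal networkx sketch, where the choice of the key pair and the length cutoff are placeholders rather than values from the text:

```python
import networkx as nx

def loops_through_pair(G, center, driver, max_len=8):
    """All cycles of length <= max_len containing the driver -> center link,
    i.e. closed paths center -> ... -> driver -> center with no repeated nodes."""
    loops = []
    for path in nx.all_simple_paths(G, source=center, target=driver, cutoff=max_len - 1):
        if len(path) > 2:                 # skip the trivial center-driver-center "loop"
            loops.append(path + [center])
    return sorted(loops, key=len)

G = nx.random_regular_graph(3, 60, seed=2)
center = 0
driver = next(iter(G.neighbors(center)))   # placeholder choice of a key pair
skeleton = loops_through_pair(G, center, driver, max_len=8)
for loop in skeleton[:5]:
    print("length", len(loop) - 1, ":", loop)   # shortest loops dominate the oscillation
```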
in fig .3(b ) the skeleton of oscillation b is plotted , consisting of loops with length passing through the key node pair ( , ) .an interesting difference between figs .3(a ) and 3(b ) is that the shortest loop in fig .3(a ) ( ) is much smaller than that in fig .3(b ) ( ) . in fig .3(c ) we show the skeleton of oscillation c consisting of loops with length , passing through the pair of nodes ( , ) or ( , ) .the skeleton supporting oscillation c consists of two clusters with .furthermore , for each oscillation under investigation the shortest loops displayed in the skeleton always have successive driving relationship . in figs .3(a)-(c ) , these successive driving shortest loops are indicated by arrows .these driving loops , supporting self - sustained oscillations of the center nodes , are regarded as the oscillation generators . the phenomenon in fig .1(b ) that oscillations a , b and c have different periods ( , , ) can be understood from these oscillation generators .it has been known that a pulse can circulate along a one - dimensional ( 1d ) loop consisting of excitable nodes .the period of the oscillation increases as the loop s length increases .since the shortest loop in each skeleton dominates the oscillation , we have the conclusion for . that s the reason why different oscillations may have different periods . the structures of skeletons in figs .3(a ) , ( b ) and ( c ) are greatly simplified in contrast with the original complex network in fig .they contain much less number of nodes , and reduce the original high - dimensional complex structures to various sets of 1d loops .it is importance to find these small skeletons which indicate many essential features of the network oscillations .we can efficiently modulate the oscillations just by analyzing these simple skeletons . in the following discussions the oscillation period is taken as a measurable quantity to demonstrate the oscillation modulations .at first we modify oscillation a by removing node . this operation changes oscillation a to oscillation . based on fig .3(a ) we can predict the network evolution after the modulation .first , although the shortest -node loop is destroyed , there are still some other loops containing center and its driver . the oscillation will be maintained .second , the new shortest loop among the remaining loops will emerge as a dynamical loop , which guarantees the recurrent excitation of center and maintains the network oscillation . because the length of the new shortest loop ( ) is , we expect that the modified oscillation must have a larger period .our predictions are confirmed .the skeleton of oscillation is shown in fig .3(d ) where the right loop ( marked by the arrowed loop ) actually emerges as the oscillation generator . andthe period of oscillation is indeed larger than that of oscillation a ( ) .similar operations are applied to oscillation b. oscillation is obtained by removing single node from oscillation b. analyzing the skeleton in fig .3(b ) we expect that this operation must prolong the original period to ( ) , for the new shortest loop has a length . in fig .3(e ) the skeleton of oscillation is displayed as expected .then we find . 
in fig .3(f ) periods of 1d oscillatory loops with different sizes are displayed with white squares in the solid curve .periods of network oscillations a , b , and are also displayed with red ( dark ) circles .both sets of periods coincide well .it demonstrates that simplified skeletons indicate essential features of complicated patterns , and the shortest loops passing through both the center nodes and their drivers indeed dominate the dynamics of the network oscillations. the modulation diversity can be much richer for oscillations with more centers .different modulations are applied to oscillation . in the subsequent paragraphs , responses of oscillation c to the removal of different nodes , ( i )node , ( ii ) node , ( iii ) nodes and , will be studied .we find that all simulations of the network oscillations fully coincide with predictions from the simple skeleton in fig .\(i ) if center node is removed , the network oscillation must change , i.e. from oscillation c to . analyzing the skeleton of oscillation c, the right sub - skeleton must be destroyed by removing its center while the left sub - skeleton is remained intact to support oscillation . since only left target center works , the original two - center target pattern must be transformed to a one - center target pattern , and the nodes in the original right cluster must move to the left cluster .the left cluster will grow from the boundary with nodes migrating from the destroyed cluster .we present the target pattern of oscillation in fig.4(a ) by simulation , and find a pattern the same as we predicted . in fig .4(b ) we plot the skeleton of oscillation which is nothing but the left sub - skeleton in fig .( ii ) if center node is removed from oscillation c , the left sub - skeleton in fig.3(c ) is destroyed .the resulting oscillation is denoted by .we expect that node will work as the only center and the shortest loop in right sub - skeleton will work as the oscillation generator . in figs.4(c ) and 4(d ) we observe that all predictions are fully confirmed .( iii ) if two nodes are removed simultaneously , node from the left cluster and node from the right one , oscillation is generated .we predict from fig .3(c ) that the two - center target pattern should be maintained ( since the functions of two centers are preserved ) and the skeleton of can be deduced from fig.3(c ) with the shortest loops in both clusters destroyed .then we stimulate oscillation and plot a snapshot by arranging the nodes in order .two - center target waves are verified in fig .the skeleton of oscillation is displayed in fig .4(f ) . both figs . 4(e ) and 4(f )fully confirm the above predictions .meanwhile , the above operations have adjusted the oscillation periods . in case ( i ) since the oscillation generator of remains the same as that of oscillation c , the resulting period should remain approximately the same as .for oscillation in case ( ii ) , the new oscillation generator ( the arrowed shortest loop ) in fig .4(d ) has length , and then period should increase to about by comparing with 1d loop data in fig .4(g ) . 
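the statement that the period of a pulse circulating on a one-dimensional loop grows with the loop length can be illustrated with the same toy automaton used above (again an assumption, not the dynamics of eq. (1)):

```python
import networkx as nx

def ring_period(N, refractory=3):
    """Period of a single pulse circulating on a ring of N excitable nodes
    (three-state automaton: 0 rest, 1 excited, 2..refractory+1 refractory)."""
    G = nx.cycle_graph(N)
    state = {n: 0 for n in G.nodes}
    state[0] = 1                                    # one excited node ...
    for k in range(1, refractory + 1):
        state[(-k) % N] = 1 + k                     # ... with a refractory tail behind it
    fires = []
    for t in range(6 * N):
        new = {}
        for n in G.nodes:
            s = state[n]
            if s == 0:
                new[n] = 1 if any(state[m] == 1 for m in G.neighbors(n)) else 0
            elif s <= refractory:
                new[n] = s + 1
            else:
                new[n] = 0
        state = new
        if state[0] == 1:
            fires.append(t)
    return fires[1] - fires[0] if len(fires) > 1 else None

for N in (8, 12, 16, 24):
    print(N, ring_period(N))     # the period grows with the loop length
```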
since loop nodes and are removed , oscillation has the shortest source loops of length ( arrowed loops in the left cluster in fig .thus period should be close to ( see fig .4(g ) ) , which is considerably larger than .4(g ) numerical results of the modulated networks are compared with those of 1d loops .both sets of data agree well with each other .it is amazing that by removing node we dramatically change the oscillation pattern while keeping the period almost unchanged .in contrast , by removing two nodes ( , ) we keep the two - center target pattern while largely slowing down the oscillation .all these seemingly strange responses can be well explained with the skeleton in fig.3(c ) .so far , we discussed oscillations a , b , c in the given network fig.1(a ) in detail .however , oscillation patterns in a complex network are much more abundant . the choice of key nodes and related source loops depends on initial conditions , because the basins of attraction of different attractors may be very complicated in nonlinear dynamic systems . for a homogeneous random network all nodes are topologically equivalent and each node may play a role of a center node or the driver of the center node .the only condition is that the shortest loops passing through both the center node and the driver must be large enough to guarantee the recurrent excitation .till now we focused on eq .( 1 ) with and .all characteristics observed in this particular case can be extended to networks with different sizes and degrees .here we study another example of eq .( 1 ) with and .the network structure is displayed in fig .5(a ) . with a certain initial conditionwe observe an oscillation with a snapshot shown in fig .this oscillatory pattern has key nodes , which are displayed with squares in fig .four centers ( , , , ) are identified .removing four key nodes simultaneously from four different sets , i.e. one node from each set , we can suppress this oscillation .this process is displayed by the solid curve in fig .different from oscillations a , b and c , in fig .5(b ) only three sets of key nodes appear in pairs ( such as ( , ) , ( , ) , ( , ) ) , while key node appears without any partner .the reason is following .since each node has a degree , a center node may have a single dynamical driver ( such as , , ) , or multiple drivers ( such as node in fig.5(d ) , having two drivers and ) .if a center node has only one driver , the driver node also becomes a key node for controlling the center node .however , when the center node has multiple drivers , removing one of these drivers can not terminate the function of the center .thus this center node does not have a partner node for the oscillation suppression .similar to fig .2(d ) we can generate oscillatory pattern in fig .5(b ) from the all - rest state by initially stimulating the four centers ( , , , ) with interactions from these centers to their drivers blocked during the initial excitation periods of the center nodes .time evolution of this oscillation generation is shown in fig .5(c ) with the dotted curve . in fig .5(d ) we show exactly the same snapshot as that in fig.5(b ) with all nodes rearranged in four clusters according to their distances from different centers , i.e. 
, each node chooses the cluster with the nearest " center node andthen it is placed in the selected cluster according to the distance from the center .different sizes of four clusters result from the asynchronous excitation of different centers .if an oscillatory pattern has multiple centers , each center emits excitation waves and controls a cluster of nodes .a node will belong to the cluster if the excitation wave from the center reaches this node first in comparison with the other centers .therefore , if all centers have synchronous excitation any given node is controlled by the nearest center as we did in fig .if multiple centers are not synchronous , i.e. , they are excited at different times , the measurement of the distance should be modified by counting the excitation time differences of various centers . in case of oscillation , four centers are excited at slightly different times .specifically , in each round node is excited first , nodes and have a single - step delay ( one - step here means , with being the oscillation period and being the number of nodes in a single wave length ) , while node has a two - step delay. then the nearest " center means the center node with the shortest distance among ( , , , ) , while ( , , , ) being the actual topological distances from centers ( , , , ) to the given node .this method of distance measurement is applied to all the patterns where more than one centers exist . with this arrangementwe find that the seemly random phase distribution in fig .5(b ) is actually a well - behaved four - center target wave pattern .all the modulations to oscillation c shown in fig . 4 can be applied to oscillation in fig .for instance , by removing center we can transform the original four - center target waves to three - center waves with centers , and .all nodes migrating between different clusters after the modulations are also displayed by triangles in fig .5(e ) . in figs .5(f ) and 5(g ) we removed two center nodes ( , ) and three center nodes ( , , ) , respectively .two - center and one - center target patterns are found , where all the remaining centers emit target waves .all these modulation results show the generality of two principles .in the previous discussions we consider only homogeneous random networks where all nodes have the same degree . both principles ( i ) and( ii ) can be extended to erds - rnyi ( er ) networks and scale - free ( sf ) networks which are inhomogeneous in topological structures .results in these networks are similar .it has been known that functional networks of the human brain exhibit scale - free properties . in fig .6(a ) we present an example of sf network with , .the size of each node is proportional to the nature logarithm of its degree . for this networkwe perform tests from different random initial conditions and find self - sustained periodic oscillations . among these oscillatory patterns we identify oscillations with single center , oscillations with two centers and oscillation with three centers .the statistics for different networks is also listed in table i. these results confirm that the existence of a small number of center nodes is also popular in inhomogeneous networks . 
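the delay-corrected placing rule described here amounts to assigning every node to the center whose wave reaches it first. a minimal sketch, with the center list and the per-center excitation delays chosen arbitrarily for illustration:

```python
import networkx as nx

def assign_clusters(G, centers, delays=None):
    """Assign every node to the center whose wave reaches it first.
    delays[c] is the (integer) excitation delay of center c, in wave steps."""
    delays = delays or {c: 0 for c in centers}
    dist = {c: nx.single_source_shortest_path_length(G, c) for c in centers}
    clusters = {c: [] for c in centers}
    for n in G.nodes:
        # ties are broken arbitrarily, matching the "either cluster" rule in the text
        best = min(centers, key=lambda c: dist[c].get(n, float("inf")) + delays[c])
        clusters[best].append(n)
    return clusters

G = nx.random_regular_graph(3, 100, seed=3)
centers = [0, 7, 21, 42]                       # placeholder center nodes
clusters = assign_clusters(G, centers, delays={0: 0, 7: 1, 21: 1, 42: 2})
print({c: len(v) for c, v in clusters.items()})
```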
in figs .6(b ) and ( c ) we present two snapshots of different oscillations ( one - center oscillation and two - center oscillation ) from different initial conditions .the phase distributions seem complicated and random .however , some key nodes and center nodes for the oscillations are also identified ( one pair of key nodes for oscillation and two pairs for oscillation ) .the given oscillations can be suppressed ( fig .6(d ) ) and created ( fig .6(e ) ) by simply modulating the center nodes . in figs. 6(f ) and ( g ) we plot exactly the same snapshots as those in figs .6(b ) and ( c ) , respectively . with the placing rule, the random phase distributions of oscillations and can be rearranged to well - behaved one - center target waves ( fig .6(f ) ) and two - center target waves ( fig .6(g ) ) , respectively .the skeleton of oscillation is shown in fig .6(h ) , based on which we can make oscillation modulations as we did in fig .the only difference is that due to the highly heterogeneity , there are many short loops passing through the pair of key nodes . in the skeleton shown in fig .6(h ) , only the shortest loops with are demonstrated .destroying any of the shortest loop will not significantly change the period of the oscillation , for the remaining shortest loops still have a length .so far our investigation has been performed in networks with br model as local dynamics .actually , the principles can be also applied to other excitable systems .here we study fitzhugh - nagumo ( fhn ) model which has been used for describing the dynamics of neural cells .complex networks of fhn nodes with diffusive couplings are described as follows , in fig .7(a ) we show a homogeneous random network under investigation with , . in figs .7(b)-7(h ) we do the same as figs .6(b)-6(h ) , respectively , with model eq .( 2 ) and network fig .7(a ) considered .apart from the skeleton ( fig .7(h ) ) of the one - center oscillation , the skeleton of the two - center oscillation is also demonstrated in fig .two clusters of loops are displayed .we find that all conclusions derived from figs .2 - 6 are also applicable to fig . 7 , though the local dynamics and the coupling form are considerably different from those in eq .( 1 ) . moreover , the conclusions do not depend on the specific parameters given in eqs .( 1 ) and ( 2 ) . 
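equation (2) and its parameters are not reproduced in this extract, so the sketch below uses a generic diffusively coupled fitzhugh-nagumo network in a standard excitable parameter regime; it only illustrates the kind of simulation involved, and whether a self-sustained oscillation survives depends on the coupling strength, the graph and the initial kick:

```python
import numpy as np
import networkx as nx

def fhn_network(G, T=2000.0, dt=0.05, D=0.5, a=0.7, b=0.8, eps=0.08, seed=0):
    """Euler integration of diffusively coupled FitzHugh-Nagumo nodes:
       du_i/dt = u_i - u_i^3/3 - v_i + D * sum_j A_ij (u_j - u_i)
       dv_i/dt = eps * (u_i + a - b * v_i)
    (a generic excitable parameter set, not necessarily that of Eq. (2))."""
    A = nx.to_numpy_array(G)
    L = A - np.diag(A.sum(axis=1))          # graph Laplacian implements (u_j - u_i) coupling
    rng = np.random.default_rng(seed)
    n = G.number_of_nodes()
    u = -1.2 + 0.1 * rng.standard_normal(n)  # start near the rest state ...
    v = -0.6 + 0.1 * rng.standard_normal(n)
    u[rng.choice(n, size=3, replace=False)] = 1.5   # ... and kick a few nodes
    trace = []
    for _ in range(int(T / dt)):
        du = u - u**3 / 3 - v + D * (L @ u)
        dv = eps * (u + a - b * v)
        u, v = u + dt * du, v + dt * dv
        trace.append(u.mean())
    return np.array(trace)

G = nx.random_regular_graph(3, 200, seed=4)
m = fhn_network(G)
print("late-time spread of the mean field:", m[-2000:].std())  # > 0 for a sustained oscillation
```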
as long as connected nodes are excited in sequence, principles (i) and (ii) remain applicable. in this paper we have studied pattern formation in oscillatory complex networks consisting of excitable nodes. well-organized structures, including center nodes and skeletons, are revealed behind seemingly random patterns. two simple principles are proposed: well-behaved target waves propagate from the center nodes along the shortest paths, and the short loops passing through both the center nodes and their drivers dominate the network oscillations. the existence of target waves with well-defined centers in random networks may provide new insight into pattern formation in complex networks. moreover, the discovery of skeletons improves our understanding of the crucial topological effects on network dynamics. based on the mechanism revealed, we are able to suppress, create and modulate the oscillatory patterns by manipulating only a few nodes, and all of these modulations can be predicted by analyzing the skeletons. these findings apply to homogeneous random networks with different sizes and degrees, to inhomogeneous networks, and to networks with different excitable local dynamics, such as the fhn model. in the present paper we have considered periodic self-sustained oscillations in excitable complex networks; the extension to nonperiodic and even chaotic oscillations is left for future work. the ideas and methods of the present work are expected to be applicable to a wide range of fields where oscillatory behavior of excitable complex networks is involved, especially neural systems. although we do not consider specific neural processes here, we hope that our results will be useful for the investigation of complicated neural functions, since oscillatory behavior, excitable dynamics and complexity of interactions are crucially important for the functioning of neural systems. this work was supported by the national natural science foundation of china under grant no. 10975015, the national basic research program of china (973 program) (2007cb814800), and the science foundation of baoji university of arts and sciences under grant no. zk1048.
oscillatory dynamics of complex networks have recently attracted great attention. in this paper we study pattern formation in oscillatory complex networks consisting of excitable nodes. we find that a few center nodes and small skeletons exist for most oscillations. complicated and seemingly random oscillatory patterns can be viewed as well-organized target waves propagating from center nodes along the shortest paths, and the shortest loops passing through both the center nodes and their driver nodes play the role of oscillation sources. by analyzing simple skeletons we are able to understand and predict essential properties of the oscillations and to modulate them effectively. these methods and results give insight into pattern formation in complex networks and provide suggestive ideas for studying and controlling oscillations in neural networks.
there has been much interest in the study of the collective dynamics of chaotic systems subjected to global interactions .such systems arise naturally in the description of arrays of josephson junctions , charge density waves , multimode lasers , neural dynamics , evolutionary , chemical and social networks .the globally coupled map ( gcm ) lattice constitutes a prototype model for such global - coupling dynamics .it has recently been argued that gcm systems yield universal classes of collective phenomena .specifically , a gcm system can exhibit a variety of collective behaviors such as clustering ( i.e. , the formation of differentiated subsets of synchronized elements in the network ) ; non - statistical properties in the fluctuations of the mean field of the ensemble ; global quasiperiodic motion ; and different collective phases depending on the parameters of the system .it has been shown that a gcm system is closely related to a single map subjected to an external drive and that this analogy may be used to describe the emergence of clusters in gcm systems in geometrical terms .in particular , the phenomenon of clustering is relevant as it can provide a simple mechanism for segregation , ordering and onset of differentiation of elements in many physical and biological systems .in addition to gcm systems , dynamical clustering has also been found in a globally coupled rssler oscillators , neural networks , and coupled biochemical reactions .the interest in this phenomenon has recently grown , since dynamical clusters have been observed experimentally in an array of electrochemical oscillators interacting through a global coupling . in this paper , we investigate the process of cluster formation in general globally coupled map systems by focusing on the dynamics of their global coupling functions . in most studies on gcm systems , the mean field of the network has been used as the global coupling function . here , we study gcm systems subjected to different global coupling functions and show how they can be analyzed under a common framework .we investigate how the distribution of elements among a few clusters and their periodicities depend on the functional form of the global coupling .section ii contains a description of the dynamics of different global coupling functions in gcm systems and a calculation of the possible periodicities and cluster sizes when two clusters emerge in these systems .the driven map analogy is employed in sec .iii to interpret the clustering behavior of gcm systems . in sec .iv the dynamical properties of periodic clusters in systems exhibiting a constant global coupling are predicted ; and for a particular family of global coupling functions , the stability condition for these clustered states is derived in an appendix .conclusions are presented in sec . v.consider a general globally coupled map system where gives the state of the element at discrete time ; is the size of the system ; is the coupling parameter ; describes the ( nonlinear ) local dynamics , which in the present article is chosen to be the quadratic map ; and is the global coupling function . 
we shall consider a general class of global coupling functions of variables such that , ; that is , is assumed to be invariant to argument permutationsthis property of the coupling function ensures that , at any time , each element of the globally coupled system is subjected to the same influence of the coupling term .some examples of coupling functions belonging to this class are the first two examples correspond to forward and backward mean field coupling , respectively , and they have been widely used in gcm studies .the third global coupling function is the usual dispersion or mean square deviation of variables , and it may describe systems whose elements do not interact when they are synchronized .this kind of global interaction might be relevant in some biological or social systems where the members of a community are driven by their deviations from the mean behavior .the last example is the geometric mean .this type of multiplicative coupling occurs , for instance , in a system of sequential amplifiers where the gain of element is a function of the magnitude of its state , and is proportional to the total gain of the system .many statistical functions of variables share the property of invariance under argument permutations and they could as well be taken as global coupling functions in gcm systems given by eq.([gcm ] ) .for some range of its parameters the gcm system in eq.([gcm ] ) reaches an asymptotic collective behavior characterized by the segregation of the elements into clusters , each exhibiting a period , where the cluster has a number of elements , with .the fraction of elements in the cluster is .the evolution of the cluster may be described by a variable which gives the common state of the elements belonging to this cluster at time . the periodic orbit adopted by the state of the clustercan be expressed as a sequence of values ] of the respective coupling functions are shown in fig .2(a ) as varies , giving rise to a curve in each case .note that each function possesses period - two orbits only for a limited range of the fraction .figure 2(b ) is a magnification of fig .2(a ) which shows that the dynamics of the backward and the mean field coupling functions become equal for ; i.e. , when the two clusters have equal sizes . in this case , both coupling functions reach the constant value .notice that the dispersion coupling function , , only displays states with ; that is , even when the two clusters in period two may have different sizes , this particular global coupling always reaches a stationary value .the curves for the other coupling functions are symmetrical with respect to the diagonal in fig .2(b ) , which they cross for . on the diagonal ,the coupling functions are constant and the two clusters evolve out of phase with respect to each other ( sec .note also that the different global coupling functions perform a period - two motion only on a restricted region of the plane .it will be shown in sec .iii that period - two orbits of any permutable coupling function will fall within the dashed contour in fig .2 . in general , a coupling function of a gcm system in a collective state of two clusters can reach various asymptotic periodic orbits for appropriate initial conditions . 
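these permutable couplings are easy to compare numerically. the sketch below assumes the usual gcm form x_{n+1}(i) = (1 - eps) f(x_n(i)) + eps H(x_n) with the quadratic local map f(x) = 1 - a x^2; the parameter values and the cluster-counting tolerance are illustrative, and the number of clusters found depends on them and on the initial condition:

```python
import numpy as np

def f(x, a=1.7):                 # quadratic local map (parameter value is an assumption)
    return 1.0 - a * x**2

couplings = {
    "forward_mean_field":  lambda x: f(x).mean(),
    "backward_mean_field": lambda x: x.mean(),
    "dispersion":          lambda x: x.std(),              # rms deviation from the mean
    "geometric_mean":      lambda x: np.exp(np.log(np.abs(x) + 1e-12).mean()),
}

def iterate_gcm(H, N=256, eps=0.2, steps=5000, seed=0):
    """x_{n+1}(i) = (1 - eps) * f(x_n(i)) + eps * H(x_n)  (assumed GCM form)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=N)
    for _ in range(steps):
        x = (1.0 - eps) * f(x) + eps * H(x)
    return x

def count_clusters(x, tol=1e-6):
    """Number of distinct synchronized groups; a count close to N means no clustering."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

for name, H in couplings.items():
    x = iterate_gcm(H)
    print(f"{name:>20s}: {count_clusters(x)} cluster(s)")
```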
each fig .3(a ) to 3(d ) shows the regions on the space of parameters for which a coupling function of a gcm in a two - cluster state displays different periodic motions .the local parameter is fixed at .figures 3(a ) and 3(b ) correspond to the backward and forward mean field coupling , respectively .note the very different distributions of periodic regions for the coupling functions in figs .3(a ) and 3(b ) .it should be noticed that , besides the collective periodic states for two clusters shown in figs .3(a)-3(d ) , there can exist other states in a gcm system consisting of more than two periodic clusters for the same values of the parameters and , but corresponding to different initial conditions .the inverse problem of determining the global coupling function in experimental systems is relevant since in general the specific functional form of the acting coupling is not known .this can be a complicated problem because , in addition , the exact form of the local dynamics may not be extracted in most situations .however , if the local dynamics is known some insight on the function of a globally coupled system can be gained within the framework presented here .for example , in the case of a dynamical system showing two period - two clusters with partition , the resulting asymptotic orbit ] can be drawn as a function of on the plane , and compared with curves ] .the analogy between a gcm system and a driven map arises because in the former system ( eq . ( [ gcm ] ) ) all the elements are affected by the global coupling function in exactly the same way at all times , and therefore the behavior of any element in the gcm is equivalent to the behavior of a single driven map ( eq . ( [ driven ] ) ) with and initial condition .additionally , if a gcm system reaches a clustered , periodic collective state , its corresponding coupling function follows in general a periodic motion .thus the associated driven map ( eq . ( [ driven ] ) ) with a periodic drive display a behavior similar to that of an element belonging to a periodic cluster in the gcm system .in particular , periodic drives resulting in periodic orbits of in eq .( [ driven ] ) may be employed to predict the emergence of clustered , periodic states in a gcm ( eq .( [ gcm ] ) ) , regardless of the specific functional form of the global coupling and without doing direct simulations on the entire gcm system .the driven map is multistable ; i.e. , there can exist several attractors for the same parameter values and . specifically , for a given periodic drive ] .the correspondence between a gcm system ( eq .( [ gcm ] ) ) in a state of clusters with period and its associated driven map ( eq . ( [ driven ] ) ) can be established when and . using this analogy , the main features in fig .2 can now be explained . 
in terms of a driven map subjected to a period - two drive ] and ] of permutable coupling functions in gcm systems given by eq .( [ gcm ] ) with and displaying two clusters in period two will fall on this bounded region of the plane .equivalently , a collective state of two clusters in period two can emerge in a gcm system only if its global coupling function has an orbit ] and the parameters and .thus for fixed and , and a given , we have .the main point is that , an equivalence between a gcm system eq .( [ gcm ] ) in a two - cluster , period - two state , and an associated driven map , eq .( [ driven ] ) , with a period - two drive occurs when the following conditions are fulfilled eqs .( [ teta1])-([teta2 ] ) constitute a set of two nonlinear equations for and , for a given .the solution ] and cluster orbits =[\bar{s}_1(1),\bar{s}_2(1)] ] .the succession of solutions ] to eqs .( [ teta1])-([teta2 ] ) only for an interval of .therefore , the curves in fig .2 can , in principle , be calculated _ a priori _ by using an associated driven map and just the specific functional form of in each case .the range of possible cluster sizes , described by the values of the fraction for which exist solutions to eqs .( [ teta1])-([teta2 ] ) , can also be predicted by this method .similarly , the regions of period two in figs .3(a)-3(d ) can be obtained by varying the parameter and calculating the interval of for which eqs .( [ teta1])-([teta2 ] ) have solutions .another simple clustered collective state in gcm systems occurs when the coupling function remains constant in time , i.e. , .this behavior may take place in a gcm system with a permutable coupling function when clusters , each having elements and period , are evolving with shifted phases in order to yield a constant value for .that is , if the periodic orbits of identical - size clusters are cyclically permuting in time , the resulting becomes constant .for those collective states , the behavior of any of such clusters in the gcm system can be emulated by an associated driven map subjected to a constant forcing . in the case of a gcm displaying two equal size clusters in period two , this situation corresponds to the intersection of with the diagonal in fig .the cluster orbits are then related as =[a , b] ] . on the other hand ,the associated driven map with has a unique asymptotic period - two orbit $ ] on a range of , where and are functions of .the associated coupling function also simplifies in such case . for the reduced, two - cluster couplings in eqs ( [ cluster1])-([cluster2 ] ) , the corresponding associated coupling functions become \\\theta_{\delta x}(\alpha,\beta ) & = & \frac{1}{2}|\alpha-\beta| \\ \label{rteta2 } \theta_{\bar{x}}(\alpha,\beta ) & = & |\alpha|^{1/2}|\beta|^{1/2}. \end{aligned}\ ] ] then eqs .( [ teta1])-([teta2 ] ) with reduce to the single equation which can be seen as an equation for , for given values of the parameters and .the solution of eq .( [ redteta ] ) provides a complete description of the gcm state since then and .figure 4 shows the bifurcation diagram of , eq .( [ driven ] ) , as a function of the constant drive up to period two , with fixed parameters and .the fixed point region in this diagram corresponds to one stationary cluster ( i.e. 
, a synchronized collective state ) in the gcm , eq .( [ gcm ] ) , with constant .the period - two window corresponds to the values and adopted by the driven map on this range of .once and are known from the bifurcation diagram , the function associated to any global coupling function in a gcm can be readily obtained , assuming that the gcm is in a state of two equal size clusters , evolving out of phase with respect to each other . in figure 4 ,the functions associated to the four global couplings in eqs.([coupling1])-([coupling4 ] ) with fixed are shown as function of . as stated above ,the solutions to eq .( [ redteta ] ) correspond to states in gcm systems with a coupling function reaching a stationary value .thus , the intersections of the curves with the diagonal in fig .4 give all the possible states of gcm systems that maintain a constant , either with one stationary cluster ( if the intersection occurs on the fixed point window of the bifurcation diagram of ) or with two clusters in period two ( if the intersection occurs on the period two window of the diagram ) .note that both the backward and the forward mean field couplings have the same two - cluster , period - two solution , but these couplings have different functional dependence on the constant drive .the coincidence of the couplings and for was already seen in fig .similarly , the geometric mean coupling gives only one two - cluster , period - two solution at .in contrast , the dispersion global coupling , , has three intersections with the diagonal : one corresponds to the synchronized stationary state in the associated gcm , with , and the other two correspond to different clustered states of the gcm , each consisting of two equal size clusters in period two , with and , respectively .all the states predicted by the intersections of the different with the diagonal in fig .4 , except one , are readily found in simulations on the corresponding gcm systems for appropriated initial conditions in each case .actually , for a gcm with the coupling , the predicted two - cluster , period - two state with is unstable : it is never achieved in simulations on the gcm system , even when the initial conditions are chosen very close to that state .what is observed , instead , is the evolution of the gcm system towards either the stationary one cluster state with or the two - cluster , period - two state with .thus , in addition to being predicted by the solutions of the equation , the observed clustered states of a gcm displaying constant coupling must be stable , which implies some stability condition on the solutions. it can be shown ( see appendix ) that for coupling functions satisfying , the condition at the intersection with the diagonal implies that the corresponding solution is unstable .this is the case of the global coupling .note that the solution at is the only one for which in fig .4 and therefore it is unstable , independently of the cluster fraction . for the coupling ,a stability analysis of states consisting of two or three clusters in period three has been performed by shimada and kikuchi .however , the simple criterium for instability does not apply for . 
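the constant-drive construction can be reproduced directly: scan the constant drive, record the asymptotic orbit (alpha, beta) of the driven map, build the associated coupling theta(alpha, beta), and look for crossings with the diagonal theta = phi. the sketch below assumes the quadratic local map and the backward mean-field coupling with two equal-size clusters; the crude grid search stands in for a proper root finder:

```python
import numpy as np

def f(x, a=1.7):
    return 1.0 - a * x**2            # quadratic local map (assumed parameter)

def driven_orbit(phi, eps=0.2, a=1.7, transient=2000):
    """Asymptotic orbit of the driven map x_{n+1} = (1 - eps) f(x_n) + eps*phi
    under a constant drive phi; returns (alpha, beta, is_period_two_or_less)."""
    g = lambda x: (1 - eps) * f(x, a) + eps * phi
    x = 0.1
    for _ in range(transient):
        x = g(x)
    alpha = x
    beta = g(alpha)
    periodic = abs(g(beta) - alpha) < 1e-9    # fixed point or period-2 orbit
    return alpha, beta, periodic

eps = 0.2
for phi in np.linspace(-1.0, 1.0, 2001):
    alpha, beta, ok = driven_orbit(phi, eps)
    if not ok:
        continue                               # higher period or chaos: ignored here
    theta = 0.5 * (alpha + beta)               # associated backward mean-field coupling,
                                               # two equal-size clusters (fraction 1/2)
    if abs(theta - phi) < 5e-4:                # crude grid search for theta(phi) = phi
        kind = "fixed point" if abs(alpha - beta) < 1e-6 else "two period-2 clusters"
        print(f"phi = {phi:+.4f} -> ({alpha:+.4f}, {beta:+.4f})  [{kind}]")
```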
constant coupling functions may also occur in gcm systems with different cluster sizes ; that is the case of a gcm possessing dispersion global coupling and displaying two clusters with any partition , as seen in fig .figure 5 shows the associated function with fixed as a function of the constant drive for several values of the fraction .there exist a critical fraction bellow which only one solution corresponding to one stationary cluster , i.e. , synchronization , can appear in the gcm system . above this critical fraction , two states ,each consisting of two clusters in period two , are additionally predicted by the solutions of .these solutions emerge as a pair : one solution is always unstable since that is actually observed in simulations on the corresponding gcm system . for the fraction , there is a two - cluster , period - two solution that is marginally stable .most studies on gcm lattices and other globally coupled systems have assumed mean field coupling .however , other forms of global coupling may be relevant in some situations .we have analyzed , in a general framework , the clustering behavior in gcm systems subjected to permutable global coupling functions by considering the dynamics of the coupling functions .we have shown that different gcm systems can be represented by the orbits of their coupling functions on a common space .for simplicity , only collective states in gcm systems consisting of two clusters in period two were considered .we have shown that the functional form of the global coupling in a gcm system determines the periodicity of its motion and the possible distributions of elements among the clusters .the existence of a well defined interval of possible partitions among two clusters , out of which no clusters emerge in the system , has been observed experimentally . in experimental or natural situations where clustering occurs , the specific functional form of the coupling is in general unknown. the present study may be useful to obtain insight into the acting global coupling function in practical situations .we have employed a previously introduced analogy between a gcm system and a single externally driven map in order to give a unified interpretation of the observed clustering behavior of the gcm systems considered in this article .a periodically driven map with local periodic windows can display multiple asymptotic periodic responses which are similar to cluster orbits in a gcm system with permutable .this analogy implies that dynamical clustering can occur in any gcm system with a permutable coupling function and periodic windows in the local dynamics .the presence of windows of stable periodic orbits in the local map is essential for the emergence of clusters .in fact , no clustering is observed in a gcm system if the local maps do not have periodic windows ; what is observed instead is synchronization or nontrivial collective behavior , i.e. 
, an ordered temporal evolution of statistical quantities coexisting with local chaos .the associated coupling function derived from the driven map analogy is particularly simple to use in the prediction of clustered states in gcm systems with two equal size clusters and exhibiting constant global coupling .the associated coupling function can be directly constructed from the bifurcation diagram of the steadily driven map .the cluster states are obtained from the solutions of eq .( [ redteta ] ) and can be represented graphically in a simple way .although eq .( [ redteta ] ) has been used for the case of two clusters in period two , it can also be applied to find gcm states consisting of equal size clusters in period .in addition , the associated coupling function carries information about the stability of the predicted two - cluster states . in particular , for the family of coupling function satisfying property ( [ even ] ) , the stability condition of clustered states in a gcm with constant is directly given by the slope .the example of a gcm system with dispersion coupling function reveals that a constant coupling can also be maintained by clusters of different sizes .our method based on eq .( [ redteta ] ) also predicts successfully the cluster states in these situations .the driven map analogy suggests that the emergence of clusters should be a common phenomenon which can be expected in various dynamical systems formed by globally interacting elements possessing stable periodic orbits on some parameter range of their individual dynamics .the examples presented here show that progress in the understanding of the collective behavior of globally coupled systems can be achieved by investigating their relation to a driven oscillator .this work has been supported by consejo de desarrollo cientfico , humanstico y tecnolgico of the universidad de los andes , mrida , venezuela .consider a general gcm system with any global permutable coupling function .suppose that the system reaches a state consisting of two clusters .then the dynamics of the system reduces to two coupled maps eqs .( [ map1])-([map2 ] ) , i.e. , for the local dynamics , one gets where and consider now the dispersion coupling function , eq .( [ coupling3 ] ) .this coupling belongs to the family of functions of variables where is any even function of its argument .it can be straightforwardly shown that this family of functions possesses the property therefore , in a two cluster state , any in this family of global coupling functions satisfies if the two clusters evolve out of phase with respect to each other , and additionally the gcm has a coupling with property ( [ even ] ) , then the two eigenvalues of the matrix in eq .( [ jacob ] ) become identical and their value is the stability criterion of this state is given by the modulus of the eigenvalue ; that is , ( ) implies that the state is unstable ( stable ) .the values and are , respectively , the values of and at the intersection of the function with the diagonal in fig .4 . let us analyze the relationship between the eigenvalue and the derivative at the intersection points with the diagonal in fig . 4 or fig .5 . in general , where and since then since has the same functional form as , then also satisfies property ( [ prop2 ] ) , that is and therefore let be a value of corresponding to the intersection of with the diagonal in fig .4 . 
then and ; and , which gives using the fact that from eq .( [ prop2 ] ) , the eigenvalue becomes eqs.([sig1 ] ) and ( [ sig2 ] ) with give the values and , respectively .then , substitution of these values and from eq.([dt ] ) in eq .( [ l2 ] ) yields therefore , the condition implies that , and thus the two - cluster , period - two solution with given by the intersection of with the diagonal in fig .4 is unstable .similarly , the solutions of for the different curves in fig . 5 for which , are unstable .p. hadley and k. wiesenfeld , _ phys .lett . _ * 62 * , 1335 ( 1989 ) .k. wiesenfeld , c. bracikowski , g. james , and r. roy , _ phys .lett . _ * 65 * , 1749 ( 1990 ) .s. h. strogatz , c. m. marcus , r. m. westervelt , and r. e. mirollo , _ physica d _ * 36 * , 23 ( 1989 ) .n. nakagawa and y. kuramoto , _ physica d _ * 75 * , 74 ( 1994 ) .k. kaneko , _ complexity _ * 3 * , 53 ( 1998 ) .k. kaneko , _ physica d _ * 41 * , 137 ( 1990 ) .k. kaneko and i. tsuda , _ complex systems : chaos and beyond _( springer , berlin , 2000 ) .k. kaneko , _ physica d _ * 86 * , 158 ( 1995 ) .k. kaneko , _ physica d _ * 54 * , 5 ( 1991 ) .a. pikovsky and j. kurths , _ physica d _ * 76 * , 411 ( 1994 ) .a. parravano and m. g. cosenza , _ phys .e _ * 58 * , 1665 ( 1998 ) .d. h. zanette and a. s. mikhailov , _ phys . rev .* 57 * , 276 ( 1998 ) .d. h. zanette and a. s. mikhailov , _ phys .e _ * 58 * , 872 ( 1998 ) .furusawa and k. kaneko , _ phys .lett . _ * 84 * , 6130 ( 2000 ) .w. wang , i. z. kiss , and j. l. hudson , _ chaos _ * 10 * , 248 , ( 2000 ) .a. parravano and m. g. cosenza , _ int .j. bifurcations chaos _ * 9 * , 2311 ( 1999 ) .t. shimada and k. kikuchi , _ phys .e _ * 62 * , 3489 ( 2000 ) .m. g. cosenza and j. gonzalez , prog .phys . * 100 * , 21 ( 1998 ) .
it is shown how different globally coupled map systems can be analyzed within a common framework by focusing on the dynamics of their respective global coupling functions. we investigate how the functional form of the coupling determines the formation of clusters in a globally coupled map system and the resulting periodicity of the global interaction. the allowed distributions of elements among periodic clusters are also found to depend on the functional form of the coupling. through the analogy between globally coupled maps and a single driven map, the clustering behavior of the former systems can be characterized. by using this analogy, the dynamics of periodic clusters in systems displaying a constant global coupling are predicted; and for a particular family of coupling functions, it is shown that the stability condition of these clustered states can be derived straightforwardly.
for three - dimensional ( 3d ) autonomous hyperbolic type of chaotic systems , a commonly accepted criterion for proving the existence of chaos is due to ilnikov [ 1 - 4 ] , which has a slight extension recently [ 5 ] .chaos in the ilnikov type of 3d autonomous quadratic dynamical systems may be classified into four subclasses [ 6 ] : chaos of the ilnikov homoclinic - orbit type ; chaos of the ilnikov heteroclinic - orbit type ; chaos of the hybrid type with both ilnikov homoclinic and heteroclinic orbits ; chaos of other types . in this classification ,a system is required to have a saddle - focus type of equilibrium , which belongs to the hyperbolic type at large .notice that although most chaotic systems are of hyperbolic type , there are still many others that are not so . fornon - hyperbolic type of chaos , saddle - focus equilibrium typically does not exist in the systems , as can be seen from table i which includes several non - hyperbolic chaotic systems found by sprott [ 7 - 10 ] .more recently , yang and chen also found a chaotic system with one saddle and two stable node - foci [ 11 ] and , moreover , an unusual 3d autonomous quadratic lorenz - like chaotic system with only two stable node - foci [ 12 ] .in fact , similar examples can be easily found from the literature .c c c c + systems & equations & equilibria & eigenvalues + + sprott & & & + case d & & & + & & & + + sprott & & & + case e & & & + & & & + + sprott & & & + case i & & & + & & & + + sprott & & & + case j & & & + & & & + + sprott & & & + case l & & & + & & & + + sprott & & & + case n & & & + & & & + + sprott & & & + case r & & & + & & & + in this paper , we report a very surprising finding of a simple 3d autonomous chaotic system that has only one equilibrium and , furthermore , this equilibrium is a stable node - focus .for such a system , one almost surely would expect asymptotically convergent behaviors or , at best , would not anticipate chaos per se . from tablei , one may observe that the sprott d and e systems also have only one equilibrium , but nevertheless this equilibrium is not stable .from this point of view , it is easy to understand and indeed easy to prove that the new system will not be topologically equivalent to the sprott systems .the mechanism of generating the new system is simple and intuitive . to start with ,let us first review some of the sprott chaotic systems listed in table i , namely those with only one equilibrium .one can easily see that systems i ,j , l , n and r all have only one saddle - focus equilibrium , while systems d and e both degenerate in the sense that their jacobian eigenvalues at the equilibria consist of one conjugate pair of pure imaginary numbers and one real number .clearly , the corresponding equilibria are not stable .it is also easy to imagine that a tiny perturbation to the system may be able to change such a degenerate equilibrium to a stable one .therefore , we added a simple constant control parameter to an aforementioned sprott chaotic system , trying to change the stability of its single equilibrium to a stable one while preserving its chaotic dynamics . 
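the construction just described can be checked directly. the sketch below anticipates the modified system stated in the next passage, read here as the sprott e system with a constant a added to the first equation (x' = yz + a, y' = x^2 - y, z' = 1 - 4x), which is consistent with the jacobian quoted there but should be treated as the editor's reading rather than a verbatim restatement:

```python
import numpy as np

def equilibrium(a):
    """Single equilibrium of  x' = y z + a,  y' = x^2 - y,  z' = 1 - 4x."""
    x = 0.25                       # from z' = 0
    y = x**2                       # from y' = 0  (y = 1/16)
    z = -a / y                     # from x' = 0  (z = -16 a)
    return np.array([x, y, z])

def jacobian(p):
    x, y, z = p
    return np.array([[0.0,     z,    y],
                     [2 * x, -1.0, 0.0],
                     [-4.0,   0.0, 0.0]])

for a in (0.0, 0.006, 0.022, 0.05):
    p = equilibrium(a)
    w = np.linalg.eigvals(jacobian(p))
    stable = np.all(w.real < 0)
    print(f"a = {a:5.3f}  eigenvalues = {np.round(w, 4)}  stable node-focus: {stable}")
```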
as a result, we obtained the following new system : when , it is the sprott e system ; when , however , the stability of the single equilibrium is fundamentally different , as can be verified and compared between the results shown in table i and table ii , respectively .c c c c + systems & equations & equilibria & eigenvalues + + new system & & & + a=-0.005 & & & + & & & + + new system & & & + a=0.006 & & & + & & & + + new system & & & + a=0.022 & & & + & & & + + new system & & & + a=0.030 & & & + & & & + + new system & & & + a=0.050 & & & + & & & + to better understand the new system ( [ wangeq ] ) , and more importantly to demonstrate that this new system is indeed chaotic , some basic properties of the system are briefly analyzed next .the system ( [ wangeq ] ) possesses only one equilibrium : linearizing the system at the equilibrium gives the jacobian matrix = \left [ \begin { array}{ccc } 0&-16\,a&\frac{1}{16}\\ \noalign{\medskip}\frac{1}{2}&-1&0 \\ \noalign{\medskip}-4&0&0\end { array } \right].\ ] ] by solving the characteristic equation , one obtains the jacobian eigenvalues , as shown in table ii for some chosen values of the parameter . to verify the chaoticity of system ( [ wangeq ] ) , its lyapunov exponents and lyapunov dimension are calculated .the lyapunov exponents are denoted by , , and ordered as .a system is considered chaotic if with .the lyapunov dimension is defined by where is the largest integer satisfying and .[ lya ] shows the dependence of the largest lyapunov exponent of system ( [ wangeq ] ) on the parameter . from fig .[ lya ] , it is clear that the largest lyapunov exponent decreases as the parameter increases from to .when , the system equilibrium is of the regular saddle - focus type ; this case of the chaotic system has been studied before therefore will not be discussed here .when , the equilibrium degenerates .it is precisely the sprott e system listed in table i ( see fig . [ 0003d ] ) .the ilnikov homoclinic criterion might be applied to this system to show the existence of chaos , however , but it involves somewhat subtle mathematical arguments . in this degenerate case ,the positive largest lyapunov exponent of the system ( see table ii ) still indicates the existence of chaos . in the time domain ,[ 000 ] ( top part ) shows an apparently chaotic waveform of ; while in the frequency domain , fig . [ 000 ] ( bottom part ) shows an apparently continuous broadband spectrum .these all prove that the sprott e system , or the new system ( [ wangeq ] ) with , is indeed chaotic .when , the stability of the equilibrium is fundamentally different from that of the sprott e system . in this case ,the equilibrium becomes a node - focus ( see table ii ) .the ilnikov homoclinic criterion is therefore inapplicable to this case .take as an example .numerical calculation of the lyapunov exponents gives , and , indicating the existence of chaos .in the time domain , fig . 
[ 006 ] ( top part ) shows an apparently chaotic waveform ; while in the frequency domain , fig .[ 006 ] ( bottom part ) shows an apparently continuous broadband spectrum .these all prove that the new system ( [ wangeq ] ) with is indeed chaotic .[ bif ] shows a bifurcation diagram versus the parameter , demonstrating a period - doubling route to chaos .[ bif2 ] also demonstrates the gradual evolving dynamical process as is continuously varied .both figures indicate that although the equilibrium is changed from an unstable saddle - focus to a stable node - focus , the chaotic dynamics survive in a relative narrow range of the parameter .all the above numerical results are summarized in table iii .c c c c + parameters & eigenvalues & lyapunov exponents & fractal dimensions + + & & & + & & & + & & & + + & & & + & & & + & & & + + & & & + & & & + & & & + + & & & + & & & + & & & + + & & & + & & & + & & & + + & & & + & & & + & & & + , ( b ) , ( c ) , ( d ) .,title="fig : " ] ( a ) , ( b ) , ( c ) , ( d ) .,title="fig : " ] ( b ) , ( b ) , ( c ) , ( d ) .,title="fig : " ] ( c ) , ( b ) , ( c ) , ( d ) .,title="fig : " ] ( d )this paper has reported the finding of a simple three - dimensional autonomous chaotic system which , very surprisingly , has only one stable node - focus equilibrium .the discovery of this new system is striking , because with a single stable equilibrium in a 3d autonomous quadratic system , one typically would anticipate non - chaotic and even asymptotically converging behaviors . yet , unexpectedly , this system is chaotic . although the new system is non - hyperbolic type , therefore the ilnikov homoclinic criterion is not applicable , it has been verified to be chaotic in the sense of having a positive largest lyapunov exponent , a fractional dimension , a continuous frequency spectrum , and a period - doubling route to chaos .
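the positive largest lyapunov exponent and its decrease with a can be reproduced with a standard benettin two-trajectory estimate. the same reading of the system as above is assumed, the initial condition and integration lengths are illustrative, and the pure-python loop is slow (vectorize or shorten for quick checks):

```python
import numpy as np

def rhs(p, a):
    x, y, z = p
    return np.array([y * z + a, x**2 - y, 1.0 - 4.0 * x])

def rk4_step(p, a, dt):
    k1 = rhs(p, a)
    k2 = rhs(p + 0.5 * dt * k1, a)
    k3 = rhs(p + 0.5 * dt * k2, a)
    k4 = rhs(p + dt * k3, a)
    return p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def largest_lyapunov(a, dt=0.01, n_steps=300_000, transient=50_000, d0=1e-8):
    """Benettin two-trajectory estimate of the largest Lyapunov exponent."""
    p = np.array([1.0, 1.0, 1.0])   # assumed to lie in the basin of the attractor;
                                    # adjust if the trajectory diverges or settles down
    for _ in range(transient):
        p = rk4_step(p, a, dt)
    q = p + np.array([d0, 0.0, 0.0])
    s = 0.0
    for _ in range(n_steps):
        p, q = rk4_step(p, a, dt), rk4_step(q, a, dt)
        d = np.linalg.norm(q - p)
        if not np.isfinite(d) or d == 0.0:
            return float("nan")
        s += np.log(d / d0)
        q = p + (q - p) * (d0 / d)  # renormalize the separation
    return s / (n_steps * dt)

for a in (0.0, 0.006, 0.03, 0.05):
    print(f"a = {a:5.3f}   lambda_1 ~ {largest_lyapunov(a):+.4f}")
```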
if you are given a simple three-dimensional autonomous quadratic system that has only one stable equilibrium, what would you predict its dynamics to be: stable or periodic? would it be surprising if you were shown that such a system is actually chaotic? although chaos theory for three-dimensional autonomous systems has been intensively and extensively studied since the time of lorenz in the 1960s, and the theory has become quite mature today, it seems that no one anticipated the possibility of finding a three-dimensional autonomous quadratic chaotic system with only one stable equilibrium. the discovery of the new system, reported in this letter, is indeed striking because for a three-dimensional autonomous quadratic system with a single stable node-focus equilibrium one would typically anticipate non-chaotic and even asymptotically converging behaviors. although the new system is of non-hyperbolic type, so that the familiar shilnikov homoclinic criterion is not applicable, it is demonstrated to be chaotic in the sense of having a positive largest lyapunov exponent, a fractional dimension, a continuous broad frequency spectrum, and a period-doubling route to chaos.

pacs: 05.45.-a, 05.45.ac, 05.45.pq
the fluctuations in the cosmic microwave background ( cmb ) radiation are theoretically very well understood , allowing precise and unambiguous predictions for a given cosmological model .consequently , measurement of cmb anisotropy has spearheaded the remarkable transition of cosmology into a precision science .the transition has also seen the emergence of data analysis of large complex data sets as an important and challenging component of research in cosmology .increasingly sensitive , high resolution , measurements over large regions of the sky pose a stiff challenge for current analysis techniques to realize the full potential of precise determination of cosmological parameters .the analysis techniques must not only be computationally fast to contend with the huge size of the data , but , the higher sensitivity also limits the simplifying assumptions that can be then invoked to achieve the desired speed without compromising the final precision goals .there is a worldwide effort to push the boundary of this inherent compromise faced by the current cmb experiments that measure the anisotropy in the cmb temperature and its polarization .accurate estimation of the angular power spectrum , , is arguably the foremost concern of most cmb experiments .the extensive literature on this topic has been summarized in literature . for gaussian , statistically isotropic cmb sky , the that corresponds to the covariance that maximizes the multivariate gaussian pdf of the temperature map, is the maximum likelihood ( ml ) solution .different ml estimators have been proposed and implemented on cmb data of small and modest sizes .while it is desirable to use optimal estimators of that obtain ( or iterate toward ) the ml solution for the given data , these methods are usually limited by the computational expense of matrix inversion that scales as with data size .various strategies for speeding up ml estimation have been proposed , such as , exploiting the symmetries of the scan strategy , using hierarchical decomposition , iterative multi - grid method , etc .variants employing linear combinations of such as on set of rings in the sky can alleviate the computational demands in special cases .other promising ` exact ' power estimation methods have been recently proposed .however there also exist computationally rapid , sub - optimal estimators of . exploiting the fast spherical harmonic transform ( ) ,it is possible to estimate the angular power spectrum rapidly .this is commonly referred to as the pseudo- method .have also been explored . ]it has been recently argued that the need for optimal estimators may have been over - emphasized since they are computationally prohibitive at large .sub - optimal estimators are computationally tractable and tend to be nearly optimal in the relevant high regime .moreover , already the data size of the current sensitive , high resolution , ` full sky ' cmb experiments such as wmap have been compelled to use sub - optimal pseudo- related methods . on the other hand , optimal ml estimators can readily incorporate and account for various systematic effects , such as noise correlations , non - uniform sky coverage and beam asymmetries . the systematic correction to the pseudo- power spectrum estimate arising from non - uniform sky coverage has been studied and implemented for cmb temperature and polarization .the systematic correction for non circular beam has been studied by us . 
herewe extend the results to include non - uniform sky coverage .it has been usual in cmb data analysis to assume the experimental beam response to be circularly symmetric around the pointing direction . however , any real beam response function has deviations from circular symmetry .even the main lobes of the beam response of experiments are generically non - circular ( non - axisymmetric ) since detectors have to be placed off - axis on the focal plane .( side lobes and stray light contamination add to the breakdown of this assumption ) . for highly sensitive experiments ,the systematic errors arising from the beam non - circularity become progressively more important . dropping the circular beam assumption leads to major complications at every stage of analysis pipeline . the extent to which the non - circularity affects the step of going from the time - stream data to sky map is very sensitive to the scan - strategy .the beam now has an orientation with respect to the scan path that can potentially vary along the path .this implies that the beam function is inherently time dependent and difficult to deconvolve . even after a sky map is made , the non - circularity of the effective beam affects the estimation of the angular power spectrum , , by coupling the power at different multipoles , typically , on scales beyond the inverse angular beam - width .mild deviations from circularity can be addressed by a perturbation approach and the effect of non - circularity on the estimation of cmb power spectrum can be studied ( semi ) analytically .[ clerr ] shows the predicted level of non - circular beam correction in our formalism for elliptical beams with _beam - width of compared to the non - circular beam corrections computed in the recent data release by wmap . to avoid contamination of the primordial cmb signal by galactic emission , the region adjoining the galactic planeis masked from maps .if the galactic cut is small enough , then the coupling matrix will be invertible , and the two - point correlation function can be determined on all angular scales from the data within the uncut sky .hivon et al . present a technique ( master ) for fast computation of the power spectrum taking accounting for the galactic cut , but for circular beams . _ in our present work , we present analytical expressions for the bias matrix of the pseudo- estimator for the incomplete sky coverage , using a non - circular beam . 
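to make the effect of beam shape concrete, the sketch below (numpy and scipy assumed) azimuthally averages an assumed elliptical gaussian main lobe to obtain its circularised profile and takes the legendre (window) transform, for comparison with the familiar circular-gaussian window $b_\ell \simeq \exp[-\ell(\ell+1)\sigma^2/2]$; the beam widths, grid and normalisation to $b_0 = 1$ are illustrative choices and not values used in this work.

```python
import numpy as np
from scipy.special import eval_legendre

# assumed beam widths (radians) of a mildly elliptical gaussian main lobe
sigma1, sigma2 = np.radians(0.22), np.radians(0.18)
sigma_circ = np.sqrt(sigma1*sigma2)                 # matched circular beam width

theta = np.linspace(0.0, 12.0*sigma1, 4000)
phi = np.linspace(0.0, 2.0*np.pi, 720, endpoint=False)
tt, pp = np.meshgrid(theta, phi, indexing="ij")

# elliptical gaussian beam and its azimuthal average (the circularised beam)
beam = np.exp(-0.5*tt**2*(np.cos(pp)**2/sigma1**2 + np.sin(pp)**2/sigma2**2))
beam_circ = beam.mean(axis=1)

def window(b_theta, theta, lmax):
    """legendre transform 2*pi * int b(theta) P_l(cos theta) sin(theta) dtheta, scaled to b_0 = 1."""
    mu = np.cos(theta)
    bl = np.array([2.0*np.pi*np.trapz(b_theta*eval_legendre(l, mu)*np.sin(theta), theta)
                   for l in range(lmax + 1)])
    return bl/bl[0]

lmax = 800
ell = np.arange(lmax + 1)
bl_elliptical = window(beam_circ, theta, lmax)
bl_circular = np.exp(-0.5*ell*(ell + 1)*sigma_circ**2)   # the two differ most beyond the inverse beam width
```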
_the observed cmb temperature fluctuation is convolved with a beam function and contaminated by noise .further , the cmb signal can not be obtained for full sky because of galactic contamination ( and extragalactic point sources ) .we derive the general form of the bias matrix including non - uniform / incomplete sky coverage and a general beam function .the observed temperature fluctuation field is the convolution of the beam " profile with the real temperature fluctuation field ( ignoring the additive noise term for simplicity ) : the two point correlation function for a statistically isotropic cmb anisotropy signal is where is the angular spectrum of cmb anisotropy signal and the window function encodes the effect of finite resolution through the beam function .a cmb anisotropy experiment probes a range of angular scales characterized by a _ window _function .the window depends both on the scanning strategy as well as the angular resolution and response of the experiment .however , it is neater to logically separate these two effects by expressing the window as a sum of ` elementary ' window function of the cmb anisotropy at each point of the map . for a given scanning strategy , the results can be readily generalized using the representation of the window function as sum over elementary window functions ( see , _ e.g. , _ ) . for some experiments, the beam function may be assumed to be circularly symmetric about the pointing direction , i.e. , without significantly affecting the results of the analysis . in any case , this assumption allows a great simplification since the beam function can then be represented by an expansion in legendre polynomials as consequently , it is straightforward to derive the well known simple expression for a circularly symmetric beam function .we define the pseudo- estimator as where denotes the mask function representing the incomplete sky .the expectation value of the pseudo- estimator can be shown to take the form \right|^2 .\label{eq : pscl1}\end{aligned}\ ] ] the integral in the square bracket can be simplified to the beam distortion parameter ( bdp ) is expressed in terms of where is the _circularized _ beam obtained by averaging over azimuth . hence , = \sqrt{\frac{4\pi}{2l+1 } } \ , b_{l0}. \label{blbl0}\ ] ] making a spherical harmonic expansion of the mask function we can simplify eq.[eq : pscl1 ] as the general form of the _ bias matrix _ , is thus given by where to proceed further _ analytically _ , we need a model for .we shall continue assuming _ non - rotating _beams , i.e. .we evaluate the integral , with two different approaches . in the first method , using only the sinusoidal expansion of wigner- , we get .\end{aligned}\ ] ] in the alternative method using clebsch gordon coefficients , we can evaluate as : .\end{aligned}\ ] ] the analytic expressions reduce to the known analytical results for circular beam and non - uniform sky coverage studied in ref . and our earlier results for non - circular beam for full sky .these results offer the possibility of rapid estimation of the non - circular beam effect in the pseudo- estimation .the expression in terms of the coefficients is the computationally superior approach .these coefficients can be computed using stable recurrence relations . 
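as a cross-check on the limits quoted above, the sketch below evaluates the known circular-beam, cut-sky special case to which the general bias matrix reduces: the coupling matrix built from the angular power spectrum of the mask and wigner-3j symbols (the master form of hivon et al.). sympy's wigner_3j routine is used for clarity rather than speed, so only a small multipole range is attempted, and the toy mask spectrum is an assumption; this is not the non-circular generalisation derived in this work.

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

def coupling_matrix(w_l, lmax):
    """circular-beam, cut-sky coupling matrix:
    M[l1, l2] = (2*l2 + 1)/(4*pi) * sum_l3 (2*l3 + 1) * W_l3 * (l1 l2 l3; 0 0 0)^2,
    where w_l is the angular power spectrum of the mask (truncated at lmax here)."""
    m = np.zeros((lmax + 1, lmax + 1))
    for l1 in range(lmax + 1):
        for l2 in range(lmax + 1):
            s = 0.0
            for l3 in range(abs(l1 - l2), min(l1 + l2, lmax) + 1):
                if (l1 + l2 + l3) % 2:      # the 3j symbol with zero m's vanishes for odd sums
                    continue
                s += (2*l3 + 1)*w_l[l3]*float(wigner_3j(l1, l2, l3, 0, 0, 0))**2
            m[l1, l2] = (2*l2 + 1)/(4.0*np.pi)*s
    return m

lmax = 16
w_l = 1.0/(np.arange(lmax + 1) + 1.0)**2     # toy mask spectrum concentrated at low multipoles
m = coupling_matrix(w_l, lmax)
```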
in a more detailed publicationwe describe the algorithm in more detail .the expressions also highlight the two aspects to speeding up the computation of the systematic effect : * mildly non - circular beams where the beam distortion parameters ( bdp ) , at each fall off rapidly with .this allows us to neglect for . for most real beams, is a sufficiently good approximation .this cuts - off the summations over bdp in the expressions for . *soft , azimuthally apodized , masks where the coefficients are small beyond .moreover , it is useful to smooth the mask in , such the die off rapidly for too .the mild - circularity perturbation approach has been introduced and discussed in ref .the circularity of the beam has to be addressed in the design of the cmb experiments .our results suggest the systematics due to non - circular distortions of the beam are manageable if one ensures the large bdp are limited to a few ( i.e. , narrow band limited violation of axis - symmetry ) . the beams for many experiments , such as python - v and wmap are well approximated as elliptical gaussian functions .for radically non - axisymmetric beams , modeling the beam in terms of superposition of displaced circular gaussian beams has been proposed .our approach allows a simple , cost effective extension to modeling with the more general elliptical gaussian beams , or other mildly non - circular beam forms .the mask of the galactic region can be chosen at the time of data analysis .the coupling of bdp with suggests that a judicious choice of mask reduces the computational costs of non - circular beam corrections .[ reconmask ] shows a softened version of the kp2 mask used by the wmap team , where the mask is azimuthally smoothed .the final apodized mask is obtained by multiplying an azimuthally smoothed mask raised to a sufficiently large power with the original mask and has reduced power at large ( i.e. , is negligible for ) . in a forthcoming publicationwe describe the method of making soft masks . 
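a minimal apodisation sketch in the spirit of the recipe above (healpy assumed): a hard galactic cut is smoothed, raised to a power to steepen it again, and multiplied back by the original mask so that masked pixels remain exactly zero. the cut width, smoothing scale and exponent are assumptions, and the smoothing used here is isotropic in harmonic space rather than the purely azimuthal smoothing described above.

```python
import numpy as np
import healpy as hp

nside = 128
theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))

# assumed hard galactic cut: zero within +/-15 degrees of the equator
hard = (np.abs(np.degrees(theta) - 90.0) > 15.0).astype(float)

# smooth, steepen with a power, and keep masked pixels exactly zero
smoothed = np.clip(hp.smoothing(hard, fwhm=np.radians(10.0)), 0.0, 1.0)
soft = smoothed**4 * hard

# the harmonic transform of the apodised mask should fall off faster with multipole
w_hard = hp.anafast(hard, lmax=3*nside - 1)
w_soft = hp.anafast(soft, lmax=3*nside - 1)
```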
for mildly non - circular , nearly azimuthally symmetric case , the required number of computation cycle to compute the bias matrix up to a multipole scales as up to leading order for .here , is the cut - off in the beam distortion parameters ( bdp ) , and is the cut - off in .the assumptions of non - circular beam leads to major complications at every stage of the data analysis pipeline .the extent to which the non - circularity affects the step of going from the time - stream data to sky map is very sensitive to the scan - strategy .the beam now has an orientation with respect to the scan path that can potentially vary along the path .this implies that the beam function is inherently time dependent and difficult to deconvolve .we extend our analytic approach for addressing the effect of non - circular experimental beam function in the estimation of the angular power spectrum of cmb anisotropy , which also includes the effect of the galactic cut in the entire sky map .non - circular beam effects can be modeled into the covariance functions in approaches related to maximum likelihood estimation and can also be included in the harmonic ring and ring - torus estimators .however , all these methods are computationally prohibitive for high resolution maps and , at present , the computationally economical approach of using a pseudo- estimator appears to be a viable option for extracting the power spectrum at high multipoles .the pseudo- estimates have to be corrected for the systematic biases .while considerable attention has been devoted to the effects of incomplete / non - uniform sky coverage , no comprehensive or systematic approach is available for non - circular beam .the high sensitivity , ` full ' ( large ) sky observation from space ( long duration balloon ) missions have alleviated the effect of incomplete sky coverage and other systematic effects such as the one we consider here have gained more significance .non - uniform coverage , in particular , the galactic masks affect only cmb power estimation at the low multipoles .the analysis accompanying the recent second data from wmap uses the hybrid strategy where the power spectrum at low multipoles is estimated using optimal maximum likelihood methods and pseudo- are used for large multipoles .the non - circular beam is an effect that dominates at large comparable to the inverse beam width . for high resolution experiments , the optimal maximum likelihood methods which can account for non - circular beam functions are computationally prohibitive .in implementing the pseudo- estimation , we have included both the non - circular beam effect and the effect of non - uniform sky coverage .our work provides a convenient approach for estimating the magnitude of these effects in terms of the leading order deviations from a circular beam and azimuthally symmetric mask .the perturbation approach is very efficient . 
for most cmb experiments the leading few orders capture most of the effect of beam non - circularity .our results highlight the advantage of azimuthally smoothed masks ( mild deviations from azimuthal symmetry ) in reducing computational costs .the numerical implementation of our method can readily accommodate the case when pixels are revisited by the beam with different orientations .evaluating the realistic bias and error - covariance for a specific cmb experiment with non - circular beams would require numerical evaluation of the general expressions for using real scan strategy and account for inhomogeneous noise and sky coverage , the latter part of which has been addressed in this present work .it is worthwhile to note in passing that that the angular power contains all the information of gaussian cmb anisotropy only under the assumption of statistical isotropy .gaussian cmb anisotropy map measured with a non - circular beam corresponds to an underlying correlation function that violates statistical isotropy . in this case, the extra information present may be measurable using , for example , the bipolar power spectrum .even when the beam is circular the scanning pattern itself is expected to cause a breakdown of statistical isotropy of the measured cmb anisotropy . for a non - circular beam , this effect could be much more pronounced and , perhaps , presents an interesting avenue of future study .in addition to temperature fluctuations , the cmb photons coming from different directions have a random , linear polarization .the polarization of cmb can be decomposed into part with even parity and part with odd parity . besides the angular spectrum , the cmb polarization provides three additional spectra , , and which are invariant under parity transformations .the level of polarization of the cmb being about a tenth of the temperature fluctuation , it is only very recently that the angular power spectrum of cmb polarization field has been detected .the degree angular scale interferometer ( dasi ) has measured the cmb polarization spectrum over limited band of angular scales in late 2002 .the dasi experiment recently published 3-year results of much refined measurements .more recently , the boomerang collaboration reported new measurements of cmb anisotropy and polarization spectra . the wmap mission has also measured cmb polarization spectra .correcting for the systematic effects of a non - circular beam for the polarization spectra is expected to become important .extending this work to the case cmb polarization is another line of activity we plan to undertake in the near future . in summary ,we have presented a perturbation framework to compute the effect of non - circular beam function on the estimation of power spectrum of cmb anisotropy taking into account the effect of a non - uniform sky coverage ( eg ., galactic mask ) .we not only present the most general expression including non - uniform sky coverage as well as a non - circular beam that can be numerically evaluated but also provide elegant analytic results in interesting limits . 
as cmb experiments strive to measure the angular power spectrum with increasing accuracy and resolution, this work provides a stepping stone toward addressing the rather complicated systematic effect of non-circular beam functions. we thank olivier dore and mike nolta for providing us with the data files of the non-circular beam correction estimated by the wmap team. we thank kris gorski, jeff jewel and ben wandelt for references and for providing a code for computing wigner functions. we have benefited from discussions with francois bouchet and simon prunet. computations were carried out at the hpc facility of iucaa.
in the era of high precision cmb measurements, systematic effects are beginning to limit the ability to extract subtler cosmological information. the non-circularity of the experimental beam has become progressively important as cmb experiments strive to attain higher angular resolution and sensitivity. the effect of a non-circular beam on the power spectrum is important at multipoles larger than the inverse beam-width. for recent experiments with high angular resolution, optimal methods of power spectrum estimation are computationally prohibitive and sub-optimal approaches, such as the pseudo-$c_\ell$ method, are used. we provide an analytic framework for correcting the power spectrum for the effect of beam non-circularity and non-uniform sky coverage (including incomplete / masked sky maps). the approach is perturbative in the distortion of the beam from circularity, allowing rapid computations when the beam is mildly non-circular. when the non-circular beam effect is important, we advocate that it is computationally advantageous to employ `soft' azimuthally apodized masks whose spherical harmonic transforms die down fast with multipole. cosmology, cosmic microwave background, theory, observations
shape - constrained estimation has received much attention recently .the attraction is the prospect of obtaining automatic nonparametric estimators with no smoothing parameters to choose .convexity is among the popular shape constraints that are of both mathematical and practical interest . show that under the convexity constraint , the least squares estimator ( lse ) can be used to estimate both a density and a regression function . for density estimation, they showed that the lse of the true convex density converges pointwise at a rate under certain assumptions .the corresponding asymptotic distribution can be characterized via a so - called `` invelope '' function investigated by . in the regression setting ,similar results hold for the lse of the true regression function . however , in the development of their pointwise asymptotic theory , it is required that ( or ) has positive second derivative in a neighborhood of the point to be estimated .this assumption excludes certain convex functions that may be of practical value .two further scenarios of interest are given below : * at the point , the -th derivative ( or ) for and ( or ) , where is an integer greater than one ; * there exists some region ] and \big\} ] .nevertheless , we show that ( or ) converges to ( or ) at the minimax optimal rate up to a negligible factor of .last but not least , we establish the analogous rate and asymptotic distribution results for the lse in the regression setting .our study yields a better understanding of the adaptation of the lse in terms of pointwise convergence under the convexity constraint .it is also one of the first attempts to quantify the behavior of the convex lse at non - smooth points .when the truth is linear , the minimax optimal pointwise rate is indeed achieved by the lse on . the optimal rate at the boundary points and is also achievable by the lse up to a log - log factor .these results can also be applied to the case where the true function consists of multiple linear components .furthermore , our results can be viewed as an intermediate stage for the development of theory under misspecification .note that linearity is regarded as the boundary case of convexity : if a function is non - convex , then its projection to the class of convex functions will have linear components .we conjecture that the lse in these misspecified regions converges at an rate , with the asymptotic distribution characterized by a more restricted version of the invelope process .more broadly , we expect that this type of behavior will be seen in situations of other shape restrictions , such as log - concavity for and -monotonicity .the lse of a convex density function was first studied by , where its consistency and some asymptotic distribution theory were provided . on the other hand , the idea of using the lse for convex regression function estimation dates back to .its consistency was proved by , with some rate results given in . 
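since the convex lse involves no tuning parameters, it can be computed directly as a finite-dimensional quadratic program at the design points. the sketch below (cvxpy assumed) fits the convex regression lse on an equally spaced design by imposing non-negative second differences; the linear truth, noise level and sample size are assumptions chosen to mimic the boundary case of interest in this paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 200
x = np.arange(1, n + 1)/n                     # equally spaced design points
r0 = 1.0 - x                                  # assumed linear truth
y = r0 + 0.2*rng.standard_normal(n)

g = cp.Variable(n)
# convexity on an equally spaced grid: non-negative second differences
constraints = [g[i + 1] - 2*g[i] + g[i - 1] >= 0 for i in range(1, n - 1)]
cp.Problem(cp.Minimize(cp.sum_squares(y - g)), constraints).solve()

r_hat = g.value                               # lse evaluated at the design points
# error away from the boundary, where the n^{-1/2}-type behaviour is expected
print(np.max(np.abs(r_hat[n//4:3*n//4] - r0[n//4:3*n//4])))
```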
in this manuscript , for the sake of mathematical convenience , we shall focus on the non - discrete version discussed by .see for the computational aspects of all the above - mentioned lses .there are studies similar to ours regarding other shape restrictions .see remark 2.2 of and in the context of decreasing density function estimation ( a.k.a .grenander estimator ) when the truth is flat and with regard to discrete log - concave distribution estimation when the true distribution is geometric .in addition , studied the grenander estimator s global behavior in the norm under the uniform distribution and gave a connection to statistical problems involving combination of -values and two - sample rank statistics , while studied the adaptivity of the lse in monotone regression ( or density ) function estimation . for estimation under misspecification of various shape constraints , we point readers to , , , , and .more recent developments on global rates of the shape - constrained methods can be found in , , and .see also and where an additive structure is imposed in shape - constrained estimation in the multidimensional setting .the rest of the paper is organized as follows : in section [ sec : density ] , we study the behavior of the lse for density estimation . in particular , for notational convenience , we first focus on a special case where the true density function is taken to be triangular . the convergence rate and asymptotic distribution are given in section [ sec : densityrate ] and section [ sec : densityasym ] .section [ sec : test ] demonstrates the practical use of our asymptotic results , where a new consistent testing procedure on the linearity against a convex alternative is proposed .more general cases are handled later in section [ sec : densityext ] based on the ideas illustrated in section [ sec : densityspecial ] .section [ sec : densityadapt ] discusses the adaptation of the lse at the boundary points .analogous results with regard to regression function estimation are presented in section [ sec : regress ] .some proofs , mainly on the existence and uniqueness of a limit process and the adaptation of the lse , are deferred to the appendices .given independent and identically distributed ( iid ) observations from a density function .let be the corresponding distribution function ( df ) . in this section ,we denote the convex cone of all non - negative continuous convex and integrable functions on by .the lse of is given by where is the empirical distribution function of the observations .furthermore , we denote the df of by . throughout the manuscript , without specifying otherwise , the derivative of a convex function can be interpreted as either its left derivative or its right derivative . to motivate the discussion , as well as for notational convenience , we take }(t) ] . then for any fixed , of theorem [ thm : ls_triangular_rate ] a key ingredient of this proof is the version of marshall s lemma in this setting ( * ? ? ?* theorem 1 ) , which states that where is the uniform norm .let .two cases are considered here : ( a ) and ( b ) . in the first case , because is convex , one can find a supporting hyperplane of passing through such that where is a negative slope . 
if , then .otherwise , .figure [ fig : greater ] explains the above inequalities graphically .consequently , .{plot1.pdf } & \includegraphics[scale=0.37]{plot2.pdf } \\\mathrm{(a ) } & \mathrm{(b ) } \end{array} ] in the second case , if we can find such that and intersect at the point , then otherwise , by taking , we can still verify that the above two inequalities hold true .figure [ fig : smaller ] illustrates these findings graphically .therefore , where the last inequality uses the fact that for any , is an increasing function of while is a decreasing function of . by( [ eq : marshall ] ) , we have that it then follows that , as desired . [ cor : ls_triangular_uniform_rate ] for any , } |\hat{f}_n(x ) - f_0(x)| = o_p(n^{-1/2}).\ ] ] of corollary [ cor : ls_triangular_uniform_rate ] a closer look at the proof of theorem [ thm : ls_triangular_rate ] reveals that we have simultaneously for every ] is . let and denote respectively the left and right derivatives of .the same convergence rate also applies to these derivative estimators .[ cor : ls_triangular_uniform_rate_derivative ] for any , } \max \left(|\hat{f}^-_n(x ) - f_0'(x)| , |\hat{f}^+_n(x ) - f_0'(x)| \right ) = o_p(n^{-1/2}).\ ] ] of corollary [ cor : ls_triangular_uniform_rate_derivative ] by the convexity of , } \max \left(|\hat{f}^-_n(x ) - f_0'(x)| , |\hat{f}^+_n(x ) - f_0'(x)|\right ) \\ & \le \max \left(|\hat{f}^-_n(\delta)- f_0'(\delta)| , |\hat{f}^+(1-\delta ) - f_0'(1-\delta)|\right ) \\ &\le \frac{2}{\delta } \max \big(\big|\hat{f}_n(\delta ) - \hat{f}_n(\delta/2 ) - f_0(\delta ) + f_0(\delta/2)\big| , \\ & \qquad \qquad \qquad\big|\hat{f}_n(1- \delta/2 ) -\hat{f}_n(1-\delta ) - f_0(1- \delta/2 ) + f_0(1-\delta)\big| \big ) \\ & = o_p(n^{-1/2}),\end{aligned}\ ] ] where the final equation follows from corollary [ cor : ls_triangular_uniform_rate ] . to study the asymptotic distribution of , we start by characterizing the limit distribution .[ thm : triangular_limit ] let and for any , where is a standard brownian bridge process on ] satisfying the following conditions : 1 . for every ] and replace by , where , and where is different from .figure [ fig : xyh ] shows a typical realization of , , and .a detailed construction of the above limit invelope process can be found in appendix i. note that our process is defined on a compact interval , so is technically different from the process presented in ( which is defined on the whole real line ) . as a result ,extra conditions regarding the behavior of ( and ) at the boundary points are imposed here to ensure its uniqueness . , , and in theorem [ thm : triangular_limit ] with and the distribution function with triangular density }(t) ] . then for any , the process \times \mathcal{d}[\delta , 1-\delta],\end{aligned}\ ] ] where is the space of continuous functions equipped with the uniform norm , is the skorokhod space , and is the invelope process defined in theorem [ thm : triangular_limit ] . in particular , for any , of theorem [ thm : ls_triangular_asymptotic ] before proceeding to the proof , we first define the following processes on : furthermore , define the set of `` knots '' of a convex function on as we remark that the above definition of knots can be easily extended to convex functions with a different domain . by lemma 2.2 of , for ] , while corollary [ cor : ls_triangular_uniform_rate ] entails the tightness of in ] ( via an easy application of marshall s lemma ) .since converges to a brownian bridge , it is tight in ] for all .2 . 
since is linear on ] for every .the pointwise limit of convex functions is still convex , so is convex on ] .it follows from that since this holds for any , one necessarily has that .consequently , in view of theorem [ thm : triangular_limit ] , the limit is the same for any subsequences of in .fix any .it follows that the full sequence converges weakly in and has the limit .this , together with the fact that is continuous at any fixed with probability one ( which can be proved using conditions ( 1 ) and ( 5 ) of ) , yields ( [ eq : ls_pointwise_converg ] ) . it can be inferred from corollary [ cor : triangular_limit ] and theorem [ thm : ls_triangular_asymptotic ] that both and do not converge to the truth at a rate .in fact , proved that is an inconsistent estimator of .nevertheless , the following proposition shows that is at most . for the case of the maximum likelihood estimator of a monotone density, we refer readers to for a similar result .[ prop : ls_zero ] . of proposition[ prop : ls_zero ] let be the left - most point in .since is finite for every , the linearity of on ] against }(t ) \mbox { for some } t \in ( 0,\infty).\ ] ] the test we propose is free of tuning parameters .since the triangular distribution is frequently used in practice , and is closely related to the uniform distribution ( e.g. the minimum of two independent ] , where is the invelope process defined in theorem [ thm : triangular_limit ]. then .moreover , . of proposition[ prop : test_h_0 ] the first part follows directly from theorem [ thm : ls_triangular_asymptotic ] and corollary [ cor : triangular_limit ] . for the second part , note that if , then } h^{(2)}(t ) > 0 ] for some .first , we show that there exists some such that .suppose the conclusion fails , then }(t) ] with strict inequality if }(t) ] for every .this contradiction yields the conclusion .next , it follows from theorem 3.1 of that the lse is consistent at , i.e. , .therefore , almost surely , it follows that . the aim of this subsection is to extend the conclusions presented in section [ sec : densityspecial ] to more general convex densities .we assume that is positive and linear on for some , where the open interval is picked as the `` largest '' interval on which remains linear .more precisely , it means that there does not exist a bigger open interval ( i.e. ) on which is linear . for the sake of notational convenience ,we suppress the dependence of on , and in the following two theorems .their proofs are similar to those given in section [ sec : densityspecial ] , so are omitted for brevity .[ thm : triangular_limit_extension ] let and for any ., there exists a uniquely defined random continuously differentiable function on ] ; 2 . has convex second derivative on ; 3 . and ; 4 . and ; 5 . .[ thm : triangular_dist_extension ] for any , } \big(|\hat{f}_n(x ) - f_0(x)| , \ ; |\hat{f}_n'(x ) - f_0'(x)| \big ) = o_p(n^{-1/2}).\ ] ] moreover , \times \mathcal{d}[a+\delta , b-\delta],\end{aligned}\ ] ] where is the invelope process defined in theorem [ thm : triangular_limit_extension ] . in this subsection, we study the pointwise convergence rate of the convex lse at the boundary points of the region where is linear .examples of such points include and given in section [ sec : densityext ] . to begin our discussion , we assume that is such a boundary point in the interior of the support ( i.e. ) . 
hereagain is a convex ( and decreasing ) density function on .three cases are under investigation as below .these cases are illustrated in figure [ fig:23abc ] .a. for every in a fixed ( small ) neighborhood of , with and ; b. for every in a fixed ( small ) neighborhood of , with , and ; c. for every in a fixed ( small ) neighborhood of , with , and . in different casesare illustrated , where we set in ( b ) and ( c ) . ]note that in all cases above , only the behavior of in a small neighborhood of is relevant .as pointed out in example 2 of , the minimax optimal convergence rate at is in ( a ) .furthermore , in ( b ) and ( c ) , example 4 of suggests that the optimal rate at is . in the following ,we prove that the convex lse automatically adapts to optimal rates , up to a factor of .though almost negligible , the factor of here indicates that there might be room for improvement .these results should be viewed as a first step for the investigation of the adaptation of the convex lse .one could also compare our results with , where the adaptation of the lse in the context of decreasing density function estimation was tackled .[ thm : ls_adapt_1 ] in the case of ( a ) , moreover , of theorem [ thm : ls_adapt_1 ] suppose that for some fixed , ( a ) holds for every ] ( i.e. `` one - sided linearity '' ) . in the rest of the proof, it suffices to only consider the situation of .recall that to handle this scenario , the proof of theorem [ thm : ls_triangular_rate ] makes use of the fact that the triangular density is linear on the whole ] . in the following ,we deploy a different strategy to establish the rate .let and , where is defined in ( [ eq : defknots ] ) .we consider three different cases separately .a. . then and are linear on either ] .it follows from the line of reasoning as in the proof of theorem [ thm : ls_triangular_rate ] that .b. and .note that being in the set implies that and for and . since is linear on ] , one can apply a strategy similar to that used in the proof of theorem [ thm : ls_triangular_rate ] to show that .consequently , where we have used the facts that and }|\hat{f}_n'(t ) - f_0'(t)| = o_p(1) ] is a convex function , and where is a triangular array of iid random variables satisfying a. and . to simplify our analysis ,the following fixed design is considered : a. for .the lse of proposed by is where , in this section , denotes the set of all continuous convex functions on ] , define [ prop : re_exist ] the lse exists and is unique .[ prop : re_prop ] let denote the set of knots of .the following properties hold : a. ;}\end{array}\right. ] and have convex first derivative , then for any fixed , implies } |g_k'(t ) - g_0'(t)| \rightarrow 0 ] and any , it then follows that } \ { g_k'(t )- g_0'(t ) \ } \le \sup_{t \in [ \delta , 1-\delta ] } \bigg ( \frac{g_0(t+\epsilon ) - g_0(t)}{\epsilon } - g_0'(t ) , \frac{g_0(t ) - g_0(t-\epsilon)}{\epsilon } - g_0'(t ) \bigg).\ ] ] our claim ( [ eq : re_marshall1 ] ) can be verified by letting .secondly , we show that } \ { g_k'(t ) - g_0'(t ) \ } \ge 0 .\end{aligned}\ ] ] we prove this by contradiction .suppose that } \{ g_k'(t ) - g_0'(t ) \ } = -m ] as . in view of ( [ eq : re_marshall1 ] ), it follows from the convexity of and that one can find an interval of positive length ( which can depend on ) such that for every , where is a sufficiently large integer .this implies that for every , which contradicts the fact that as .combining these two parts together completes the proof of the intermediate result . 
since by empirical process theory , proposition [ prop : re_marshall ] entails that .consistency of then follows straightforwardly from the above intermediate result . in the following , we assume that that is linear on . moreover , is `` largest '' in the sense that one can not find a bigger open interval on which remains linear .[ thm : re_rate_dist ] under assumptions ( i ) ( ii ) , for any , } \big(|\hat{r}_n(x ) - r_0(x)| , \ ; |\hat{r}_n'(x ) - r_0'(x)| \big ) = o_p(n^{-1/2}).\ ] ] moreover , \times \mathcal{d}[a+\delta , b-\delta],\end{aligned}\ ] ] where is the invelope process defined in the second part of theorem [ thm : triangular_limit ] .the proof of theorem [ thm : re_rate_dist ] is very similar to what has already been shown in section [ sec : densityspecial ] , so is omitted for the sake of brevity . in presence of the linearity of on , the limit distribution of the process on does not depend on or .in addition , the above theorem continues to hold if we weaken assumption ( ii ) to : 1 .}\big|\frac{1}{n}\sum_{i=1}^n \mathbf{1}_{[x_{n , i},\ , \infty)}(t ) - t \big| = o\big(n^{-1/2}\big) ] .in this case , theorem [ thm : re_rate_dist ] is still valid , while a process different from is required to characterize the limit distribution .this follows from the fact that in the random design can converge to a gaussian process that is not a brownian motion .recall that }(t) ] .then with probability one , the problem of minimizing over has a unique solution . of lemma [ lem :limsolution ] we consider this optimization problem in the metric space .first , we show that if it exists , the minimizer must be in the subset \rightarrow { { \mathbb r } } , g \mbox { is convex } , g(0 ) = g(1 ) = k , \inf_{[0,1 ] } g(t ) \ge - m \}\end{aligned}\ ] ] for some . to verify this ,we need the following result let be a standard brownian motion .we note that has the same distribution as using the entropy bound of in ( theorem 2.7.1 of ) and dudley s theorem ( cf .theorem 2.6.1 of ) , we can establish that \rightarrow { { \mathbb r}}\ ; | \ ; h(t ) = g\big(1-\sqrt{1-t}\big ) , g \in \mathcal{g}_{1,1 } \big\}\ ] ] is a gc - set . as is an isonormal gaussian process indexed by , we have that a.s . . furthermore , it is easy to check that a.s . andso our claim of ( [ eq : gcfinite ] ) holds .now for sufficiently large ( with ) , thus , for any with } g = -m ] .since is compact in , the existence and uniqueness follow from a standard convex analysis argument in a hilbert space . as a remark , it can be seen from the proof of lemma [ lem : limsolution ] that for a given from the sample space ( which determines the value of ) , if the function has a unique minimizer over ( which happens a.s . ), it also admits a unique minimizer over for any .[ lem : parabolictangent ] almost surely , does not have parabolic tangents at either or . of lemma [ lem :parabolictangent ] first , consider the case of .theorem 1 of says that where is a standard brownian motion . from this , it follows that thanks to the scaling properties of brownian motion ( ) .this implies that does not have a parabolic tangent at .second , consider the case of . note that and where .therefore , to prove that does not have a parabolic tangent at , it suffices to show that denote by . 
for any ,we argue that the random variable follows a distribution of .this is because where we invoked the stochastic fubini s theorem in the last line , and thus , now setting and for every .it is easy to check that the collection of random variables is mutually independent , so where we made use of the fact that assume that there exists some such that for all sufficiently small .but it follows from ( [ eq : para_tangent ] ) that a.s .one can find a subsequence of ( denoted by ) satisfying consequently , as .the last step is due to the facts of and a.s .( which is a direct application of the law of the iterated logarithm ) . the proof is completed by contradiction . now denote by the unique function which minimizes over .let be the second order integral satisfying , and .[ lem : limprop ] almost surely , for every , and has the following properties : a. for every ] . of lemma [ lem :limprop ] to show ( i ) , ( ii ) , ( iii ) and ( iv ) , one may refer to lemma 2.2 and corollary 2.1 of and use a similar functional derivative argument . for ( v ) , we note that since is convex , discontinuity can only happen at or . in the following ,we show that it is impossible at .suppose that is discontinuous at zero .consider the class of functions .then for every ] if }|x(t)| ] and }|y(t)| < \infty ] . in view of lemma [ lem : limknot ] , we may assume that there exist knots and on ] for every , then we are done .otherwise , we focus on those }f_k < 0 ] .now }{|x(t)| } & \ge x(\tau_k^+ ) - x(\tau_k^- ) = h_k'(\tau_k^+ ) - h_k'(\tau_k^- ) = \int_{\tau_k^-}^{\tau_k^+ } f_k(s ) \ , ds \\ & \ge \int_{\tau_k^-}^{\tau_k^+ } \max \big ( a_{k,1}(s - t_{k,1}),a_{k,2}(s - t_{k,2 } ) \big ) \ , ds \\ & \ge \int_{\delta}^{1-\delta } \max \big ( a_{k,1}(s - t_{k,1}),a_{k,2}(s - t_{k,2 } ) \big ) \ , ds - 2\delta c_k \\ & \ge \int_{\delta}^{1-\delta } \max \big ( a_{k,1}(s - t_{k,1}),a_{k,2}(s - t_{k,2 } ) , 0\big ) \ , ds \\ &\qquad \qquad - \int_{\delta}^{1-\delta } \max \big ( - a_{k,1}(s - t_{k,1}),- a_{k,2}(s - t_{k,2 } ) , 0\big )ds - 2\delta c_k \\ & \ge \inf_{u \in [ 0,1 - 4\delta ] } \big\{|a_{k,1}| u^2 + |a_{k,2}|(1 - 4\delta - u)^2\big\}/2 - c_k ( t_{k,2 } - t_{k-1})/2 - 2 \delta c_k \\ & \ge \frac{(1 - 4\delta)^2}{4\delta}c_k -\delta c_k - 2 \delta c_k = \left(\frac{1}{4\delta } + \delta - 2\right ) c_k \ge c_k.\end{aligned}\ ] ] consequently , a.s . , . b. .now consider .it follows that let } f_k ] , and one knot in .denote these two points by and respectively . by the convexity of ,there exists some such that for every ] . by lemma [ lem : limlowerbound ], we see that for almost every , is bounded . combining this with the lowerbound we established previously entails the boundedness of .next , note that both and are bounded .the boundedness of and immediately follows from the convexity of by utilizing .\end{aligned}\ ] ] [ lem : limintegral ] for almost every , both } | h_k(t)|\right\}_k ] are bounded . of lemma[ lem : limintegral ] by lemma [ lem : limprop](iv ) , for any ] is bounded .furthermore , lemma [ lem : limknot ] says that one can always find a knot with for all sufficiently large .thus , the boundedness of } | h_k'(t)|\right\}_k ] by using the equality . [ lem : limequicontinuous ] for almost every , both and are uniformly equicontinuous on ] for some .three scenarios are discussed in the following . herewe fix .a. there exists at least one knot with ] .let be the left - most knot in .then is linear on ] for all sufficiently large . c. 
, but \neq \emptyset ] and be the left - most knot in . as a convention, we set if such a knot does not exist .note that is linear on ] so are uniformly bounded and equicontinuous .therefore , the arzel ascoli theorem guarantees that the sequence has a convergent subsequence in the supremum metric on ] for . now by lemma [ lem : limintegral ] and lemma [ lem : limequicontinuous ], we can assume that and are bounded and equicontinuous on ] , for ] .now let to see the required property .this completes the proof of existence .it remains to show the uniqueness of .suppose that there are and satisfying conditions ( 1 ) ( 5 ) listed in the statement of theorem [ thm : triangular_limit ] . for notational convenience , we write and . then , where we used conditions ( 1 ) ( 5 ) of to derive the last inequality . by swapping and ,we further obtain the following inequality adding together the above two inequalities yields , which implies the uniqueness of on .the uniqueness of then follows from its third condition .the proof for the second part ( i.e. the existence and uniqueness of ) is similar and is therefore omitted . of corollary [ cor : triangular_limit ] we can easily verify the existence of such a function by using the same construction in the proof of theorem [ thm : triangular_limit ] . in particular , if does not have parabolic tangents at both and ( which happens a.s . according to lemma [lem : parabolictangent ] ) , then and as . on the other hand ,if , there must be a sequence of knots of with . in views of conditions ( 1 ) , ( 2 ) and ( 5 ) ,one necessarily has and for every .the fact that , , and are all continuous entails that and .consequently , condition ( 3 ) implies condition ( 3 ) .we now apply the same argument to to conclude that condition ( 4 ) implies condition ( 4 ) .hence , in view of theorem [ thm : triangular_limit ] , is unique . the following three lemmas are required to prove theorem [ thm : ls_adapt_2 ] . [lem : ls_adapt_precalculation ] for any , } \big\ { 4k^{\alpha+2}+(1+k)^{\alpha+1}(\alpha-2k)\big\ } > 0 ] , first , it is easy to check that the above inequality holds true when . in the case of , we can restate the inequality to be proved as next , we define ] for some small , with , and .then for any , where , and where is a constant that only depends on . of lemma [ lem: ls_adapt_calculation ] first , it is easy to check that if , then ( [ eq : ls_adapt_calculation1 ] ) can be expressed as on the other hand , if , then after some elementary calculations , we can show that ( [ eq : ls_adapt_calculation1 ] ) is equal to }{2(\alpha+1)(\alpha+2)}.\end{aligned}\ ] ] denote by , so that ( [ eq : ls_adapt_calculation2 ] ) can be rewritten as where is a constant that only depends on , and where we applied lemma [ lem : ls_adapt_precalculation ] with the fact that ] , with small .suppose that for a fixed ] , the collection } , \ \ \ f \in \mathcal{f } , \ \ x \le y \le x+r \right \}\end{aligned}\ ] ] admits an envelope such that for some fixed and depending only on and , where .moreover , suppose that then , for every and , there exist a random variable of such that of lemma [ lem : ls_adapt_vc ] this lemma slightly generalizes lemma a.1 of .its proof proceeds as in with minor modifications , so is omitted for brevity .we remark that here only the collection of functions defined on ], an analogous version of this lemma also holds true by symmetry . of theorem [ thm : ls_adapt_2 ] here we only consider case ( b ) . 
case ( c ) can be handled similarly by symmetry .suppose that ( b ) holds true for every ] , we can proceed as in the proofs of theorem [ thm : ls_triangular_rate ] and corollary [ cor : ls_triangular_uniform_rate_derivative ] to show that } \min\big(\hat{f}_n(t ) - f_0(t),0\big ) = o_p(n^{-1/2})\end{aligned}\ ] ] and } |\hat{f}_n^-(t ) - f_0'(t)| = o_p(n^{-1/2}),\ ] ] where is the left derivative of .therefore , it suffices to only consider the behavior of .the proof can be divided into four parts .a. suppose that .denote by .then because both and are linear on ] .since our assumption in ( iv ) guarantees that , rearranging the terms in the above displayed equation leads to finally , as , we can plug ( [ eq : ls_adapt_2_proof_0 ] ) and ( [ eq : ls_adapt_2_proof_4 ] ) into ( [ eq : ls_adapt_2_proof_6 ] ) to verify first author is grateful to richard samworth for helpful conversations .the second author owes thanks to tony cai and mark low for questions concerning the problems addressed in section 2.3 .he also owes thanks to fadoua balabdaoui and hanna jankowski for conversations and initial work on the problems addressed in sections 2.1 and 2.2 .part of this work was completed while the first author visited the statistics department at the university of washington .the first author was supported by epsrc grant ep / j017213/1 .the second author was supported in part by nsf grant dms-1104832 and by ni - aid grant 2r01 ai291968 - 04 .dmbgen , l. , rufibach , k. and wellner , j. a. ( 2007 ) marshall s lemma for convex density estimation . in _ asymptotics : particles , processes and inverse problems : festschrift for piet groeneboom _ ( e. cator , g. jongbloed , c. kraaikamp , r. lopuha and j.a .wellner , eds . ) , 101107 .institute of mathematical statistics , ohio .groeneboom , p. , jongbloed , g. and wellner , j. a. ( 2001a ) a canonical process for estimation of convex functions : the `` invelope '' of integrated brownian motion ._ , * 29 * , 16201652 .
we prove that the convex least squares estimator ( lse ) attains a pointwise rate of convergence in any region where the truth is linear . in addition , the asymptotic distribution can be characterized by a modified invelope process . analogous results hold when one uses the derivative of the convex lse to perform derivative estimation . these asymptotic results facilitate a new consistent testing procedure on the linearity against a convex alternative . moreover , we show that the convex lse adapts to the optimal rate at the boundary points of the region where the truth is linear , up to a log - log factor . these conclusions are valid in the context of both density estimation and regression function estimation .
image encryption is somehow different from text encryption due to some inherent features of images , such as bulk data capacity and high correlation among pixels .therefore , digital chaotic ciphers like those in and traditional cryptographic techniques such as des , idea and rsa are no longer suitable for practical image encryption , especially for real - time communication scenarios .so far , many chaos - based image cryptosystems have been proposed .the major core of these encryption systems consists of one or several chaotic maps serving the purpose of either just encrypting the image or shuffling the image and subsequently encrypting the resulting shuffled image . in a new image encryption algorithm based on chaotic map lattices has been proposed .the aim of this paper is to assess the security of such cryptosystem .the rest of the paper is organized as follows .section [ section : description ] describes the cryptosystem introduced in .after that , section [ section : designproblems ] points out some design problems inherent to that cryptosystem , and section [ section : attacks ] gives some attacks on the cryptosystem under study .finally , some security enhancements are presented in section [ section : enhance ] followed by the last section , which presents the conclusions .the encryption scheme described in is based on the logistic map given by for a certain value of , the chaotic phase space is ] .5 . set .if , go to step [ algorithm : encryption1 ] ; otherwise the encryption procedure stops for the current color component . after performing the above encryption procedure for all three color components , the three sequences , and make up the ciphertext . as claimed in , the secret key is composed of the following four sub - keys : 1 .the control parameter of the logistic map , i.e. , .2 . the image height and the image width , i.e. , and respectively .3 . the number of chaotic iterations in step [ algorithm : encryption2 ] , i.e. , .4 . the number of cycles , i.e. , .the decryption procedure is similar to the above description , but in an reverse order , and the following inverse map \label{eq : real2integer}\ ] ] is used in the last step to recover the plain - image by converting real numbers back to integer pixel values . for more details about the encryption / decryption procedures ,the reader is referred to .following kerckhoffs principle , the security of a cryptosystem should depend only on its key . for the cryptosystem defined in ,the size of the image to be encrypted determines one of its four secret sub - keys . in a known - plaintext attackwe have access to both the plain image and its encrypted version , which means that we know the size of the image .moreover , in a ciphertext - only attack the value is known and it is possible to get if is known and vice versa .therefore , it is not a good idea to include the size of the image as part of the key , since it does not increase the difficulty to break the cryptosystem .in addition , the control parameter of the logistic map is also part of the key . in chosen in for the sake of the map defined in eq .( [ equation : logistic ] ) being always chaotic . however , the bifurcation diagram of the logistic map ( fig .[ figure : logistic ] ) shows the existence of periodic windows in that interval .it means that a user could choose such that the logistic map would be working in a non - chaotic area , which is not a good security criterium when considering chaotic cryptosystems ( * ? ? 
?* rule 5 ) .hence , it is advisable to give a more detailed definition of the possible values of , so that the user can only choose those values of the control parameter preventing the logistic map from showing a periodic behaviour .finally , the other parts of the key are the number of iterations of the logistic map per pixel ( ) and the number of encryption cycles ( ) . as secret sub - keys, both values should possess a high level of entropy to avoid being guessed by a possible attacker .however , it is not advisable to select large values for and , since it will definitely lead to a very slow encryption speed . on the other hand ,using small values of and reduces the level of security , since those small values do not provide good confusion and diffusion properties .both restrictions imply a reduction of the associated sub - key space and thus they make the brute - force attack more likely to be successful . as a conclusion ,it is convenient to use and as design parameters and not as part of the secret key .this approach has been traditionally followed with respect to the number of encryption rounds in classical schemes such as des or aes .as it happens during the encryption procedure , all the intermediate values obtained through the decryption stage must be inside the phase space .this means that should appear in eq .( 10 ) and ( 11 ) in instead of .having in mind this consideration , the performance of the decryption process will be analyzed in the following .the cryptosystem described in generates a ciphertext consisting of a number of real values .all the operations to encrypt an image are performed using floating - point arithmetic . from section [ section : description ] we know that , where is the resulting value of iterating the logistic map times from .hence , if we want to recover ( the original value of the -th element in the last round ) , we have to iterate times the logistic map from to get and , after that , to substract this value from .however , the resulting value of this previous operation might not match the actual value of , due to the wobbling precision problem that exists when dealing with floating - point operations .this wobbling precision problem also causes the resulting guessed value of to depend on the cryptosystem implementation .therefore , if an image is encrypted on one platform and decrypted on another , and the implementations of floating - point arithmetics on both platforms are not compatible with each other , then the decrypted image might not match the original one . in cryptosystem was implemented using microsoft visual c # .net 2005 and no comment was given about the wobbling precision problem in the decryption process .however , we have experimentally verified that this problem indeed exists when the cryptosystem is implemented using matlab .a very useful measure of the performance of the decryption procedure is the mean square error or mse . for and being a plain image and the decrypted image respectively ,the mse for the color component is defined as where is the number of pixels of the images considered .consequently , for a well designed encryption / decryption scheme the mse should be 0 for each color component .unfortunately , for the cryptosystem under study , the values of mse for all three color components are generally not equal to 0 due to the wobbling precision problem associated to the floating - point arithmetic . 
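a minimal helper for this check (numpy assumed): the per-component mean square error between the plain image and its decrypted version, which should be identically zero for a correctly working scheme; the image arrays are assumed to be of shape (height, width, 3) with 8-bit values.

```python
import numpy as np

def channel_mse(plain, decrypted):
    """mean square error of each colour component: the sum of squared pixel
    differences divided by the number of pixels, as in the definition above."""
    p = plain.astype(np.float64)
    d = decrypted.astype(np.float64)
    return [float(np.mean((p[..., c] - d[..., c])**2)) for c in range(p.shape[-1])]

# usage: mse_r, mse_g, mse_b = channel_mse(img_plain, img_decrypted)
```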
in order to evaluate the underlying decryption error of the cryptosystem defined in , a plain - image `` lena '' , as shown in fig .[ figure : lena ] , was encrypted and decrypted using the same key .the results showed that the three mses obtained for the red , green and blue components of the decrypted image with respect to the original one were 6.49 , 0.018 , 0.057 , respectively . for another key ,the obtained mses were 206.96 , 123.45 , 58.65 , respectively .figure [ fig_error ] shows the decrypted image and the error image when the cryptosystem was implemented in matlab using a third key .the maximum value of in eq . is reached when , which informs that the maximum value of a sequence generated from the iteration of the logistic map is , i.e. , .the ciphertext of the cryptosystem proposed in is composed of real values , each of which is in the range ] .these values of were then estimated from the ciphertexts by applying eqs . and .the estimation errors are shown in fig .[ figure : parameterestimationlena ] .the average estimation error was , whereas the maximum and minimum errors were and , respectively . by increasing the value of from 1 to 3 and keeping the other sub - keys unchanged ,the parameter estimation errors are shown in fig .[ figure : parameterestimationlena_2 ] , being the mean estimation error , the minimum error and the maximum error . and .] and . ] finally , in figs .[ figure : psnr1 ] and [ figure : psnr2 ] the sensitivity of the cryptosystem with respect to the control parameter is shown .this sensitivity is measured using the peak signal to noise ratio ( psnr ) , which is defined for the color component as figure [ figure : psnr1 ] displays the psnrs of the different color components of the decrypted image `` lena '' with respect to the original image `` lena '' for $ ] when the same key is used for encryption and decryption .the values of the other sub - keys are , . on the other hand, figure [ figure : psnr2 ] shows the psnrs when the control parameter used in decryption shows some deviation from that employed in the encryption process .one can see that for a deviation of the control parameter of less than and for a certain range of values of the control parameter , it is possible to recover the original image `` lena '' with a similar psnr to that obtained using the correct control parameter .for instance , for the pnsrs for the red , green and blue components of the recovered `` lena '' are , and , respectively .for the same value of and a parameter estimation error equal to , the psnr of the recovered `` lena '' with respect to the original one is for the red component , for the green and for the blue component .one important feature of a secure encryption scheme is that the encryption speed should not depend on the key value . indeed ,if the time consumed on encryption / decryption is correlated with the value of the key ( or a sub - key ) , then it is possible to approximate that ( sub-)key . this kind of attack is called timing attack . as it has been shown in section [ section : description ] , in every encryption round , step [ algorithm : encryption2 ] is carried out through the iterations of eq ., where is a sub - key .this means that , for a certain number of encryption rounds ( i.e. , a certain value of ) and a certain value of the control parameter , the encryption speed decreases as does .similarly , because the encryption / decryption procedure is composed of repeated cycles , the encryption speed will also become slower if the value of increases . 
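returning to the parameter estimation step described above, its core is the elementary fact that the logistic map attains its maximum r/4 at x = 1/2, so the running maximum of a long chaotic orbit approaches r/4. the sketch below illustrates only this principle on a directly observed orbit with an assumed r and seed; the estimator actually used in the attack additionally accounts for how the ciphertext values are built from the iterates, which is not reproduced here.

```python
import numpy as np

def estimate_r(values):
    """estimate the control parameter from the largest observed value of a
    logistic-map orbit, using max(values) ~= r/4 for a long chaotic orbit."""
    return 4.0*float(np.max(values))

r_true, x = 3.91, 0.37          # assumed control parameter and seed
orbit = np.empty(200000)
for i in range(orbit.size):
    x = r_true*x*(1.0 - x)
    orbit[i] = x

print(estimate_r(orbit))        # very close to 3.91
```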
to be more precise , for a given plain - image, we can expect the existence of the following bi - linear relationship between the encryption / decryption time ( edt ) and the values of and : where corresponds to the common operations consumed on each chaotic iteration , to the operations performed in each cycle excluding those about chaotic iterations , and to those operations performed on the initialization process and the postprocessing after all the cycles are completed .in addition , because is just the control parameter of the chaotic map , it is expected that will be independent of its value . with the aim of verifying this hypothesis ,some experiments have been made under the following scenario .an image with random pixel values of size was encrypted for different values of , and .the encryption time corresponding to each key is shown in fig .[ figure : encryptiontime ] , from which one can see that eq . is verified .the above experimental results ensure the feasibility of a timing attack to a sub - key of the cryptosystem under study : by observing the encryption time , it is possible to estimate the values of if is known and vice versa . without loss of generality , assuming an attacker eve knows the value of , but not that of , let us demonstrate how the timing attack can be performed in practice . in this case , the relationship between edt and the value of can be simplified as , where and .then , if eve gets a temporary access to the encryption ( or decryption ) machine , she can carry out a real timing attack in the following steps : 1 .she observes the whole process of encryption ( or decryption ) to get the encryption ( or decryption ) time and also the size of the ciphertext ( i.e. , the size of the plaintext ) .2 . by choosing two keys with different values of , she encrypts a plaintext ( or decrypts a ciphertext ) of the same size and gets and .she derives the values of and by substituting and into .4 . she estimates the value of to be . 5 .she verifies the estimated value by using it to decrypt the observed ciphertext .if the recovered plaintext is something meaningful , the attack stops ; otherwise , she turns to search the correct value of in a small neighborhood of until a meaningful plaintext is obtained .the above timing attack actually reveals that partial knowledge about the key constitutes useful information to determine the rest of the key .however , such a problem should not exist for a well - designed cryptosystem ( * ? ? ?* rule 7 ) .hence , we reach the conclusion that the cryptosystem proposed in was not well designed . finally , it deserves being mentioned that the linear relationship between the encryption / decryption time and the value of has been implicitly shown in ( * ? ? ?* table i ) . there, for an image of size and equal to , and , the encryption times were observed to be , and seconds , respectively .this clearly showed a linear relationship between the encryption time and the value of .unfortunately , the authors of did not realize that this is a security defect that could be used to develop the timing attack reported in this paper .to overcome the problems of the original cryptosystem , we propose to enhance it by applying the following rules : * use a piecewise linear chaotic map ( pwlcm ) instead of the logistic map for the size of the chaotic phase space being independent with respect to the control parameter value . 
indeed , the chaotic phase space of the pwlcm is ( 0,1 ) for all the values of the control parameter .the pwlcm also has a uniform invariant probability distribution function , which makes impossible to estimate the control parameter through the maximum value of the ciphertext , as we can do for the cryptosystem under study . *the wobbling precision problem should be circumvented by forcing fixed - point computations .a possible solution is to transform the values of the phase space of the chaotic map into integer values , so the encryption and decryption operations are carried out using integer numbers instead of real numbers . * without loss of security , the enhanced cryptosystem should be easy to implement with acceptable cost and speed ( * ? ? ?* rule 3 ) .it is expected that the enhanced cryptosystem can encrypt at least a pixel per iteration to reach high encryption / decryption speed . *the key of the enhanced cryptosystem should be precisely defined ( * ? ? ?* rule 4 ) , and the key space from which valid keys are chosen should be precisely specified and avoid non - chaotic regions ( * ? ? ?* rule 5 ) . this can be assured by choosing the control parameter(s ) of a pwlcm as the secret key , because for every valid control parameter , the behavior of the pwlcm is chaotic .* having in mind today s computer speed , the key space size should be in order to elude brute - force attacks ( * ? ? ?* rule 15 ) . in the encryption scheme defined in color component is encrypted independently from the other color components . nevertheless , the secret key employed in the encryption process of each color component is the same .it is convenient to use a different value of the key for each color component and make the encryption of the three color components dependent on each other , since this implies a considerable increase of the key space .it has been tested that the sensitivity of the pwlcm with respect to the control parameter is around . therefore ,when the control parameter is used as the key of the cryptosystem , the size of the key space will be .nonetheless , if we use a different value of for every color component , and the encryption of each color component depends on the others , the size of the key space will be , which satisfies the security requirement related to the resistance against brute - force attacks .in this paper , some problems of a new image encryption scheme based on chaotic map lattices are reported and two attacks on this cryptosystem have been presented . to overcome these problems and weaknesses , we have introduced some countermeasures to enhance the cryptosystem by following the cryptographical rules listed in .the work described in this paper was partially supported by _ ministerio de educacin y ciencia of spain _ , research grant seg2004 - 02418 and _ ministerio de ciencia y tecnologa _ of spain , research grant tsi2007 - 62657 .shujun li was supported by a research fellowship from the _ alexander von humboldt foundation , germany_. p. c. kocher , `` timing attacks on implementations of diffie - hellman , rsa , dss , and other systems , '' advances in cryptology crypto96 , vol .1109 of _ lecture notes in computer science _ , 104113 ( 1996 ) .
this paper reports a detailed cryptanalysis of a recently proposed encryption scheme based on the logistic map. some problems are emphasized concerning the key space definition and the implementation of the cryptosystem using floating-point operations. it is also shown how the key space can be reduced considerably through a ciphertext-only attack. moreover, a timing attack allows the estimation of part of the key due to the existing relationship between this part of the key and the encryption/decryption time. as a result, the main features of the cryptosystem do not satisfy the demands of secure communications. some hints are offered to improve the cryptosystem under study according to those requirements.

recently a new cryptosystem was proposed by using a chaotic map lattice (cml). in this paper, we analyze the security of this cryptosystem and point out some of its security defects. a number of measures are suggested to enhance the security of the cryptosystem, following some established guidelines on how to design good cryptosystems with chaos.
in an important paper , m. e. j. newman studied a network - based susceptible - infectious - removed ( sir ) epidemic model in which infection is transmitted through a network of contacts between individuals .the contact network itself is a random undirected network with an arbitrary degree distribution of the form studied by newman , strogatz , and watts .given the degree distribution , these networks are maximally random , so they have no small loops and no degree correlations in the limit of a large population . in the stochastic sir model considered by newman , the probability that an infected node makes infectious contact with a neighbor is given by , where is the rate of infectious contact from to and is the time that remains infectious .( we use _ infectious contact _ to mean a contact that results in infection if and only if the recipient is susceptible . )the infectious period is a random variable with the cumulative distribution function ( cdf ) , and the infectious contact rate is a random variable with the cdf .the infectious periods for all individuals are independent and identically distributed ( iid ) , and the infectious contact rates for all ordered pairs of individuals are iid . under these assumptions ,newman claimed that the spread of disease on the contact network is exactly isomorphic to a bond percolation model on the contact network with bond occupation probability equal to the _ a priori _ probability of disease transmission between any two connected nodes in the contact network .this probability is called the _transmissibility _ and denoted by : newman used this bond percolation model to derive the distribution of finite outbreak sizes , the critical transmissibility that defines the epidemic ( i.e. , percolation ) threshold , and the probability and relative final size of an epidemic ( i.e. , an outbreak that never goes extinct ) . as a counterexample ,consider a contact network where each subject has exactly two contacts .assume that ( i ) with probability and with probability and ( ii ) with probability one for all . under the sir model ,the probability that the infection of a randomly chosen node results in an outbreak of size one is , which is the sum of the probability that and the probability that and disease is not transmitted to either contact . under the bond percolation model ,the probability of a cluster of size one is , corresponding to the probability that neither of the bonds incident to the node are occupied . since the bond percolation model correctly predicts the probability of an outbreak of size one only if or .when the infectious period is not constant , it underestimates this probability .the supremum of the error is , which occurs when and . in this limit, the sir model corresponds to a site percolation model rather than a bond percolation model . when the distribution of infectious periods is nondegenerate , there is no bond occupation probability that will make the bond percolation model isomorphic to the sir model . to see why ,suppose node has infectious period and degree in the contact network . 
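a quick numerical check of the counterexample above can be done by monte carlo. the parameter values below (contact rate 1, an infectious period equal to 0 or 2 with equal probability, and exactly two contacts per node) are illustrative stand-ins, since the paper's specific values are not preserved in this extraction:

```python
import math
import random

def p_size_one_sir(r=1.0, taus=(0.0, 2.0), k=2, trials=200_000):
    """probability that the index case infects none of its k contacts in the SIR model:
    draw an infectious period tau, then each contact is infected w.p. 1 - exp(-r*tau),
    conditionally independently given tau."""
    hits = 0
    for _ in range(trials):
        tau = random.choice(taus)
        p = 1.0 - math.exp(-r * tau)
        hits += all(random.random() >= p for _ in range(k))
    return hits / trials

def p_size_one_bond(r=1.0, taus=(0.0, 2.0), k=2):
    """bond-percolation prediction (1 - T)^k, with T the marginal transmissibility."""
    T = sum(1.0 - math.exp(-r * t) for t in taus) / len(taus)
    return (1.0 - T) ** k
```

with these illustrative values the sir estimate is about 0.509, while the bond-percolation prediction is about 0.322, so the underestimate of the probability of an outbreak of size one is clearly visible.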
in the epidemic model , the conditional probability that transmits infection to a neighbor in the contact network given is since the contact rate pairs for all edges incident to are iid , the transmission events across these edges are ( conditionally ) independent bernoulli( ) random variables ; but the transmission probabilities are strictly increasing in , so the transmission events are ( marginally ) dependent unless with probability one for some fixed .in contrast , the bond percolation model treats the infections generated by node as ( marginally ) independent bernoulli( ) random variables regardless of the distribution of .neither counterexample assumes anything about the global properties of the contact network , so newman s claim can not be justified as an approximation in the limit of a large network with no small loops . in section 2, we define a semi - directed random network called the _ epidemic percolation network _ and show how it can be used to predict the outbreak size distribution , the epidemic threshold , and the probability and final size of an epidemic in the limit of a large population for any time - homogeneous sir model . in section 3 ,we show that the network - based stochastic sir model from can be analyzed correctly using a semi - directed random network of the type studied by bogu and serrano . in section 4 ,we show that it predicts the same epidemic threshold , mean outbreak size below the epidemic threshold , and relative final size of an epidemic as the bond percolation model . in section 5, we show that the bond percolation model fails to predict the distribution of outbreak sizes and the probability of an epidemic when the distribution of infectious periods is nondegenerate . in section 6 , we compare predictions made by epidemic percolation networks and bond percolation models to the results of simulations . in an appendix, we define epidemic percolation networks for a very general time - homogeneous stochastic sir model and show that their out - components are isomorphic to the distribution of possible outcomes of the sir model for any given set of imported infections .consider a node with degree in the contact network and infectious period . in the sir model defined above , the number of people who will transmit infection to if they become infectious has a binomial( ) distribution regardless of .if is infected along one of the edges , then the number of people to whom will transmit infection has a binomial( ) distribution . in order to produce the correct joint distribution of the number of people who will transmit infection to and the number of people to whom will transmit infection, we represent the former by directed edges that terminate at and the latter by directed edges that originate at .since there can be at most one transmission of infection between any two persons , we replace pairs of directed edges between two nodes with a single undirected edge .starting from the contact network , a single realization of the _ epidemic percolation network _ can be generated as follows : 1 .choose a recovery period for every node in the network and choose a contact rate for every ordered pair of connected nodes and in the contact network .2 . 
for each pair of connected nodes and in the contact network , convert the undirected edge between them to a directed edge from to with probability , to a directed edge from to with probability , and erase the edge completely with probability .the edge remains undirected with probability .the epidemic percolation network is a semi - directed random network that represents a single realization of the infectious contact process for each connected pair of nodes , so possible percolation networks exist for a contact network with edges .the probability of each possible network is determined by the underlying sir model .the epidemic percolation network is very similar to the locally dependent random graph defined by kuulasmaa for an epidemic on a -dimensional lattice .there are two important differences : first , the underlying structure of the contact network is not assumed to be a lattice .second , we replace pairs of ( occupied ) directed edges between two nodes with a single undirected edge so that its component structure can be analyzed using a generating function formalism . in the appendix, we prove that the size distribution of outbreaks starting from any node in a time - homogeneous stochastic sir model is identical to the distribution of its out - component sizes in the corresponding probability space of percolation networks .since this result applies to any time - homogeneous sir model , it can be used to analyze network - based models , fully - mixed models ( see ) , and models with multiple levels of mixing . in this section ,we briefly review the structure of directed and semi - directed networks as discussed in . in the next section ,we relate this to the possible outcomes of an sir model .the _ indegree _ and _ outdegree _ of node are the number of incoming and outgoing directed edges incident to . since each directed edge is an outgoing edge for one node and an incoming edge for another node , the mean indegree and outdegree are equal .the _ undirected degree _ of node is the number of undirected edges incident to .the _ size _ of a component is the number of nodes it contains and its _ relative size _ is its size divided by the total size of the network . the _ out - component _ of node includes and all nodes that can be reached from by following a series of edges in the proper direction ( undirected edges are bidirectional ) .the _ in - component _ of node includes and all nodes from which can be reached by following a series of edges in the proper direction . by definition ,node is in the in - component of node if and only if is in the out - component of . therefore , the mean in- and out - component sizes in any ( semi-)directed network are equal .the _ strongly - connected component _ of a node is the intersection of its in- and out - components ; it is the set of all nodes that can be reached from node and from which node can be reached .all nodes in a strongly - connected component have the same in - component and the same out - component .the _ weakly - connected component _ of node is the set of nodes that are connected to when the direction of the edges is ignored . for giant components, we use the definitions given in .giant components have asymptotically positive relative size in the limit of a large population . 
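a minimal sketch of the two-step construction described above, for the model in which node i would transmit to a neighbour j with probability 1 - exp(-r*tau_i) given its infectious period tau_i. sampling the two directions of an edge independently given the recovery periods reproduces the four cases (forward, backward, erased, undirected); this is our reading of the construction, written for illustration rather than transcribed from the paper:

```python
import math
import random

def percolation_network(contact_edges, tau, rate=1.0):
    """one realization of the epidemic percolation network.
    contact_edges: iterable of undirected contact pairs (i, j)
    tau: dict mapping node -> sampled infectious period
    returns (directed_edges, undirected_edges)."""
    directed, undirected = [], []
    for i, j in contact_edges:
        p_ij = 1.0 - math.exp(-rate * tau[i])   # i would infect j if i were infected
        p_ji = 1.0 - math.exp(-rate * tau[j])   # j would infect i if j were infected
        forward = random.random() < p_ij
        backward = random.random() < p_ji
        if forward and backward:
            undirected.append((i, j))           # transmission possible in both directions
        elif forward:
            directed.append((i, j))
        elif backward:
            directed.append((j, i))
        # otherwise the edge is erased completely
    return directed, undirected
```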
all other components are small " in the sense that they have asymptotically zero relative size .there are two phase transitions in a semi - directed network : one where a unique giant weakly - connected component ( gwcc ) emerges and another where unique giant in- , out- , and strongly - connected components ( gin , gout , and gscc ) emerge .the gwcc contains the other three giant components .the gscc is the intersection of the gin and the gout , which are the common in- and out - components of nodes in the gscc ._ tendrils _ are components in the gwcc that are outside the gin and the gout ._ tubes _ are directed paths from the gin to the gout that do not intersect the gscc .all tendrils and tubes are small components .a schematic representation of these components is shown in figure ( [ bowtie ] ) .an _ outbreak _ begins when one or more nodes are infected from outside the population .these are called _ imported infections_. the _ final size _ of an outbreak is the number of nodes that are infected before the end of transmission , and its _ relative final size _ is its final size divided by the total size of the network . in the epidemic percolation network ,the nodes infected in the outbreak can be identified with the nodes in the out - components of the imported infections .this identification is made mathematically precise in the appendix .informally , we define a _ self - limited outbreak _ to be an outbreak whose relative final size approaches zero in the limit of a large population and an _ epidemic _ to be an outbreak whose relative final size is positive in the limit of a large population . there is a critical transmissibility that defines the _ epidemic threshold : _ the probability of an epidemic is zero when , and the probability and final size of an epidemic are positive when . if all out - components in the epidemic percolation network are small , then only self - limited outbreaks are possible .if the percolation network contains a gscc , then any infection in the gin will lead to the infection of the entire gout .therefore , the epidemic threshold corresponds to the emergence of the gscc in the percolation network .for any finite set of imported infections , the probability of an epidemic is equal to the probability that at least one imported infection occurs in the gin .the relative final size of an epidemic is equal to the proportion of the network contained in the gout .although some nodes outside the gout may be infected ( e.g. nodes in tendrils and tubes ) , they constitute a finite number of small components whose total relative size is asymptotically zero .it is possible to define epidemic percolation networks for a much larger class of stochastic sir epidemic models than the one from .first , we specify an sir model using probability distributions for recovery periods in individuals and times from infection to infectious contact in ordered pairs of individuals .second , we outline time - homogeneity assumptions under which the epidemic percolation network is well - defined .finally , we define infection networks and use them to show that the final outcome of the sir model depends only on the set of imported infections and the epidemic percolation network .suppose there is a closed population in which every susceptible person is assigned an index .a susceptible person is infected upon infectious contact , and infection leads to recovery with immunity or death . 
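once a realization of the percolation network is in hand, the final outcome for a given set of imported infections is just the union of their out-components. a sketch using networkx, with undirected edges represented as reciprocal arcs (an assumed representation chosen for convenience):

```python
import networkx as nx

def outbreak_set(directed_edges, undirected_edges, imported):
    """nodes infected in the outbreak = imported infections plus their out-components."""
    g = nx.DiGraph()
    g.add_edges_from(directed_edges)
    for i, j in undirected_edges:        # an undirected edge acts as arcs in both directions
        g.add_edge(i, j)
        g.add_edge(j, i)
    infected = set(imported)
    for seed in imported:
        if seed in g:
            infected |= nx.descendants(g, seed)
    return infected
```

repeating this over many realizations gives the outbreak-size distribution, and the fraction of realizations in which the outbreak set reaches a positive fraction of the network estimates the probability of an epidemic for that set of imported infections.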
each person is infected at his or her _ infection time _ , with if is never infected .person is removed ( i.e. , recovers from infectiousness or dies ) at time , where the _ recovery period _ is a random sample from a probability distribution . the recovery period may be the sum of a _ latent period _ , when is infected but not yet infectious , and an _ infectious period _ , when can transmit infection .we assume that all infected persons have a finite recovery period .let be the set of susceptible individuals at time .let be the order statistics of , and let be the index of the person infected .when person is infected , he or she makes infectious contact with person after an _ infectious contact interval _ .each is a random sample from a conditional probability density .let if person never makes infectious contact with person , so has a probability mass concentrated at infinity .person can not transmit disease before being infected or after recovering , so for all and all .the _ infectious contact time _ is the time at which person makes infectious contact with person . if person is susceptible at time , then infects and . if , then because person avoids infection at only if he or she has already been infected .for each person , let his or her _ importation time _ be the first time at which he or she experiences infectious contact from outside the population , with if this never occurs .let be the cumulative distribution function of the importation time vector .first , an importation time vector is chosen .the epidemic begins with the introduction of infection at time .person is assigned a recovery period .every person is assigned an infectious contact time .we assume that there are no tied infectious contact times less than infinity .the second infection occurs at , which is the time of the first infectious contact after person is infected .person is assigned a recovery period .after the second infection , each of the remaining susceptibles is assigned an infectious contact time .the third infection occurs at , and so on .after infections , the next infection occurs at .the epidemic stops after infections if and only if . in principle , the above epidemic algorithm could allow the infectious period and outgoing infectious contact intervals for individual to depend on all information about the epidemic available up to time . in order to generate an epidemic percolation network, we must ensure that the joint distributions of recovery periods and infectious contact intervals are defined _a priori_. 
the following restrictions are sufficient : 1 .we assume that the distribution of the recovery period vector does not depend on the importation time vector , the contact interval matrix ] , and can be calculated with arbitrary precision by iterating equations ( [ hf , out ] ) and ( [ hu , out ] ) starting from initial values .estimates of and can be used to estimate with arbitrary precision .the expected size of the out - component of a randomly chosen node below the epidemic threshold is .taking derivatives in ( [ hout ] ) yields taking derivatives in equations ( [ hf , out ] ) and ( [ hu , out ] ) and using the fact that below the epidemic threshold yields a set of linear equations for and .these can be solved to yield and where the argument of all derivatives is .the in - component size distribution of a semi - directed network can be derived using the same logic used to find the out - component size distribution , except that we consider going backwards along directed edges .let be the pgf for the size of the in - component at the beginning of a directed edge , be the pgf for the size of the in - component at the beginning " of an undirected edge , and be the pgf for the in - component size of a randomly chosen node .then , in the limit of a large population , [ hur , in] the probability that a node has a finite in - component is , so the probability that a randomly chosen node is in the gout is .the expected size of the in - component of a randomly chosen node is .power series and numerical estimates for , , and can be obtained by iterating these equations .the expected size of the out - component of a randomly chosen node below the epidemic threshold is .taking derivatives in equation ( [ hin ] ) yields taking derivatives in equations ( [ hr , in ] ) and ( [ hu , in ] ) and using the fact that in a subcritical network yields and where the argument of all derivatives is . the epidemic threshold occurs when the expected size of the in- and out - components in the network becomes infinite .this occurs when the denominators in equations ( [ hfout(1 ) ] ) and ( [ huout(1 ) ] ) and equations ( [ hrin(1 ) ] ) and ( [ huin(1 ) ] ) approach zero . 
from the definitions of , and , both conditions are equivalent to therefore, there is a single epidemic threshold where the gscc , the gin , and the gout appear simultaneously in both purely directed networks and semi - directed networks .a node is in the gscc if its in- and out - components are both infinite .a randomly chosen node has a finite in - component with probability and a finite out - component with probability .the probability that a node reached by following an undirected edge has finite in- and out - components is the solution to the equation and the probability that a randomly chosen node has finite in- and out - components is .thus , the relative size of the gscc is this section , we prove that the in - component size distribution of the epidemic percolation network for the sir model from is identical to the component size distribution of the bond percolation model with bond occupation probability .the probability generating function for the total number of incoming and undirected edges incident to any node is which is independent of .if node has degree in the contact network , then the number of nodes we can reach by going in reverse along a directed edge or an undirected edge has a binomial distribution regardless of .if we reach node by going backwards along edges , the number of nodes we can reach from by continuing to go backwards ( excluding the node from which we arrived ) has a binomial distribution . therefore , the in - component of any node in the percolation network is exactly like a component of a bond percolation model with occupation probability .this argument was used to justify the mapping from an epidemic model to a bond percolation model in , but it does not apply to the out - components of the epidemic percolation network .methods of calculating the component size distribution of an undirected random network with an arbitrary degree distribution using the pgf of its degree distribution were developed by newman _et al_. .these methods were used to analyze the bond percolation model of disease transmission , obtaining results similar to those obtained by andersson for the epidemic threshold and the final size of an epidemic . in this paragraph , we review these results and introduce notation that will be used in this section . let be the pgf for the degree distribution of the contact network .then the pgf for the degree of a node reached by following an edge ( excluding the edge used to reach that node ) is , where is the mean degree of the contact network . with bond occupation probability , the number of occupied edges adjacent to a randomly chosen node has the pgf and the number of occupied edges from which infection can leave a node that has been infected along an edge has the pgf .the pgf for the size of the component at the end of an edge is and the pgf for the size of the component of a randomly chosen node is the proportion of the network contained in the giant component is , and the mean size of components below the percolation threshold is . and can be expanded as power series to any desired degree by iterating equations ( [ h1 ] ) and ( [ h0 ] ) , and their value for any fixed ] . since is convex , by jensen s inequality .equality holds only if , , is constant , or is constant . since is the solution to we must have .this can be seen by fixing and considering the graphs of and . is the value of at which intersects the line . is the value of at which intersects the line . since , we must have . 
for all ] , so the in- and out - component size distributions are identical and the probability and final size of an epidemic are equal .when the infectious period has a nondegenerate distribution and the percolation network is subcritical , for all ( so the in- and out - components have dissimilar size distributions ) but ( so the probability and final size of an epidemic are both zero ) .if the network is supercritical and the infectious period is nonconstant , for all $ ] , so in- and out - components have dissimilar size distributions and the probability of an epidemic is strictly less than its final size .since the bond percolation model predicts the distribution of in - component sizes , it can not predict the distribution of out - component sizes or the probability of an epidemic for any sir model with a nonconstant infectious period .however , it does establish an upper limit for the probability of an epidemic in an sir model .we have recently become aware of independent work that shows similar results for more general sources of variation in infectiousness and susceptibility in a model where these are independent and uses jensen s inequality to establish a lower bound for the probability and final size of an epidemic .the lower bound corresponds to a site percolation model with site occupation probability , which is the model that minimized the probability of no transmission in the introduction .in a series of simulations , the bond percolation model correctly predicted the mean outbreak size ( below the epidemic threshold ) , the epidemic threshold , and the final size of an epidemic . in section 4 , we showed that the epidemic percolation network generates the same predictions for these quantities . in newman s simulations , the contact network had a power - law degree distribution with an exponential cutoff around degree , so the probability that a node has degree is proportional to for all .this distribution was chosen to reflect degree distributions observed in real - world networks .the probability generating function for this degree distribution is where is the -polylogarithm of .in , newman used . in our simulations , we retained the same contact network but used a contact model adapted from the counterexample in the introduction .we fixed for all and let with probability and with probability for all .the predicted probability of an outbreak of size one is in the epidemic percolation network and in the bond percolation model .the predicted probability of an epidemic is in the epidemic percolation network and in the bond percolation model . 
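for reference, the bond-percolation predictions for a degree distribution of this kind can be computed by iterating the standard generating-function relations (our paraphrase of the usual newman formalism, not the paper's stripped equations): the probability u that an edge fails to lead to the giant component satisfies u = 1 - T + T G1(u), and the relative size of the giant component is 1 - G0(u). the exponent and cutoff are left as parameters below because the values used in the paper are not preserved in this extraction:

```python
import numpy as np

def degree_pgfs(alpha, kappa, kmax=2000):
    """G0 and G1 for p_k proportional to k**(-alpha) * exp(-k/kappa), k >= 1 (truncated at kmax)."""
    k = np.arange(1, kmax + 1)
    p = k ** (-alpha) * np.exp(-k / kappa)
    p /= p.sum()
    G0 = lambda x: float(np.sum(p * x ** k))
    G1 = lambda x: float(np.sum(p * k * x ** (k - 1)) / np.sum(p * k))
    return G0, G1

def critical_transmissibility(alpha, kappa, kmax=2000):
    """T_c = <k> / (<k^2> - <k>), the epidemic (percolation) threshold."""
    k = np.arange(1, kmax + 1)
    p = k ** (-alpha) * np.exp(-k / kappa)
    p /= p.sum()
    mean_k, mean_k2 = np.sum(p * k), np.sum(p * k ** 2)
    return float(mean_k / (mean_k2 - mean_k))

def epidemic_size_bond(T, alpha, kappa, iters=1000):
    """relative final size of an epidemic predicted by the bond percolation mapping."""
    G0, G1 = degree_pgfs(alpha, kappa)
    u = 0.5                               # prob. an edge fails to lead to the giant component
    for _ in range(iters):
        u = 1.0 - T + T * G1(u)           # fixed-point iteration; converges from any start in [0, 1)
    return 1.0 - G0(u)
```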
in all simulations ,an epidemic was declared when at least persons were infected ( this low cutoff produces a slight overestimate of the probability of an epidemic in the simulations , favoring the bond percolation model ) .figures [ k10p1 ] and [ k20p1 ] show that percolation networks accurately predicted the probability of an outbreak of size one for all combinations , whereas the bond percolation model consistently underestimated these probabilities .figures [ k10pepi ] and [ k20pepi ] show that the bond percolation model significantly overestimated the probability of an epidemic for all combinations .the percolation network predictions were far closer to the observed values .for any time - homogeneous sir epidemic model , the problem of analyzing its final outcomes can be reduced to the problem of analyzing the components of an epidemic percolation network .the distribution of outbreak sizes starting from a node is identical to the distribution of its out - component sizes in the probability space of percolation networks .calculating this distribution may be extremely difficult for a finite population , but it simplifies enormously in the limit of a large population for many sir models . for a single randomly chosen imported infection in the limit of a large population ,the distribution of self - limited outbreak sizes is equal to the distribution of small out - component sizes and the probability of an epidemic is equal to the relative size of the gin . for any finite set of imported infections ,the relative final size of an epidemic is equal to the relative size of the gout . in this paper , we used epidemic percolation networks to reanalyze the sir epidemic model studied in .the mapping to a bond percolation model correctly predicts the distribution of in - component sizes , the critical transmissibility , and the final size of an epidemic .however , it fails to predict the correct distribution of outbreak sizes and overestimates the probability of an epidemic when the infectious period is nonconstant .since all known infectious diseases have nonconstant infectious periods and heterogeneity in infectiousness has important consequences in real epidemics , it is important to be able to analyze such models correctly .the exact finite - population isomorphism between a time - homogeneous sir model and our semi - directed epidemic percolation network is not only useful because it provides a rigorous foundation for the application of percolation methods to a large class of sir epidemic models ( including fully - mixed models as well as network - based models ) , but also because it provides further insight into the epidemic model .for example , we used the mapping to an epidemic percolation network to show that the distribution of in- and out - component sizes in the sir model from could be calculated by treating the incoming and outgoing infectious contact processes as separate directed percolation processes , as in .however , in contrast with , the semi - directed epidemic percolation network isolates the fundamental role of the gscc in the emergence of epidemics .the design of interventions to reduce the probability and final size of an epidemic is a central concern of infectious disease epidemiology . in a forthcoming paper , we analyze both fully - mixed and network - based sir models in which vaccinating those nodes most likely to be in the gscc is shown to be the most effective strategy for reducing both the probability and final size of an epidemic . 
if the incoming and outgoing contact processes are treated separately , the notion of the gscc is lost .* acknowledgments : * _ this work was supported by the us national institutes of health cooperative agreement 5u01gm076497 models of infectious disease agent study `` ( e.k . ) and ruth l. kirchstein national research service award 5t32ai007535 epidemiology of infectious diseases and biodefense '' ( e.k . ) , as well as a research grant from the institute for quantitative social sciences at harvard university ( e.k . ) .joel c. miller s comments on the proofs in sections 3 and 4 were extremely valuable , and we are also grateful for the comments of marc lipsitch , james h. maguire , and the anonymous referees of pre .e.k . would also like to thank charles larson and stephen p. luby of the health systems and infectious diseases division at icddr , b ( dhaka , bangladesh ) ._ 99 m. e. j. newman ( 2002 ) .spread of epidemic disease on networks ._ physical review e _ 66 , 016128 .m. e. j. newman , s. h. strogatz , and d. j. watts ( 2001 ) .random graphs with arbitrary degree distributions and their applications , _ physical review e _ 64 * * , * * 026118 .m. bogu and m. a. serrano ( 2005 ) .generalized percolation in random directed networks . _ physical review e _ 72 , 016106 . l. a. meyers , m. e. j. newman , and b. pourbohloul ( 2006 ) . predicting epidemics on directed contact networks ._ journal of theoretical biology _240(3 ) : 400 - 418 .k. kuulasmaa ( 1982 ) . the spatial general epidemic and locally dependent random graphs ._ journal of applied probability _19(4 ) : 745 - 758 .e. kenah and j. robins ( 2007 ) .network - based analysis of stochastic sir epidemic models with random and proportionate mixing .arxiv : q-bio.qm/0702027 .a. broder , r. kumar , f. maghoul , p. raghavan , s. rajagopalan , r. stata , a. tomkins , and j. weiner ( 2000 ) .graph structure in the web ._ computer networks _ 33 : 309 - 320 .dorogovtsev , j.f.f .mendes , and a.n .sakhunin ( 2001 ) .giant strongly connected component of directed networks ._ physical review e , _ 64 : 025101(r ) .n. schwartz , r. cohen , d. ben - avraham , a .-barabsi , and s. havlin ( 2002 ) .percolation in directed scale - free networks ._ physical review e _ 66 , 015104(r ) .h. andersson and t. britton ( 2000 ) ._ stochastic epidemic models and their statistical analysis _( lecture notes in statistics v.151 ) .new york : springer - verlag .o. diekmann and j. a. p. heesterbeek ( 2000 ) ._ mathematical epidemiology of infectious diseases : model building , analysis and interpretation_. chichester ( uk ) : john wiley & sons .l. m. sander , c. p. warren , i. m. sokolov , c. simon , and j. koopman ( 2002 ) .percolation on heterogeneous networks as a model for epidemics ._ mathematical biosciences _ 180 , 293 - 305 .r. albert and a - l .barabsi ( 2002 ) .statistical mechanics of complex networks ._ reviews of modern physics , _ 74 : 47 - 97 .m. e. j. newman ( 2003 ) . the structure and function of complex networks ._ siam reviews , _45(2 ) : 167 - 256 .m. e. j. newman ( 2003 ) .random graphs as models of networks , in _ handbook of graphs and networks , _ pp .s. bornholdt and h. g. schuster , eds .berlin : wiley - vch , 2003 .newman , a .-barabsi , and d.j .watts ( 2006 ) . _ the structure and dynamics of networks _( princeton studies in complexity ) .princeton : princeton university press .h. andersson .limit theorems for a random graph epidemic model ( 1998 ) . _the annals of applied probability_ 8(4 ) : 1331 - 1349 .k. kuulasmaa and s. 
zachary ( 1984 ) . on spatial general epidemics and bond percolation processes ._ journal of applied probability _21(4 ) : 911 - 914 .j. miller ( 2007 ) . predicting the size and probability of epidemics in populations with heterogeneous infectiousness and susceptibility ._ physical review e _ 76 : 010101(r ) .m. lipsitch , t. cohen , b. cooper , j. m. robins , s. ma , l. james , g. gopalakrishna , s. k. chew , c. c. tan , m. h. samore , d. fishman , and m. murray ( 2003 ) . transmission dynamics and control of severe acute respiratory syndrome ._ science _ 300 : 1966 - 1970 .s. riley , c. fraser , c. a. donnelly , a. c. ghani , l. j. abu - raddad , a. j. hedley , g. m. leung , l .- m .lam , t. q. thach , p. chau , k .-chan , s .- v .lo , p - y leung , t. tsang , w. ho , k .- h .lee , e. m. c. lau , n. m. ferguson , and r. m. anderson ( 2003 ) .transmission dynamics of the etiological agent of sars in hong kong : impact of public health interventions . _ science _ 300 : 1961 - 1966 . c. dye and n. gay . modeling the sars epidemic ( 2003 ) ._ science _ 300 : 1884 - 1885 .j. wallinga and p. teunis ( 2004 ) .different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures ._ american journal of epidemiology , _160(6 ) : 509 - 516 .
in an important paper , m.e.j . newman claimed that a general network - based stochastic susceptible - infectious - removed ( sir ) epidemic model is isomorphic to a bond percolation model , where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor . in this paper , we show that this isomorphism is incorrect and define a semi - directed random network we call the _ epidemic _ _ percolation network _ that is exactly isomorphic to the sir epidemic model in any finite population . in the limit of a large population , ( i ) the distribution of ( self - limited ) outbreak sizes is identical to the size distribution of ( small ) out - components , ( ii ) the epidemic threshold corresponds to the phase transition where a giant strongly - connected component appears , ( iii ) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in - component , and ( iv ) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out - component . for the sir model considered by newman , we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold , the same epidemic threshold , and the same final size of an epidemic as the bond percolation model . however , the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution . we confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations . in an appendix , we show that an isomorphism to an epidemic percolation network can be defined for any time - homogeneous stochastic sir model .
in a previous note we have obtained a solution of the lamb integral equation using a formalism based on the fractional derivatives .our result coincides with the solution provided , without any proof , by bateman .a generalization of our method has been proposed in , where fractional derivatives are used to solve a more complicated version of the lamb equation . in this notewe discuss a further extension of the technique put forward in and we will develop a more general procedure to treat integral equations whose solution requires fractional forms of differential operators other than the ordinary derivative . to this aim we remind that the exponential operator is a dilatation operator whose action on a given function is ( see ref . ) let us now consider the operator with .the use of the following property of the laplace transform allows to write and , according to the previous discussion , we find which is essentially the generalization of the riemann - liouville integral representation for non - integer negative powers of the operator .the procedure outlined in sec .[ sec : intro ] can be generalized to other differential forms .the first example we consider is the following modified form of the lamb - bateman equation where is the function to be determined , and is a continuous , free from singularities , known function .according to the properties of the dilatation operator , we can cast eq . in the form and , treating the derivative operator as a generic constant , the evaluation of the the gaussian integral yields by rewriting the operator as , and taking into account eq ., we get = 0 $ ] , with generic function that admits a power series expansion , one has : . ] whose correctness as solution of eq. has been checked numerically . as a further example of integral equation which does not involve fractional operators but requires the use of the dilatation operator , we consider the following problem where is the unknown function . in this casewe get and , thus in the case , we obtain the solution in terms of 0-th order bessel functions of first kind , namely we obtain the following identity . ] in general , if is any function which can be expanded as the solution writes it is interesting to note that in the case in which the equation is of the type the relevant solution can be written in terms of bessel - like functions as follows where is the bessel - wright function of order .the same result holds for the integer replaced by any real .we go back to fractional operational calculus by considering the following example of integral equation which is apparently more complicated than the original lamb - bateman equation , but , as will be shown later , in spite of its different form , it represents the same mathematical problem .also in this case the solution can be obtained using the previously outlined shift operator technique .we first recall the identity and therefore , as a straightforward consequence of our procedure , we find the use of the same logical steps leading to eq .yields the following expression for the solution of eq . examples discussed so far show that the method we have proposed is fairly powerful and is amenable of useful generalizations . 
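as a compact illustration of the operational rules used above (written from the standard shift-operator identities rather than transcribed from the stripped formulas), the exponential of x d/dx acts as a dilatation, and the classical lamb-bateman equation is solved by treating the derivative as a constant inside a gaussian integral:

```latex
% dilatation and shift operators (standard identities, quoted for illustration)
e^{\lambda\, x \partial_x} f(x) = f\!\left(e^{\lambda} x\right), \qquad
e^{-a\,\partial_x} f(x) = f(x-a).
% classical Lamb-Bateman equation: \int_0^\infty u(x-y^2)\,dy = f(x).
% Writing u(x-y^2) = e^{-y^2 \partial_x} u(x) and evaluating the Gaussian integral
% with \partial_x treated as a constant,
\int_0^\infty e^{-y^2 \partial_x}\,dy \; u(x)
   = \frac{\sqrt{\pi}}{2}\,\partial_x^{-1/2}\,u(x) = f(x)
\quad\Longrightarrow\quad
u(x) = \frac{2}{\sqrt{\pi}}\,\partial_x^{1/2} f(x).
```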
a further example supporting the usefulness of our techniqueis offered by the calculation of the integral \ ; \qquad\qquad \qquad \left(\re \nu\,\geq\,\frac{1}{2}\right)\;.\ ] ] the shift operator method outlined in the introductory remarks allows to write this integral as follows the operator can be defined explicitly by using the following result that specified to the case , yields by substituting to its taylor expansion , and assuming that the sum and the operator can be interchanged , we get we have checked the correctness of this result performing a direct numerical integration of eq . .the behavior of the functions for different values of is shown in fig . [ fig : funs ] .the procedure employed for the evaluation of the integral is quite general and can be extended to any integral of the type in fact , writing this integral as follows ^{x\,\partial_x}\,f(x)\;,\ ] ] we can calculate ^\mu\;,\ ] ] and , thus a complete theory of the generalized shift operators has been presented in ref . , where , among the other things , the following identity has been proved \quad \quad f(x ) \,=\ , \int^x\,\mathrm{d}\xi\,\frac{1}{q(\xi)}\;,\ ] ] which is a generalization of the ordinary shift operator ( the examples discussed before are just a particular case of this formula ) . according to this identity and to the methods here discussed ,it is easy to show that the solution of the following integral equation \,=\ , f(x)\ ] ] can be written as in a forthcoming investigation we will discuss the conditions under which the previous solution holds and some examples of applications of the obtained identities . before closing this paperwe consider the following integral equation by using eq . , it is easy to show that ( with ) . ] and , therefore , for the solution of eq .we obtain in this paper we have studied the properties of differential operators like and their fractional generalizations .it is well known that , for integer different from zero , the following identity holds where are the stirling number of second kind .the extension of this equation to the non - integer powers of does not exist .we make the conjecture that in this case eq . can be modified as follows the numerical check confirms the validity of our conjecture , but we did not succeed in getting a rigorous proof for it .eq . , as well as other identities presented in this paper , will be more throughly discussed in a more general forthcoming paper .
we present an extension of a previously developed method employing the formalism of fractional derivatives to solve new classes of integral equations. this method uses different forms of integral operators that generalize the exponential shift operator.
genetic regulatory networks exhibit a wide range of oscillatory phenomena , ranging from the fast oscillations in calcium to the 24 hour cycle of circadian clocks .the occurrence of oscillations is generally caused by the presence of a negative feedback loop in the regulatory network . from a theoretical point of view, negative feedback can cause a hopf bifurcation and thus generate a transition between a stable fixed point , corresponding to homeostasis , and an attracting limit cycle , corresponding to oscillations .however , in real regulatory networks , the loop causing oscillations is usually embedded in a larger network including multiple positive and negative feedbacks . in some cases ,these additional loops have a demonstrable biological function , for example in giving tunability to the oscillation period or in stabilizing the period of circadian clocks in the presence of temperature fluctuations or molecular noise . in general, one expects that multiple feedback loops could lead to non - trivial behaviors from the viewpoint of bifurcation theory .the dynamics could become even richer when the noise induced by stochastic gene expression is taken into account .as we discuss later , circadian clocks are typical examples of genetic circuits where it may be important to understanding these effects . in this paper , we present and analyze a class of network motifs in which a two - node positive feedback motif is inserted into a three - node negative feedback loop , as represented in fig . [ pattern ] . in section 2.1 , we show that a deterministic dynamical models of the simplest of such networks , in a suitable parameter range , exhibits co - existence of a stable fixed point and a stable limit cycle .we explain this behavior in terms of a saddle - node separatrix - loop bifurcation , and show that it results in a diverging oscillation period close to the bifurcation point .in section 2.2 , we use stochastic simulations using the gillespie algorithm to demonstrate that the noise can make the system switch between oscillatory state and the stationary state .we show that similar behaviour occurs in more realistic models of circadian clocks that contain the same combination of positive and negative feedback loops .section 3 summarizes our thoughts on the relevance of these results for the behaviour of circadian clocks .we study the class of genetic networks represented in fig .[ pattern ] . in each of the four networks ,a positive feedback between node 1 and node 2 can give rise to a bi - stable switch .when node 3 is introduced , node 1 , 2 , 3 together form a negative feedback loop .the negative feedback loop tends to destabilize one of the stable fixed points of the switch .the simplest motifs that exhibit such `` frustrated bistability '' are studied in ref . . 
here , we study slightly larger networks which allow for more intricate dynamics , in particular a scenario where a stable limit cycle emerges around the unstable fixed point while the other stable fixed point remains unchanged .we shall first focus our discussion on the network ( a ) in fig [ pattern ] , for which we write the following dynamical equations : here , are the concentrations of the proteins associated with the three nodes , is the strength of the three inhibitory regulations , is the hill coefficient , is a constant source term for each node , and is the degradation rate for each protein ( we assume that all three proteins are stable , and therefore their degradation rate is determined by the cell division time ) . is the control parameter to adjust the inhibition from node 2 to node 1 . to simplify the equations , we introduce dimensionless parameters and variables , , , : we study this system of equations for parameter values of , and vary as a control parameter , which changes the strength of the positive feedback relative to the negative feedback .we found three bifurcation points , at , , and . when ( fig.[phase_space]a ) , only one unstable fixed point and one stable limit cycle are found in the phase space . here , the stable limit cycle is a global attractor . at ,a saddle - node bifurcation occurs , where a stable fixed point and a saddle node emerge .when ( fig.[phase_space]b ) , we find one stable limit cycle and three fixed points one stable and two unstable . within this region ,the stable limit cycle is not the global attractor .depending on the initial condition , the system may either reach the limit cycle or the stable fixed point . in the parameter range we studied ,the volumes of the two basins of attraction are both non - negligible , separated by the surface shown in fig [ basin ] .approaching the critical point , the stable limit cycle approaches the saddle point , and eventually at ( fig.[phase_space]c ) , they generate a homoclinic cycle .when is near , the period of oscillation increases dramatically , and eventually diverges at the critical point ( fig [ period ] ) as typical for homoclinic cycles .such a bifurcation is classified as saddle - node separatrix - loop bifurcations and is a robust bifurcation scenario in a phase space of dimension or more .it corresponds to an attractor crisis , so that when is reduced further , the stable limit cycle disappears .when ( fig.[phase_space]d ) , three fixed points still exist in the phase space : a stable fixed point , a saddle node and an unstable fixed point with complex eigenvalues .the stable fixed point is the global attractor . at , the saddle node and the unstable fixed point collide anddisappear after a saddle - node bifurcation .so , when ( fig.[phase_space]e ) , there is only one fixed point , which is stable and is a global attractor .the location and nature of the fixed points can be better understood by a graphical study of the intersections of the nullclines .we set terms in equations ( [ eq4])-([eq6 ] ) to zero , and rearrange the resulting algebraic relations to express and in terms of .then , using this to eliminate and in equation ( [ eq4 ] ) yields : fig [ phase_space ] right column shows the right hand side of eq .( [ eq : one_d ] ) as the parameter is varied . 
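a minimal numerical sketch of this type of three-node model is given below. the functional forms and parameter values are illustrative stand-ins (hill-type repression along the loop 1 -| 2 -| 3 -| 1, with the extra 2 -| 1 inhibition scaled by a control parameter eps and the two repressors of node 1 combined multiplicatively); they are not the paper's equations (1)-(3) or (4)-(6), whose symbols are not preserved in this extraction:

```python
import numpy as np
from scipy.integrate import solve_ivp

def motif_rhs(t, z, alpha=10.0, c=0.05, h=2.0, eps=1.0):
    """three repressive interactions forming a negative loop, plus repression of node 1 by node 2."""
    x1, x2, x3 = z
    dx1 = c + alpha / ((1.0 + (eps * x2) ** h) * (1.0 + x3 ** h)) - x1
    dx2 = c + alpha / (1.0 + x1 ** h) - x2
    dx3 = c + alpha / (1.0 + x2 ** h) - x3
    return [dx1, dx2, dx3]

# integrate from two different initial conditions to probe for coexisting attractors
for z0 in ([0.1, 5.0, 0.1], [5.0, 0.1, 5.0]):
    sol = solve_ivp(motif_rhs, (0.0, 200.0), z0, rtol=1e-8)
    print(z0, "->", sol.y[:, -1])
```

depending on eps, the two initial conditions may settle on the same attractor or on different ones, which gives a quick numerical probe of the coexistence region.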
when ( fig.[phase_space]a ) , the function has one zero , corresponding to a unique unstable fixed point in phase space .when ( fig.[phase_space]b ) , ( fig.[phase_space]c ) and ( fig.[phase_space]d ) , the function has two additional zeroes , indicating two more fixed points . from the eigen values of all three fixed points , one can show that only one of them ( the circular one ) is stable . finally , when ( fig.[phase_space]e ) , the function has one zero again .two fixed points disappear and only one stable fixed point remains .we checked numerically the parameter range where the same bifurcations and qualitatively the same phase space portrait can be obtained . for and , is required to see the same behavior . for and , is required .the behavior is found to be insensitive to the value of ; for and , the behavior was unchanged for .the same bifurcations can also be found in all the other 3 motifs listed in fig [ pattern ] .namely , the observed sequence of bifurcations is a robust feature of such motifs . in this section, we investigate the effect of the intrinsic noise due to the discrete nature of molecular reactions in such motifs .we use the gillespie algorithm for stochastic simulations of the dynamical system specified by equations ( [ eq1])([eq3 ] ) .we denote the copy number of molecules of species as .the allowed transitions , along with their kinetic rates , are : } n_1 + 1 , \label{stoc1}\\ & & n_1 \xrightarrow{\gamma n_1 } n_1 - 1,\\ & & n_2 \xrightarrow{cv+\alpha v/[(1+(n_1/k)^h ) ] } n_2 + 1,\\ & & n_2 \xrightarrow{\gamma n_2 } n_2 - 1,\\ & & n_3 \xrightarrow{cv+\alpha v/[(1+(n_2/k)^h ) ] } n_3 + 1,\\ & & n_3 \xrightarrow{\gamma n_3 } n_3 - 1 .\label{stocn}\end{aligned}\ ] ] to control the noise , we change the volume of the system , which changes the production rates of , but leaves the average concentration constant , as long as the values of , and are unchanged . the larger the value of , and therefore the larger the copy numbers , the smaller the noise ( the relative fluctuations in ) .note that we do not explicitly consider processes like mrna production , binding of transcription factors , etc .inclusion of these steps can increase the noise in the system further .figure [ noise_3_node ] shows the concentration vs. time for a stochastic simulation in dimensionless units with .we convert numbers so that the dimensionless concentration corresponds to one molecule when volume , and simulated ( a ) and ( b ) .we can clearly see switching between the oscillatory state ( the stable limit cycle ) and and the steady state ( the stable fixed point ) , and this switching happens more often for smaller , i.e. , for larger noise . 
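a sketch of the gillespie simulation for the birth-death scheme above; the propensities below are our reading of the listed transitions (production of each species repressed by its upstream regulator through a hill function, with the extra repression of node 1 by node 2 scaled by eps), so they should be treated as illustrative rather than as the paper's exact rates:

```python
import random

def gillespie(volume, t_end, c=0.05, alpha=10.0, K=1.0, h=2.0, eps=1.0, gamma=1.0):
    """stochastic trajectory of copy numbers (n1, n2, n3); noise shrinks as volume grows."""
    n = [0, 0, 0]
    t, traj = 0.0, []
    while t < t_end:
        x = [ni / volume for ni in n]                      # concentrations
        prod = [c * volume + alpha * volume / ((1 + (eps * x[1] / K) ** h) * (1 + (x[2] / K) ** h)),
                c * volume + alpha * volume / (1 + (x[0] / K) ** h),
                c * volume + alpha * volume / (1 + (x[1] / K) ** h)]
        rates = prod + [gamma * ni for ni in n]            # three births, then three deaths
        total = sum(rates)
        t += random.expovariate(total)                     # exponential waiting time
        r, acc, idx = random.random() * total, 0.0, 0
        for idx, a in enumerate(rates):
            acc += a
            if r < acc:
                break
        n[idx % 3] += 1 if idx < 3 else -1                 # apply the chosen reaction
        traj.append((t, tuple(n)))
    return traj
```

running this at several volumes and plotting the concentrations n_i / volume should reproduce the qualitative trend described above: the smaller the volume, the larger the relative fluctuations and the more frequent the switching between the two regimes.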
to quantify this switching behavior, we measured the average switching time from the oscillatory state to the steady state and vice versa as a function of the system volume ( fig.[switchrate ] ) .because of the noisy dynamics , switching is determined by using two thresholds for the distance from the stable fixed point , and : we define a switching event from the steady regime to osillatory regime when when the distance exceeds , while the reverse switching happens when the distance becomes smaller than .therefore , when the distance is between and , there is a history dependence in which regime the state belongs to .for this parameter set , the oscillation period is .15 , thus we can see that the system with hundreds of molecules ( corresponds to ) can still cause frequent switching , on the order of once in every 10 oscillations .the switching rate decreases with as expected . for large enough , the switching rate decreases exponentially with .finally , in order to see whether such switching behavior can be relevant for real biological systems , we study effect of noise on the more realistic model for drosophila circadian rhythms described in ref .the deterministic version of this model has a combination of several positive and negative feedback loops and exhibits the coexistence of a stable fixed point and a stable limit cycle .we simulated the stochastic version of this model with parameters used in ref . . a detailed description of the model and parameters are given in the appendix .we observe again the switching between the oscillatory state and the steady state due to the noise ( fig [ noise_drosophila ] ) .the switching is quite often compared to the oscillation frequency for the noise level expected for an average cell volume , i.e. ( fig . [ noise_drosophila]c ) , where 1 [ nm ] corresponds to about 600 molecules molecules per liter . 
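the two-threshold (hysteresis) rule used to score switching events can be written in a few lines; the threshold values below are arbitrary illustrations, and the distance is measured from the stable fixed point:

```python
import math

def switching_times(traj, fixed_point, d_low=0.5, d_high=2.0):
    """traj: list of (t, (x1, x2, x3)). returns times of steady->oscillatory and
    oscillatory->steady transitions, with hysteresis so that fluctuations between
    the two thresholds are not counted as switches."""
    to_osc, to_steady = [], []
    state = 'steady'
    for t, x in traj:
        d = math.dist(x, fixed_point)
        if state == 'steady' and d > d_high:
            state = 'oscillatory'
            to_osc.append(t)
        elif state == 'oscillatory' and d < d_low:
            state = 'steady'
            to_steady.append(t)
    return to_osc, to_steady
```

averaging the waiting times between successive transitions gives estimates of the mean switching times as a function of the system volume.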
] .we have analyzed a class of network motifs consisting of a two - node positive feedback inserted into a three - node negative feedback loop .we demonstrated that a stable fixed point and a stable limit cycle can co - exist in this class of motifs .as parameters are changed , the system undergoes a saddle - node separatrix - loop bifurcation , with a diverging oscillation period as the system approaches the bifurcation .the location and nature of the fixed points were investigated in detail by outlining the intersections of the nullclines of the three variables .we then studied the effect of intrinsic noise to the motif , in the parameter regime where a fixed point and stable limit cycle co - exist .we showed that stochastic switching between the two happens with a rate that decreases with decreasing the level of the noise .actually , our results complement the study in , where similar motifs for biochemical systems are studied focusing on the how a negative feedback perturbs the switch by positive feedback loops .we further showed that similar behaviour is observed in a more complex model of the drosophila circadian clock .this switching behaviour was not reported in a study of the effect of noise in a simplified version of the detailed model .the studies on the drosophila model focused instead on the singular behaviour of the real circadian clock , where a single pulse of light can cause long - term suppression of the circadian rhythm , the models explain this as a switch from the limit cycle to the stable fixed point , caused by a short external perturbation ( the pulse of light ) to some parameter values .recent research suggests an alternate possibility , where the singular behaviour is caused by desynchronization of the clocks rather than stopping individual oscillations .our analysis of simple motif combining positive and negative feedback demonstrates that intrinsic noise , and not just external perturbations , can also cause switching where a limit cycle and a stable fixed point coexist in the phase space .as such repeated switching would disrupt the circadian rhythm , we can predict that specific regulatory mechanism must exist in real circadian clocks to suppress it .it would be interesting to explore the space of small network motifs to understand what mechanisms could implement this kind of suppression most effectively .we thank hiroshi kori and lei - han tang for useful comments .this work is supported by the danish national research foundation .we studied the stochastic version of the model for drosophila circadian rhythm given in ref .the summary of the reactions in the model with deterministic equations of the model are shown in fig .[ drosophila ] with parameters given in the caption .we converted these equations into stochastic form using the gillespie algorithm , as has been done to convert the simple motifs eqs .( [ eq1])-([eq3 ] ) to the stochastic version ( [ stoc1])-([stocn ] ) .the concentrations are converted to the number of molecules based on the typical cell volume of drosophila , about , which means 1 nm corresponds to about 600 molecules .figure [ noise_drosophila]c is based on this conversion . for the case where the cell volume is 10 ( 100 ) fold bigger , the copy number is also converted to be 10 ( 100 ) fold bigger , which corresponds to the simulations shown in fig .[ noise_drosophila]b ( a ) . 99 s. schuster , m. marhl , and t. hofer , modelling of simple and complex calcium oscillations , eur . j. biochem . 
* 269 * , 1333 - 1355 ( 2002 ) a.t .winfree , the geometry of biological time .new york : springer , ( 1980 ) .b. pfeuty , q. thommen , and m. lefranc , robust entrainment of circadian oscillators requires specific phase response curves , biophysical journal 100 , 2557 ( 2011 ) .g. tiana , s. krishna , s. pigolotti , m.h .jensen , k. sneppen , oscillations and temporal signalling in cells , phys .biol . 4 ( 2007 ) 45719 - 3 .s. pigolotti , s. krishna , m.h .jensen , oscillation patterns in negative feedback loops , proc .104 ( 2007 ) 65336537 .tsai , y.s .choi , w. ma , j.r .pomerening , c. tang , j.e .ferrell jr ., robust , tunable biological oscillations from interlinked positive and negative feedback loops , science 321 ( 2008 ) , 126129 .d. gonze , j. halloy , a. goldbeter , robustness of circadian rhythms with respect to molecular noise , proc .99(2002 ) 673 - 678 .magnitskii , s.v .sidorov , new methods for chaotic dynamics , world scientific publishing company , 2006 .d. t. gillespie , exact stochastic simulation of coupled chemical reactions , j. phys .* 81 * ( 1977 ) 2340 - 2361 .s. krishna , s. semsey and m. h. jensen , frustrated bistability as a means to engineer oscillations in biological systems phys .* 6 * ( 2009 ) 036009 .izhikevich , neural excitability , spiking , and bursting , international journal of bifurcation and chaos . 10 ( 2000 ) 1171 - 1266 .a. loinger , o. biham , stochastic simulations of the repressilator circuit , phys .e 76051917 ( 2007 ) .leloup , a. goldbeter , a molecular explanation for the long - term suppression of circadian rhythms by a single light pulse , am j physiol regulatory integrative comp physiol . 280 ( 2001 )leloup , d. gonze , a. goldbeter , limit cycle models for circadian rhythms based on transcriptional regulation in drosophila and neurospora , j biol rhythms .14 ( 1999 ) 433 - 448 .s. honma , k.i .honma , light - induced uncoupling of multioscillatory circadian system in a diurnal rodent , asian chipmunk , am j physiol regulatory integrative comp physiol . 276 ( 1999 ) 1390 - 1396. h. ukai , t.j .kobayashi , m. nagano , k.h .masumoto , m. sujino , t. kondo , k. yagita , y. shigeyoshi , h.r .ueda , melanopsin - dependent photo - perturbation reveals desynchronization underlying the singularity of mammalian circadian clocks , nat cell biol 9(2007 ) , 1327 - 1334 .leloup , a. goldbeter , a model for circadian rhythms in drosophila incorporating the formation of a complex between the per and tim proteins , j biol rhythms .13(1998 ) 70 - 87 .u. schibler , p. sassone - corsi , a web of circadian pacemakers , cell 111 ( 2002 ) , 919922 .shearman , s. sriram , d.r .weaver , e.s .maywood , i. chaves , b. zheng , k. kume , c.c .lee , t.j .van der horst , m.h .hastings , s.m .reppert , interacting molecular loops in the mammalian circadian clock , science 288 ( 2000 ) , 1013 - 1019 .u. albrecht , invited review : regulation of mammalian circadian clock genes , j appl physiol 92 ( 2002 ) , 13481355 .aton , e.d .herzog , come together , right ... now : synchronization of rhythms in a mammalian circadian clock , neuron . 48 ( 2005 ) , 531534 .u. albrecht , g. eichele , the mammalian circadian clock , curr opin genetics dev 13 ( 2003 ) , 271277 .u. albrecht , the mammalian circadian clock : a network of gene expression , frontiers in bioscience 9 ( 2004 ) , 48 - 55 .f. gachon , e. nagoshi , s.a .brown , j. ripperger , u. schibler , the mammalian circadian timing system : from gene expression to physiology , chromosoma 113 ( 2004 ) , 103112 . b. 
pfeuty and k. kaneko , the combination of positive and negative feedback loops confers exquisite flexibility to biochemical switches , phys .( 2009 ) , 046013 .
we analyze a class of network motifs in which a short , two - node positive feedback motif is inserted in a three - node negative feedback loop . we demonstrate that such networks can undergo a bifurcation to a state where a stable fixed point and a stable limit cycle coexist . at the bifurcation point the period of the oscillations diverges . further , intrinsic noise can make the system switch spontaneously between the oscillatory state and the stationary state . we find that this switching also occurs in previous models of circadian clocks that use this combination of positive and negative feedback . our results suggest that real - life circadian systems may need specific regulation to prevent or minimize such switching events . negative feedback , positive feedback , oscillation , noise , circadian rhythm
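the stochastic simulations above ( and the appendix conversion of the deterministic equations ) rely on the gillespie algorithm ; a generic , self - contained python sketch is given here for reference . the birth - death example at the end is purely illustrative and is not one of the models studied in the paper .

import numpy as np

def gillespie(x0, stoich, propensities, t_max, seed=0):
    """Generic Gillespie stochastic simulation algorithm.

    x0           : initial copy numbers (1-D array)
    stoich       : (n_reactions, n_species) array of stoichiometric changes
    propensities : function x -> array of reaction propensities a_j(x)
    """
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:
            break                           # no reaction can fire
        t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)    # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# toy usage: a birth-death process with birth rate k and degradation rate g*x
k, g = 600.0, 1.0
stoich = np.array([[+1], [-1]])
times, states = gillespie([0], stoich, lambda x: np.array([k, g * x[0]]), t_max=10)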
-defined networking can imbue the network management process with an unparalleled level of state monitoring and control .the ability to migrate the routing elements of a network from closed , static hardware solutions towards an open , re - programmable paradigm is expected to promote significantly the adaptivity to demand patterns , eventually yielding a healthy and constant innovation rate .the openflow protocol and assorted hardware , which enables an administrative authority to centrally monitor a network and deploy fitting routing strategies , has produced significant gains in a wide set of application scenarios .nonetheless , sdn - enabled traffic engineering ( te ) approaches are presently characterized by a high degree of architectural penetration .each networking element must yield its inner operation to a remote , central controller . while this assumption is valid for networks managed by the same authority ( e.g. ), it poses an issue for networks comprising self - managed elements .furthermore , related solutions may come at a high capital cost , requiring multiple powerful controllers to cover a network , as well as a high operational cost , incurred by the need for close interaction between the network elements and the controller , which naturally translates to traffic overheads .these concerns , combined with point - of - failure and security considerations , can discourage self - managed elements for adopting or even trying an sdn - based , central traffic orchestration .the present study claims that a lightweight te solution is in need in order to demonstrate the gains of sdn - enabled collaboration and gradually convince self - managed elements to participate further .the methodology consists of applying the principles of backpressure routing to a backbone network of self - managed nodes , deriving stability - optimal flow routing rules .nodes that choose to participate to the proposed scheme initially inform a central controller of their aggregate , internal congestion states . in return, they receive the aforementioned rule set in the form of a proposal . apart from its simplicity and ability to respect peering agreements, the proposed scheme also fills a theoretical gap in the related work , offering _analytically_-proven throughput optimality and network stabilization potential .studies on traffic engineering in networks , whether sdn - enabled or not , target the real - time grooming of data flows , in order to provide the best possible quality of service on a given physical infrastructure . 
to this end , maximizing the network s throughput has constituted a prominent goal .microte , hedera and mahout focus on the detection and special handling of large elephant flows , under the assumption that they constitute the usual suspects of congestion.when a large flow is detected , it is treated as a special case , and it is assigned a separate path , which does not conflict with the bulk of the remaining traffic .these schemes require constant monitoring of the network s state , which is achieved by scanning the network for large flows via periodic polling ( at the scale of ) , raising sdn controller scalability and traffic overhead concerns .they differ , however , in where the scanning takes place .hedera constantly scans the edge switches of the network , requiring less nodes to visit but more flows per node to scan .mahout scans the hosts , scanning on average more nodes than hedera , but with less flows to be monitored per node .finally , microte relies on push - based network monitoring , with nodes posting periodically their state to the controller .companies have also invested in sdn - powered solutions for optimizing their proprietary networks , within or among datacenters .emphasis is placed on prioritizing the applications and flows that compete for bandwidth , based on their significance or operational requirements .b4 incorporates this concern by keeping tuples of source , destination and qos traits per network flow .the network s resources are constantly monitored and the flows are assigned paths according to their priority , breaking ties in a round - robin manner .swan considers classes of priorities , pertaining to critical - interactive , elastic and background traffic .resources are first assigned per priority class . within each coarse assignment ,a max - min fairness approach is used to distribute resources to specific flows .bell labs propose a more direct approach , seeking to solve the formal link utilization problem , given explicit flow requests .other studies focus on scenarios such as partially sdn - controlled networks , or advancing the efficiency of multipath routing beyond classic approaches , exploiting the monitoring capabilities of openflow . differentiating from the outlined studies ,the present work proposes a sdn - enabled traffic engineering approach that is considerably more lightweight in terms of overhead , as well as less intrusive in terms of architecture .its goal is to encourage centralized , sdn - based orchestration among autonomously managed networked elements .the proposed scheme is throughput - optimal , yields minimal interaction with the controller and minimal number of required flow rules .an important term in networking studies is the notion of _ network stability_. it is defined as the ability of a routing policy to keep all network queues bounded , provided that the input load is within the network s traffic dispatch ability , i.e. within its _stability region_. 
with denoting the aggregate traffic accumulated within a network node at time , destined towards node , stability is formally defined as : where is the time horizon and denotes averaging over any probabilistic factors present in the system .a well - developed framework for deducing network stability under a given network management policy is the lyapunov drift approach .it defines a quadratic function of the form : the goal is then to deduce the bounds of which describes the evolution of the network queue levels over a period .the _ lyapunov stability theorem _ states that if it holds : for two positive quantities , then the network is stable and average queue size of inequality ( [ eq : stability - defintion ] ) is bounded by instead of drifting towards infinity .the _ backpressure algorithm _ ( bpr ) defines a joint scheduling - routing algorithm that complies with the stability criteria of inequality ( [ eq : lyastabilitycriteria ] ) and , most importantly , has been proven to be throughput optimal .its goal is to minimize the lower bound of , effectively suppressing the average queue level within the network .the analytical approach , followed by related studies , is based on the queue dynamics expressed by the following relation : where , and denotes outgoing , incoming and locally generated data at time interval .the usual methodology then dictates a series of relaxations of the right part of eq .( [ eq : strictqdynamics ] ) , based on the following inequalities : where is the maximum allowed bitrate over a network link carrying traffic destined to node . squaring both sides of eq .( [ eq : strictqdynamics ] ) and incorporating relaxations ( [ eq : relax1 ] ) , ( [ eq : relax2 ] ) , as well as the identity : one derives an inequality of the form of relation ( [ eq : lyastabilitycriteria ] ) .further relaxation by substituting all and with maximum allowed values yields compliance with the lyapunov stability theorem .furthermore , it is deduced that the upper bound of relation ( [ eq : lyastabilitycriteria ] ) can be minimized when maximizing the quantity : the standard backbressure routing process , summarized for reference as the sbpr algorithm , expresses the optimization pursuit of relation ( [ eq : optpursuitbpr ] ) . according to spbr , at timeslot ,each network link must carry data towards node , such that : bidirectional links are considered as two separate unidirectional links . originally meant for use in wireless ad hoc networks , the bpr process and its variants have found extensive use in packet switching hardware and satellite systems due to their throughput optimality trait .sbpr variants have adopted latency considerations as well .most prominently , authors in restrict the node selection step ( [ eq : spbr_step ] ) of sbpr only within a subset of links that offer a bounded maximum number of hops towards the target .other studies have shown that simply altering the queueing discipline from fifo to lifo yields considerable latency gains .finally , it is worth noting that sbpr can be easily made tcp compatible . the employed system setup .a network of autonomously managed elements , a - f , uses backpressure - derived flow rules on top of its standard routing scheme , in order to mitigate congestion events .a centralized control plane orchestrates the operation of the system . ]the present paper studies the use of bpr - variants in backbone networks . 
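to fix ideas , the sbpr selection step ( [ eq : spbr_step ] ) can be sketched in python as below ; the data structures are illustrative assumptions , and the link rate is treated as a common , time - invariant per - link factor , so it does not affect which destination class is selected .

def sbpr_decisions(queues, links):
    """Standard backpressure (SBPR) selection step for one timeslot.

    queues : dict node -> dict destination c -> backlog U_(n,c)
    links  : iterable of directed links (a, b); a bidirectional link is
             represented as two unidirectional links.
    Returns {link: destination class to serve, or None if every backlog
    differential is non-positive}.
    """
    decisions = {}
    for a, b in links:
        best_c, best_diff = None, 0.0
        for c, u_a in queues[a].items():
            diff = u_a - queues[b].get(c, 0.0)   # backlog differential over (a, b)
            if diff > best_diff:
                best_c, best_diff = c, diff
        decisions[(a, b)] = best_c               # serve best_c at the full link rate
    return decisions

# toy usage
queues = {"a": {"c1": 9, "c2": 2}, "b": {"c1": 4, "c2": 7}}
print(sbpr_decisions(queues, [("a", "b"), ("b", "a")]))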
the assumed setup , given in fig .[ fig : setup ] , considers a network comprising autonomously managed elements .a node can represent a single physical router or a complete subnetwork , provided that it supports a self - inspecting mechanism for monitoring its internal congestion levels , as well as support a flow - based routing scheme .the nodes are connected with links of known , time - invariant bandwidth and can be asymmetric or unidirectional with no restriction . due to this assumption ,the notation is simplified to .data is classified by the originating network identifier ( e.g. ) , with no further sub - categorization . the formed network may have any traffic - invariant traffic policy , such as distance vector or shortest path routing .the bpr approach operates on top of the underlying routing scheme and is enforced by a centralized controller , which can receive node state information and propose the installation of priority flow rules .an example is shown in fig .[ fig : setup ] . at time moment , the controller has assembled a snapshot of the network s state and notices that at node exceeds a predefined alarm threshold .a bpr - variant is executed , which deduces that traffic from towards should better be offloaded to neighboring node for the time being .a corresponding routing instruction is given to node , which takes precedence over all other routing rules pertaining to link .operation is then resumed until the next network state snapshot is received .openflow - based solutions are most prominent candidates for the control plane and the interaction with the network nodes . in this case , network monitoring can be accomplished by several polling techniques . without loss of generality, we will assume that the controller obtains a consistent network state with period .peering agreements and routing preferences among nodes are also allowed .for example , returning to the example of fig .[ fig : setup ] , the controller would not propose the illustrated flow rule if it was disallowed by the peering policy / agreement between and .in other words , when the bpr - variant searches for neighbors , the search is assumed to be limited to nodes that comply to any form of policy , preference of agreement .finally , targeting minimal controller load , we allow for at most one priority flow rule per physical network link .we begin the analysis by simplifying the rhs of relation ( [ eq : relax2 ] ) , based on the fact that the network links have time - invariant bandwidth : the rhs of relation ( [ eq : relax1 ] ) is simplified even further , given that all traffic from a node towards a given destination is served by a single outgoing link , regardless of the enforcement of any bpr priority rules : where is a neighboring node of complying with any bilateral agreements .furthermore , applying identity ( [ eq : identity ] ) to eq .( [ eq : strictqdynamics ] ) produces : ^{2}+\left[i_{(n , c)}^{t\to t+t}+g{}_{(n , c)}^{t\to t+t}\right]^{2}\\ -2\cdot u_{(n , c)}(t)\cdot\left[o_{(n , c)}^{t\to t+t}-i{}_{(n ,c)}^{t\to t+t}-g{}_{(n , c)}^{t\to t+t}\right]\end{gathered}\ ] ] using the updated relaxations ( [ eq : relax1 - 1 ] ) and ( [ eq : relax2 - 1 ] ) and setting for brevity : ^{2}\\ -2\cdot u_{(n , c)}(t)\cdot\left[t\cdot\mu_{l_{nb(n)}}^{(c)}-t\cdot\underset{l:\,d(l)=n}{\sum}\mu_{l}^{(c)}-g{}_{(n , c)}^{t\to t+t}\right]\label{eq : rhs_raw}\end{gathered}\ ] ] it is not difficult to show that the rhs of inequality ( [ eq : rhs_raw ] ) can be reorganized as : ^{2}\\ + 
\left[t\cdot\mu_{l_{nb(n)}}^{(c)}-u_{(n , c)}(t)\right]^{2}-2\cdot u_{(n , c)}^{2}(t)\end{gathered}\ ] ] summing both sides and reminding that : ^{2}}\\ + \underset{\forall n}{\sum}\underset{\forall c}{\sum}\underset{(a)}{\left[t\cdot\mu_{l_{nb(n)}}^{(c)}-u_{(n , c)}(t)\right]^{2}}-2\cdot\underset{\forall n}{\sum}\underset{\forall c}{\sum}u_{(n , c)}^{2}(t)\label{eq : earlyinsights}\end{gathered}\ ] ] we proceed by considering the rhs of relation ( [ eq : earlyinsights ] ) as a function of the bpr - derived routing decisions and attempt a straightforward optimization. the can be initially treated as continuous variables . once optimal values have been derived, they can be mapped to the closest of the actually available options within the network topology .the sufficient conditions for the presence of a minimum are : where denotes a node , is the hessian matrix and the refers to each of its elements . from condition ( [ eq : hessian]-a )we obtain : \\ -\left[u_{(n , c)}(t)-\left(u_{(b(n),c)}(t)+g{}_{(b(n),c)}^{t\to t+t}\right)\right]=0,\,\foralln , c\label{eq : optimization_goal}\end{gathered}\ ] ] for condition ( [ eq : hessian]-b ) , it is not difficult to show that it is satisfied due to : equation ( [ eq : optimization_goal ] ) represents a generalization over the sbpr algorithm , which operates by equation ( [ eq : spbr_step ] ) . at first ,( [ eq : optimization_goal ] ) defines a linear system with discrete variables and can be solved as such .however , interesting approximations can be derived , which also exhibit the dependence of the optimal solution from the network topology and traffic statistics .firstly , the term ] refers to the role of node as generator of new traffic .the quantity also introduces dependence from traffic prediction .indeed , at time the controller must obtain an approximation of the traffic that will be generated at node within the interval ] . in other words , the throughput - optimizing routing decision at node , regarding traffic destined to node are derived as follows : where is the optimal neighboring node of to offload data towards .we notice that the transit assumption of ( [ eq : transitassumption ] ) is also implied by the sbpr algorithm .specifically , sbrp implies that is uniform for all nodes in the network , reducing equation ( [ eq : foresightoptimal ] ) to ( [ eq : spbr_step ] ) .this limitation is alleviated by the proposed , foresight - enabled backpressure routing ( algorithm [ alg : fbpr ] ) which targets backbone networks , where the transit assumption of relation ( [ eq : transitassumption ] ) is expected to hold .define priority flows .\gets 0 , \forall c ] \gets 1 ] array is also introduced , to make sure that each possible destination is routed via one link at most , at each node .the optimization of line pertains to the treatment of multi - links that may exist in the network .assume a triple link and a corresponding set of assignments .line refers to the optimal reordering of the assignments out of all possible combinations and for each multi - link of the network , maximizing the expected throughput .finally , lines install the fbpr - derived priority rules to the corresponding nodes .fbpr is throughput - optimal .we notice that the preceding analysis takes place before the relaxation of equation ( [ eq : optpursuitbpr ] ) of the classic analytical procedure . 
applying this final relaxation to equation ( [ eq : earlyinsights ] )leads to compliance with the lyapunov stability criterion ( relation ( [ eq : lyastabilitycriteria ] ) ) to the proof of throughput optimality , as detailed in .in this section , the performance of the proposed schemes is evaluated in various settings , in terms of achieved average throughput , latency and traffic losses .specifically , the ensuing simulations , implemented on the anylogic platform , focus on : i ) the performance and stability gains arising from the combination of bpr - based and shortest path - based ( opsf ) policies , ii ) the gains of foresight - enabled bpr over its predecessors .the simulations assume autonomously managed nodes , arranged in a grid .each node , , is connected to its four immediate neighbors , , , , , where applicable .this type of topology is chosen to ensure a satisfactory degree of path diversity , i.e. a good choice of alternative paths to connect any two given nodes .we note that path diversity is a prerequisite for efficient traffic engineering in general .each link connecting two nodes is bidirectional with bandwidth at each direction . given that packet - level simulation of backbone networks in not easily tractable in terms of simulation runtimes , we assume slotted time ( slot duration ) and traffic organized in -long batches . at each slot , a number of batches is generated at each node , expressing concurrent traffic generated from multiple internal users .the destination of each batch is chosen at random ( uniform distribution ) .then , traffic batches are dispatched according to the routing rules and the channel rates .each node is assumed to keep track of its internal congestion level and push it with report / actuation period to a central controller ( e.g. like ) . and are set or varied per experiment .a node is assumed to reject / drop incoming or generated traffic when it has more than batches on hold , using a single queue model .finally , the bpr schemes are enabled on a node where the number of batches on hold exceed a certain , set per experiment .the can also be perceived as a parameter that defines whether the adoption of the bpr priority flow rules is partial or global .when enabled , the bpr - derived routing rules handle the enqueued batches in a lifo manner , as advised in .this holds for both sbpr and fbpr in the ensuing comparisons .the open - shortest - path - first ( ospf ) approach is used as the underlying dvr routing scheme in all applicable cases . finally ,while is loop - free due to the described , hop - based filtering at line of algorithm [ alg : fbpr ] , the routing rules proposed by may create loops . therefore , a pairwise check is performed among the nodes for the detection of loops . if one exists , the specific bpr - derived priority routing rules that caused it are filtered - out and are not forwarded to the nodes . figure [ fig : t5_a20 ] illustrates the performance of pure ( no overlayed bpr ) , and , for varying network load .the x - axis corresponds to the number of batches generated at each node per second , , which is uniform for all nodes ( ) .a load of batches per second corresponds to data generation rate . for a nodebeing serviced by four outgoing channels of each , this translates to a channel over - subscription rate with regard to local users only . at batches per second ,the ratio rises to .the actuation period , is set to and the alarm level is of the buffer size . 
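before turning to the results , the per - link selection that fbpr performs ( algorithm [ alg : fbpr ] ) can be sketched as follows . this is one plausible reading of the decision rule ( [ eq : foresightoptimal ] ) based on the textual description : the candidate neighbour's backlog is augmented by a forecast of the traffic it will itself generate towards the same destination during the next actuation period , priority rules are proposed only for backlogs above the alarm threshold , and at most one rule is installed per link . the data structures and parameter names are illustrative assumptions , not the authors' implementation .

def fbpr_decisions(queues, links, g_forecast, alarm):
    """Foresight-enabled backpressure sketch (illustrative reading of FBPR).

    queues     : dict node -> dict destination c -> backlog U_(n,c)
    links      : directed links (n, b) that already respect peering policies
    g_forecast : dict node -> dict destination c -> traffic the node is
                 expected to generate towards c within the next period
    alarm      : backlog level above which a priority rule may be proposed
    Returns at most one priority rule (a destination class) per link.
    """
    rules = {}
    for n, b in links:
        best_c, best_w = None, 0.0
        for c, u_n in queues[n].items():
            if u_n < alarm:
                continue                     # only congested classes qualify
            # neighbour backlog plus its own forecast generation towards c
            w = u_n - (queues[b].get(c, 0.0) + g_forecast.get(b, {}).get(c, 0.0))
            if w > best_w:
                best_c, best_w = c, w
        if best_c is not None:
            rules[(n, b)] = best_c           # installed as the link's priority rule
    return rules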
in terms of batch latency ,[ fig : t5_a20_delay ] shows that the proposed approach offers the best latency times , even over ospf , until batches / sec . at that point ,ospf - pure offers better latency , at the expense of an excessive traffic overflow rate ( fig .[ fig : t5_a20_ovf ] ) . as expected ,dropping much of the flowing traffic benefits the delivery times of the `` surviving '' traffic .however , all bpr - based schemes are able to sustain operation with a limited overflow rate , even under maximal load . in other words ,the stability of the system is clearly increased with the use of the bpr class of routing schemes .this phenomenon is also evident from the throughput plot of fig .[ fig : t5_a20_thpt ] .ospf - pure offers the worst performance , since it leads to queue build - up and high overflow rate .on the other hand , the proposed offers significantly improved results .nonetheless , offers the maximum throughput in all cases .however , given its performance in term of latency , the superiority in raw throughput is clearly not useful and is owed to batches traveling via excessively long routes within the network .we proceed to study the benefits of endowing bpr with foresight . in fig .[ fig : complex ] , the batch generation rate per node is set randomly at ( uniform ditribution ) where is a percentage ranging from to .notice that , in the previous experiment , fbpr and sbpr where equivalent from the aspect of foresight , due to the constant values over all nodes . the alarm level is kept at of the buffer size and is varied from to sec .each point in fig .[ fig : complex ] is derived as the average over simulation iterations .since the goal of the comparison is to deduce the gains derived from foresight , perfect knowledge of is passed to .furthermore , for fairness reasons , the latency - favoring , hop - based node filtering of fbpr is discarded .( i.e. line of algorithm [ alg : fbpr ] considers all neighbors of node ) .thus , drops any latency considerations that could have given an advantage over from this aspect .the performance gains in batch latency and overflow rate are apparent in fig .[ fig : complex_delay ] and [ fig : complex_ovf ] respectively . in general, the bonus of foresight is significant as increases , since the system can make more long - lived routing decisions .the gains are also accentuated for medium to high network loads , where bpr in general makes sense .the trade - off between latency and overflow rates is present in [ fig : complex_delay ] and [ fig : complex_ovf ] as well .finally , the throughput optimality continues to hold ( fig .[ fig : complex_thpt ] ) with the slight difference being owed to the redundant data traveling produced by sbpr .in other words , having no foresight , sbpr takes decisions that distribute the network traffic slightly wider , but lead to higher latency and overflow rate in the future .the present study brought backpressure routing ( bpr ) and its benefits to the sdn - derived traffic engineering ecosystem .its inherited benefits include throughput maximization and optimal stability under increased network load .the bpr and sdn combination can offer attractive , lightweight and centrally orchestrated routing solutions .minimum cost , non - penetrative approaches could be the key for gradually encouraging cooperation between distrustful autonomous parties , with significant gains for the end - users. 
the presented approach can pave the way for a new class of lightweight traffic engineering schemes that require minimal commitment from the orchestrated network elements .this work was funded by the eu project net - volution ( eu338402 ) and the research committee of the aristotle university of thessaloniki .n. mckeown , t. anderson , h. balakrishnan , g. parulkar , l. peterson , j. rexford , s. shenker , and j. turner , `` openflow : enabling innovation in campus networks , '' _ acm sigcomm computer communication review _ , vol .38 , no . 2 ,pp . 6974 , 2008 .hong , s. kandula , r. mahajan , m. zhang , v. gill , m. nanduri , and r. wattenhofer , `` achieving high utilization with software - driven wan , '' in _ proceedings of the acm sigcomm conference _ , 2013 , pp . 1526 .s. jain , a. kumar , s. mandal , j. ong , l. poutievski , a. singh , s. venkata , j. wanderer , j. zhou , m. zhu _ et al ._ , `` b4 : experience with a globally - deployed software defined wan , '' in _ proceedings of the acm sigcomm conference _ , 2013 , pp .314 .l. tassiulas and a. ephremides , `` stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks , '' _ automatic control , ieee transactions on _ , vol .37 , no . 12 , pp . 19361948 , 1992 .m. al - fares , s. radhakrishnan , b. raghavan , n. huang , and a. vahdat , `` hedera : dynamic flow scheduling for data center networks , '' in _ proceedings of the 7th usenix conference on networked systems design and implementation _ , 2010 .j. doma , z. duliski , m. kantor , j. rzsa , r. stankiewicz , k. wajda , and r. wjcik , `` a survey on methods to provide multipath transmission in wired packet networks , '' _ comp . networks _ , vol .77 , pp . 1841 , 2015 .l. georgiadis , m. j. neely , and l. tassiulas , `` resource allocation and cross - layer control in wireless networks , '' _ fnt in networking ( foundations and trends in networking ) _ , vol . 1 , no . 1 ,pp . 1144 , 2005 .m. j. neely , e. modiano , and c. e. rohrs , `` dynamic power allocation and routing for time - varying wireless networks , '' _ ieee journal on selected areas in communications _ , vol .23 , no . 1 ,pp . 89103 , 2005 .l. ying , s. shakkottai , a. reddy , and s. liu , `` on combining shortest - path and back - pressure routing over multihop wireless networks , '' _ ieee / acm trans . on networking _ ,19 , no . 3 , pp .841854 , 2011 .s. r. chowdhury , m. f. bari , r. ahmed , and r. boutaba , `` payless : a low cost netowrk monitoring framework for software defined networks , '' in _ieee / ifip network operations and management symposium ( noms ) _ , 2014 .hiriart - urruty , j .- j .strodiot , and v. h. nguyen , `` generalized hessian matrix and second - order optimality conditions for problems withc 1,1 data , '' _ applied mathematics & optimization _ , vol . 11 , no . 1 , pp . 4356 , 1984 .
software - defined networking enables the centralized orchestration of data traffic within a network . however , proposed solutions require a high degree of architectural penetration . the present study targets the orchestration of network elements that do not wish to yield much of their internal operations to an external controller . backpressure routing principles are used for deriving flow routing rules that optimally stabilize a network while maximizing its throughput . the elements can then accept the proposed routing rule - set in full , accept it in part , or reject it . the proposed scheme requires minimal , relatively infrequent interaction with a controller , limiting its imposed workload and promoting scalability . the proposed scheme exhibits attractive network performance gains , as demonstrated by extensive simulations and proven via mathematical analysis . software - defined networking , traffic engineering , backpressure routing .
_ outlier detection _ is a data analysis task that aims to find atypical behaviors , unusual outcomes , erroneous readings or annotations in data .it has been an active research topic in data mining community , and it is frequently used in various applications to identify rare and interesting data patterns , which may be associated with beneficial or malicious events , such as fraud identification , network intrusion surveillance , disease outbreak detection , patient monitoring for preventable adverse events ( pae ) , _ etc_. it is also utilized as a primary data preprocessing step that helps to remove noisy or irrelevant signals in data . despite an extensive research, the majority of existing outlier methods are developed to detect _ unconditional _ outliers that are expressed in the joint space of all data attributes .such methods may not work well when one wants to identify _ conditional _ ( contextual ) outliers that reflect unusual responses for a given set of contextual attributes .briefly , since conditional outliers depend on the context or properties of data instances , application of unconditional outlier detection methods may lead to incorrect results .for example , assume we want to identify incorrect ( or highly unusual ) image annotations in a collection of annotated images . then by applying unconditional detection methods to the joint image - annotation spacemay lead to images with rare themes to be falsely identified as outliers due to the scarcity of these themes in the dataset , leading to false positives .similarly , an unusual annotation of images with frequent themes may not be judged ( scored ) as very different from images with less frequent themes leading to false negatives .this paper focuses on _ multivariate conditional outlier detection _, a special type of the conditional outlier detection problem where data consists of -dimensional continuous input vectors ( context ) and corresponding -dimensional binary output vectors ( responses ) .our goal is to precisely identify the instances with unusual input - output associations . following the definition of outlier given by hawkins , we give a description of multivariate conditional outlier in plain language as : a multivariate conditional outlier is an observation , which consists of context and associated responses , whose responses are deviating so much from the others in similar contexts as to arouse suspicions that it was generated by a different response mechanism .this formulation fits well various practical outlier detection problems that require contextual understanding of data .as briefly illustrated above , for example , recent social media services allow users to tag their content ( _ e.g. _ , online documents , photos , or videos ) with keywords and thereby permit keyword - based retrieval .these user annotations sometimes include irrelevant words by mistake that could be effectively pinpointed if the conditional relations between content and tags are considered .likewise , evidence - based expert decisions ( _ e.g. 
_ , functional categorization of genes , medical diagnosis and treatment decisions of patients ) occasionally involve errors that could cause critical failures .such erroneous decisions would be adequately detected through contextual analysis of evidence - decision pairs .the multivariate conditional outlier detection problem is challenging because both the contextual- and inter - dependences of data instances should be taken into account when identifying outliers .we tackle these challenges by building a probabilistic model , where denotes the input variables and denotes the associated output variables . briefly ,the model is built ( learned ) from all available data , aiming to capture and summarize all relevant dependences among data attributes and their strength as observed in the data .conditional outliers are then identified with the help of this model .more specifically , a conditional outlier corresponds to a data instance that is assigned a low probability by the model .the exact implementation of the above approach is complicated , and multiple issues need to be resolved before it can be applied in practice .first , it is unclear how the probabilistic model should be represented and parameterized . to address this problem ,we resort to and adapt structured probabilistic data models of that provide an efficient representation of input - output relations by decomposing the model using the chain rule into a product of univariate probabilistic factors ; _ i.e. _ , each response is dependent on and a subset of the other responses .the univariate conditional models and their learning are rather common and well studied , and multiple models ( _ e.g. _ , logistic regression ) can be applied to implement them .we note the structured probabilistic data models were originally proposed and successfully applied to support structured output prediction problems . however , their application to outlier detection problems is new .the key difference is that while in prediction we seek to find outputs that maximize the probability given the inputs , in conditional outlier detection we aim to identify unusual ( or low probability ) associations in between observed inputs and outputs .the second issue is that the probabilistic model must be learned from available data which can be hard especially when the number of context and output variables is high and the sample size is small .this may lead to model inaccuracies and miscalibration of probability estimates , which in turn may effect the identification of outliers . to alleviate this problem , we formulate and present outlier scoring methods that combine the probability estimates with the help of weights reflecting their reliability in assessment of outliers . 
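for concreteness , the chain - rule decomposition referred to above is the standard identity
$$ p(\mathbf{y} \mid \mathbf{x}) = \prod_{j=1}^{d} p\big(y_j \mid \mathbf{x} ,\, \mathbf{y}_{\pi(j)}\big) , $$
where $\mathbf{y}_{\pi(j)}$ denotes the subset of the other outputs on which $y_j$ is conditioned ; each factor can then be modelled by a univariate probabilistic classifier such as logistic regression .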
through empirical studies ,we test our approach on datasets with multi - dimensional responses .we demonstrate that our method is able to successfully identify multivariate conditional outliers and outperforms the existing baselines .the rest of this paper is organized as follows .section [ sec : problem_def ] formally define the problem .section [ sec : related ] reviews existing research on the topic .section [ sec : approach ] describes our multivariate conditional outlier detection approach .section [ sec : experiments ] presents the experimental results and evaluations .lastly , section [ sec : concl ] summarizes the conclusions of our study .in this work , we study a special type of the conditional outlier detection problem where data consist of multi - dimensional input - output pairs ; that is , each instance in dataset consists of an -dimensional continuous input vector and a -dimensional binary output vector .our goal is to detect irregular response patterns in given context .the fundamental issues in developing a multivariate conditional outlier detection method are how to take into account the _ contextual dependences between output and their input _ , as well as the _ mutual dependences among . we address these issues by building a decomposable probabilistic representation for .note that multivariate conditional outlier detection is clearly different from unconditional outlier detection when the problems are expressed probabilistically . in conditional outlier detection , we are interested in the instances that fall into low - probability regions of the conditional joint distribution . on the other hand ,unconditional outlier detection approaches generally seek instances in low - probability regions of the joint distribution .* notation : * for notational convenience , we will omit the index superscript when it is not necessary .we may also abbreviate the expressions by omitting variable names ; e.g. , .outlier detection has been extensively studied in the data mining and statistics communities .a wide variety of approaches to tackle the detection problem for multivariate data have been proposed in the literature .accordingly , depending on the type of outliers the method aims to detects , five general categories of _ unconditional _ outlier detection approaches appear in the literature .these include density - based approaches , distance - based approaches , depth - based approaches , deviation - based approaches , and high - dimensional approaches .below we briefly summarize each of these categories .for technical details , please refer to .density - based approaches assume that the density around a normal data instance is similar to that of its neighbors .a typical representative method is local outlier factor ( lof ) , which measures a relative local density in -nearest neighbor boundary .lof has shown good performance in many applications and is considered as an off - the - shelf outlier detection method . 
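since lof also serves as the unconditional baseline in the experiments below , a minimal usage sketch with scikit - learn is included here ; the synthetic data and parameter choices are illustrative and not taken from the paper .

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),   # dense "normal" cluster
               rng.uniform(-6.0, 6.0, size=(5, 2))])  # a few isolated points

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                # -1 for outliers, +1 for inliers
scores = -lof.negative_outlier_factor_     # larger value = more outlying
print(np.argsort(scores)[-5:])             # indices of the 5 most outlying points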
in section [sec : experiments ] , we use lof as the representative unconditional outlier detection method and compare the performance with our proposed approach .distance - based approaches assume that normal data instances come from dense neighborhoods , while outliers correspond to isolated points .a representative method is which gives an outlier score to each instance using a robust variant of the mahalanobis distance , measuring the distance between each instance to the main body of data distribution such that the instances located far from the center of data distribution are identified as outliers .depth - based approaches assume that outliers are at the fringe of the data regions and normal instances are close to or in the center of the region . the methods in this category assign depth to each instance by gradually removing data from convex hulls , and the instances with small depth are considered as outliers .a relevant method is the one - class support vector machines , which assumes all the training data belong to the `` normal '' class and finds a decision boundary defining the region of normal data , whereas instances lie across the boundary are identified as outliers .deviation - based approaches assume that outliers are the outmost data instances in the data region and can be identified by measuring the impact of each instance on the variance of the dataset .one of the well - known algorithms in this category is linear method for deviation detection ( lmdd ) . compared to the depth - based approaches, deviation - based approaches do not require complicated contour generation process .in high - dimensional spaces , the above approaches often fail because the distance metrics and density estimators become computationally intractable and analytically ineffective .moreover , due to the sparsity of data , no meaningful neighborhood can be defined .high - dimensional approaches are proposed to handle such extreme cases .typical methods in this category project the data to a lower dimensional subspace , such as grid - based subspace outlier detection . for a detailed review on related methods ,see .while the vast majority of existing work were built to solve the unconditional outlier detection problem , the approaches may not work properly when it comes to _ conditional _ outliers , since they do not take into account the conditional relations among data attributes . realizing this ,recent years have seen increased interest in the _ conditional _ outliers detection that aims to identify outliers in a set of outputs for given values of inputs .several approaches have been proposed to address the problems in this regard .however , these solutions either are limited to handle problems with a single output variable or assume a restricted relations among real - valued input and output variables through a gaussian mixture . 
as results, the existing methods either make an independence assumptions that is too restrictive or are unfit for modeling multi - dimensional binary output variables .in contrast to the existing methods , our proposed approach is different in that ( 1 ) it properly models multi - label binary outputs by adopting a structured probabilistic data model to represent data ; and ( 2 ) it utilizes the decomposed conditional probability estimates from individual response dimensions to identify outliers .consequently , our proposed approach drives the process of outlier detection to a more granular level of the conditional behaviors in data and ( as follows in section [ sec : experiments ] ) leads to a significant performance improvement in outlier detection .furthermore , by maintaining separate models for individual output variables , our approach provides a practical advantage that the existing multivariate outlier detection methods do not allow .that is , one can delve into a trained multivariate conditional model and investigate the quality of each univariate representation to decide whether the individual model could be reliably used to support outlier detection .for example , a univariate model that produces inconsistent estimates could be preemptively excluded from the outlier detection phase . sinceour goal is not to recover a complete data representation but to obtain a useful utility function for outlier detection , this sort of modularity allows us to utilize only the model with high confidence and , hence , to perform more robust outlier detection .this section describes our approach to identify unusual input - output pairs , which we refer to as mcode : _ multivariate conditional outlier detection_. to facilitate an effective detection method , we utilize a decomposable probabilistic data representation for to capture the dependence relations among inputs and outputs , and to assess outliers by seeking low - probability associations between them . accordingly , having a precise probabilistic data model and proper outlier scoring methods is of primary concern . in section [ subsec :approach_model ] , we discuss how to obtain an efficient data representation and accurate conditional probability estimates of observed input - output pairs , using the probabilistic structured data modeling approach in section [ subsec : approach_score ] , we treat the probability estimates as a proxy representation of observed instances and present two outlier scoring methods by analyzing the reliability of these estimates .our mcode approach works by analyzing data instances come in input - output pairs with a statistical model representing the conditional joint distribution . a direct learning of the conditional joint from data , however , is generally very expensive or even infeasible , because the number of possible output combinations grows exponentially with . to avoid such a high cost of learning yetachieve an accurate data representation for outlier detection , we decompose the conditional joint into a product of conditional univariate distributions using the chain rule of probability : where denotes the parents of ; _ i.e. _ , all the output variables preceding .this decomposition lets us represent by simply specifying each univariate conditional factor , . in this work, we use a logistic regression model for each of the output dimensions , because it can effectively handle high - dimensional feature space defined by a mixture of continuous and discrete variables ( _ i.e. 
_ , conditioning ) using regularization . in theory, the result of the above product should be invariant regardless of the chain order ( order of ) . nevertheless ,in practice , different chain orders produce different conditional joint distributions as they draw in models learned from different data . for this reason ,several structure learning methods that determine the optimal set of parents have been proposed . however , these methods require at least of time , where denotes the time of learning a classifier , that would not be preferable , especially when the output dimensionality is high . in mcodewe address the above problem by relaxing the chain rule and by permitting circular dependences among the output variables .that is , we let , the parents of , be all the remaining output variables , and assume the true dependence relations among them could be recovered through a proper regularization of logistic regression . to summarize , our structural decomposition allows us to capture the interactions among the output variables , as well as the input - output relations , using a collection of individually trained probabilistic functions with a relaxed conditional independence assumption .we use to denote this structured data representation , where is the parameters of the probabilistic model for the -th output dimension . assuming logistic regression , these base statistical functions are parameterized using as : this defines a pseudo - conditional joint probability of an observation pair as : where denotes the values of all other output variables except .now let us apply our data representation to estimate the conditional probabilities of observed outputs . for notational convenience ,we introduce an auxiliary vector of random variables , each defined in a conditional probability space ] range , which coincides with the outlier ratio in our experiment setting .for both tpar and atpar , higher is better .figure [ fig : tpar_results ] and table [ table : aucprec ] show the performance of the five compared methods .all results are obtained from _ ten _ repeats .figures [ fig : r_mediamill ] , [ fig : r_yahoo_arts ] , and [ fig : r_birds ] present the results on three datasets ( _ mediamill _ , _ yahoo - arts _ , and _ birds _ ) for different outlier dimensions .each figure illustrates the tpars of all methods ; x - axes show the alert rate , ranging between 0 and 0.04 ; y - axes show tpar .the vertical gray line at alert rate indicates where the alert rate is equal to the injected outlier ratio . in general, tpars improve as the outlier dimensionality increases , because outliers with larger perturbations are easier to detect . comparing the conditional outlier detection approaches ( i - prod , m - prod , m - rw , and m - lrw ) with the unconditional approach ( lof ) ,the conditional approaches are clear winners as the conditional methods outperform lof in most cases .this shows the advantages of the conditional outlier detection approaches in addressing the problem .only exceptions are i - prod on _ mediamill _ when outlier dimensionality is low .this is because i - prod does not consider the dependence relations among the output variables .such advantages in modeling the inter - dependences of the outputs are consistently observed as m - prod outperforms i - prod in most experiments . 
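to make the decomposed representation and the unweighted product score ( the m - prod variant above ) concrete , a minimal python sketch is given below ; the regularization strength , the clipping constant and the assumption that both classes are observed for every output dimension are illustrative choices , and the reliability weighting of m - rw / m - lrw is omitted .

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_mcode(X, Y, C=1.0):
    """Fit one regularized logistic model per output dimension, conditioning
    on the inputs and on all remaining outputs (relaxed chain rule)."""
    models = []
    for j in range(Y.shape[1]):
        feats = np.hstack([X, np.delete(Y, j, axis=1)])
        models.append(LogisticRegression(C=C, max_iter=1000).fit(feats, Y[:, j]))
    return models

def mprod_scores(models, X, Y):
    """Unweighted product score: sum of negative log conditional probabilities
    of the observed outputs (larger = more outlying)."""
    scores = np.zeros(len(X))
    for j, model in enumerate(models):
        feats = np.hstack([X, np.delete(Y, j, axis=1)])
        # column of predict_proba matching the observed label (classes are 0/1)
        p = model.predict_proba(feats)[np.arange(len(X)), Y[:, j].astype(int)]
        scores += -np.log(np.clip(p, 1e-12, None))
    return scores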
to show the benefits of our reliability weights , we analyze the performance of m - rw and m - lrw in comparison to that of m - prod .an interesting point is that m - rw and m - lrw not only improve the performance drastically , but also make tpars stable .this confirms that our reliability weighting methods can effectively estimate the quality of the models , and the resulting weights are useful in outlier scoring .lastly , although m - lrw does not show much improvement from m - rw compared to the other key components of mcode that we have discussed , the local weights seem to make m - rw even more stable as shown with _ mediamill _ and _ yahoo - arts_. table [ table : aucprec ] summarizes the results on all eight datasets in terms of atpar at 0.01 .the table consists of four sections grouped by different values of outlier dimensionality ( ) .we do not report the results on the first four datasets ( _ birds _ , _ yeast _ , _ genbase _ , and _ yahoo - arts _ ) for outlier dimensionality ( for _ yeast _ , 2.5% and 5.0% ) because the output dimensionality ( ) is too small .the best performing methods on each experiment are shown in bold .the results confirms the conclusions that we have drawn with figure [ fig : tpar_results ] .one interesting point is that lof shows exceptionally high ( compared with its performance on other datasets ) atpar on _mediamill_. this is because the dataset has a similar number of input and output variables ; hence , as outlier dimensionality increases , the simulated outliers become like unconditional outliers .in this work , we introduced and tackled multivariate conditional outlier detection , a special type of the conditional outlier detection problem .we briefly reviewed existing research and motivated this new type of outlier detection problem .we presented our novel outlier detection framework that analyzes and detects abnormal input - output associations in data using a decomposable conditional probabilistic model that is learned from all data instances .we discussed how to obtain an efficient data representation and accurate conditional probability estimates of observed input - output pairs , using the probabilistic structured data modeling approach .motivated by the brier score , we developed present two outlier scoring methods by analyzing the reliability of probability estimates . through the experimental results , we demonstrated the ability of our framework to successfully identify multivariate conditional outliers .stephen d. bay and mark schwabacher .mining distance - based outliers in near linear time with randomization and a simple pruning rule . in _ proceedings of the ninth acm sigkdd international conference on knowledge discovery and data mining _ , kdd 03 , pages 2938 , new york , ny , usa , 2003 .acm .krzysztof dembczynski , weiwei cheng , and eyke hllermeier .bayes optimal multilabel classification via probabilistic classifier chains . in _ proceedings of the 27th international conference on machine learning ( icml-10 ) _ , pages 279286 .omnipress , 2010 .milos hauskrecht , michal valko , branislav kveton , shyam visweswaram , and gregory cooper .evidence - based anomaly detection . in _ annual american medical informatics association symposium _ , pages 319324 , november 2007 .abhishek kumar , shankar vembu , aditya krishna menon , and charles elkan .learning and inference in probabilistic classifier chains with beam search . in _ proceedings of the 2012 european conference on machine learning and knowledge discovery in databases_. 
springer - verlag , 2012 .spiros papadimitriou , hiroyuki kitagawa , phillip b gibbons , and christos faloutsos .loci : fast outlier detection using the local correlation integral . in _ data engineering , 2003 .19th international conference on _ , pages 315326 .ieee , 2003 .jesse read , bernhard pfahringer , geoff holmes , and eibe frank .classifier chains for multi - label classification . in _ proceedings of the european conference on machine learning and knowledge discovery in databases_. springer - verlag , 2009 .shiguo wang .a comprehensive survey of data mining - based accounting - fraud detection research . in _intelligent computation technology and automation ( icicta ) , 2010 international conference on _ , volume 1 , pages 5053 , may 2010 .weng - keen wong , andrew moore , gregory cooper , and michael wagner .bayesian network anomaly pattern detection for disease outbreaks . in _ proceedings of the twentieth international conference on machine learning _ ,pages 808815 .aaai press , august 2003 .min - ling zhang and kun zhang .multi - label learning by exploiting label dependency . in _ proceedings ofthe 16th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 10 , pages 9991008 .acm , 2010 .
despite tremendous progress in outlier detection research in recent years , the majority of existing methods are designed only to detect _ unconditional _ outliers that correspond to unusual data patterns expressed in the joint space of all data attributes . such methods are not applicable when we seek to detect _ conditional _ outliers that reflect unusual responses associated with a given context or condition . this work focuses on _ multivariate conditional outlier detection _ , a special type of the conditional outlier detection problem , where data instances consist of multi - dimensional input ( context ) and output ( responses ) pairs . we present a novel outlier detection framework that identifies abnormal input - output associations in data with the help of a decomposable conditional probabilistic model that is learned from all data instances . since components of this model can vary in their quality , we combine them with the help of weights reflecting their reliability in assessment of outliers . we study two ways of calculating the component weights : global that relies on all data , and local that relies only on instances similar to the target instance . experimental results on data from various domains demonstrate the ability of our framework to successfully identify multivariate conditional outliers .
recently two essential aspects of money circulation have been investigated based on the random transfer models .one is statistical distribution of money , which is closely related to earlier pareto income distribution and some recent empirical observations .the other one is the velocity of money , which measures the ratio of transaction volume to the money stock in an economic system .all the models which appeared in these researches regarded the monetary system as being composed of agents and money , and money could be transferred randomly among agents . in such a random transferring process , money is always being held by agents , any single agent s amount of money may strongly fluctuate over time , but the overall equilibrium probability distribution can be observed under some conditions . the shape of money distribution in each model is determined by its transferring rule , for instance , random exchange can lead to a boltzmann - gibbs distribution , transferring with uniform saving factor can lead to a gaussian - like distribution and that with diverse saving factors leads to a pareto distribution . on the other hand ,the time interval between two transfers named as holding time of money is also a random variable with a steady probability distribution in the random transferring process .the velocity of money could be expressed as the expectation of the reciprocal of holding time and the probability distribution over holding time was found to follow exponential or power laws .the amount of money held by agents was limited to be non - negative in the models mentioned above except ref . . allowing agents to go into debt and putting a limit on the maximal debt of an agent ,adrian drgulescu and victor yakovenko demonstrated the equilibrium probability distribution of money still follows the boltzmann - gibbs law .although they devote only one section to discussing the role of debt in the formation of the distribution , they are undoubtedly pathfinders on this aspect . as cited in their paper , debts create money " .specifically , most part of the money stock is created by debts through banking system , and this process of money creation plays a significant role in performance of economy especially by affecting the aggregate output .thus money creation should not be excluded from discussion on the issues of monetary economic system . with cognition of this significance , robert fischer and dieter braun analyzed the process of creation and annihilation of money from a mechanical perspective by proposing analogies between assets and the positive momentum of particles and between liabilities and the negative one .they further applied this approach into the study on statistical mechanics of money .as well known , the central bank plays an important role of controlling the monetary aggregate that circulates in the modern economy .it issues the monetary base which is much less than the monetary aggregate .the ratio of the monetary aggregate to the monetary base is called the money multiplier .the central bank controls the monetary aggregate mainly by adjusting the monetary base and by setting the required reserve ratio which is a key determinant of the multiplier .so the required reserve ratio is crucial in monetary economic system .the aim of this work is to investigate the impacts of the required reserve ratio on monetary wealth distribution and the velocity of money .our model is an extended version of that of robert fischer and dieter braun . 
in their model, random transfer would increase the quantity of money without bounds unless some limits are imposed exogenously on the stock of assets and liabilities , which are given by specifying an aggregate limit or imposing a transfer potential .compared with this , we introduce the monetary base and the required reserve ratio in our model by interpreting the process of money creation with the simplified money multiplier model .thus the limit can be governed by setting the initial values of the monetary base and the required reserve ratio .in addition , we adopt the conventional economic definition of money instead of what they used .we think that the conventional definition of money is more appropriate to the analysis of realistic monetary system .we hope that our work can expose the role of the required reserve ratio in monetary circulation and is helpful to understand the effect of the central bank on monetary economic system .this paper is organized as follows . in next sectionwe make a brief presentation of money creation and the simplified multiplier model . in section we propose a random transfer model of money with a bank . andthe shapes of monetary wealth distribution and latency time distribution are demonstrated . in section the dependence of monetary wealth distribution and the velocity of money on the required reserve ratio is presented quantitatively .we finish with some conclusions in section .modern banking system is a fractional reserve banking system , which absorbs savers deposits and loans to borrowers .generally the public holds both currency and deposits . as purchasing, the public can pay in currency or in deposits . in this sense , currency held by the public and deposits in bank can both play the role of exchange medium .thus the monetary aggregate is measured by the sum of currency held by the public and deposits in bank in economics .when the public saves a part of their currency into commercial banks , this part of currency turns into deposits and the monetary aggregate does not change .once commercial banks loan to borrowers , usually in deposit form , deposits in bank increase and currency held by the public keeps constant .so loaning behavior of commercial banks increases the monetary aggregate and achieves money creation .money creation of commercial banks is partly determined by the required reserve ratio . in reality, commercial banks always hold some currency as reserves in order to repay savers on demand .total reserves are made up of ones that the central bank compels commercial banks to hold , called required reserves , and extra ones that commercial banks elect to hold , called excess reserves . instead of appointing required reserves for each of commercial banks ,the central bank specifies a percentage of deposits that commercial banks must hold as reserves , which is known as the required reserve ratio .the role of the required reserve ratio in money creation is illuminated well by the multiplier model .the multiplier model , originally developed by brunner and meltzer , has become the standard paradigm in the textbooks of macroeconomics .we introduce its simplified version here . 
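Before the formal statement that follows, a quick numeric illustration may help (the figures are assumed for illustration only): with a required reserve ratio of r = 0.2, no currency held by the public, and no excess reserves, an initial deposit of 100 units of base money supports repeated rounds of lending and re-deposit,

```latex
% Illustrative arithmetic only; the values are assumptions, not taken from the paper.
\[
  D = 100 + 100(1-r) + 100(1-r)^2 + \cdots = \frac{100}{r} = 500,
  \qquad
  L = D - 100 = 100\,\frac{1-r}{r} = 400 .
\]
```

so the monetary aggregate ends up five times the base, and four fifths of it is created by loans, consistent with the multiplier logic developed next.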
in monetary economic system ,the monetary aggregate can be measured by where denotes currency held by the public and denotes total deposits .the monetary base is the sum of currency held by the public and reserves in the banking system , : , which are decomposed into required reserves and excess reserves , can be given by reserves can be calculated according to the required reserve ratio and deposits in commercial banks : equation ( [ reserve1 ] ) can be rewritten as simplicity , assume that the public holds no currency in hand and that excess reserves are always zero . with these two assumptions , combining equations ( [ aggregate ] ) , ( [ base ] ) and ( [ reserve2 ] )produces the monetary base - multiplier representation of the monetary aggregate : , the money multiplier , is given by to this representation , an increment of one dollar in the monetary base produces an increment of dollars in the monetary aggregate .since loans made by commercial banks create equal amount of money , its volume is the difference between the monetary aggregate and the monetary base , that is equation shows clearly the relation between money creation and the required reserve ratio . as the required reserve ratio increases , the capability of money creation declines .please note if the public holds currency in hand or commercial banks decide to keep some amount of currency as excess reserves , the amount of money created by the banking system is less than the value given by the right - hand side of equation ( [ loan ] ) .although all factors involved in money creation except the required reserve ratio are ignored in the simplified multiplier model , it conveys us the essence of money creation in reality .this suggests that the role of money creation can be investigated by focusing on the impacts of the required reserve ratio on relevant issues .thus we simply introduced a bank into the random transfer model to examine how the required reserve ratio affects monetary wealth distribution and the velocity of money .our model is an extension of the model in ref .the economy turns into the one consisting of traders and a virtual bank .we postulate that all traders hold money only in deposit form throughout the simulations . at the beginning ,a constant monetary base is equally allocated to traders and is all saved in the bank . as a result ,total reserves held by the bank are at the beginning .time is discrete .each of the traders chooses his partner randomly in each round , and yield trade pairs . in each trade pair , one is chosen as `` payer '' randomly and the other as `` receiver '' . if the payer has deposits in the bank , he pays one unit of money to the receiver in deposit form .if the payer has no deposit and the bank has excess reserves , the payer borrows one unit of money from the bank and pays it to the receiver in deposit form .but if the bank has no excess reserve , the trade is cancelled . after receiving one unit of money ,if the receiver has loans , he repays his loans . 
otherwise the receiver holds this unit of money in deposit form .simulations are expected to show the results of two issues .one is monetary wealth distributions .monetary wealth is defined as the difference between deposit volume and loan volume of a trader .thus the data of deposit and loan volumes of each trader need to be collected .the other is the velocity of money .when the transferring process of currency is a poisson process , the velocity of money can be calculated by latency time , which is defined as the time interval between the sampling moment and the moment when money takes part in trade after the sampling moment for the first time .the distribution of latency time in this case takes the following form where is the intensity of the poisson process .it can be obtained by simple manipulation that the velocity of money is the same as the intensity .thus we have , as collecting latency time , each transfer of the deposits can be regarded as that of currency chosen randomly from reserves in the bank equivalently .since the initial settings of the amount of money and the number of traders have no impacts on the final results , we performed several simulations with and , while altering the required reserve ratio .it is found that given a required reserve ratio the monetary aggregate increases approximately linearly for a period , and after that it approaches and remains at a steady value , as shown in figure .we first recorded the steady values of the monetary aggregate for different required reserve ratios and the results are shown in figure .this relation is in a good agreement with that drawn from the simplified multiplier model .we also plotted the values of time when the monetary aggregate begins to be steady for different required reserve ratios in figure . since the maximal value among them is or so , the data of deposit volume , loan volume and latency time were collected after rounds .we are fully convinced that the whole economic system has reached a stationary state by that moment . as shown in figure , monetary wealthis found to follow asymmetric laplace distribution which is divided into two exponential parts by y axis , which can be expressed as and respectively , where is the average amount of positive monetary wealth and is the average amount of negative monetary wealth . this asymmetry of the distribution arises from the non - zero monetary base set initially in our model which money creation can be achieved on the basis of .it is worth mentioning that in ref . the distribution with such a shape can also be obtained by imposing an asymmetric , triangular - shaped transfer potential . from simulation results, it is also seen that latency time follows an exponential law , as shown in figure .this result indicates that the transferring process of currency is indeed a poisson type .we show monetary wealth distributions for different required reserve ratios in figure .it is seen that both and decrease as the required reserve ratio increases .when the required reserve ratio increases closely to , decreases closely to and the distribution changes gradually from asymmetric laplace distribution to boltzmann - gibbs law which is the result from the model of adrian drgulescu and victor yakovenko .the stationary distribution of monetary wealth can be obtained by the method of the most probable distribution . 
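A compact Python sketch of the transfer rules described above is given below; the pairing scheme, the parameter values, and the crude velocity estimate are illustrative assumptions rather than the authors' exact simulation code.

```python
# Sketch of the random transfer model with a bank and required reserve ratio r.
# Pairing details and parameters are illustrative; not the authors' exact code.
import random

def simulate(num_traders=500, base_per_trader=3, r=0.2, rounds=5000, seed=1):
    rng = random.Random(seed)
    deposits = [base_per_trader] * num_traders   # all base money saved as deposits
    loans = [0] * num_traders
    base = base_per_trader * num_traders         # constant monetary base (= bank reserves)
    total_deposits = base
    trades = 0
    for _ in range(rounds):
        order = list(range(num_traders))
        rng.shuffle(order)
        for i in range(0, num_traders - 1, 2):   # random trade pairs
            payer, receiver = order[i], order[i + 1]
            if rng.random() < 0.5:
                payer, receiver = receiver, payer
            if deposits[payer] == 0:
                # payer tries to borrow one unit; needs excess reserves at the bank
                if base - r * (total_deposits + 1) < 0:
                    continue                     # trade cancelled
                loans[payer] += 1
                deposits[payer] += 1             # loan granted in deposit form
                total_deposits += 1
            deposits[payer] -= 1                 # one unit is transferred ...
            if loans[receiver] > 0:
                loans[receiver] -= 1             # ... and repays a loan (money annihilated)
                total_deposits -= 1
            else:
                deposits[receiver] += 1          # ... or is held as a deposit
            trades += 1
    wealth = [d - l for d, l in zip(deposits, loans)]   # monetary wealth per trader
    velocity = trades / rounds / total_deposits         # trade volume per round / money stock
    return wealth, velocity

# Example use: the positive and negative branches of the wealth histogram should
# each look exponential (asymmetric Laplace), and the velocity should fall as r rises.
wealth, v = simulate(r=0.1)
```

The velocity estimate here averages over all rounds, including the transient; a closer match to the procedure described above would discard the rounds before the monetary aggregate reaches its steady value.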
in our model , if traders are distributed over monetary wealth , with traders holding monetary wealth , traders holding monetary wealth , this distribution can be done in ways .it is also required that the total number of traders , the total amount of positive monetary wealth and that of negative monetary wealth must be kept constant at stationary state , that is and the stationary distribution can be obtained by maximizing subject to the constraints listed above . using the method of lagrange multipliers , we have whose solutions can be given respectively by and so the stationary distribution can be expressed in continuous form as p_{-}(m)=\displaystyle\frac{n_{0}}{n}e^{-\gamma m } \quad & \textrm{for , } \\\end{array}\ ] ] where denotes the the number of traders with no monetary wealth . substituting equations ( [ eq : soluion+ ] ) and ( [ eq : solution- ] ) into equations ( [ eq : trader ] ) , ( [ eq : positive ] ) and ( [ eq : negative ] ) , and replacing summation symbol with integral one , we have and where equation ( [ eq : con1 ] ) holds only when and .combining equations ( [ eq : distribution ] ) , ( [ eq : con1 ] ) , ( [ eq : con2 ] ) and ( [ eq : con3 ] ) , we can get and it is seen that both and decrease as the required reserve ratio increases , and the value of is always larger than that of at the same required reserve ratio .these results are illustrated by the solid lines in figure .they are in good agreement with simulation results denoted by dots .the formula of the velocity of money will be deduced here .it is known that the velocity of money is equal to the intensity of the poisson process from equation ( [ velocity1 ] ) .the intensity of the poisson process can be measured by average times a unit of money takes part in trades in each round .this suggests that the velocity of money is also the value of transaction volume in each round divided by the money stock , i.e. , in order to obtain the expression of in terms of the required reserve ratio , the analysis of is required at first . for convenience in manipulation, traders are now classified into two groups : the traders with positive monetary wealth and the ones with non - positive monetary wealth , whose numbers are denoted by and respectively . from the trading mode of our model , it can be reckoned out that each trader participates in trade averagely times in one round . in each transfer of money, the probability of transferring a unit of money is for the traders with positive monetary wealth , and it must be less than for the traders with non - positive monetary wealth , for borrowing may fail due to the limitation of required reserves .let denote this probability , from the detailed balance condition which holds in steady state , we have substituting the expressions of monetary wealth distribution ( [ eq : distribution ] ) into equation ( [ state ] ) , we obtain thus the total trade volume in each round on average can be expressed as substituting equation ( [ trade ] ) into ( [ velocity2 ] ) , the velocity of money can be given by since in steady state the number of traders whose monetary wealth changes from to is equal to that of traders whose monetary wealth changes from to , we have the following approximate relation where is the number of traders with monetary wealth .the left - hand side of equation ( [ state1 ] ) represents the number of traders whose monetary wealth changes from to and the right - hand side denotes the number of traders whose monetary wealth changes from to . 
substituting equation ( [ omega ] ) into ( [ state1 ] ) and taking into account yield and combining equations ( [ velocity3 ] ) , ( [ n_+ ] ) and ( [ n_- ] ) , we can obtain figure shows the relationships between the velocity of money and the required reserve ratio , from simulation results and from equation ( [ velocity_final ] ) respectively . by measuring latency time for different required reserve ratios ,the corresponding velocities of money are obtained from equation ( [ velocity1 ] ) . from figure , it is seen that the velocity of money has an inverse relation with the required reserve ratio .this can be interpreted in this way . in each round, if every pair of traders could fulfill their transfer of money , the trade volume would be in our model .however , in each round some transfers are cancelled because the payers with non - positive monetary wealth may not get loans from the bank . as indicated by equation ( [ velocity_final ] ) , the average realized transfer ratio can be expressed in the form of , which decreases as the required reserve ratio increases .thus the trade volume in each round decreases , and as a result the velocity of money decreases .in this paper , in order to see how money creation affects the statistical mechanics of money circulation , we develop a random transfer model of money by introducing a fractional reserve banking system . in this model , the monetary aggregate is determined by the monetary base and the required reserve ratio .computer simulations show that the steady monetary wealth distribution follows asymmetric laplace type and latency time of money obeys exponential distribution regardless of the required reserve ratio .the distribution function of monetary wealth in terms of the required reserve ratio is deduced .likewise , the expression of the velocity of money is also presented .these theoretical calculations are in quantitative agreement with the corresponding simulation results .we believe that this study is helpful for understanding the process of money creation and its impacts in reality .this research was supported by the national science foundation of china under grant no . 70371072 and 70371073 .the authors are grateful to thomas lux for comments , discussions and helpful criticisms .99 s. ispolatov , p. l. krapivsky , s. redner , _ eur .j. b _ * 2 * ( 1998 ) 267 .j. p. bouchaud , m. mzard , _ physica a _ * 282 * ( 2000 ) 536 .a. drgulescu , v. m. yakovenko , _ eur .j. b _ * 17 * ( 2000 ) 723 .a. chakraborti , b. k. chakrabarti , _ eur .j. b _ * 17 * ( 2000 ) 167 .a. chatterjee , b. k. chakrabarti , s. s. manna , _ physica a _ * 335 * ( 2004 ) 155 .b. hayes , _ am .* 90 * ( 2002 ) 400 .y. wang , n. ding , l. zhang , _ physica a _ * 324 * ( 2003 ) 665 .n. ding , n. xi , y. wang , _ eur .j. b _ * 36 * ( 2003 ) 149 .v. pareto , _cours deconomie politique _ , droz , geneva switzerland , 1896 .h. aoyama , y. nagahara , m. p. okazaki , w. souma , h. takayasu , m. takayasu , _ fractals _ * 8 * ( 2000 ) 293 .a. drgulescu , v. m. yakovenko , _ physica a _ * 299 * ( 2001 ) 213 .a. drgulescu , v. m. yakovenko , _ eur .j. b _ * 20 * ( 2001 ) 585 .a. c. silva , v. m. yakovenko , _ europhys .* 69 * ( 2005 ) 304 .f. levy , _ science _ * 236 * ( 1987 ) 923 .c. r. mcconnell , s. l. brue , _ economics : principles , problems , and policies _ , mcgraw - hill , new york , 1996 .w. j. baumol , a. s. blinder , _ macroeconomics _, 5th edn ., harcourt brace jovanovich , san diego , 1991 .d. braun , _ physica a _ * 290 * ( 2001 ) 491 .r. fischer , d. 
braun , _ physica a _ * 324 * ( 2003 ) 266 .r. fischer , d. braun , _ physica a _ * 321 * ( 2003 ) 605 .m. r. garfinkel , d. l. thornton , _ federal reserve bank of st .louis review _ * 73 * ( 1991 ) 47 .k. brunner , _ international economic review _ * january * ( 1961 ) 79 .k. brunner , a. h. meltzer , _ journal of finance _ * may * ( 1964 ) 240 .t. j. kozubowski , k. podgrski , _ math .* 25 * ( 2000 ) 37 .d. r. gaskell , _ introduction to the thermodynamics of materials _ , 4th edn . ,taylor & francis , new york , 2003 .figure 1 : : time evolution of the monetary aggregate for the required reserve ratio .the vertical line denotes the moment at which the monetary aggregate reaches a steady value .figure 2 : : steady value of the monetary aggregate versus the required reserve ratio obtained from simulation results ( dots ) and from the corresponding analytical formula derived from equations ( [ ma ] ) and ( [ multi ] ) ( continuous curve ) .figure 3 : : the moment at which the monetary aggregate reaches a steady value versus the required reserve ratio .figure 4 : : the stationary distribution of monetary wealth for the required reserve ratio .it can be seen that the distribution follows asymmetric laplace distribution from the inset .figure 5 : : the stationary distribution of latency time for the required reserve ratio .the fitting in the inset indicates that the distribution follows an exponential law .figure 6 : : the stationary distributions of monetary wealth for different required reserve ratios .note that the probability has been scaled by the corresponding maximum value .figure 7 : : ( upper ) and ( lower ) versus the required reserve ratio obtained from simulation results ( dots ) and from the corresponding analytical formulas ( continuous curves ) given by equations ( [ eq : m+ ] ) and ( [ eq : m- ] ) respectively . figure 8 : : the velocity of money versus the required reserve ratio obtained from simulation results ( dots ) and from the corresponding analytical formula ( continuous curve ) given by equation ( [ velocity_final ] ) .
In this paper the dependence of the wealth distribution and the velocity of money on the required reserve ratio is examined, based on a random transfer model of money and computer simulations. A fractional reserve banking system is introduced into the model, where money creation is achieved through bank loans and the monetary aggregate is determined by the monetary base and the required reserve ratio. It is shown that monetary wealth follows an asymmetric Laplace distribution and that the latency time of money follows an exponential distribution. The expressions for the monetary wealth distribution and for the velocity of money in terms of the required reserve ratio are derived and are in good agreement with simulation results. Keywords: money creation, reserve ratio, statistical distribution, velocity of money, random transfer. PACS: 89.65.Gh, 87.23.Ge, 05.90.+m, 02.50.-r

graphical models of codes have been studied since the 1960s and this study has intensified in recent years due to the discovery of turbo codes by berrou _ et al . _ , the rediscovery of gallager s low - density parity - check ( ldpc ) codes by spielman __ and mackay _ et al . _ , and the pioneering work of wiberg , loeliger and koetter .it is now well - known that together with a suitable message passing schedule , a graphical model implies a soft - in soft - out ( siso ) decoding algorithm which is optimal for cycle - free models and suboptimal , yet often substantially less complex , for cyclic models ( cf .it has been observed empirically in the literature that there exists a correlation between the cyclic topology of a graphical model and the performance of the decoding algorithms implied by that graphical model ( cf . ) . to summarize this empirical `` folk - knowledge '' , those graphical models which imply near - optimal decoding algorithms tend to have large girth , a small number of short cycles and a cycle structure that is not overly regular . two broad classes of graphical modeling problems can be identified in the literature : * _ constructive _ problems : given a set of design requirements , design a suitable code by constructing a good graphical model ( i.e. a model which implies a low - complexity , near - optimal decoding algorithm ) . * _ extractive _ problems : given a specific ( fixed ) code , extract a graphical model for that code which implies a decoding algorithm with desired complexity and performance characteristics .constructive graphical modeling problems have been widely addressed by the coding theory community .capacity approaching ldpc codes have been designed for both the additive white gaussian noise ( awgn ) channel ( cf . ) and the binary erasure channel ( cf . ) . other classes of modern codes have been successfully designed for a wide range of practically motivated block lengths and rates ( cf . ) .less is understood about extractive graphical modeling problems , however .the extractive problems that have received the most attention are those concerning tanner graph and trellis representations of block codes .tanner graphs imply low - complexity decoding algorithms ; however , the tanner graphs corresponding to many block codes of practical interest , e.g. high - rate reed - muller ( rm ) , reed - solomon ( rs ) , and bose - chaudhuri - hocquenghem ( bch ) codes , necessarily contain many short cycles and thus imply poorly performing decoding algorithms .there is a well - developed theory of conventional trellises and tail - biting trellises for linear block codes .conventional and tail - biting trellises imply optimal and , respectively , near - optimal decoding algorithms ; however , for many block codes of practical interest these decoding algorithms are prohibitively complex thus motivating the study of more general graphical models ( i.e. models with a richer cyclic topology than a single cycle ) .the goal of this work is to lay out some of the foundations of the theory of extractive graphical modeling problems . 
following a review of graphical models for codes in section [ background_sec ] , a complexity measure for graphical modelsis introduced in section [ complexity_sec ] .the proposed measure captures a cyclic graphical model analog of the familiar notions of state and branch complexity for trellises .the _ minimal tree complexity _ of a code , which is a natural generalization of the well - understood minimal trellis complexity of a code to arbitrary cycle - free models , is then defined using this measure . the tradeoff between cyclic topology and complexity in graphical modelsis studied in section [ ticsb_sec ] .wiberg s cut - set bound ( csb ) is the existing tool that best characterizes this fundamental tradeoff .while the csb can be used to establish the square - root bound for tail - biting trellises and thus provides a precise characterization of the potential tradeoff between cyclic topology and complexity for single - cycle models , as was first noted by wiberg _ , it is very challenging to use the csb to characterize this tradeoff for graphical models with cyclic topologies richer than a single cycle . in order to provide a more precise characterization of this tradeoff than that offered by the csb alone, this work introduces a new bound in section [ ticsb_sec ] - the _ tree - inducing cut - set bound _ - which may be viewed as a generalization of the square - root bound to graphical models with arbitrary cyclic topologies .specifically , it is shown that an -root complexity reduction ( with respect to the minimal tree complexity as defined in section [ complexity_sec ] ) requires the introduction of _ at least _ cycles .the proposed bound can thus be viewed as an extension of the square - root bound to graphical models with arbitrary cyclic topologies .the transformation of graphical models is studied in section [ model_tx_sec ] and [ extraction_sec ] . whereas minimal conventional and tail - biting trellis models can be characterized algebraically via trellis - oriented generator matrices , there is in general no known analog of such algebraic characterizations for arbitrary cycle - free graphical models , let alone cyclic models . in the absence of such an algebraic characterization , it is initially unclear as to how cyclic graphical models can be extracted . in section[ model_tx_sec ] , a set of basic transformation operations on graphical models for codes is introduced and it is shown that any graphical model for a given code can be transformed into any other graphical model for that same code via the application of a finite number of these basic transformations .the transformations studied in section [ model_tx_sec ] thus provide a mechanism for searching the space of all _ all _ graphical models for a given code . in section [ extraction_sec ] , the basic transformations introduced in section [ model_tx_sec ]are used to extract novel graphical models for linear block codes . starting with an initial tanner graph for a given code , heuristics for extracting other tanner graphs , generalized tanner graphs , and more complex cyclic graphical modelsare investigated .concluding remarks and directions for future work are given in section [ conc_sec ] .the binomial coefficient is denoted where are integers . 
the finite field with elementsis denoted .given a finite index set , the vector space over defined on is the set of vectors suppose that is some subset of the index set .projection _ of a vector onto is denoted given a finite index set , a _ linear code _ over defined on is some vector subspace .the _ block length _ and _ dimension _ of are denoted and , respectively .if known , the minimum hamming distance of is denoted and may be described by the triplet ] , over on such that it follows from lemma [ tree_comp_growth_lemma ] that completing the proof . an immediate corollary to theorem [ ti_csb_theorem ]results when proposition [ num_cycle_lemma ] is applied in conjunction with the main result : [ ti_csb_cycle_cor ] let be a linear code over with minimal tree complexity .the number of cycles in any -ary graphical model for is lower - bounded by provided is known or can be lower - bounded , the tree - inducing cut - set bound ( ti - csb ) ( and more specifically corollary [ ti_csb_cycle_cor ] ) can be used to answer the questions posed in section [ ticsb_motivation_subsec ] .the ti - csb is further discussed below . on the surface , the ti - csb and the csb are similar in statement ; however , there are three important differences between the two .first , the csb does not explicitly address the complexity of the local constraints on either side of a given cut .forney provided a number of illustrative examples in that stress the importance of characterizing graphical model complexity in terms of both hidden variable size and local constraint complexity .second , the csb does not explicitly address the cyclic topology of the graphical model that results when the edges in a cut are removed .the removal of a tree - inducing cut results in two cycle - free disconnected components and the size of a tree - inducing cut can thus be used to make statements about the complexity of optimal siso decoding using variable conditioning in a cyclic graphical model ( cf .finally , and most fundamentally , the ti - csb addresses the aforementioned intractability of applying the csb to graphical models with rich cyclic topologies .theorem [ ti_csb_theorem ] can be used to make a statement similar to theorem [ square_root_bound ] which is valid for all graphical models containing a single cycle .[ ti_csb_root_cor ] let be a linear code over with minimal tree complexity and let be the smallest integer such that there exists a -ary graphical model for which contains at most one cycle .then more generally , theorem [ ti_csb_theorem ] can be used to establish the following generalization of the square - root bound to graphical models with arbitrary cyclic topologies .[ ti_csb_r_root_cor ] let be a linear code over with minimal tree complexity and let be the smallest integer such that there exists a -ary graphical model for which contains at most cycles .then a linear interpretation of the logarithmic complexity statement of corollary [ ti_csb_r_root_cor ] yields the desired generalization of the square - root bound : an -root complexity reduction with respect to the minimal tree complexity requires the introduction of at least cycles .there are few known examples of classical linear block codes which meet the square - root bound with equality .shany and beery proved that many rm codes can not meet this bound under _ any _ bit ordering .there does , however , exist a tail - biting trellis for the extended binary golay code which meets the square - root bound with equality so that given that this tail - biting 
trellis is a -ary single cycle graphical model for , the minimal tree complexity of the the extended binary golay code can be upper - bounded by corollary [ ti_csb_root_cor ] as note that the minimal bit - level conventional trellis for contains ( non - central ) state variables with alphabet size and is thus a -ary graphical model .the proof of lemma [ tree_comp_growth_lemma ] provides a recipe for the construction of a -ary cycle - free graphical model for from its tail - biting trellis .it remains open as to where the minimal tree complexity of is precisely , however .denote by the minimum number of cycles in any -ary graphical model for a linear code over with minimal tree complexity . for large values of ,the lower bound on established by corollary [ ti_csb_cycle_cor ] becomes the ratio of the minimal complexity of a cycle - free model for to that of an -ary graphical model is thus upper - bounded by in order to further explore the asymptotics of the tree - inducing cut - set bound , consider a code of particular practical interest : the binary image of the ] reed - solomon code.,width=288 ] much as there are many valid complexity measures for conventional trellises , there are many reasonable metrics for the measurement of cyclic graphical model complexity . while there exists a unique minimal trellis for any linear block code which simultaneously minimizes all reasonable measures of complexity , even for the class cyclic graphical models with the most basic cyclic topology - tail - biting trellises - minimal modelsare not unique .the complexity measure introduced by this work was motivated by the desire to have a metric which simultaneously captures hidden variable complexity and local constraint complexity thus disallowing local constraints from hiding " complexity .there are many conceivable measures of local constraint complexity : one could upper - bound the state complexity of the local constraints or even their minimal tree complexity ( thus defining minimal tree complexity recursively ) .the local constraint complexity measure used in this work is essentially wolf s bound and is thus a potentially conservative _ upper bound _ on any reasonable measure of local constraint decoding complexity .let be a graphical model for the linear code over .this work introduces eight _ basic graphical model operations _ the application of which to results in a new graphical model for : * the merging of two local constraints and into the new local constraint which satisfies * the splitting of a local constraint into two new local constraints and which satisfy * the insertion / removal of a degree- repetition constraint . * the insertion / removal of a trival length , dimension local constraint . *the insertion / removal of an isolated partial parity - check constraint .note that some of these operations have been introduced implicitly in this work and others already .for example , the proof of the local constraint involvement property of -ary graphical models presented in section [ model_prop_subsec ] utilizes degree- repetition constraint insertion .local constraint merging has been considered by a number of authors under the rubric of clustering ( e.g. ) .this work introduces the term merging specifically so that it can be contrasted with its inverse operation : splitting .detailed definitions of each of the eight basic graphical model operations are given in the appendix . 
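To make the merge and split operations concrete, the toy Python fragment below treats each local constraint of a Tanner graph as a single parity check over a set of variable indices and enumerates the local behaviour of the constraint obtained by merging two checks; this representation and the names used are illustrative assumptions, not the notation of the appendix.

```python
# Toy illustration of local constraint merging in a Tanner graph: each check
# is a set of variable indices, and the merged constraint is the set of local
# assignments satisfying all of the merged checks. Representation is assumed.
from itertools import product

def local_behaviour(checks, variables):
    """Enumerate the local codewords of the constraint built from the given
    parity checks (each check = set of variable indices)."""
    variables = sorted(variables)
    pos = {v: i for i, v in enumerate(variables)}
    words = []
    for bits in product((0, 1), repeat=len(variables)):
        if all(sum(bits[pos[v]] for v in c) % 2 == 0 for c in checks):
            words.append(bits)
    return variables, words

c1, c2 = {0, 1, 2}, {2, 3, 4}
merged_vars, merged_words = local_behaviour([c1, c2], c1 | c2)
# merged constraint has length 5 and dimension 3 (2**3 = 8 local codewords)
print(len(merged_vars), len(merged_words))
```

Splitting the merged constraint back into c1 and c2 recovers the original checks, mirroring the inverse relationship between the two operations.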
in this section, it is shown that these basic operations span the entire space of graphical models for .[ basic_move_thm ] let and be two graphical models for the linear code over .then can be transformed into via the application of a _ finite _ number of basic graphical model operations .define the following four sub - transformations which can be used to transform into a tanner graph : 1 .the transformation of into a -ary model .2 . the transformation of into a ( possibly ) redundant generalized tanner graph .3 . the transformation of into a non - redundant generalized tanner graph .4 . the transformation of into a tanner graph . since each basic graphical model operation has an inverse , can be transformed into by inverting each of the four sub - transformations . in order to prove that can be transformed into via the application of a finite number of basic graphical model operations, it suffices to show that each of the four sub - transformations requires a finite number of operations and that the transformation of the tanner graph into a tanner graph corresponding to requires a finite number of operations . this proof summary is illustrated in figure [ model_tx_mech_fig ] . into via five sub - transformations.,width=148 ] that each of the five sub - transformations from to illustrated in figure [ model_tx_mech_fig ] requires only a finite number of basic graphical model operations is proved below .the graphical model is transformed into the -ary model as follows .each local constraint in is split into the -ary single parity - check constraints which define it . a degree- repetition constraintis then inserted into every hidden variable with alphabet index set size and these repetition constraints are then each split into -ary repetition constraints as illustrated in figure [ model_tx_qary_fig ] .each local constraint in the resulting graphical model satisfies .similarly , each hidden variable in the resulting graphical model satisfies .-ary hidden variable into -ary hidden variables.,width=312 ] a ( possibly redundant ) generalized tanner graph is simply a bipartite -ary graphical model with one vertex class corresponding to repetition constraints and one to single parity - check constraints in which visible variables are incident only on repetition constraints . 
by appropriately inserting degree- repetition constraints ,the -ary model can be transformed into .let the generalized tanner graph correspond to an redundant parity - check matrix for a degree- generalized extension of with rank a finite number of row operations can be applied to resulting in a new parity - check matrix the last rows of which are all zero .similarly , a finite number of basic operations can be applied to resulting in a generalized tanner graph containing trivial constraints which can then be removed to yield .specifically , consider the row operation on which replaces a row by where .the graphical model transformation corresponding to this row operation first merges the -ary single parity - check constraints and ( which correspond to rows and , respectively ) and then splits the resulting check into the constraints and ( which correspond to rows and , respectively ) .note that this procedure is valid since let the degree- generalized tanner graph correspond to an parity - check matrix .a degree- generalized tanner graph is obtained from as follows .denote by the parity - check matrix for the degree- generalized extension defined by which is systematic in the position corresponding to the -th partial parity symbol .since a finite number of row operations can be applied to to yield , a finite number of local constraint merge and split operations can be be applied to to yield the corresponding generalized tanner graph . removing the now isolated partial - parity check constraint corresponding to the -th partial parity symbol in yields the desired degree- generalized tanner graph . by repeatedly applying this procedure ,all partial parity symbols can be removed from resulting in .let the tanner graphs and correspond to the parity - check matrices and , respectively .since can be transformed into via a finite number of row operations , can be similarly transformed into via the application of a finite number of local constraint merge and split operations .the set of basic model operations introduced in the previous section enables the space of all graphical models for a given code to be searched , thus allowing for model extraction to be expressed as an optimization problem .the challenges of defining extraction as optimization are twofold .first , a cost measure on the space of graphical models must be found which is simultaneously meaningful in some real sense ( e.g. highly correlated with decoding performance ) and computationally tractable .second , given that discrete optimization problems are in general very hard , heuristics for extraction must be found . in this section , heuristics are investigated for the extraction of graphical models for binary linear block codes from an initial tanner graph .the cost measures considered are functions of the short cycle structure of graphical models .the use of such cost measures is motivated first by empirical evidence concerning the detrimental effect of short cycles on decoding performance ( cf . 
) and second by the existence of an efficient algorithm for counting short cycles in bipartite graphs .simulation results for the models extracted via these heuristics for a number of extended bch codes are presented and discussed in section [ sim_results_subsec ] .the tanner graphs corresponding to many linear block codes of practical interest _ necessarily _ contain many short cycles .suppose that any tanner graph for a given code must have girth at least ; an interesting problem is the extraction of a tanner graph for containing the smallest number of -cycles .the extraction of such tanner graphs is especially useful in the context of ad - hoc decoding algorithms which utilize tanner graphs such as jiang and narayanan s stochastic shifting based iterative decoding algorithm for cyclic codes and the random redundant iterative decoding algorithm presented in .algorithm [ tanner_graph_alg ] performs a greedy search for a tanner graph for with girth and the smallest number of -cycles starting with an initial tanner graph which corresponds to some binary parity - check matrix .define an -row operation as the replacement of row in by the binary sum of rows and . as detailed in the proof of theorem [ basic_move_thm ] , if and are the single parity - check constraints in corresponding to and , respectively , then an -row operation in is equivalent to merging and to form a new constraint and then splitting into and ( where enforces the binary sum of rows and ) .algorithm [ tanner_graph_alg ] iteratively finds the rows and in with corresponding -row operation that results in the largest short cycle reduction in at every step .this greedy search continues until there are no more row operations that improve the short cycle structure of . ; ; ; girth of number of -cycles in number of -cycles in a number of authors have studied the extraction of generalized tanner graphs ( gtgs ) of codes for which with a particular focus on models which are -cycle - free and which correspond to generalized code extensions of minimal degree .minimal degree extensions are sought because no information is available to the decoder about the partial parity symbols in a generalized tanner graph and the introduction of too many such symbols has been observed empirically to adversely affect decoding performance .generalized tanner graph extraction algorithms proceed via the insertion of partial parity symbols , an operation which is most readily described as a parity - check matrix manipulation . 
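Returning to the greedy search of algorithm [ tanner_graph_alg ] described above, the sketch below specializes it to the girth-4 case: it repeatedly applies the GF(2) (i, j)-row operation that most reduces the number of 4-cycles in the Tanner graph of H. The 4-cycle count uses the standard row-overlap criterion, and the code is an illustrative reimplementation under those assumptions rather than the authors' software; the same row-operation primitive underlies the generalized Tanner graph manipulations described next.

```python
# Greedy 4-cycle reduction via GF(2) row operations (girth-4 specialization of
# the Tanner graph search described above). Illustrative reimplementation.
import numpy as np
from itertools import combinations

def count_4cycles(H):
    """Each pair of rows sharing t >= 2 columns contributes C(t, 2) 4-cycles."""
    overlap = H @ H.T
    return sum(int(overlap[i, j]) * (int(overlap[i, j]) - 1) // 2
               for i, j in combinations(range(H.shape[0]), 2))

def greedy_reduce_4cycles(H):
    """Apply the best (i, j)-row operation until no operation reduces the
    4-cycle count. Row operations leave the row space, and hence the code,
    unchanged."""
    H = H.copy() % 2
    best = count_4cycles(H)
    while True:
        best_move, best_count = None, best
        for i in range(H.shape[0]):
            for j in range(H.shape[0]):
                if i == j:
                    continue
                trial = H.copy()
                trial[j] = (trial[j] + trial[i]) % 2   # (i, j)-row operation
                c = count_4cycles(trial)
                if c < best_count:
                    best_move, best_count = (i, j), c
        if best_move is None:
            return H, best
        i, j = best_move
        H[j] = (H[j] + H[i]) % 2
        best = best_count

# usage: H_new, n4 = greedy_reduce_4cycles(np.array(H0, dtype=int))
```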
following the notation introduced in section [ tg_gtg_subsec ] , suppose that a partial parity on the coordinates indexed by is to be introduced to a gtg for corresponding to a degree- generalized extension with parity - check matrix .a row is first appended to with a in the positions corresponding to coordinates indexed by and a in the other positions .a column is then appended to with a only in the position corresponding to .the resulting parity - check matrix describes a degree- generalized extension .every row in which contains a in all of the positions corresponding to coordinates indexed by is then replaced by the binary sum of and .suppose that there are such rows .it is readily verified that the tree - inducing cut size of the gtg that results from this insertion is related to that of the initial gtg , , by algorithm [ gtg_alg ] performs a greedy search for a -cycle - free generalized tanner graph for with the smallest number of inserted partial parity symbols starting with an initial tanner graph which corresponds to some binary parity - check matrix .algorithm [ gtg_alg ] iteratively finds the symbol subsets that result in the largest tree - inducing cut size reduction and then introduces the partial parity symbol corresponding to one of those subsets . at each step ,algorithm [ gtg_alg ] uses algorithm [ gtg_subalg ] to generate a candidate list of partial parity symbols to insert and chooses from that list the symbol which reduces the most short cycles when inserted .this greedy procedure continues until the generalized tanner graph contains no -cycles .algorithm [ gtg_alg ] is closely related to the gtg extraction heuristics proposed by sankaranarayanan and vasi in and kumar and milenkovic in ( henceforth referred to as the sv and km heuristics , respectively ) .it is readily shown that algorithm [ gtg_alg ] is guaranteed to terminate using the proof technique of .the sv heuristic considers only the insertion of partial parity symbols corresponding to coordinate index sets of size 2 ( i.e. ) .the km heuristic considers only the insertion of partial parity symbols corresponding to coordinate index sets satisfying .algorithm [ gtg_subalg ] , however , considers all coordinate index sets satisfying and and then uses ( [ gtg_xt_red_eq ] ) to evaluate which of these coordinate sets results in the largest tree - inducing cut size reduction .algorithm [ gtg_alg ] is thus able to extract gtgs corresponding to generalized extensions of smaller degree than the sv and km heuristics . in order to illustrate this observation , the degrees of the generalized code extensions that result when the sv , km and proposed ( hc )heuristics are applied to parity - check matrices for three codes are provided in table [ gtg_table ] .figure [ bch_31_21_fig ] compares the performance of the three extracted gtg decoding algorithms for the $ ] bch code in order to illustrate the efficacy of extracting gtgs corresponding to extensions of smallest possible degree ..generalized code extension degrees corresponding to the -cycle - free gtgs obtained via the sv , km , and hc heuristics . 
[ cols="^,^,^,^",options="header " , ]this work studied the space of graphical models for a given code in order to lay out some of the foundations of the theory of extractive graphical modeling problems .the primary contributions of this work were the introduction of a new bound characterizing the tradeoff between cyclic topology and complexity in graphical models for linear codes and the introduction of a set of basic graphical model transformation operations which were shown to span the space of all graphical models for a given code .it was demonstrated that these operations can be used to extract novel cyclic graphical models - and thus novel suboptimal iterative soft - in soft - out ( siso ) decoding algorithms - for linear block codes .there are a number of interesting directions for future work motivated by the statement of the tree - inducing cut - set bound ( ti - csb ) .while the minimal trellis complexity of linear codes is well - understood , less is known about minimal tree complexity and characterizing those codes for which is an open problem .a particularly interesting open problem is the use of the cut - set bound to establish an upper bound on the difference between and ; such a bound would allow for a re - expression of the ti - csb in terms of the more familiar minimal trellis complexity .a study of those codes which meet or approach the ti - csb is also an interesting direction for future work which may provide insight into construction techniques for good codes with short block lengths ( e.g. to of bits ) defined on graphs with a few cycles ( e.g. , or ) .the development of statements similar to the ti - csb for alternative measures of graphical model complexity and for graphical models of more general systems ( e.g. group codes , nonlinear codes ) is also interesting .there are also a number of interesting directions for future work motivated by the study of graphical model transformation .while the extracted graphical models presented in section [ sim_results_subsec ] are notable , ad - hoc techniques utilizing massively redundant models and judicious message filtering outperform the models presented in this work .such massively redundant models contain many more short cycles than the models presented in section [ sim_results_subsec ] indicating that short cycle structure alone is not a sufficiently meaningful cost measure for graphical model extraction .it is known that redundancy can be used to remove pseudocodewords ( cf . ) thus motivating the study of cost measures which consider both short cycle structure and pseudocodeword spectrum . finally , it would be interesting to study extraction heuristics beyond simple greedy searches , as well as those which use all of the basic graphical model operations ( rather than just constraint merging ) .[ basic_move_appendix ] this appendix provides detailed definitions of both the -ary graphical model properties described in section [ model_prop_subsec ] and the basic graphical model operations introduced in section [ model_tx_sec ] .the proof of lemma [ tree_comp_growth_lemma ] is also further illustrated by example . in order to elucidate these properties and definitions , a single - cycle graphical model for the extended hamming codeis studied throughout .the hidden variables and are binary while , , , , , and are -ary .all of the local constraint codes in this model are interface constraints . 
equations ( [ tbt_gen_start])-([tbt_gen_end ] ) define the local constraint codes via generator matrices ( where generates ) : , & g_2= \left[\begin{array}{ccc } 10&1&10\\ 01&1&01 \end{array}\right]\:\end{aligned}\ ] ] , & g_4= \left[\begin{array}{ccc } 10&1&0\\ 01&1&1 \end{array}\right]\;\;\;\end{aligned}\ ] ] , & g_6= \left[\begin{array}{ccc } 10&1&10\\ 01&1&01 \end{array}\right]\:\end{aligned}\ ] ] , & g_8= \left[\begin{array}{ccc } 10&1&0\\ 01&1&1 \end{array}\right].\;\;\end{aligned}\ ] ] the graphical model for illustrated in figure [ rm13_tbt_fig ] is -ary ( i.e. , ) : the maximum hidden variable alphabet index set size is and all local constraints satisfy .the behavior , , of this graphical model is generated by .\end{aligned}\ ] ] the projection of onto the visible variable index set , , is thus generated by which coincides precisely with a generator matrix for .the three properties of -ary graphical models introduced in section [ model_prop_subsec ] are discussed in detail in the following where it is assumed that a -ary graphical model with behavior for a linear code over defined on an index set is given .suppose there exists some hidden variable ( involved in the local constraints and ) that does not satisfy the local constraint involvement property .a new hidden variable that is a copy of is introduced to by first redefining over and then inserting a local repetition constraint that enforces .the insertion of and does not fundamentally alter the complexity of since and since degree- repetition constraints are trivial from a decoding complexity viewpoint .furthermore , the insertion of and does not fundamentally alter the cyclic topology of since no new cycles can be introduced by this procedure . as an example , consider the binary hidden variable in figure [ rm13_tbt_fig ] which is incident on the interface constraints and . by introducing the new binary hidden variable and binary repetition constraint ,as illustrated in figure [ rm13_internal_fig ] , can be made to be incident on the internal constraint .the insertion of and redefines over resulting in the generator matrices , & g_9= \left[\begin{array}{cc } 1&1 \end{array}\right].\end{aligned}\ ] ] clearly , the modified local constraints and satisfy the condition for inclusion in a -ary graphical model .the removal of the internal constraint from in order to define the new code proceeds as follows .each hidden variable , , is first disconnected from and connected to a new degree- internal constraint which does not impose any constraint on the value of ( since it is degree- ) .the local constraint is then removed from the resulting graphical model yielding with behavior .the new code is the projection of onto . as an example , consider the removal of the internal local constraint from the graphical model for described above ; the resulting graphical model update is illustrated in figure [ rm13_removal_fig ] .the new codes and are length 1 , dimension 1 codes which thus impose no constraints on and , respectively .it is readily verified that the code which results from the removal of from has dimension and is generated by note that corresponds to all paths in the tail - biting trellis representation of , not just those paths which begin and end in the same state . 
the removal of an internal local constraint results in the introduction of new degree- local constraints .forney described such constraints as `` useless '' in and they can indeed be removed from since they impose no constraints on the variables they involve .specifically , for each hidden variable , , involved in the ( removed ) local constraint , denote by the other constraint involving in .the constraint can be redefined as its projection onto .it is readily verified that the resulting constraint satisfies the condition for inclusion in a -ary graphical model .continuing with the above example , , , , and can be removed from the graphical model illustrated in figure [ rm13_removal_fig ] by redefining and with generator matrices , & g_8= \left[\begin{array}{cc } 10&1\\ 01&1 \end{array}\right].\:\end{aligned}\ ] ] let satisfy and consider a hidden variable involved in ( i.e. ) with alphabet index set .each of the coordinates of can be redefined as a -ary sum of some subset of the visible variable set as follows .consider the behavior and corresponding code which result when is removed from ( before is discarded ) .the projection of onto , , has length and dimension over .there exists a generator matrix for that is systematic in some size subset of the index set . a parity - check matrix that is systematic in the positions corresponding to the coordinates of thus be found for this projection ; each coordinate of is defined as a -ary sum of some subset of the visible variables by . following this procedure ,the internal local constraint is redefined over by substituting the definitions of implied by for each into each of the -ary single parity - check equations which determine .returning to the example of the tail - biting trellis for , the internal local constraint is redefined over the visible variable set as follows .the projection of onto is generated by a valid parity - check matrix for this projection which is systematic in the position corresponding to is which defines the binary hidden variable as where addition is over .a similar development defines the binary hidden variable as the local constraint thus can be redefined to enforce the single parity - check equation finally , in order to illustrate the use of the -ary graphical model properties in concert , denote by the single parity - check constraint enforcing ( [ c9_visible_def ] ) .it is readily verified that only the first four rows of ( as defined in ( [ g_hminus9_def ] ) ) satisfy .it is precisely these four rows which generate proving that in the following , the proof of lemma [ tree_comp_growth_lemma ] is illustrated by updating a cycle - free model for ( as generated by ( [ g_hminus9_def ] ) ) with the single parity - check constraint defined by ( [ c9_visible_def ] ) in order to obtain a cycle - free graphical model for . a cycle - free binary graphical model for illustrated in figure [ rm13_binary_fig ] are in no way related to those labels used previously , the labeling of hidden variables and local constraints begin at and , respectively . ] .all hidden variables in figure [ rm13_binary_fig ] are binary and the local constraints labeled , , , and are binary single parity - check constraints while the remaining local constraints are repetition codes . 
by construction , it has thus been shown that in light of ( [ c9_visible_def ] ) and ( [ ch_ch9_eqn ] ) , a -ary graphical model for can be constructed by updating the graphical model illustrated in figure [ rm13_binary_fig ] to enforce a single parity - check constraint on , , , and .a natural choice for the root of the minimal spanning tree containing the interface constraints incident on these variables is .the updating of the local constraints and hidden variables contained in this spanning tree proceeds as follows .first note that since , , , and simply enforce equality , neither these constraints , nor the hidden variables incident on these constraints , need updating .the hidden variables , , , and are updated to be -ary so that they send downstream to the values of , , , and , respectively .these hidden variable updates are accomplished by redefining the local constraints , , , and ; the respective generator matrices for the redefined codes are , & g_{17}= \left[\begin{array}{ccc } 1&0&11\\ 0&1&10 \end{array}\right]\:\end{aligned}\ ] ] , & g_{23}= \left[\begin{array}{ccc } 1&0&10\\ 0&1&11 \end{array}\right]\:\end{aligned}\ ] ] finally , is updated to enforce both the original repetition constraint on the respective first coordinates of , , , and and the additional single parity - check constraint on , , , and ( which correspond to the respective second coordinates of , , , and ) .the generator matrix for the redefined is .\end{aligned}\ ] ] the updated constraints all satisfy the condition for inclusion in a -ary graphical model .specifically , can be decomposed into the cartesian product of a length binary repetition code and a length binary single parity - check code .the updated graphical model is -ary and it has thus been shown by construction that the eight basic graphical model operations introduced in section [ model_tx_sec ] are discussed in detail in the following where it is assumed that a -ary graphical model with behavior for a linear code over defined on an index set is given . [ lc_merge_subsec ] suppose that two local constraints and are to be merged .without loss of generality , assume that there is no hidden variable incident on both and ( since if there is , a degree- repetition constraint can be inserted ) . the hidden variables incident on be partitioned into two sets where each , , is also incident on a constraint which is adjacent to .the hidden variables incident on may be similarly partitioned .the set of local constraints incident on hidden variables in both and are denoted _ common constraints _ and indexed by .figure [ merging_variable_defs_fig ] illustrates this notation .the merging of local constraints and proceeds as follows . for each common local constraint , , denote by ( ) the hidden variable incident on and ( ) .denote by the projection of onto the two variable index set and define a new -ary hidden variable which encapsulates the possible simultaneous values of and ( as constrained by ) . after defining such hidden variables for each , , a set of new hidden variables results which is indexed by .the local constraints and are then merged by replacing and by a code defined over which is equivalent to and redefining each local constraint , , over the appropriate hidden variables in . as an example , consider again the -ary cycle - free graphical model for derived in the previous section , a portion of which is re - illustrated on the bottom left of figure [ constraint_merging_fig ] , and suppose that the local constraints and are to be merged . 
the local constraints , , and are defined by ( [ 4ary_redef_1 ] ) and ( [ 4ary_redef_2 ] ) .the hidden variables incident on are partitioned into the sets and .similarly , and . the sole common constraint is thus .the projection of onto and has dimension and the new -ary hidden variable is defined by the generator matrix .\end{aligned}\ ] ] the local constraints and when defined over rather than and , respectively , are generated by , & g_{17}^\prime= \left[\begin{array}{ccc } 1&0&101\\ 1&0&111\\ 0&1&100 \end{array}\right].\:\end{aligned}\ ] ] finally , is redefined over and generated by \end{aligned}\ ] ] while and are replaced by which is equivalent to and is generated by .\end{aligned}\ ] ] note that the graphical model which results from the merging of and is -ary .specifically , is an -ary hidden variable while and .local constraint splitting is simply the inverse operation of local constraint merging .consider a local constraint defined on the visible and hidden variables indexed by and , respectively .suppose that is to be split into two local constraints and defined on the index sets and , respectively , such that and partition while but and need not be disjoint .denote by the intersection of and .local constraint splitting proceeds as follows .for each , , make a copy of and redefine the local constraint incident on ( which is not ) over both and . denote by an index set for the copied hidden variables .the local constraint is then replaced by and such that is defined over and is defined over following this split procedure , some of the hidden variables in and may have larger alphabets than necessary . specifically , if the dimension of the projection of ( ) onto a variable , ( ) , is smaller than the alphabet index set size of , then can be redefined with an alphabet index set size equal to that dimension .the merged code in the example of the previous section can be split into two codes : defined on , , and , and defined on , , and .the projection of onto has dimension and can thus be replaced by the -ary hidden variable .similarly , the projection of onto has dimension and can be replaced by the -ary hidden variable .suppose that is a hidden variable involved in the local constraints and .a degree- repetition constraint is inserted by defining a new hidden variable as a copy of , redefining over and defining the repetition constraint which enforces .degree- repetition constraint insertion can be similarly defined for visible variables .conversely , suppose that is a degree- repetition constraint incident on the hidden variables and .since simply enforces , it can be removed and relabeled .degree- repetition constraint removal can be similarly defined for visible variables .the insertion and removal of degree- repetition constraints is illustrated in figures [ repetition_insertion_hidden_fig ] and [ repetition_insertion_visible_fig ] for hidden and visible variables , respectively .trivial constraints are those incident on no hidden or visible variables so that their respective block lengths and dimensions are zero .trivial constraints can obviously be inserted or removed from graphical models .suppose that are -ary repetition constraints ( that is each repetition constraint enforces equality on -ary variables ) and let be non - zero .the insertion of an isolated partial parity - check constraint is defined as follows .define new -ary hidden variables and , and two new local constraints and such that enforces the -ary single parity - check equation and is a degree- constraint 
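Both the merging and splitting steps repeatedly need the dimension of the projection of a constraint code onto a subset of its coordinates, since that dimension determines the alphabet index-set size of the new hidden variable. For a linear code this is simply the GF(2) rank of the corresponding columns of a generator matrix; a small utility follows, with a made-up constraint code for illustration.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M) % 2).astype(int)
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

def projection_dimension(G, coords):
    """Dimension of the projection of the code generated by G onto `coords`;
    for a linear code this is the GF(2) rank of the selected columns of G."""
    return gf2_rank(np.array(G)[:, coords])

# made-up constraint code on five coordinates
G = [[1, 0, 1, 1, 0],
     [0, 1, 1, 0, 1],
     [1, 1, 0, 1, 1]]
print(projection_dimension(G, [0, 1]))  # 2 -> a 4-ary hidden variable is needed
print(projection_dimension(G, [3]))     # 1 -> a binary hidden variable suffices
```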
incident only on with dimension .the new local constraint defines the partial parity variable and is denoted _ isolated _ since it is incident on a hidden variable which is involved in a degree- , dimension local constraint ( i.e. does not constrain the value of ) .since is isolated , the graphical model that results from its insertion is indeed a valid model for .similarly , any such isolated partial parity - check constraint can be removed from a graphical model resulting in a valid model for . as an example , figure [ isolated_insert_remove_fig ] illustrates the insertion and removal of an isolated partial - parity check on the binary sum of and in a tanner graph for corresponding to ( [ ch_gen_mx ] ) ( note that is self - dual so that the generator matrix defined in ( [ ch_gen_mx ] ) is also a valid parity - check matrix for ) . c. berrou , a. glavieux , and p. thitmajshima , `` near shannon limit error - correcting coding and decoding : turbo - codes , '' in _ proc . international conf .communications _ , geneva , switzerland , may 1993 , pp . 10641070 .y. mao and a. h. banihashemi , `` a heuristic search for good low - density parity - check codes at short block lengths , '' in _ proc . international conf .communications _ , vol . 1 , helsinki , finland , june 2001 , pp . 4144 . t. tian , c. r. jones , j. d. villasenor , and r. d. wesel , `` selective avoidance of cycles in irregular ldpc code construction , '' _ ieee trans .information theory _ ,52 , no . 8 , pp . 12421247 , august 2004 . s. y. chung , g. d. forney , jr . , t. j. richardson , and r. urbanke , `` on the design of low - density parity - check codes within 0.0045 db of the shannon limit , '' _ ieee communications letters _ , vol . 5 , no . 2 ,pp . 5860 , february 2001 .t. richardson , m. shokrollahi , and r. urbanke , `` design of capacity - approaching irregular low - density parity - check codes , '' _ ieee trans .information theory _ ,47 , no . 2 ,pp . 619673 , february 2001 .h. d. pfister , i. sason , and r. urbanke , `` capacity - achieving ensembles for the binary erasure channel with bounded complexity , '' _ ieee trans .information theory _51 , no . 7 , pp .23522379 , july 2005 .k. m. chugg , p. thiennviboon , g. d. dimou , p. gray , and j. melzer , `` a new class of turbo - like codes with universally good performance and high - speed decoding , '' in _ proc .ieee military comm ._ , atlantic city , nj , october 2005 .j. s. yedidia , j. chen , and m. c. fossorier , `` generating code representations suitable for belief propagation decoding , '' in _ proc .allerton conf .commun . , control , comp ._ , monticello , il , october 2002 .t. kasami , t. takata , t. fujiwara , and s. lin , `` on the optimum bit orders with respect to the state complexity of trellis diagrams for binary linear codes , '' _ ieee trans .information theory _ ,39 , no . 1 ,pp . 242245 , january 1993 .t. r. halford and k. m. chugg , `` conditionally cycle - free generalized tanner graphs : theory and application to high - rate serially concatenated codes , '' communication sciences institute , usc , los angeles , ca , tech .csi-06 - 09 - 01 , september 2006 .s. sankaranarayanan and b. vasi , `` iterative decoding of linear block codes : a parity - check orthogonalization approach , '' _ ieee trans .information theory _ ,51 , no . 9 , pp .33473353 , september 2005 .
two broad classes of graphical modeling problems for codes can be identified in the literature : constructive and extractive problems . the former class of problems concern the _ construction _ of a graphical model in order to define a new code . the latter class of problems concern the _ extraction _ of a graphical model for a ( fixed ) given code . the design of a new low - density parity - check code for some given criteria ( e.g. target block length and code rate ) is an example of a constructive problem . the determination of a graphical model for a classical linear block code which implies a decoding algorithm with desired performance and complexity characteristics is an example of an extractive problem . this work focuses on extractive graphical model problems and aims to lay out some of the foundations of the theory of such problems for linear codes . the primary focus of this work is a study of the space of all graphical models for a ( fixed ) given code . the tradeoff between cyclic topology and complexity in this space is characterized via the introduction of a new bound : the tree - inducing cut - set bound . the proposed bound provides a more precise characterization of this tradeoff than that which can be obtained using existing tools ( e.g. the cut - set bound ) and can be viewed as a generalization of the square - root bound for tail - biting trellises to graphical models with arbitrary cyclic topologies . searching the space of graphical models for a given code is then enabled by introducing a set of basic graphical model transformation operations which are shown to span this space . finally , heuristics for extracting novel graphical models for linear block codes using these transformations are investigated .
due to the remarkable growth of the credit derivatives market , the interest in corporate claim value models and risk structure has recently increased .financial distress tends to be an important factor in many corporate decisions .the two main sources of financial distress are corporate illiquidity and insolvency . in his paper , gryglewicz explains how changes in solvency affect liquidity and also how liquidity concerns affect solvency via capital structure choice .corporate solvency is the ability to cover debt obligations in the long run .uncertainty about average future profitability , with financial leverage , generates solvency concerns. corporate insolvency may lead to corporate reorganization or to bankruptcy of the firm in the worst case .corporate bankruptcy is central to the theory of the firm .a firm is generally considered bankrupt when it can not meet a current payment on a debt obligation . in this event ,the equity holders lose all claims on the firm , and the remaining loss which is the difference between the face value of the fixed claims and the market value of the firm , is supported by the debt holders . in the literature of corporate finance , merton appears to be the main pioneers in the derivation of formulas for corporate claims .this model is a dual of black and scholes model for stock price .merton further analyzed the risk structure of interest rates .more specifically , he found the relation between corporate bond spreads and government bond , and attempted to determine a valid measure of risk .he also developed the deterministc partial differential equation modelling the debt and equity of the firm .the assumption of constant volatility in the original black - scholes and merton models from which most claims derivations are inspired , is incompatible with derivatives prices observed in the market ( see and the references therein ) . for stock price ,two alternative theories are mostly used to overcome the constant volatility drawback .the first approach sometime called level - dependent volatility describes the stock price as a diffusion with level dependent volatility .the second approach sometime called stochastic volatility defines the volatility as an autonomous diffusion driven by a second brownian motion . in , a new class of nonconstant volatility model which can be extended to include the first of the above approaches , that we called delayed model is introduced and further study in for options prices .this model shows that the past dependence of the stock price process is an important feature and therefore should not be ignored .the main goal of this model is to make volatility self reinforcing .since the volatility is defined in terms of past behavior of the asset price , the self reinforcing is high , precisely when there have been large movements in the recent past ( see ) .this is designed to reflect real world perceptions of market volatility , particularly if practitioners are to compare historic volatility with implied .following the duality between the stock price and corporate finance , we have recently introduced in the nonlinear delayed model in debt and guarantee . using self - financed strategy and replicationwe established that debt value and equity value follow two similar random partial differential equations ( rpdes ) within the last delay period interval .the analytical solution of our nonlinear model and rpdes are unknown in general case and therefore numerical techniques are needed . 
in recent years, the computational complexity of mathematical models employed in financial mathematics has witnessed a tremendous growth ( see and references therein ) .the aims of this paper is to solve numerically our delayed nonlinear model for firm market value along with the corresponding rpdes , using real data from firms .comparison will be done with classical merton model . to the best of our knowledgesuch comparison has not yet been done in the financial literature .two major comparisons will be preformed : the market value of each corporate and its equity value ( or its debt value ) .we will first approximate the volatility of each corporate , afterward solve numerically our nonlinear model for the market value of the corporate along with the corresponding merton model using the semi implicit euler maruyama scheme to obtain sample numerical solutions .monte carlo method will be thereafter used to approximate the mean numerical solution of each model .the meam numerical value from each model ( our nonlinear model and merton model ) will be therefore compared with the real market value ( ) of the corporate . for debt value ( or equity value ) solutions of rpdes established in the accompanied paper , efficient numerical scheme based on finite volume - finite difference methods ( discretization respect to the firm value ) and exponential integrator ( discretization respect to the time ) will be used . recently, exponential integrators have been used efficiency in many applications in porous media flow , but are not yet well spread in finance .the same numerical technique is also used to solve deterministic partial differential equations ( pdes ) modeling debt value or equity value in merton model .comparisons are done with the real data from firms for each model ( our delay model and merton model ) . from our comparison, it comes up that in corporate finance the past dependence of the firm value process is an important feature and therefore should not be ignored .the main goal of this paper is to call for further attention into the possibility of modeling market value of the firm with nonlinear delayed stochastic differential equations .the paper is organized as follows . in section [ model ] , we recall our delayed nonlinear model for corporate claims as presented in along with the merton model . in section [ numeric ] , numerical techniques for our delayed nonlinear model are provided .we first present the semi implicit euler maruyama for the firm market value and provide numerical experimentations for both our nonlinear model and merton model using real data for some firms .we end this section by providing numerical technique to solve efficiently our ( rpdes ) modeling the debt and equity of the firm along with numerical experimentations for the two models ( our delayed nonlinear model and merton model ) with real data for some firms .the conclusion is provided in section [ conclusion ] .here we present the stochastic delay model formulated in the accompanied paper along with random partial differential equation ( rpde ) that should satisfy any claim .we assume that : * the value of the company is unaffected by how it is financed ( the capital structure irrelevance principle ) . 
*the market value of firm at time ] is -measurable with respect to the borel -algebra of ,\mathbb{r}) ] .the results ensuring the feasiblility of the price model ( [ model ] ) is given in .following the work in , in order the rpde which must be satisfied by any security whose value can be written as a function of the value of the firm and time , we assume that any claim with market value ( which can be replicated using self - financed strategy ) at time with follows a nonlinear stochastic delay differential equation \\ \newline\\ y(t)=\varphi_y(t),\,\ , t\in [ -l,0 ] , \end{array } \right.\end{aligned}\ ] ] on a probability space . where is the constant riskless interest rate of return per unit time on this claim ; is the amount payout per unit time to this claim ; is a continuous function representing the volatility function of the return on this claim per unit time ; the initial process ,\mathbb{r}) ] .the functions and are measurable and integrable in the interval ] , , follows a stochastic differential equation ( sde ) where is the constant instantaneous variance of the return on the firm per unit time . in this case , the equity value should satisfy the following deterministc pde data on stock returns come from the center for research in securty prices(crsp ) database : http://www.crsp.com/ while those on debt values are from the research insight / compustat database ( http://www.compustat.com/ ) .more data include firms that had valid data for all 20 years from 1991 - 2010 and including : 1 .the risk free rate per year , which is the average monthly yield on us t - bills for that year ( the same for all firms each year ) .2 . the standard deviation of daily returns per year for each firm .the number of daily returns used to compute for each firm each year ( this is set to be at least 150 ) .4 . the total book value of debt ( in 1,000,000 s ) . 5 . the total value of the firm s assets ( in 1,000,000 s ) . 6 .the total amount ( in 1,000,000 s ) payout by the firm per unit time to either the shareholders or claims - holders for 10 years ( 2000 - 2010 ) .the total amount ( in 1,000,000 s ) payout per unit time for the debt within 10 years ( 2000 - 2010 ) .in fact the data set we have used include all the parameters that we need to solve either the stochastic differential equations ( [ model ] ) & ( [ modelm ] ) , or the rpde ( [ eqoub ] ) & ( [ eqou ] ) and the pde ( [ eqoum ] ) . all the simulation is performed in matlab 7.7 . in most of our simulations ,the data between 1991 - 2000.5 are used as memory data while those between 2000.5 - 2010 are used as the future data i.e. the data that we want our model to predict . to estimate the volatility function , we use the quadratic or linear interpolation of the memory part of data . as in , the quadratic form of the volatilityis motivated by the fact that the implied volatility in black - scholes model has a parabolic shape .the volatility function can also be estimateed by using the splines interpolation of the memory part of the data .as we only have yearly data set , we use also the interpolation to have more data set if need as the numerical schemes usually need small time step ( then more data set ) to ensure their stabilities . herewe consider the stochastic equations ( [ model ] ) and ( [ modelm ] ) within the time interval ] . 
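As a rough illustration of the semi-implicit Euler-Maruyama step used for the firm-value process, the sketch below assumes a delay SDE of the form dV(t) = mu*V(t) dt + sigma(V(t-L))*V(t) dW(t) with the drift treated implicitly; the exact drift/diffusion structure and the quadratic volatility coefficients are assumptions standing in for the interpolated historical volatility, not calibrated values from the firm data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma_delayed(v_past, a=0.05, b=1e-4, c=1e-9):
    # hypothetical quadratic volatility as a function of the delayed firm value,
    # standing in for the interpolated historical volatility
    return a + b * v_past + c * v_past**2

def semi_implicit_em_delay(v_hist, mu, L, dt, n_steps):
    """Semi-implicit Euler-Maruyama for an assumed delay SDE
        dV(t) = mu*V(t) dt + sigma(V(t-L))*V(t) dW(t),
    with the drift treated implicitly.  v_hist is the memory segment on [-L, 0]
    sampled every dt (so len(v_hist) = L/dt + 1)."""
    lag = int(round(L / dt))
    path = list(v_hist)
    for _ in range(n_steps):
        v_now, v_past = path[-1], path[-1 - lag]
        dW = rng.normal(0.0, np.sqrt(dt))
        path.append((v_now + sigma_delayed(v_past) * v_now * dW) / (1.0 - mu * dt))
    return np.array(path[len(v_hist):])

# Monte Carlo mean over many sample paths, as done for the firm-value comparison
L, dt, T, mu, v0 = 1.0, 1.0 / 252, 9.5, 0.03, 1000.0
memory = np.full(int(L / dt) + 1, v0)          # flat hypothetical memory segment
paths = np.array([semi_implicit_em_delay(memory, mu, L, dt, int(T / dt))
                  for _ in range(200)])
print("Monte Carlo mean terminal firm value:", paths[:, -1].mean())
```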
as the real firm value of the companiesare already known in that interval , the aim is to see how close are the forecasting firm values ( from numerical methods ) comparing to the real firm values from financial industries .recall that our nonlinear model used the memory data within the interval ] , then .the case where .for the first case ( ) the rpde ( [ eq8nnn ] ) become the deterministic pde since for ] is subdivised into parts that we assume equal without loss the generality .as in center finite volume method , we approximate at the center of each interval . the diffusion part of the equation is approximated using the finite difference while the convection term is approximated using the standard upwinding usual used in porous media flow problems .let being the center of each subdivision .we approximate the diffusion term at each center by this approximation is similar to the one in with central difference on non uniform grid .we approximate the convection term using the standard upwinding technique as following where where reorganizing all previous diffusion and convection approximations lead to the following initial value problem \\ \mathbf{f}(0)=\left(\max(v_{1}-b,0), ... ,\max(v_{n}-b,0)\right)^{t}. \end{array}\right.\end{aligned}\ ] ] where is a tridiagonal matrix and where is the contribution from boundary conditions .the function is not smooth , it important to approximate it by a smooth function .the approximation in is a fourth - order smooth function denoted and defined by where is the transition parameter and this approximation allow us to write let us introduce the time stepping discretization for the ode ( [ ode ] ) based on exponential integrators .classical numerical methods usually used are implicit euler scheme and crank nicolson scheme .following works from the exact solution of ( [ ode ] ) is given by \\ & & \qquad \qquad \tau_{n}= n\,\delta \tau,\;\;\;\;\ ; n=0 , ... , m,\;\;\;\;\ \delta \tau > 0 .\end{aligned}\ ] ] note that ( [ mild ] ) is the exact representation of the solution , to have the numerical schemes , approximations are needed , the first approximations ( using the quadrature rule ) may be using these approximations we therefore have the following second - order approximation \end{aligned}\ ] ] the simple scheme called exponential differential scheme of order 1 ( etd1 ) is obtained by approximating by the constant and is given by .\end{aligned}\ ] ] a second order scheme is given in .following the work in ( * ? ? ?* lemma 4.1 ) , if the the function can be well approximated by the polynomial of degree ( which is the case here since we have the exponential decay at the boundary ) , from ( [ milda ] ) we have where note that to have high order accuracy in time for , the integral in ( [ app ] ) should be approximated more accurately. 
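A compact sketch of the ETD1 step quoted above, evaluated through an augmented matrix exponential so that the linear operator never has to be inverted explicitly; the tridiagonal operator, payoff and nonlinearity below are placeholders rather than the paper's actual finite-volume discretisation of the RPDE.

```python
import numpy as np
from scipy.linalg import expm

def etd1_step(A, F, nonlin, dt):
    """One ETD1 step F_{n+1} = e^{A dt} F_n + A^{-1}(e^{A dt} - I) N(F_n),
    evaluated via an augmented matrix exponential (so A is never inverted)."""
    n = len(F)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A * dt
    M[:n, n] = dt * nonlin(F)
    E = expm(M)
    return E[:n, :n] @ F + E[:n, n]

# placeholder stiff tridiagonal operator and call-type initial payoff
n, dt, steps = 50, 0.01, 100
A = 5.0 * (np.diag(-2.0 * np.ones(n))
           + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))
F = np.maximum(np.linspace(0.0, 2.0, n) - 1.0, 0.0)
nonlin = lambda u: 0.1 * np.tanh(u)            # mild placeholder nonlinearity

for _ in range(steps):
    F = etd1_step(A, F, nonlin, dt)
print("solution range at t = %.2f:" % (steps * dt), F.min(), F.max())
```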
the magnus expansion may also used in such case .all schemes here can be implemented using krylov subspace technique in the computation the expomential functions presented in those schemes with the matlab functions expmvp.m or phipm.m from .the krylov subspace dimension we use is and the tolerance using in the computation of the expomential functions is .we use and obtain second order accuracy in time as the approximations ( [ app ] ) are second order in time .we used the following frims : * magna international inc ( figure [ fig03 ] ) * first citizens bancshares inc nc ( figure [ fig04 ] ) * coca - cola co ( figure [ fig05 ] ) * one liberty properties inc ( figure [ fig06 ] ) * cisco systems inc ( figure [ fig07 ] ) * c b s corp new ( figure [ fig08 ] ) * nam tai electronics inc ( figure [ fig09 ] ) here again , the time origin corresponds to the year ( 2000 + 1/2 ) , the data before are memory data and we want to predict the data after ( 2000 + 1/2 ) . in the legends of all of our graphs we use the following notation * `` delayed equity '' is for the numerical equity value from our nonlinear delayed model . *`` real equity '' is for the real equity value of the corporate . *`` merton equity '' is for the numerical equity value from merton model . in our surface graphs of the numerical equity value , we plot only the part where the variable is between the minimun and the maximum values of our real market value . in all simulations with our delayed model , we take . in all graphs ,the function ( volatility in delayed model ) is the quadratic interpolation of the standard deviation of daily returns in the memory part while the volatility in the merton model is just the mean of the memory part . for each firm, we plot at the left hand size both the surface graphs of the numerical equity value from our delayed model at and . in those 3d surface graphs ,we also plot the corresponding 3 d graphs ( green curves ) of the real data of the firm equity value as a function of the time ( year ) and . at the right hand size , we plot in 2 d the firm equity value as a function of time ( year ) , corresponding to the surface graphs at the left hand size .those 2 d equity graphs contain the numerical equity value from our delayed model , the numerical equity value of the merton model and the real data equity value of the firm . in our simulations , for a given ,the promised debt is just the real debt value of the firm at time . for firm in figure [ fig03 ], we can observe that both the delayed model and merton model fit well the real market equity value of the firm .the accuracy of the two methods varies within some time interval as we can observe in figure [ fig03b ] and figure [ fig03d ] . for firm in figure [ fig04 ] , comparing to firm the two models fit less . 
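For readers unfamiliar with the Krylov-subspace evaluation of matrix exponential actions mentioned here (the role played by expmvp.m / phipm.m), a bare-bones Arnoldi version is sketched below; it is an illustrative reimplementation, not those codes, and the test matrix is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_action(A, v, m=30):
    """Approximate exp(A) @ v by an m-dimensional Arnoldi (Krylov) projection:
    exp(A) v ~ ||v|| * V_m exp(H_m) e_1  -- the idea behind expmvp / phipm."""
    n = len(v)
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    k = m
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:               # happy breakdown: result is exact
            k = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k)
    e1[0] = 1.0
    return beta * V[:, :k] @ (expm(H[:k, :k]) @ e1)

rng = np.random.default_rng(1)
n = 200
A = -np.diag(np.linspace(0.1, 20.0, n)) + 0.01 * rng.standard_normal((n, n))
v = rng.standard_normal(n)
err = np.linalg.norm(krylov_expm_action(A, v, m=40) - expm(A) @ v)
print("relative error vs dense expm:", err / np.linalg.norm(expm(A) @ v))
```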
in a wide time interval in figure [ fig04b ] andfigure [ fig04d ] , the delayed model is more close to the real market equity of the firm .we can also observe a good early fit in the merton model .for firm in figure [ fig05 ] , comparing to firm the two models fit less .but for the maturity date in figure [ fig05 ] the fitting is relatively good for the two models .the accuracy of the two methods varies within some time interval as we can observe in figure [ fig05b ] and figure [ fig05d ] .for firm in figure [ fig06 ] , the fitting is relatively bad for the two models .however we can observe in figure [ fig06b ] and figure [ fig06d ] the good early fit in the merton model , and that in the wide time interval the delayed model is more close to the real market equity of the firm than the merton model . for firm in figure [ fig07 ] , the fitting is relatively good for the two models in the early time interval and become relatively bad just after . for firm in figure [ fig08 ] ,the fitting is relatively good for the two models for the maturity date in figure [ fig08b ] at the middle time interval and bad for the maturity date in figure [ fig08d ] . for firm in figure [ fig09 ] ,the fitting is relatively good for the two models in the early time interval but become bad just after .the two models are confused .in this paper , numerical techniques to solve delayed nonlinear model for pricing corporate liabilities are provided .the numerical technique to solve the rpdes modeling debt and equity value combines the finite difference finite volume methods ( discretization respect to the firm value ) and an exponential integrator ( discretization respect to the time ) .the matrix exponential functions are computed efficiently using krylov subspace technique . using financial data from some firms ,we compare numerical solutions from both our nonlinear model and classical merton to the real firm s data .this comparaison shows that our nonlinear model behaves very well .we conclude that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored .we thank dr . david rakwoski from college of business , southern illinois university for finding data for the simulations .antoine tambue was funded by the research council of norway ( grant number 190761/s60 ) .00 a. tambue , g. j.lord , and s. geiger , an exponential integrator for advection - dominated reactive transport in heterogeneous porous media . , 229(10):39573969 , 2010 .s. gryglewicz , a theory of corporate financial decisions with liquidity and solvency concerns , 365384 , 2011 .d. s. bates , testing option pricing models , statistical models in finance . , * 14 * ( 1996 ) , 567611 .l. o. scott , option pricing when the variance changes randomly : theory , estimation and an application ., * 22 * , 419438,1987 .e. kemajou , mohammed , and a. tambue , a stochastic delay model for pricing debt and loan guarantees : theoretical results , , 2012 .p. wilmott , j. dewynne , and s. howison , option pricing : mathematical models and computation . ,oxford , uk , 1993 . zhongdi cen , anbo le , and aimin xu , exponential time integration and second - order difference scheme for a generalized black - scholes equation . , volume 2012 ( 2012 ) , article i d 796814 , doi:10.1155/2012/796814 , 2012 .a. 
tambue .efficient numerical simulation of incompressible two - phase flow in heterogeneous porous media based on exponential rosenbrock- euler method and lower - order rosenbrock - type method ., 2012 , in press .a. tambue , i. berre , and j. m. nordbotten , efficient simulation of geothermal processes in heterogeneous porous media based on the exponential rosenbrock euler and rosenbrock - type methods . , * 53 * ,250262 , 2013 .
in the accompanied paper , a delayed nonlinear model for pricing corporate liabilities was developed . using a self - financed strategy and replication we were able to derive two random partial differential equations ( rpdes ) describing the evolution of debt and equity values of the corporate in the last delay period interval . in this paper , we provide numerical techniques to solve our delayed nonlinear model along with the corresponding rpdes modeling the debt and equity values of the corporate . using financial data from some firms , we compare numerical solutions from both our nonlinear model and the classical merton model to the real corporate data . from this comparison , it emerges that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored . corporate claim , debt security , equity , computational finance , exponential integrators
security is an important issue in wireless systems due to the broadcast nature of wireless transmissions . in a pioneering work ,wyner in addressed the security problem from an information - theoretic point of view and considered a wire - tap channel model .he proved that secure transmission of confidential messages to a destination in the presence of a degraded wire - tapper can be achieved , and he established the secrecy capacity which is defined as the highest rate of reliable communication from the transmitter to the legitimate receiver while keeping the wire - tapper completely ignorant of the transmitted messages .recently , there has been numerous studies addressing information theoretic security .for instance , the impact of fading has been investigated in , where it has been shown that a non - zero secrecy capacity can be achieved even when the eavesdropper channel is better than the main channel on average .the secrecy capacity region of the fading broadcast channel with confidential messages and associated optimal power control policies have been identified in , where it is shown that the transmitter allocates more power as the strength of the main channel increases with respect to that of the eavesdropper channel .in addition to security issues , providing acceptable performance and quality is vital to many applications .for instance , voice over ip ( voip ) and interactive - video ( e.g , .videoconferencing ) systems are required to satisfy certain buffer or delay constraints . in this paper, we consider statistical qos constraints in the form of limitations on the buffer length , and incorporate the concept of effective capacity , which can be seen as the maximum constant arrival rate that a given time - varying service process can support while satisfying statistical qos guarantees .the analysis and application of effective capacity in various settings have attracted much interest recently ( see e.g. , and references therein ) .we define the _ effective secure throughput _ as the maximum constant arrival rate that can be supported while keeping the eavesdropper ignorant of these messages in the presence of qos constraints .we assume that the csi of the main channel is available at the transmitter side .then , we derive the optimal power control policies that maximize the effective secure throughput under different assumptions on the availability of the csi of the eavesdropper channel . through this analysis, we find that due to the introduction of qos constraints , the transmitter can not reserve its power for times at which the main channel is much stronger than the eavesdropper channel .also , we note that the csi of the eavesdropper provides little help when qos constraints become more stringent . the rest of the paper is organized as follows .section ii briefly describes the system model and the necessary preliminaries on statistical qos constraints and effective capacity . 
in section iii, we present our results for both full and only main csi scenarios .finally , section iv concludes the paper .the system model is shown in fig .[ fig : systemmodel ] .it is assumed that the transmitter generates data sequences which are divided into frames of duration .these data frames are initially stored in the buffer before they are transmitted over the wireless channel .the channel input - output relationships are given by &=h_1[i]x[i]+z_1[i]\\ y_2[i]&=h_2[i]x[i]+z_2[i]\end{aligned}\ ] ] where is the frame index , ] and ] s are jointly stationary and ergodic discrete - time processes , and we denote the magnitude - square of the fading coefficients by =|h_j[i]|^2 ] , and we assume that the bandwidth available for the system is . above , the noise component ] for .the additive gaussian noise samples \} ] as the instantaneous transmit power in the frame .now , the instantaneous transmitted snr level for receiver 1 can be expressed as =\frac{p[i]}{n_1 b} ] for receiver 1 . if we denote the ratio between the noise power of the two channels as , the instantaneous transmitted snr level for receiver 2 becomes =\gamma\mu^1[i] ] , which is the time - accumulated service process . , i=1,2,\ldots\} ] , where ] .then , ( [ eq : effectivedefi ] ) can be written as }\}\,\quad \text{bits / s } , \label{eq : effectivedefirate}\end{aligned}\ ] ] where ] vary independently over different frames . the _ effective secure throughput_ normalized by bandwidth is _ et al . _ in investigated the secrecy capacity in ergodic fading channels without delay constraints .they considered two cases : full csi at the transmitter and only main csi at the transmitter . in this section, we also consider these two cases but in the presence of statistical qos constraints . in this part, we assume that the perfect csi of the main channel and the eavesdropper channel is available at the transmitter .the transmitter is able to adapt the transmitted power according to the instantaneous values of and only when .the secrecy capacity is then given by where is the optimal power allocated when and are known at the transmitter . in the presence of qos constraints , the optimal power allocation policy in general depends on the qos exponent to denote the power allocation policy under qos constraints . ] .hence , the secure throughput can be expressed as note that the first term in the function is a constant , and is a monotonically increasing function .the maximization problem in ( [ eq : fullcsimaxp ] ) is equivalent to the following minimization problem it is easy to check that when is a convex function in . according to , non - negative integral preserves convexity ,hence the objective function is convex in .then , we can form the following lagrangian function , denoted as : taking the derivative of the lagrangian function over , we get the following optimality condition : where is the lagrange multiplier whose value is chosen to satisfy the average power constraint with equality . 
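A schematic of the nested bisection described above: an inner bisection solves the monotone per-state optimality condition for the power level (with zero power whenever the main channel is not stronger than the eavesdropper channel), and an outer bisection tunes the multiplier until the average power constraint is met with equality. The condition `cond` below is a stand-in with the right monotonicity in the power variable, not the paper's exact theta-dependent expression, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def bisect_root(g, lo, hi, tol=1e-6, iters=200):
    """Root of g on [lo, hi] for a monotonically increasing g with g(lo)<0<g(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def optimal_power(z1, z2, lam, cond, p_max=1e4):
    """Per-state power from cond(P, z1, z2) = lam with cond increasing in P;
    states where the main channel is not stronger, or where no positive
    solution exists, get zero power."""
    g = lambda p: cond(p, z1, z2) - lam
    if z1 <= z2 or g(0.0) >= 0.0:
        return 0.0
    if g(p_max) < 0.0:
        return p_max
    return bisect_root(g, 0.0, p_max)

# stand-in optimality condition (monotone in P); NOT the paper's exact expression
theta = 1.0
cond = lambda p, z1, z2: np.log1p(p * z1) - np.log1p(p * z2) + theta * p

z1 = rng.exponential(1.0, 2000)       # main-channel gains (Rayleigh fading)
z2 = rng.exponential(1.0, 2000)       # eavesdropper-channel gains
p_avg_target = 1.0

def avg_power(lam):
    return np.mean([optimal_power(a, b, lam, cond) for a, b in zip(z1, z2)])

# outer bisection on the multiplier so the average power constraint holds with
# equality (avg_power is increasing in lam for this stand-in condition)
lam = bisect_root(lambda l: avg_power(l) - p_avg_target, 1e-6, 50.0, tol=1e-3)
print("multiplier:", lam, " average power:", avg_power(lam))
```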
for any channel state pairs , can be obtained from the above condition .whenever the value of is negative , it follows from the convexity of the objective function with respect to that the optimal value of is 0 .there is no closed - form solution to ( [ eq : fullcsioptcond ] ) .however , since the left - hand side of ( [ eq : fullcsioptcond ] ) is a monotonically increasing concave function , numerical techniques such as bisection search method can be efficiently adopted to derive the solution .the secure throughput can be determined by substituting the optimal power control policy for in ( [ eq : fullcsimaxp ] ) . exploiting the optimality condition in ( [ eq : fullcsioptcond ] ) , we can notice that when , we have .meanwhile , thus , we must have for , i.e. , if .hence , we can write the secure throughput as where is the derived optimal power control policy . in this section , we assume that the transmitter has only the csi of the main channel ( the channel between the transmitter and the legitimate receiver ) . under this assumption ,it is shown in that the secrecy rate for a specific channel state pair becomes ^+\end{aligned}\ ] ] where is the optimal power allocated when only is known at the transmitter . in this case , the secure throughput can be expressed as similar to the discussion in section [ sec : fullcsi ] , we get the following equivalent minimization problem : the objective function in this case is convex , and with a similar lagrangian optimization method , we can get the following optimality condition : where is a constant chosen to satisfy the average power constraint with equality .if the obtained power level is negative , then the optimal value of becomes 0 according to the convexity of the objective function in ( [ eq : maincsimaxprev ] ) .now that non - negative integral does not change the concavity , the lhs of ( [ eq : maincsioptcond ] ) is still a monotonic increasing concave function of . in rayleigh channel with . . ]the secure throughput can be determined by substituting the optimal power control policy for in ( [ eq : maincsimaxp ] ) .exploiting the optimality condition in ( [ eq : maincsioptcond ] ) , we can notice that when , we have denote the solution to the above equation as .considering that we must have for , i.e. , if . hence , we can write the secure throughput as where is the derived optimal power control policy . in fig .[ fig : secrecym=1 ] , we plot the effective secure throughput as a function of the qos exponent in rayleigh fading channel with for the full and main csi scenarios . it can be seen from the figure that as the qos constraints become more stringent and hence as the value of increases , little help is provided by the csi of the eavesdropper channel . in fig .[ fig : fixedsnr ] , we plot the effective secure throughput as varies for .not surprisingly , we observe that the availability of the csi of the eavesdropper channel at the transmitter does not provide much gains in terms of increasing the effective secure throughput in the large regime .also , as qos constraints becomes more strict , we similarly note that having the csi of the eavesdropper channel does not increase the rate of secure transmission much even at medium levels . in rayleigh channel with . . ]to have an idea of the power allocation policy , we plot the power distribution as a function of for full csi case when and in fig . [fig : powerdistfull-0 ] . 
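Once a power policy is fixed, the effective secure throughput itself can be estimated by Monte Carlo from the defining log-moment-generating-function expression. The sketch below uses Rayleigh-fading gains and a simple constant-power policy purely for illustration (not the optimal theta-dependent policy derived above); the frame length, bandwidth and SNR values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def effective_secure_throughput(theta, T, B, rates):
    """Monte Carlo estimate of the (bandwidth-normalised) effective capacity
        -(1/(theta*T*B)) * log E[ exp(-theta*T*B*R) ],
    where `rates` holds i.i.d. per-frame secure rates in bits/s/Hz."""
    return -np.log(np.mean(np.exp(-theta * T * B * rates))) / (theta * T * B)

# hypothetical setting: Rayleigh fading, constant transmit power
T, B, snr = 2e-3, 1e5, 10.0                 # frame length, bandwidth, SNR (arbitrary)
z1 = rng.exponential(1.0, 200000)           # main-channel power gains
z2 = rng.exponential(1.0, 200000)           # eavesdropper-channel power gains
secure_rates = np.maximum(np.log2(1 + snr * z1) - np.log2(1 + snr * z2), 0.0)

for theta in (1e-3, 1e-2, 1e-1, 1.0):
    print("theta = %g  throughput = %.4f bits/s/Hz"
          % (theta, effective_secure_throughput(theta, T, B, secure_rates)))
```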
in the figure, we see that for both values of , no power is allocated for transmission when which is expected under the assumption of equal noise powers , i.e. , .we note that when and hence there are no buffer constraints , opportunistic transmission policy is employed .more power is allocated for cases in which the difference is large .therefore , the transmitter favors the times at which the main channel is much better than the eavesdropper channel . at these times, the transmitter sends the information at a high rate with large power .when is small , transmission occurs at a small rate with small power .however , this strategy is clearly not optimal in the presence of buffer constraints because waiting to transmit at high rate until the main channel becomes much stronger than the eavesdropper channel can lead to buildup in the buffer and incur large delays .hence , we do not observe this opportunistic transmission strategy when . in this case, we note that a more uniform power allocation is preferred . in order not to violate the limitations on the buffer length , transmission at a moderate power levelis performed even when is small .db in rayleigh channel with . . ]in this paper , we have anayzed the secrecy capacity in the presence of statistical qos constraints .we have considered the _ effective secure throughput _ as a measure of the performance . with different assumptions on the availability of the full and main csi at the transmitter ,we have investigated the associated optimal power allocation policies that maximize the effective secure throughput . in particular , we have noted that the transmitter allocates power more uniformly instead of concentrating its power for the cases in which the main channel is much stronger than the eavesdropper channel . by numerically comparing the obtained effective secure throughput, we have shown that as qos constraints become more stringent , the benefit of having the csi of the eavesdropper channel at the transmitter diminishes .j. tang and x. zhang , `` cross - layer - model based adaptive resource allocation for statistical qos guarantees in mobile wireless networks , '' _ ieee trans .wireless commun ._ , vol . 7 , no . 6 , pp.2318 - 2328 , june 2008 . l. liu , p. parag , and j .- f .chamberland , quality of service analysis for wireless user - cooperation networks , " _ ieee trans .inform . theory _3833 - 3842 , oct .
in this paper , secure transmission of information over an ergodic fading channel is studied in the presence of statistical quality of service ( qos ) constraints . we employ effective capacity to measure the secure throughput of the system , i.e. , _ effective secure throughput_. we assume that the channel side information ( csi ) of the main channel is available at the transmitter side . under different assumptions on the availability of the csi of the eavesdropper channel , we investigate the optimal power control policies that maximize the _ effective secure throughput_. in particular , when the csi of the eavesdropper channel is available at the transmitter , it is noted that opportunistic transmission is no longer optimal and the transmitter should not wait to send the data at a high rate until the main channel is much better than the eavesdropper channel . moreover , it is shown that the benefits of the csi of the eavesdropper channel diminish as qos constraints become more stringent .
many systems in nature possess a multitude of coexisting stable states for a given set of parameters reflecting environmental conditions .this phenomenon called multistability has been studied for decades because of its dynamical complexity arising from the coexistence of the different states ( for a review cf . and references therein ) .examples can be found in various disciplines of science , such as pattern recognition in neuroscience , nonlinear optics with different phenomena in lasers and coupled lasers or manifested in different outcomes in chemical reactions despite large care taken to realize the same initial conditions . since the multitude of coexisting states can usually be related to different performances of the system , various control strategies have been developed to gear the system from one state to another in a prescribed way or to avoid states which correspond to undesired behavior of a system ( for a review cf . and references therein ) .more recently , during the last decade , the study of systems possessing only two alternative stable states has gained increasing interest due to their importance particularly in climate science and ecology ( cf .reviews and references therein ) .it has been recognized that one of the great challenges to science consists in understanding critical transitions or shifts in dynamics or properties arising in natural systems as a response to global change .such transitions , in ecology often called catastrophic or regime shifts , are in general related to either changes in the dominance of particular species resulting in different ecosystem services or even in loss of biodiversity .more specific alternative stable states have been identified in several limnic and marine ecosystems such as kelp forests , coral reefs , shallow lakes , seagrass meadows , where the alternatives are usually between the dominance of the native species like kelp , corals or seagrass and the undesired states dominated by algae .also in terrestrial ecosystems such as the sahara or more general semiarid ecosystems , in which the two alternative stable states are a vegetated and a desert state . in climate science , where these transitions are often called tipping points ,several components of the climate system have been identified to be possibly vulnerable with respect to certain perturbations .such tipping elements are related to several climate phenomena such as elnino - southern oscillation , the indian monsoon , the arctic and antarctic ice covers .additionally , as one of the first climate components , the thermohaline ocean circulation has been found to possess two alternative stable states , one of which related to the present climate with a large transport of heat to the northern latitudes and the other one corresponding to a shut - down of the circulation consequently ceasing the heat transport . as a part of the thermohaline ocean circulation , local deep ocean convection is also vulnerable to a shut down . 
both processes ,the shut - down of thc on a global scale and deep - ocean convection on a local scale would have a large impact on the climate in the northern hemisphere leading finally to a cooling in northern- and western europe .the indian monsoon is expected to become more wet or more dry depending on which of the processes responsible for such changes like increasing albedo due to aerosol concentration or stronger el nino s , respectively , are dominant in the future .another highly debated tipping point relates to the tendency of the arctic ice sheets to become thinner and finally to lead to an ice - free state in summer . finally , we mention the different approaches to study the recurrent switchings of ice ages and warmer climates before the holocene , which are attributed to several stable states and transitions between them .different scenarios are discussed in literature that lead to such critical transitions .on the one hand , changes in the environmental forcing , e.g. atmospheric temperatures or altered precipitation patterns can induce such transitions by reaching a critical threshold at which one of the states loses its stability and the system switches to another state . in mathematical terms , this scenario is related to a bifurcation ; the combination of two such bifurcations often comprises a hysteresis allowing for a switching between two alternative states , when a control parameter is varied .when two stable states coexist , then a switching between them is mediated by fluctuations leading to noise - induced transitions . due to the increasing concerns about such critical transitions on our planet earth , there is an urgent need to identify the approach of a regime shift or a tipping point before its occurrence . such early warning signals ,if applicable , can be used to anticipate the transition and to take measures to slow down the approach in the worst case or to avoid it if the expected alternative state is for some reason undesired .such avoidance might not always be possible , particularly not in the climate system , but early warnings could be used to take political actions . during the last decade several methods have been developed to gain more insights into how to predict abrupt changes in the system dynamics , induced by its nonlinearity.one of the earliest measures identified is related to the time which is needed to respond to perturbations . while far away from the critical threshold , such perturbations die out quite quickly , this damping becomes significantly slower in the neighborhood of a threshold .the perturbation applied can be either a single perturbation or some noise which is inevitable in experiments and in natural processes . in case of a single prescribed perturbation ,this measure is easy to implement in experiments and therefore widely used in quantifying the distance to the threshold value . in case of a noisy systemthe approach of the critical threshold can be quantified by an increasing variance and autocorrelation . 
as a second effect noise leads to an irregular switching process between the two ( or more ) alternative stable states .this process is called flickering , attractor hopping or chaotic itinerancy depending on the context in which it is studied .hence , a second indicator has been introduced measuring this flickering or hopping process , which occurs in a bistable ( or multistable ) region in parameter space in which two or more stable states coexist .it is important to note that the hopping dynamics depends on the noise strength .while a large body of work is devoted to the impact of additive noise , many processes in nature , particularly in ecosystems , are affected by multiplicative noise , which has only rarely been considered .it has to be emphasized that environmental noise in ecosystem dynamics is always multiplicative and plays by far the more important role .however , most of the previous work on indicators for critical thresholds is restricted to additive noise describing the impact of fluctuations on physical processes in the climate system , but being of limited value for ecological problems .many bistable systems considered in nature possess two stable equilibria , i.e. the system is stationary .for the above mentioned example of shallow lakes , the water in the lake loses transparency by shifting abruptly from a clear to a turbid state when a threshold in the level of nutrients is reached . as a result of this eutrophication process ,submerged plants dramatically disappear beyond a tipping point . for the example of desertification ,rainfall patterns are the essential environmental conditions determining the shift from a perennial vegetation via localized vegetation patterns to the state of bare soil . moreover ,taking the grazing pressure by livestock in the sahel zone into account , leads again to a shift from a perennial to an annual vegetation .however , in analyzing the regime shift in the respective ecosystems , the periodic input of nutrients and precipitation due to the seasonal cycle has been neglected , but could have an important impact too .the same argument applies to the analysis of physical processes in the climate system driven by periodic or quasi - periodic changes in the orbital parameters of the sun leading to a variation in solar insolation with periods of about , or years , the well - known milankovitch cycles .particularly the latter are assumed to be the major drivers for the development of the ice - ages before the holocene .recently , these periodic forcings affecting the albedo of the earth are studied to evaluate the impact of this variation on the arctic and antarctic ice cover . in this paperwe focus on tipping points and regime shifts of periodically forced systems . in this class of systems ,abrupt changes of the dynamics at critical transitions do not occur between steady states ( equilibria ) , but between oscillating states ( limit cycles ) .though at first sight , the hysteresis curves which are usually drawn to discuss critical transitions look similar , however , the dynamics is quite different . 
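For reference, the two standard indicators discussed here (rising variance and lag-1 autocorrelation) can be computed in a sliding window as sketched below; the toy AR(1) series with slowly increasing memory is only meant to show the indicators behaving as expected near an equilibrium-type transition, which is precisely the behaviour that fails for the limit-cycle transitions studied in this paper.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Sample lag-1 autocorrelation coefficient of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return (x[:-1] @ x[1:]) / (x @ x)

def rolling_indicators(series, window):
    """Variance and lag-1 autocorrelation computed in a sliding window."""
    var, ac1 = [], []
    for start in range(len(series) - window):
        w = series[start:start + window]
        var.append(np.var(w))
        ac1.append(lag1_autocorrelation(w))
    return np.array(var), np.array(ac1)

# toy AR(1) series whose relaxation slows down, mimicking the approach to a
# fold bifurcation of *equilibria* (where these indicators do rise)
rng = np.random.default_rng(4)
n = 5000
phi = np.linspace(0.5, 0.99, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal(0.0, 0.1)

var, ac1 = rolling_indicators(x, window=500)
print("variance  : %.4f -> %.4f" % (var[0], var[-1]))
print("lag-1 ac  : %.4f -> %.4f" % (ac1[0], ac1[-1]))
```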
for this scenariowe show that a transient structure , a so - called channel , occurring in the system s state space beyond the tipping point , creates a short - term dynamical regime with specific properties which attenuates the criticality of the transition .the smoothing of the transition is demonstrated by computing a finite - time measure of the twisting behavior ( rotation property ) of the state space surrounding trajectories while inside the channel domain .this measure indicates that , the trajectories passing the channel beyond the tipping point have residual system properties of the limit cycle destroyed at that tipping point .hence , the channel acts as a ghost " of the destroyed limit cycle , retaining system trajectories in a very similar fashion .this fact is shown by statistical analysis of the intervals of time spent by noisy trajectories in the neighborhood of the limit cycle ( pre - tipping ) and the channel ( post - tipping ) . furthermore , we attribute to the ghost state the inconclusive diagnostic provided by variance and autocorrelation in anticipating tipping points .let us now indicate the differences in the dynamical approach to deal with limit cycles instead of asymptotic equilibria .figure [ schematic1 ] shows the typical bifurcation diagram used to explain the appearance and disappearance of the coexistence of two alternative states .in contrast to the usual diagrams , the lines denote here one point of a limit cycle instead of stationary points .therefore , the y - axis does not show one coordinate of the stationary state of the system , but one coordinate of the poincar section , a special construction which is very useful in analyzing periodic solutions of nonlinear dynamical systems ( see methods section ) .the poincar section in a periodically forced system defines a stroboscopic map in which the system is always analyzed at the same phase of the forcing , i.e. at times .hence , a limit cycle corresponds to a fixed point in this stroboscopic map which makes the similarity between fig .[ schematic1 ] and the well - known sketches of bistability in the stationary case obvious .there are two saddle - node or fold bifurcations of limit cycles denoted by and , at which two limit cycles , a stable and an unstable one , emerge or disappear .crossing those tipping points the dynamics will change dramatically . increasing the parameter value , the continuation of the limit cycle on the upper branch will stop abruptly at and switch to a periodic behavior corresponding to the lower branch ,while decreasing the parameter and continuing the lower branch will result in a transition to the upper one at the critical threshold .our main focus lies in the analysis of the regions around those critical transitions .firstly , we address the question to what extent the usual criteria of critical slowing down and flickering will signal the approach to the transition ( blue region ) .secondly , we show that the critical transitions are hidden due to particular structures in state space , so - called channels , which appear in the neighborhood of the fold bifurcations of limit cycles , preventing a clear identification of the critical transitions . 
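Since the analysis is carried out on the stroboscopic (Poincaré) map, a minimal sketch of how such a map is obtained for a periodically forced oscillator may help: integrate with RK4 and record the state once per forcing period, so that a period-1 limit cycle shows up as a fixed point of the recorded sequence. The Duffing variant and parameter values below are placeholders, since the paper's exact parameter values are not legible in this copy.

```python
import numpy as np

def duffing_rhs(t, state, delta, F, omega):
    # standard hardening Duffing (placeholder parameters):
    #   x'' + delta*x' + x + x**3 = F*cos(omega*t)
    x, v = state
    return np.array([v, -delta * v - x - x**3 + F * np.cos(omega * t)])

def rk4_step(f, t, y, h, *args):
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, y + h / 2 * k1, *args)
    k3 = f(t + h / 2, y + h / 2 * k2, *args)
    k4 = f(t + h, y + h * k3, *args)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def stroboscopic_map(y0, delta, F, omega, n_periods, steps_per_period=400):
    """Sample the flow once per forcing period T = 2*pi/omega; a fixed point of
    the recorded sequence corresponds to a period-1 limit cycle."""
    T = 2 * np.pi / omega
    h = T / steps_per_period
    y, t, section = np.array(y0, dtype=float), 0.0, []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            y = rk4_step(duffing_rhs, t, y, h, delta, F, omega)
            t += h
        section.append(y.copy())
    return np.array(section)

pts = stroboscopic_map([0.5, 0.0], delta=0.2, F=0.5, omega=1.2, n_periods=60)
print("last three stroboscopic points (converged to a fixed point of the map):")
print(pts[-3:])
```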
and , which are the tipping points .the parameter regions where the critical slowing down phenomenon happens ( which provides early - warnings to predict tipping points ) are indicated in blue color .the green color regions indicate parameters where a channel associated to fold bifurcations is formed.,width=321,height=245 ]to illustrate our results , we employ a paradigmatic model system , the well - known duffing oscillator and apply a periodic forcing with amplitude and frequency . in mathematical terms , this simple dynamical system reads : the parameter is the amplitude of the system damping .the parameter controls the noise intensity given by the stochastic forcing .the function represents the usual zero mean and unit variance noise with .we use a fourth - order runge - kutta method to integrate eq .( [ ecomodel ] ) , in this process , the time is measured as a function of the period of the external forcing , i.e , . in a certain parameter range ,the system described by eq .( [ ecomodel ] ) exhibits a generic scenario of bistability between two different limit cycles , i.e. two stable periodic solutions exist separated by an unstable one of saddle character .the corresponding bifurcation diagram is shown in the upper panel of fig .[ figure1 ] .though this diagram looks very similar to the general diagram depicted in fig .[ schematic1 ] , it shows only a poincar section of the stable limit cycles occurring for the system described by eq .( [ ecomodel ] ) . to characterize those limit cycles in more detail ,we compute the generalized winding numbers ( gwn ) for each of them along the bistable parameter range , the results are depicted in the bottom panel of fig .[ figure1 ] . in this panel ,the gwn is represented by , this measure quantifies the asymptotic twisting of the local in neighborhood of limit cycles , a better description of this measure is given in the * methods*. ) duffing oscillator showing a bistability of limit cycles .the different colors , blue and yellow , represent each limit cycle , and , respectively .the state variable is the -shift map of the limit cycle variable , .the points and mark the parameters where catastrophic shift occurs , and are the corresponding critical parameter values .the other system parameters are settled in , .( bottom ) the asymptotic generalized winding numbers , , of each limit cycle in the parameter interval .the colors correspond to the respective limit cycles.,width=377,height=302 ] the bifurcation diagram of fig .[ figure1](upper ) shows the dependence of the noise - free duffing oscillator on the forcing amplitude .two stable limit cycles , ( blue ) and ( yellow ) , coexist for a range of parameters bounded by two fold bifurcations of limit cycles at the points and ( tipping points ) .so , the system is subject to catastrophic shifts , tipping points , as the parameter reaches the points or .let us now check whether the autocorrelation coefficient at lag- and the variance of the system indicate the approach of the critical transition and can serve as early warnings . to this end ,we apply now noise to the system and show the resulting behavior in fig .[ figure2 ] .we focus our analysis on the parameter region close to .hence , in fig .[ figure2 ] , we reverse the -axis to better investigate the critical transition at the parameter . 
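A corresponding sketch of the noisy integration described above: RK4 for the deterministic Duffing drift with an additive noise kick applied to the velocity after each step, and time measured in forcing periods. This simple noise treatment and the parameter values are assumptions for illustration rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def drift(t, state, delta, F, omega):
    # same placeholder Duffing drift: x'' + delta*x' + x + x**3 = F*cos(omega*t)
    x, v = state
    return np.array([v, -delta * v - x - x**3 + F * np.cos(omega * t)])

def noisy_duffing(y0, delta, F, omega, D, n_periods, steps_per_period=400):
    """RK4 for the deterministic drift plus an additive noise kick of size
    D*sqrt(h) on the velocity after every step (a simple, assumed treatment
    of the stochastic forcing); time is measured in forcing periods."""
    h = (2 * np.pi / omega) / steps_per_period
    y, t = np.array(y0, dtype=float), 0.0
    traj = [y.copy()]
    for _ in range(n_periods * steps_per_period):
        k1 = drift(t, y, delta, F, omega)
        k2 = drift(t + h / 2, y + h / 2 * k1, delta, F, omega)
        k3 = drift(t + h / 2, y + h / 2 * k2, delta, F, omega)
        k4 = drift(t + h, y + h * k3, delta, F, omega)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        y[1] += D * np.sqrt(h) * rng.normal()
        t += h
        traj.append(y.copy())
    return np.array(traj)

traj = noisy_duffing([0.5, 0.0], delta=0.2, F=0.5, omega=1.2, D=0.1, n_periods=50)
print("stroboscopic samples of x over the last five periods:")
print(traj[::400][-5:, 0])
```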
in fig .[ figure2 ] ( upper panel ) , the black line indicates the time evolution of a noisy trajectory with the parameter varying in the same interval of the bifurcation diagram also indicated in this panel . in fig .[ figure2 ] ( middle panel ) , we show the autocorrelation at lag- , a measure that usually increases with the approach of critical transitions of systems in equilibrium . in this panel , for limit cycles , we verify a sudden increase in the autocorrelation coefficient as soon as the noisy trajectory starts flickering between the stable limit cycles .but subsequently , it decreases as the system approaches the critical threshold and does not exhibit any significant change as the critical threshold is passed .similar behavior is observed for the standard deviation of the noisy trajectory , shown in fig .[ figure2 ] ( bottom panel ) .therefore , we find that the usual indicators of critical transitions between equilibria may not work for limit cycles .instead we observe a continuous decreasing of the autocorrelation coefficient and the variance , not suitable to serve as an early warning signal .to explain this behavior , we investigate in more detail the state space structure occurring for parameters succeeding the fold bifurcation at . . the forcing amplitude , ,is varied linearly through the bifurcation diagram which the -shift map of the noiseless asymptotic limit cycle is represented by the colors blue and yellow ( same bifurcation diagram of fig .[ figure1](top ) ) .( middle ) the black points represent the autocorrelation coefficient at lag- as the forcing amplitude approaches the critical value .( bottom ) the black points represent the standard deviation of the average value of the noisy time series.,width=377,height=340 ] fold bifurcations of limit cycles are accompanied by the formation of channels in state space through which the trajectory has to go after entering it . to illustrate this behavior which has been first described by manneville in the context of intermittency in turbulence , we show in fig .[ schematic2 ] the generic principle behind the formation of that channel . as outlined above, a limit cycle corresponds to a fixed point in the poincar section . in our case, one point in the poincar section is mapped onto the next point by mapping the limit cycle stroboscopically every period of the forcing , so the fixed points are mapped onto the diagonal of the diagram shown in fig .[ schematic2 ] . in the bistable regionwe have three fixed points , two stable and one unstable separating the former two ( fig . [ schematic2 ] , left panel ) . when the fold bifurcation is reached the stable and the unstable limit cycle merge and form an elliptic point ( middle panel ) , while beyond the fold bifurcation a channel appears in state space through which the trajectory moves when it comes close to the region in state space where previously the two limit cycles have been located . without noise ,the trajectory would finally converge to the only stable limit cycle left in the system , denoted by . due to the noise ,the trajectory is kicked back to the channel from time to time and moving through it again and again . 
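a minimal sketch of how the two indicators shown in fig. [ figure2 ] (lag-1 autocorrelation coefficient and standard deviation of the stroboscopic series) can be computed in a sliding window; the window length is arbitrary here, and in practice the series would usually be detrended before the indicators are evaluated.

    import numpy as np

    def rolling_indicators(z, window=200):
        # z: one coordinate of the stroboscopic (T-shift) time series
        ac1, std = [], []
        for i in range(window, len(z) + 1):
            w = z[i - window:i]
            w = w - w.mean()
            denom = np.dot(w, w)
            ac1.append(np.dot(w[:-1], w[1:]) / denom if denom > 0 else np.nan)
            std.append(w.std())
        return np.array(ac1), np.array(std)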
as a consequence of this behavior we observe even beyond the fold bifurcation , that the dynamics returns to the ghost " of the limit cycle manifested as the channel .the resulting dynamics contains phases where the trajectory is close to the ghost " and phases where is comes close to the only stable limit cycle but kicked away again by the noise .this way , the flickering dynamics goes on even though the system is well beyond the critical transition . for the very same reason the widely used indicators for critical transitions such as lag- autocorrelation function , variance as well as flickering can not signal the approach to the critical transition and the shift or tipping points happens with no warning . in our case, the characteristics of the critical slowing down indicators resemble the case of a smooth transition as analyzed in .additionally , we note that the dynamics before and beyond the critical transition is essentially the same , characterized by the hopping between the two stable states before and between the single stable state and the ghost " beyond the tipping point .this behavior is generic and will occur for all fold bifurcations of limit cycles forming a channel after the bifurcation . )two fixed points of the node type ( and ) are coexisting with a saddle .the black arrows indicate how initial conditions dynamically behave in the neighborhood of each fixed points .( ) as a system parameter is varied the node and the saddle collide forming an elliptic point ( saddle - node or fold bifurcation ) .( ) as the parameter crosses the critical bifurcation parameter , the initial conditions ( arrows ) that used to belong to the attraction domain of the extinct node are now converging to the node through a channel formed in the mapping.,width=604,height=226 ] let us now discuss the post - tipping behavior in more detail . to demonstrate that indeed the channel is the most essential structure in the state space deforming the noisy system beyond crossing the critical threshold , we analyze the scaling behavior of the length of the transient time with the distance from the threshold , therefore , we define as a parameter measuring the distance from the critical threshold , i.e. , .then , as a function of this distance , we measure the transient time , ) , for trajectories to reach the remaining stable limit cycle ( yellow ) . for these trajectories ,we choose a set of initial conditions in the state space region previously occupied by the basin of attraction of the limit cycle ( blue ) destroyed in . in fig .[ figure3 ] , we show the results for an ensemble of random initial conditions for each distance .we find the time that trajectories spend to leave the channel scales as a power - law with the distance .the characteristic exponent is equal , and hence , it corresponds to the value known from the type intermittency .hence , the characteristic time enables us to quantify the influence of the channel in the time evolution of trajectories as a function of the parameter distance . 
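the scaling reported in fig. [ figure3 ] can be checked with an ordinary least-squares fit in log-log coordinates; the crossing times below are synthetic stand-ins (the measured values are not reproduced here), generated only to show that the fitted exponent recovers the -1/2 expected for a saddle-node (type-i intermittency) channel.

    import numpy as np

    rng = np.random.default_rng(1)
    eps = np.logspace(-4, -1, 12)          # parameter distance from the tipping point
    # synthetic crossing times (in forcing periods) for illustration only; in
    # practice these come from averaging channel-crossing times over an
    # ensemble of initial conditions at each distance eps
    tau = 2.0 * eps**-0.5 * (1.0 + 0.05 * rng.standard_normal(eps.size))

    slope, intercept = np.polyfit(np.log(eps), np.log(tau), 1)
    print(f"fitted exponent: {slope:.3f}   (type-I intermittency predicts -0.5)")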
, measured in units of the systems forcing , for initial conditions to leave the channel and arrive at the remaining attractor as a function of the distance from the critical parameter point .crosses represent the numerical results and the black straight line a power - law fitting.,width=321,height=245 ] this equality verifies that trajectories starting in the former basin of attraction of the limit cycle destroyed in the fold bifurcation are in fact being trapped in the channel associated to this bifurcation for a characteristic time , . in order to obtain the twisting behavior of trajectories just during the time trapped in the channel, we introduce a finite - time version of the winding number ( ftwn ) represented by .a complete description of this definition is given in section * methods*. in the diagram shown in fig .[ figure4 ] , the color code indicates the ftwn given by , in the -axis we represent the time evolution , , in units of the period of the forcing , while in the -axis we show the distance from the bifurcation point .the red line represents the function obtained from the adjustment in fig .[ figure3 ] for the characteristic time for trajectories to cross the channel .we observe that regardless of the parameter distance , the finite - time winding number has a defined value equal to ( blue in fig .[ figure4 ] ) for times lower than the corresponding . hence , from fig .[ figure4 ] , we conclude that the post - tipping trajectories , while crossing the channel , conserve the twisting behavior ( rotation properties ) of the stable limit cycle destroyed in the tipping point . , i.e. , the time evolution , , for the parameter distance . the color scale represent the finite - time winding numbers .the red line indicates the power - law function , , adjusted in fig .[ figure3 ] for the time spent by trajectories to cross the channel.,width=340,height=264 ] in the following , we confirm the existence of the residual twisting behavior of the destroyed limit cycle by obtaining the ftwn of sets of initial conditions crossing the channel .firstly , we choose the parameter such that the dynamics will take place in the channel , i.e. , minus a small distance , then we compute the ftwn during the time ( ) for a grid of initial conditions . attributing different colors to the ftwn obtained for the trajectories corresponding to each initial condition, we clearly distinguish , in the grid of fig .[ figure5](a ) , two types of dynamic behavior .( i ) the ftwn corresponding to trajectories that cross the channel ( initial conditions in blue , fast twisting ) and ( ii ) the the trajectories converging directly to the remaining stable state ( initial conditions in yellow , slow twisting ) . in order to compare the twisting properties of the channel , measured by ftwn , to the twisting behavior around the stable states in the bistable region , we characterize the twisting around the two stable states , and , by the asymptotic generalized winding number ( gwn ) .[ figure5](b ) shows those twisting properties of trajectories starting on the same grid as in fig .[ figure5](a ) but computed by the asymptotic ( gwn ) for a forcing amplitude before the tipping .we notice the similarity between twisting of trajectories crossing the channel beyond tipping ( blue in fig .[ figure5](a ) ) and around the stable state before tipping ( blue in fig . [ figure5](b ) ) . 
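a sketch of how a finite-time winding number of the kind used in figs. [ figure4 ] and [ figure5 ] can be estimated from data: the displacement vector between a trajectory and a reference orbit is followed on the T-shift section, its accumulated rotation angle is divided by the elapsed time, and the result is normalized by the forcing frequency. the normalization convention is an assumption on our part; the formal definition is given in the methods section further below.

    import numpy as np

    def finite_time_winding_number(ref, nbr, omega):
        # ref, nbr: arrays of shape (n, 2) with (x, v) sampled on the T-shift
        # section for a reference orbit and a neighbouring trajectory; assumes
        # the displacement rotates by less than pi between section crossings
        d = nbr - ref
        theta = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))   # displacement angle
        T = 2.0 * np.pi / omega
        mean_freq = (theta[-1] - theta[0]) / ((len(ref) - 1) * T)
        return mean_freq / omega

    # ensemble average over initial conditions, as described in the text:
    # Omega = np.mean([finite_time_winding_number(ref, nbr, omega)
    #                  for nbr in ensemble])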
to illustrate further that observations of the systems trajectories are insufficient to determine whether the system is bistable ( pre - tipping ) or has a dynamical channel ( post - tipping ) , we show in fig .[ figure6]( ) the temporal evolution of a trajectory of the noisy the duffing oscillator as the parameter increases with time in the same interval as in fig .[ figure6]( ) .we notice that , even after the limit cycle marked in blue disappears in , the noisy trajectory ( black line ) is still flickering into the state space region previously occupied by the extinct limit cycle around .this becomes even more obvious when comparing two noisy trajectories with fixed forcing at an amplitude in the bistable region ( pre - tipping ) to a trajectory with a forcing amplitude beyond the tipping point ( red line in fig .[ figure6]( ) . in a statistical sensethose two trajectories are indistinguishable , indicating that the pre - tipping and the post - tipping behavior are very similar , with flickering between two distinct state space regions of and or the ghost " of respectively . as a consequence , time series as the main window to observations in nature , would show the flickering phenomenon before and beyond the tipping points making the transition in the observed data to appear smooth instead of abrupt . in order to verify this statement , we investigate the intervals of time , , that a noisy trajectory elapses in the neighborhood of the stable state ( before the tipping point ) , and in the channel ( beyond the tipping point ) . the idea behindthis study is to extend the notion of escape times to characterize the dynamics beyond the tipping point . in bistable systemsone usually computes the mean escape time or mean first passage time to identify the stability of each stable state in a stochastic sense .while for systems possessing a double well potential , it is possible to compute those escape times analytically , one has to rely on numerical estimations for arbitrary multistable systems .though the vast majority of nonlinear dynamical systems do not possess a potential , the scaling of the escape rates remains valid .specifically , in figure [ figure7 ] , we obtain the distribution of time intervals spent by trajectories in the neighborhood of the stable states and in the channel .the time interval , is also expressed in units of the period of the forcing .specifically , we show in fig .[ figure7 ] the distributions of the time intervals spent by trajectories in the neighborhood of the stable limit cycle which will go extinct at the tipping point ( fig .[ figure7]( ) ) with the distribution of those time intervals spent in the neighborhood of the ghost " of beyond the tipping point ( fig .[ figure7]( ) ) .both distributions are exponential distributions , so that the probability density function can be approximated by where is the mean value of the distribution .while in the bistable parameter region the mean time spent close to the limit cycle is periods of the forcing , it is only slightly shorter ( periods of the forcing ) beyond the tipping point .however , the density function for the dynamics close to the channel is narrower and higher than in the bistable region , indicating that the shorter intervals of time are more frequent .hence , even for the parameter lower than ( beyond tipping point ) , the frequency with which trajectories visit the neighborhood of the extinct state is not zero , i.e. 
, the flickering phenomenon still occurs after the tipping point .it means that even after the extinction of the limit cycle in , the state space channel keeps retaining trajectories , avoiding their abrupt definitive transition to the unique survival stable state . to emphasize that , the characteristics of the dynamics changes smoothly and not abrupt when crossing the tipping point , we show in fig .[ figure7]( ) the changes in the distribution function when decreasing the forcing from the bistable to the monostable region . in this figure, the color code indicates the probability densities for each parameter shown in the -axis .we observe that the distribution of time intervals smoothly changes as the tipping point is approached and passed , indicating that there is no abrupt transition crossing the threshold .however , as the parameter is passed through the tipping point , a considerable increasing in the density of time interval values around the expected value is observed making the distribution narrower for parameters well beyond the tipping point .in summary , we address tipping points of systems subjected to periodic external forcing . the asymptotic solutions of this class of systems inherently settle into oscillating stable states ( limit cycles ) , a more complex dynamics than the stable steady states ( equilibria ) for which the tipping points have been widely studied . in nature , the most noticeable occurrences of such oscillating attractors are found in ecology and climate sciences where periodic and quasi - periodic variabilities arise from external factors such as seasonality and astronomical forcing . here , for a generic periodically forced system that generates such oscillations , we consider the typical hysteretic scenario to investigate tipping points , i.e. , a bistable parameter region where the tipping is represented by fold bifurcations of limit cycles rather than steady states . asthe parameters are varied and the system reaches a fold bifurcation , in which a stable limit cycle is destroyed leaving a transient structure , a so - called channel , in the state space of the system . hence , for parameters beyond this tipping point , the channel gives rise to a short - term dynamics which possesses similar properties than the destroyed limit cycle and can therefore be attributed to a `` ghost '' of the latter .we find that a residual dynamical property of the limit cycle destroyed in the tipping point , namely its twisting behavior , occurs in the short - term dynamics for parameters in the post - tipping region .this finding indicates that the short - term behavior carries dynamical information of the destroyed oscillating stable state . for system parameters fixed in the post - tipping region ,we obtain the time evolution of the system subject to a stochastic noise . with this, we show that the ghost " attractor retains systems trajectories in a very similar fashion of the stable limit cycle destroyed in the tipping point . 
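the residence-time statistics of fig. [ figure7 ] can be gathered along the following lines: collect the lengths of the excursions a noisy stroboscopic trajectory makes inside a small neighbourhood of the state of interest (the limit cycle before tipping, the ghost region after it) and fit an exponential density, whose maximum-likelihood mean is simply the sample mean. the neighbourhood radius is a free choice and is not specified in the excerpt.

    import numpy as np

    def residence_times(section, center, radius):
        # section: stroboscopic samples, shape (n, 2); returns the lengths (in
        # forcing periods) of the intervals spent inside the chosen neighbourhood
        inside = np.linalg.norm(section - np.asarray(center), axis=1) < radius
        times, run = [], 0
        for flag in inside:
            if flag:
                run += 1
            elif run:
                times.append(run)
                run = 0
        if run:
            times.append(run)
        return np.array(times)

    # exponential fit p(t) = exp(-t/mu) / mu  ->  mu_hat = residence_times(...).mean()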
additionally , by obtaining the statistics of the time intervals that noise trajectories spend in the neighborhood of the stable limit cycle and in the neighborhood of the ghost " , we find that the pdfs of waiting times in both regions have the same exponential profile and do not differ much in their expected values .therefore , the ghost `` dynamics plays an essential role in attenuating the critical transition in a way that it may be seen as a smooth transition when trying to diagnose it from real - world data .hence , none of the well - known methods like autocorrelation function , variance or flickering are suitable to identify this particular transition properly .the emergence of the ' ' ghost " masks the transition until the system is well beyond the tipping point and makes it to appear smooth instead of catastrophic .as we consider systems whose asymptotic behavior are _ limit cycles _ , the final dynamics are oscillations rather than equilibria .bifurcation analysis are performed by defining a poincar section , which usually is a hyper - surface arranged transversally to the limit cycle , where the whole system dynamics is described by a discrete system .letting be the function that describes the intersections of the limit cycle with the section , for trajectories in three dimensional space , results that , where are the coordinates of the crossing .consequently , on the surface of section , limit cycles are represented by _ fixed points _ of .then , states , as shown in bifurcation diagrams such as of fig .[ schematic1 ] , are defined in the surface of section , and in case the section is chosen to be a plane , they are denoted by the ( , ) plane coordinates . for the duffing oscillator described by eq .( [ ecomodel ] ) , a suitable poincar section is the so - called -shift .the dynamics over the section is represented by discrete variables ( , ) defined as the solution pair ( , ) collected every period , . in case of limit cycles ,a risky bistable configuration occurs when two _ stable limit cycles _ are coexisting with one _ unstable cycle of saddle type_. the emergence of a dynamical channel at this scenario can be described on a suitable poincar section .we show in fig .[ schematic3]( ) that the stable limit cycles yield two fixed points of the _ node type _ , and in the surface of section , while the unstable limit cycle generates a fixed point of _ saddle type_. the stable manifold of the saddle separates the initial conditions attracted by each node ( blue and red in fig . [ schematic3]( ) ) . as the control parameters are varied approaching the fold bifurcation that delimitate the bistability region , such as and in fig .[ schematic1 ] , one of the node fixed points approaches the saddle .when the system is set to the parameters at the fold bifurcation point , for instance , the node and the saddle collide , and they both disappear forming an elliptic fixed point denoted by in fig .[ schematic3]( ) . 
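the channel mechanism sketched in fig. [ schematic3 ] is local and essentially one-dimensional, so it can be illustrated with the saddle-node normal form x_{n+1} = x_n + x_n^2 + eps (a standard textbook reduction, not the duffing poincar map itself): for eps slightly positive the fixed points are gone, yet iterates still need many steps to squeeze through the region where they used to be.

    import numpy as np

    def channel_passage_length(eps, x0=-0.5, x_exit=0.5, max_iter=10**7):
        # iterations needed to traverse the narrow channel of the saddle-node
        # normal form x_{n+1} = x_n + x_n**2 + eps, with eps > 0 past the fold
        x, n = x0, 0
        while x < x_exit and n < max_iter:
            x = x + x * x + eps
            n += 1
        return n

    for eps in (1e-2, 1e-3, 1e-4):
        print(eps, channel_passage_length(eps))   # grows roughly like eps**-0.5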
for post - bifurcation parameters , the schematic of the poincar sectionis shown in fig .[ schematic3]( ) , the node and the saddle no longer exist in the poincar section .however , trajectories starting in the space state region , which used to be the basin of attraction of the destroyed node , converge to the remaining node but not before being attracted by the stable manifold of the unstable elliptic point .effectively , the system s trajectory behaves as if there existed a channel constraining the trajectory and leading it to the remaining stable state .the occurrence of these dynamical channels related to fold bifurcations of limit cycles has been first discussed by pomeau and manneville , and has been argued to be the mechanism responsible for the laminar phase in the type- intermittency scenario . however , in type- intermittency , chaotic bursts re - inject the trajectory in the dynamical channel .the trajectory spends long time intervals to cross the channel ( the laminar phase ) eventually escaping to the chaotic phase space region , producing another chaotic burst . in this work , there is no chaotic process to re - inject the trajectory into the channel , so we introduce a gaussian noise which resets the trajectory to a random configuration belonging to the basin of attraction of the stable state extinct in the fold bifurcation .this procedure successively ejects the trajectory off the neighborhood of the survival stable state , forcing it to successively cross the channel along its time evolution .regardless of the mechanism used to re - inject the trajectory through the channel , the time spent by trajectories to cross depends on the distance from the fold bifurcation as : in general , the state space in the neighborhood of a limit cycle is affected by its presence , commonly , the limit cycle induces a twisting in its neighboring space .this twisting can be quantified by computing the so called _ generalized winding number _( gwn ) . given a limit cycle of the duffing oscillator described by eq .( [ ecomodel ] ) , the gwn of can be obtained by computing the average frequency of the twisting that a neighbor trajectory performs around . defining as the angle between and over the -shift poincar ' e section , the frequencyis given by : and , the gwn is : where is the frequency of the forcing in eq .( [ ecomodel ] ) . hence , eq . ( [ eq : inffreq ] ) and ( [ eq : infwind ] ) allow us to compute the gwn of the two coexisting stable states of the bistable region ( between and in fig . 
[ figure1 ] ) . for each stable state a gwn can be calculated by considering sets of initial conditions belonging to the basin of attraction of that state . since these states are attractors , trajectories naturally approach the neighborhood of the stable limit cycles , providing a gwn based on the local properties of the limit cycles . notice that eq . ( [ eq : inffreq ] ) is defined for an infinitely long time , so that the main contribution to the averaged twisting frequency comes from rotations around the asymptotic stable state . hence , to obtain the twisting properties of transitory structures , eq . ( [ eq : inffreq ] ) has to be reformulated in a finite - time version . so , we define a finite - time twisting frequency over a window of finite length . for short - term trajectories we have to consider possible deviations in the finite - time winding number for different initial conditions . hence , we take an average over an ensemble of initial conditions to define the finite - time winding numbers ; the brackets in that definition denote the average over the ensemble of trajectories . since the time to cross the channel is a function of the parameter distance , we represent the finite - time winding number as a function of that distance rather than of the elapsed time alone . we would like to thank the brazilian agencies fapesp ( processes 2011/19296 - 1 , 2013/26598 - 0 , and 2015/20407 - 3 ) , cnpq and capes for partial support of this work . msb acknowledges epsrc grant ep / i032606/1 .
nonlinear dynamical systems may be exposed to tipping points , critical thresholds at which small changes in the external inputs or in the system's parameters abruptly shift the system to an alternative state with a contrasting dynamical behavior . while tipping in a fold bifurcation of an equilibrium is well understood , much less is known about the tipping of oscillations ( limit cycles ) , even though such dynamics are the typical response of many natural systems to a periodic external forcing , e.g. seasonal forcing in ecology and climate sciences . we provide a detailed analysis of tipping phenomena in periodically forced systems and show that , when limit cycles are considered , a transient structure , a so - called channel , plays a fundamental role in the transition . specifically , we demonstrate that trajectories crossing such a channel conserve , for a characteristic time , the twisting behavior of the stable limit cycle destroyed in the fold bifurcation of cycles . as a consequence , this channel acts like a ghost of the limit cycle destroyed in the critical transition , and instead of the expected abrupt transition we find a smooth one . this smoothness is also the reason why it is difficult to precisely determine the transition point using the usual indicators of tipping points , such as critical slowing down and flickering .
in recent years deep belief networks have achieved remarkable results in natural language processing , computer vision and speech recognition tasks . specifically , within natural language processing ,modeling information in search queries and documents has been a long - standing research topic .most of the work with deep learning has involved learning word vector representations through neural language models and performing composition over the learned word vectors for classification .the optimal transformation in our case was to map each query document to a single numeric vector by assigning a single numeric value to each unique word across all query documents .a second phase was then employed by mapping the numerically transformed query vectors to a random embedding space having a uniform distribution between -1 and 1 .this helped far more in reducing the distance between queries having similar words while further discriminating queries far on the data space having more dissimilar words .another suitable criteria that is applicable to our problem is proposed by johnson and zhang in 2014 , where they propose a similar model , but swapped in high dimensional ` one - hot ' vector representations of words as cnn inputs .convolutional neural networks ( cnn ) are biologically - inspired variants of multiple layer perceptrons ( mlp ) .they utilize layers with convolving filters that are applied to local features originally invented for computer vision .convolutional neural networks have also been shown to be highly effective for natural language processing and have achieved excellent results in information retrieval , semantic parsing , sentence modeling and other traditional natural language processing tasks . before going into the details of our model architecture and results, we will first narrate the work we did to prepare our query data for modelling .the advertisements in ebay s classifieds platforms are classified according to a pre - defined hierarchy . the first level ( l1 ) of this hierarchy categorizes advertisements into general groupings like ` buy & sell ' , ` cars & vehicles ' , ` real state ' , ` pets ' , ` jobs ' , ` services ' , ` vacation rentals ' and ` community ' . the second level ( l2 )further classifies each l1-category with many subclasses with more specificity .the third level ( l3 ) further classifies and so on .most platforms terminate the hierarchy at a level of three or four . in this paperwe will only demonstrate the results of our work related to l1-category query classification . for each keyword search initiated within a user session at the all - advertisement level (all - advertisement level means a search across all inventory with no category restrictions employed ) , the chain of actions on that search is analysed .when that sequence of actions results in a view of an advertisement within a specific category , that particular category is scored with a dominance point for the given query .there are many noisy factors that must be accounted for when applying this technique . 
among them include factors like bots , redundant query actions , filtering out conversions to categories that no longer exist and filtering out queries without enough conversions .the dominance of category for each query document in the last 90 days is computed on the basis of the maximum number of collaborative clicks for each l1-category .the category with the highest number of clicks is considered the dominant category for that query .this also enabled us to produce the first highest , second highest and third highest dominant category and their respective conversion rates for each query . the conversion rate per query is calculated by counting the total number of clicks for each category divided by the total number of clicks for that query .finally all query documents for the last 90 days are standardized by transforming them to lower - case , removing duplicate queries , extra spaces , punctuations and all other noise factors .a single pattern from each l1-category of the final preprocessed data ready to be used for learning is shown in table [ preprocessed ] . in table[ preprocessed ] the categoryid feature is used as a label for supervised learning using a deep convolutional neural network .the total distinct query patterns for most of the categories in the last 90 days ranges between 5000 to 7000 . [ cols="^,^,^,^,^",options="header " , ] [ model_result ]results of the proposed model for the dominant category prediction problem compared to other state - of - the - art methods are listed in table [ model_result ] .the proposed well - tuned deep convolutional neural network simply outperformed its variations and other models .we tested the predictive accuracy by first using few days different testing data from training shown in the first row and fourth column of table [ model_result ] for every model type .the cnn model produced a very high training and testing accuracy of 99.9 % and 98.5 % .secondly we tried testing completely different days testing data from training and the resulting outcomes are shown in the second row of table [ model_result ] for every model type .this is our worst case scenario where we have used a completely different testing data for dominant category prediction but still the cnn model has produced a very high testing accuracy of 95.8 % .the major advantage with cnn compared to other state - of - the - art approaches is its added capability to learn invariant features .this capability of cnn to make the convolution process invariant to translation , rotation and shifting helps in approximating to the same class even when there is a slight change in the input query document .the step by step training accuracy and loss of our convolutional neural network model are also shown in figure [ fig : sub1 ] and [ fig : sub2 ] .initially the accuracy was noted very low but gradually it improved at each training step and almost reached to one in the end as shown in figure [ fig : sub1 ] .similarly , the loss was very high in the beginning , but almost reached to zero in the end as shown in figure [ fig : sub2 ] .this clearly shows the convergence of the proposed well - tuned deep convolutional neural network .the multiple layer perceptron model with an empirically evaluated one and two hidden layers of size 200 did not perform effectively well and produced a predictive accuracy of 55.91 % and 54.98 % on both of the testing sets .we also further tried to increase the count of hidden layers to explicitly add the certain level of non - linearity but still the predictive 
accuracy more or less remained constant .furthermore we tried running long short term memory ( lstm ) recurrent neural networks which are shown to outperform other recurrent neural network algorithms specifically for language modelling .however , in our case there is no sequence to sequence connection between the current and previous activations of the sequential query patterns , the maximum predictive accuracy that lstm recurrent neural network could produce was 63.06 % and 65.19 % for both the testing datasets .the bi - directional recurrent neural network worked a little worse compared to lstm network and produced a predictive accuracy of 52.98 % and 50.05 % on both the testing datasets .in the present work we have described a tuned , fully connected cnn that outperformed its variants and other state - of - the art ml techniques .specifically , in query to category classification across several ebay classifieds platforms .our results integrate to evidence that numeric vector mapping to random uniformly distributed embedding spaces proves more suitable both computationally and performance wise in comparison to word2vec . specifically for datasets having a limited vocabulary corpus ( between 10,000 to 15,000 words ) and few words ( between 2 to 3 ) in each query document .the first and second authors are grateful to johann schweyer for his contribution in query normalization and aggregation .we are also extremely thankful to brent mclean vp , cto , ebay classifieds for his kind support and encouragement throughout this dominant category prediction project .g. e. hinton , n. srivastava , a. krizhevsky , i. sutskever , and r. r. salakhutdinov , `` improving neural networks by preventing co - adaptation of feature detectors , '' _ arxiv preprint arxiv:1207.0580 _ , 2012 .s. deerwester , s. t. dumais , g. w. furnas , t. k. landauer , and r. harshman , `` indexing by latent semantic analysis , '' _ journal of the american society for information science _41 , no ., 1990 .j. gao , j .- y .nie , g. wu , and g. cao , `` dependence language model for information retrieval , '' in _ proceedings of the 27th annual international acm sigir conference on research and development in information retrieval_.1em plus 0.5em minus 0.4emacm , 2004 , pp . 170177 .r. collobert , j. weston , l. bottou , m. karlen , k. kavukcuoglu , and p. kuksa , `` natural language processing ( almost ) from scratch , '' _ journal of machine learning research _ , vol .aug , pp . 24932537 , 2011 .y. shen , x. he , j. gao , l. deng , and g. mesnil , `` learning semantic representations using convolutional neural networks for web search , '' in _ proceedings of the 23rd international conference on world wide web_.1em plus 0.5em minus 0.4emacm , 2014 , pp .373374 .yih , k. toutanova , j. c. platt , and c. meek , `` learning discriminative projections for text similarity measures , '' in _ proceedings of the fifteenth conference on computational natural language learning_.1em plus 0.5em minus 0.4emassociation for computational linguistics , 2011 , pp . 247256 .
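as a concluding illustration of the pipeline described above (integer word ids, a random embedding drawn uniformly from [-1, 1], and a small convolutional classifier with a softmax output over the eight l1-categories), the sketch below uses tensorflow/keras; the embedding dimension, filter count and kernel size are illustrative placeholders, since the exact hyperparameters are not given in this excerpt.

    import numpy as np
    import tensorflow as tf

    def build_vocab(queries):
        # assign a unique integer id to every distinct word (0 is reserved for padding)
        vocab = {}
        for q in queries:
            for w in q.lower().split():
                vocab.setdefault(w, len(vocab) + 1)
        return vocab

    def vectorize(queries, vocab, max_len=3):
        ids = np.zeros((len(queries), max_len), dtype=np.int64)
        for i, q in enumerate(queries):
            toks = [vocab.get(w, 0) for w in q.lower().split()][:max_len]
            ids[i, :len(toks)] = toks
        return ids

    def build_model(vocab_size, n_classes=8, max_len=3, embed_dim=128):
        init = tf.keras.initializers.RandomUniform(minval=-1.0, maxval=1.0)
        return tf.keras.Sequential([
            tf.keras.Input(shape=(max_len,)),
            tf.keras.layers.Embedding(vocab_size + 1, embed_dim,
                                      embeddings_initializer=init),
            tf.keras.layers.Conv1D(128, 2, activation="relu"),
            tf.keras.layers.GlobalMaxPooling1D(),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])

    # model = build_model(len(vocab))
    # model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
    #               metrics=["accuracy"])
    # model.fit(vectorize(queries, vocab), category_ids, epochs=10)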
deep neural networks , and specifically fully - connected convolutional neural networks are achieving remarkable results across a wide variety of domains . they have been trained to achieve state - of - the - art performance when applied to problems such as speech recognition , image classification , natural language processing and bioinformatics . most of these `` deep learning '' models when applied to classification employ the softmax activation function for prediction and aim to minimize cross - entropy loss . in this paper , we have proposed a supervised model for dominant category prediction to improve search recall across all ebay classifieds platforms . the dominant category label for each query in the last 90 days is first calculated by summing the total number of collaborative clicks among all categories . the category having the highest number of collaborative clicks for the given query will be considered its dominant category . second , each query is transformed to a numeric vector by mapping each unique word in the query document to a unique integer value ; all padded to equal length based on the maximum document length within the pre - defined vocabulary size . a fully - connected deep convolutional neural network ( cnn ) is then applied for classification . the proposed model achieves very high classification accuracy compared to other state - of - the - art machine learning techniques .
complex networks are used to represent and analyze a wide range of systems .models of complex networks usually aim for simplicity and attempt to keep the number of parameters as low as possible . however , real data is more complex than any simple model which makes it difficult to draw clear links between data and models . to capture the increasingly available massive real data , we need high - dimensional models where the number of parameters grows with the number of nodes .an example of such a model is the latent space model where nodes are assigned independent and identically distributed vectors and the probability of a link connecting two nodes depends only on the distance of their vectors .while there are plenty of simple ( and not so simple ) network models , little is known as to which of them are really supported by data .while calibration of complex network models often uses standard statistical techniques , their validation is typically based on comparing their aggregate features ( such as the degree distribution or clustering coefficient see for detailed accounts on network measurements ) with what is seen in real networks ( see for recent examples of this approach ) .the focus on aggregate quantities naturally reduces the discriminative power of model validation which is often further harmed by the use of inappropriate statistical methods . as a result , we still lack knowledge of what is to date the best model explaining the growth of the scientific citation network , for example . we argue that network models need to be evaluated by robust statistical methods , especially by those that are suited to high - dimensional models .this is exemplified in where various low - dimensional microscopic mechanisms for evolution of social networks are compared on the basis of their likelihood of generating the observed data .prohibitive computational complexity of maximum likelihood estimation is often quoted as a reason for its limited use in the study of real world complex networks .however , as we shall see here , even small subsets of data allow to discriminate between models and point clearly to those that are actually supported by the data .this , together with the ever - increasing computational power at our disposal , opens the door to the likelihood analysis of complex network models .we analyze here a recent network growth model which naturally leans itself to high - dimensional analysis .this model generalizes the classical preferential attachment ( pa ; often referred to as the barabsi - albert model in the complex networks literature ) ( * ? ? ?* sections 7 , 8) by introducing node relevance which decays in time and co - determines ( together with node degree ) the rate at which nodes acquire new links .if either the initial relevance values or the functional form of the relevance decay are heterogeneous among the nodes , this model is able to produce various realistic degree distributions .by contrast to which modifies preferential attachment by introducing an additive heterogeneous term , in relevance combines with degree in a multiplicative way which means that once it reaches zero , the degree growth stops .this makes the model an apt candidate for modeling information networks where information items naturally lose their pertinence with time and the growth of their degree eventually stops .( see for a review of work on temporal networks . )this model has been recently used to quantify and predict citation patterns of scientific papers . 
before methods for high - dimensional parameter estimationare applied to real data , we calibrate and evaluate them on artificial data where one has full control over global network parameters ( size , average degree , etc . ) and true node parameter values are known . for simplicity ,we limit our attention to the case where the functional form of relevance decay is the same for all nodes and only the initial relevance values differ .we present here various estimation methods and evaluate their performance .plain maximum likelihood ( * ? ? ?* chapter 7 ) produces unsatisfactory results , especially in the case of sparse networks which are commonly seen in practice .we enhance the method by introducing an additional term which suppresses undesired correlation between node age and estimates of initial relevance .we then introduce a mean - field approach which allows us to reduce high - dimensional estimation to a low - dimensional one .calibration and evaluation of these parameter - estimation methods is done on artificial data .real data is then used to employ the established framework and compare the statistical evidence for several low- and high - dimensional network models on the given data .analysis of small subsets of input data is shown to efficiently discriminative among the available models .since this work focuses on model evaluation , estimated parameter values are thus of secondary importance to us .necessary conditions for obtaining precise estimates and the potential risk of large errors are therefore left for future research ( see sec . [sec : conclusions ] ) .the original model of preferential attachment with relevance decay ( pa - rd ) has been formulated for an undirected network where the initial node degree is non - zero because of links created by the node on its arrival . to allow zero - degree nodes to collect links , some additive attractiveness or random node selectionneed to be introduced .when these two mechanisms are combined with pa - rd , the probability that a new link created at time attaches to node can be written as here and are degree and relevance of node at time , respectively , is the number of nodes present at time , and is the additive attractiveness term . finally , is the probability that the node is chosen by the pa - rd mechanism ; the node is chosen at random with the complementary probability .when and , a node of zero degree will never attract new links .( [ pard ] ) can be used to model a monopartite network where nodes link to each other as well as a bipartite network where one set of nodes is unimportant and we can thus speak of outside links attaching to nodes . 
for example , one can use the model to describe the dynamics of item popularity in a user - item bipartite network representing an e - commerce system .there are now two points to make .firstly , the model is invariant with respect to the rescaling of all relevance values , .this may lead to poor convergence of numerical optimization schemes because values can drift in accord without affecting the likelihood value .the convergence problems can be avoided by imposing an arbitrary normalization constraint on the relevance values as we do below .secondly , and act in the same direction : they introduce randomness in preferential attachment - driven network growth ( in particular , as and/or , preferential attachment loses all influence ) .one can therefore expect that and are difficult to be simultaneously inferred from the data .this is especially true for the original preferential attachment without decaying relevance .if node relevance decays to zero , node attraction due to eventually vanishes while the random - attachment part proportional to remains it is therefore possible , at least in principle , to distinguish between the two effects . to better focus on the high - dimensional likelihood maximization of node parameters , we assume in all our simulations .the pa - rd model has been solved in for a case where , , and the initial degree of all nodes equal to one .it was further assumed that is finite for all nodes and the distribution of values among the nodes , , decays exponentially or faster .the probability normalization term then eventually fluctuates around a stationary value and the expected final degree of node can be written as .it has been shown that the network s degree distribution , shaped mainly by , can take on various forms including exponential , log - normal , and power - law .we begin by describing bipartite network data with temporal information .we consider a simplified bipartite case where links arrive from outside and thus only their target nodes matter see fig .[ fig : network]a for illustration .links are numbered with and the times at which they are introduced are .nodes are numbered with and the times at which they are introduced are . at time , there are target nodes in the network. degree of node at time when link is added is andthe target node of link is .the average node degree is ( the factor of two is missing here because we consider a bipartite network where edges point to nodes of interest ) .we use the pa - rd model to create artificial networks with well - defined properties .there are initially nodes with zero degree . after every time steps, a new node of zero degree is introduced in the network . in each time step ,one new link is created and chooses its target node according to eq .( [ pard ] ) .the network growth stops once there are links and nodes in the network . at that point , the average node degree is approximately . it must hold that ; in the opposite case , the average degree can not be achieved because new nodes dilute the network too fast .each node has the relevance decay function ] which in our case performs better than the original form without .note that reports a similar behavior in the popularity growth of stories in digg.com . for simplicity, we assume a similar form of in pa - rd , + r_{\infty}$ ] , which in fact roughly corresponds to the empirical relevance decay results presented in . 
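a sketch of the artificial-network generation procedure described above: one incoming link per time step, a new zero-degree node every few steps, and the target chosen with probability theta by the (degree + A) times relevance rule of eq. ( [ pard ] ) and uniformly at random otherwise. the exponential form of the relevance decay and the heterogeneous initial relevances drawn from an exponential distribution are assumptions consistent with the text, and all numerical values are placeholders.

    import numpy as np

    def grow_pard_network(n_links=20000, n_nodes=2000, n0=10, A=1.0,
                          theta=0.9, tau=500.0, seed=0):
        rng = np.random.default_rng(seed)
        add_every = n_links // (n_nodes - n0)     # steps between new nodes
        birth = np.zeros(n_nodes)                 # node arrival times
        r0 = rng.exponential(1.0, size=n_nodes)   # heterogeneous initial relevance
        degree = np.zeros(n_nodes)
        targets = np.empty(n_links, dtype=np.int64)
        n = n0                                    # nodes currently in the network
        for t in range(n_links):
            if n < n_nodes and t > 0 and t % add_every == 0:
                birth[n] = t
                n += 1
            if rng.random() < theta:              # preferential (pa-rd) attachment
                w = (degree[:n] + A) * r0[:n] * np.exp(-(t - birth[:n]) / tau)
                targets[t] = rng.choice(n, p=w / w.sum())
            else:                                 # purely random attachment
                targets[t] = rng.integers(n)
            degree[targets[t]] += 1
        return targets, degree[:n], birth[:n], r0[:n]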
a non - vanishing absolute term is needed here to allow for links occasionally attaching to old nodes .the log - normal decay form reported in does not yield better fit in our case , perhaps as a result of immediate response of the econophysics forum users which makes the increasing relevance phase provided by log - normal curves unfitting . for pa - rd ,we report results obtained with the penalization term ( ) which , however , differ little from the results obtained with . to maximize the likelihood functions we use the iterative extrapolating approach described in sec .[ sec : mle ] .this procedure is run ten times with independent random initial configurations ; the best result obtained with each method is reported in table [ tab : results ] .in addition , the table shows also the number of model parameters and the corrected akaike information criterion where the maximum is taken over the whole parameter space of model . measures how well model fits the data and corrects for a finite sample size .it can be used to construct model weights in the form \ ] ] where the proportionality factor is obtained by requiring the sum of all model weights to equal one . finally , we report the values of global model parameters that maximize data likelihood for each model .our comparison of models contains several notable outcomes .firstly , both low - dimensional models are clearly insufficient to explain the data .in fact , preferential attachment yields only marginally better fit than random attachment .secondly , high - dimensional models without time decay perform significantly worse than their counterparts with time decay .this is not surprising because we fit the models to an information network where , as argued in , aging of nodes is of prime importance .thirdly , while the log - likelihood values obtained with pa - hd and pa - rd are both substantially better than those obtained for other models , the difference between them is big enough for the akaike information criterion to assign an overwhelming weight to pa - rd .( the resulting weight of pa - hd , which has been truncated to zero in table [ tab : results ] , is around . ) for pa - rd , the effective lifetime corresponding to the obtained relevance decay parameters is where we neglect which is small , yet it formally causes the above - written expression to diverge .this lifetime well agrees with the fact that papers typically spend one week on the front page of the econophysics forum .the value of the additive term is relatively high in comparison with the average node degree of which suggests that in the studied dataset , the influence of preferential attachment ( _ i.e. _ , attachment probability proportional to node degree ) is relatively weak .an alternative explanation is that our assumed relevance decay function disagrees with the data and thus an increased proportion of `` random '' connections is necessary to model the data .a more detailed analysis is necessary to establish what is the real reason behind this apparent randomness . since likelihood computation is costly and during its maximization in numerous variables it needs to be carried out many times , obtaining the results presented in table [ tab : results ] on a standard desktop computer takes several hours .it is thus natural to ask whether significant evidence in favor of one of the models can not be obtained by analyzing subsets of the data which would save considerable computational time . 
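the corrected akaike criterion and the model weights used in table [ tab : results ] follow the standard definitions, so a short helper is enough; the example call at the end uses made-up numbers rather than the values from the table.

    import numpy as np

    def aicc(log_likelihood, k, n):
        # corrected AIC for a model with k parameters and n observations (links)
        return 2 * k - 2 * log_likelihood + 2 * k * (k + 1) / (n - k - 1)

    def akaike_weights(aicc_values):
        a = np.asarray(aicc_values, dtype=float)
        w = np.exp(-(a - a.min()) / 2.0)
        return w / w.sum()

    # e.g. akaike_weights([aicc(-51000.0, 2, 40000), aicc(-47000.0, 5000, 40000)])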
to this end , we evaluated weights of three representative models ( pa , pa - hd , and pa - rd ) on data subsets corresponding to time spans ( which we refer to as subset lengths , ) ranging from 4 to 100 days ; the starting .we generated many subsets for each by choosing their starting day at random .results shown in fig . [ fig : weights ] demonstrate that while particularly short subsets favor the low - dimensional pa model , the situation quickly changes and this model is virtually eliminated as soon as .two high - dimensional models , which enjoy comparable support until , are clearly distinguished at and above . meanwhile ,evaluation of multiple small - scale subsets is fast : the computational time required for one likelihood maximization of pa - rd drops from 10 minutes for the whole 1000-day data spanning to 2 seconds for a 100-day subset .we can conclude that this approach allows us to efficiently discriminate between models even when no particularly efficient approach to likelihood maximization is available .we studied the use of maximum likelihood estimation in analysis of high - dimensional models of growing networks .artificially created networks with preferential attachment and decaying relevance were used to show that a near - flat likelihood landscape makes the standard likelihood maximization rather unreliable and sensitive to the initial choice of model parameters . introducing a penalization term effectively modifies the landscape and helps to avoid `` wrong '' solutions .the resulting mle - based scheme outperforms the standard likelihood maximization for a wide range of model networks . on the other hand ,both original and modified mle overestimate the additive parameter which is crucial in the early stage of a node s degree growth .how to improve on that remains an open question .we then tested the previously developed methods on real data where both preferential attachment and relevance decay are expected to play a role . in this part ,the focus is on comparing various competing network models that may be used to explain the data .we show that the data shows overwhelming evidence in favor of one of the models and that sufficiently strong evidence can be achieved by studying small subsets of the data .model evaluation by such subset sampling is of particular importance to large - scale datasets where straightforward likelihood maximization is prohibitively time - consuming . up to now, models of complex networks have been appraised mostly by comparing aggregate characteristics of the produced networks ( degree distribution or clustering coefficient , for example ) with features seen in real data .the caveat of this approach is that many network characteristics are computed on static network snapshots and are thus of little use for the measurement of growing networks .empirical node relevance is designed especially for growing networks but more metrics , targeted at specific situations and questions , are needed . 
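a sketch of the subset-sampling protocol just described, assuming a user-supplied routine fit_all_models (a hypothetical placeholder) that maximizes each candidate model's likelihood on the restricted data and returns the corresponding aicc values; it reuses the akaike_weights helper from the previous sketch.

    import numpy as np

    def sampled_model_weights(link_times, link_targets, subset_days,
                              n_samples, fit_all_models, seed=0):
        rng = np.random.default_rng(seed)
        t0, t1 = link_times.min(), link_times.max()
        weights = []
        for _ in range(n_samples):
            start = rng.uniform(t0, t1 - subset_days)    # random starting day
            mask = (link_times >= start) & (link_times < start + subset_days)
            aicc_values = fit_all_models(link_targets[mask], link_times[mask])
            weights.append(akaike_weights(aicc_values))
        return np.mean(weights, axis=0)                  # average weight per model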
despite potential improvements in this direction , to gain _ real _ evaluative and discriminative power over network models , robust statistical methods such as maximum likelihood estimation need to be relied on .we have made a step in this direction which , hopefully , will contribute to consolidating and further developing the field of network models .open issues include estimates of parameter uncertainty in the case of real data by bootstrap methods , identification of situations where maximum likelihood estimates converge to true parameter values ( including model misspecification as in which is of particular importance to parameter estimates in complex systems ) , and improvements of the mean - field likelihood estimation which was introduced in sec . [ sec : mf - mle ] .it needs to be stressed that the potential impact of parameter estimation far exceeds the academic problem of model validation : model parameters , once known , can be directly useful in practice . in the case of preferential attachment with relevance decay , for example, the overall rate of relevance / interest decay is closely connected to the most successful strategy in the competition for attention . on the other hand ,the initial , current , or total relevance values of individual items can be used to detect which items deserve to be examined more closely .this work was supported by the eu fet - open grant no .231200 ( project qlectives ) and by the swiss national science foundation grant200020 - 143272 .greedy sequential optimization is possible because the likelihood function in our case does not have a large number of disparate local minima .we explain this fact for the pa - rd model which is parametrized by the initial node relevances and global parameters and . shows that the expected final degree of node grows with which implies that has a unique maximum in when all other parameters are fixed .likelihood of the data thus has a unique maximum in the space of all initial relevance values .similar behavior can be observed for . when , likelihood of the artificial data is small because the model simplifies to random attachment which is obviously at odds with the data .as decreases , the likelihood grows but it eventually saturates and decreases when becomes so small that new nodes can not attract their first links .the case is different for .its extremely small values can be easily refuted by the data as they would imply links always arriving at the latest node . on the other hand, large can be accommodated by an appropriate choice of the initial relevance values which is demonstrated by fig .[ fig : profile ] . to prevent the sequential updating of parameters from converging to a wrong solution, one can for example add a suitable penalization term as we do in eq .( [ llpenalized ] ) .99 s. n. dorogovtsev , j. f. f. mendes , adv . phys . * 51 * , 1079 ( 2002 ) .s. boccaletti , v. latora , y. moreno , m. chavez , d .- u .hwanga , physics reports * 424 * , 175 ( 2006 ) .m. e. j. newman , _ networks : an introduction _ ( oxford university press , 2010 ) .m. c. gonzlez , a .-barabsi , nature physics * 3 * , 224 ( 2007 ) .p. d. hoff , a. e. raftery , m. s. handcock , journal of the american statistical association * 97 * 1090 ( 2002 ) .l. da f. costa , f. a. rodrigues , g. travieso , p. r. u. boas , adv .phys . * 56 * , 167 ( 2007 ) .e. d. kolaczyk , _ statistical analysis of network data _( springer , 2009 ) . f. papadopoulos , m. kitsak , m. .serrano , m. bogua , d. krioukov , nature * 489 * , 537 ( 2012 ) .m. 
li , h. zou , s. guan , x. gong , k. li , z. di , c .- h .lai , scientific reports * 3 * , 2512 ( 2013 ) .m. p. h. stumpf , m. a. porter , science * 335 * , 665 ( 2012 ) .a. van den bos , _parameter estimation for scientists and engineers _ ( wiley - interscience , 2007 ) .d. a. freedman , _ statistical models : theory and practice _ ( cambridge university press , 2nd edition , 2009 ) .p. bhlmann , s. van de geer , _ statistics for high - dimensional data _( springer , 2011 ) .j. leskovec , l. backstrom , r. kumar , a. tomkins , in kdd 08 : proceedings of the 14th acm sigkdd international conference on knowledge discovery and data mining , 462 ( acm , new york , 2008 ) .j. leskovec , d. chakrabarti , j. kleinberg , c. faloutsos , z. ghahramani , the journal of machine learning research * 11 * 985 ( 2010 ) .m. medo , g. cimini , and s. gualdi , phys .. lett . * 107 * , 238701 ( 2011 ) .r. albert , a .-barabsi , rev .74 * , 47 ( 2002 ) .h . eom , s. fortunato , plos one * 6 * : e24926 ( 2011 ) .p. holme , j. saramki , physics reports * 519 * , 97 ( 2012 ) .d. wang , c. song , a .-barabsi , science * 342 * , 127 ( 2013 ) .h. owhadi , c. scovel , t. sullivan , when bayesian inference shatters , arxiv:1308.6306 ( 2013 ). m. s. shang , l. l , y .- c .zhang , t. zhou , epl * 90 * , 48006 ( 2010 ) .j. han , m. kamber , j. pei , _ data mining : concepts and techniques _ ( morgan kaufmann , 3rd edition , 2011 ) .r. tibshirani , j. r. statist .b * 58 * , 267 ( 1996 ) .see supplementary material for data download .g. bianconi , a .-barabsi , europhys . lett . * 54 * , 436 ( 2001 ). b. a. huberman , journal of statistical physics * 151 * , 329 ( 2013 ) .f. radicchi , s. fortunato , c. castellano , pnas * 105 * , 17268 ( 2008 ) .k. p. burnham ; d. r. anderson , _ model selection and multimodel inference : a practical information - theoretic approach _ ( springer - verlag , 2nd edition , 2002 ) .g. claeksens , n. l. hjort , _ model selection and model averaging _ ( cambridge university press , 2008 ) .a. c. davison , d. v. hinkley , _ bootstrap methods and their application _( cambridge university press , 1997 ) . c. shalizi , american scientist * 98 * , 186 ( 2010 ) .
the abundance of models of complex networks and the current insufficient validation standards make it difficult to judge which models are strongly supported by data and which are not . we focus here on likelihood maximization methods for models of growing networks with many parameters and compare their performance on artificial and real datasets . while high dimensionality of the parameter space harms the performance of direct likelihood maximization on artificial data , this can be improved by introducing a suitable penalization term . likelihood maximization on real data shows that the presented approach is able to discriminate among available network models . to make large - scale datasets accessible to this kind of analysis , we propose a subset sampling technique and show that it yields substantial model evidence in a fraction of time necessary for the analysis of the complete data .
the maxwell s demon paradox suggested that one can lower the entropy of a gas of particles without expending energy , and thus violate the second law of thermodynamics , if one has information about the positions and momenta of the particles . during the resolution of this puzzle it became however clear that thermodynamics imposes physical constraints on information processing .rolf landauer recognized that it is the logically irreversible erasure of information that necessitates a corresponding entropy increase in the environment ; i.e. information erasure from the information - bearing degrees of freedom of a memory register or computer causes entropy to flow to the non - information - bearing degrees of freedom . at inverse temperature ,this entropy increase causes heat to be dissipated , where denotes the entropy decrease in the memory .this consequence is _landauer s principle _ , and the inequality ( [ landauersbasicinequality ] ) is also called the _ landauer bound _ or _limit_. since its inception , the above argument has been controversially discussed on different levels . for example , it has been disputed whether it is necessary to assume the validity of the second law of thermodynamics in order to derive landauer s principle or whether , conversely , the second law itself is actually a consequence of the principle ( see e.g. ) .situations have been reported both theoretically and in experiment which supposedly violate landauer s principle . and it was actually already recognized by bennett that all computation _ can _ be done reversibly , thereby avoiding irreversible erasure and requiring no heat dissipation in principle . on the other hand ,the principle was successful in exorcising maxwell s demon , and a recent experiment approached landauer s limit but could not surpass it .attempts to formulate and prove landauer s principle by more microscopic methods followed later ( e.g. ) , but they still have deficiencies as we discuss more detail in section [ subsectpreworks ] . much of the misunderstanding and controversy around landauer s principle appears to be due to the fact that its general statement has not been written down formally or proved in a rigorous way in the framework of quantum statistical mechanics .to remedy this situation is the first goal of the present work .we formulate in precise mathematical and statistical mechanics terms the setup for landauer s principle .the four assumptions are listed at the beginning of section [ setupsubsection ] ( see also fig .[ figsetup ] for an overview of the setup ) .our formulation encompasses processes more general than `` erasure '' , and the setting is minimal in the sense that landauer s bound can be violated when any one of our assumptions is dropped . 
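the displayed formula referred to above as ([landauersbasicinequality]) did not survive in this copy; in the notation introduced below (the entropy decrease of the system and the averaged heat transferred to the reservoir), the standard statement of the bound reads as follows. this is a reconstruction from the surrounding definitions rather than a verbatim quotation of the original equation.

```latex
% Landauer bound in the notation used below (reconstructed):
\begin{aligned}
  \beta\,\Delta Q \;\ge\; \Delta S, \qquad
  \Delta S &:= S(\rho_S) - S(\rho'_S) \quad \text{(entropy decrease of the system)},\\
  \Delta Q &:= \operatorname{Tr}\!\bigl[H(\rho'_R - \rho_R)\bigr] \quad \text{(heat dissipated into the reservoir)}.
\end{aligned}
```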
our first main result is a proof of landauer s principle in the form of a sharpened equality version ( theorem [ landauereqntheorem]): the mutual information quantifies the correlations built up between system and reservoir during the process and the relative entropy can be physically interpreted as the free energy increase in the reservoir .closer examination reveals that landauer s bound can be tight only if .the landauer bound ( [ landauersbasicinequality ] ) can thus be improved for all non - trivial processes .our second main result is then an explicit improvement of landauer s bound ( section [ finitesizesect ] ) , which will be possible when the thermal reservoir assisting in the process has a finite hilbert space dimension .a paradigmatic result is here ( see theorem [ maintheoremcombined]): is illustrated in fig . [ relentropygraphs ] : for small reservoirs , the necessary heat expenditure lies several ten percent above the landauer limit ( [ landauersbasicinequality ] ) .the main technical tool in deriving these finite - size effects is a tight entropy inequality between relative entropy and entropy difference . in section [ extendednotionssection ] and appendix [ extendednotionssectionapp ], we present a few extensions of the setup from section [ setupsubsection ] .section [ attainingsection ] forms a counterpart to results like eq .( [ paradigmaticfinitiesizeeffect ] ) , as we construct processes that approach landauer s bound ( [ landauersbasicinequality ] ) arbitrarily closely by using a reservoir of unbounded size .here we formalize the exact setting in which we prove and improve landauer s principle .we avoid unnecessary excess structure that is present in some previous works ( discussed in section [ subsectpreworks ] ) , and aim to motivate each necessary ingredient .this is the first step to a rigorous treatment of landauer s principle in sections [ lprinciplesharpened ] and [ finitesizesect ] . our setup and the subsequent statements will be quantum - mechanical , but apply to the classical ( probabilistic ) case as well upon restriction to commuting states and hamiltonians . in section [ extendednotionssection ] and appendix [ extendednotionssectionapp ]we discuss some extensions of the setup described here .as commonly conceived , landauer s process is supposed to `` erase '' or `` reset '' the state of a system by having it `` interact '' with a `` thermal reservoir '' , bringing the system into a `` definite '' state , such as a fixed pure state .we use this conception as a motivation , but our setup will be more general and precise . the four assumptions needed for landauer s principle are as follows ( see also fig . [ figsetup ] ) : 1 .the process involves a `` system '' and a `` reservoir '' , both described by hilbert spaces , 2 .the reservoir is initially in a thermal state , } ] is the `` inverse temperature '' , 3 .the system and the reservoir are initially uncorrelated , , 4 .the process itself proceeds by unitary evolution , .we now discuss each of these four assumptions in more detail , arguing that this setup is minimal .the process acts on two subsystems , and , and we call the `` system '' and the `` reservoir '' .we model these as quantum systems with hilbert spaces of finite dimensions and , respectively ( see fig . [ figsetup ] ) .the extension of our treatment to infinite - dimensional state spaces is discussed in appendix [ sectioninfinitedim ] .secondly , we require a hamiltonian of the reservoir to be given , i.e. 
a hermitian operator .we furthermore assume that initially , i.e. before the process starts , the reservoir is in a thermal state }}\ ] ] at inverse temperature ] ) is subjected to a unitary evolution , .this replaces by the marginal state ] we denote the final state of the system ( `` state after the process '' ) , by ] and the averaged heat transfer } ] . by _klein s inequality _ , the relative entropy is always non - negative and it vanishes iff . for a state of a bipartite system with _ reduced states _ ] , the _ mutual information _ is defined as which is always non - negative. most often it will be clear from the context for which state the mutual information is evaluated and we omit the subscript , also writing for the mutual information between systems and in the state .we sometimes use similar notation for the entropy itself , e.g. and .we further define the _ conditional entropy ( of conditioned on ) _ in a bipartite state : a _ hamiltonian _ on a system ( which will always be the reservoir in this paper ) is a hermitian operator .the corresponding _ thermal state _ at _ inverse temperature _ ] is the thermal state corresponding to a hamiltonian at inverse temperature ] , i.e. .if , then is supported in the ground state space as well , so that one can continue after line ( [ firstlineinproofofleq ] ) with }-\log\dim(p_g)~=~-d(\rho'_r\|\rho_r)~,\ ] ] yielding that both sides of ( [ landauereqnlong ] ) vanish . if , then has support outside the ground state space of , i.e. outside the support of , so that and both sides of ( [ landauereqnlong ] ) equal each other again . the reasoning in the case is exactly analogous ( or , alternatively , follows from the substitutions , ) .lastly , the landauer bound ( [ lineqineqthm ] ) follows from the fact that the mutual information and the relative entropy are both non - negative .an equality equivalent to eq .( [ landauereqnshort ] ) has been derived in before . there , however , the aim was to identify reversible and irreversible contributions to the entropy change , and no connection to landauer s principle was established .see also the `` note added '' in section [ openquestionsect ] . for extensions of landauer s principle ( theorem [ landauereqntheorem ] ) to infinite - dimensional separable hilbert spaces ,see appendix [ sectioninfinitedim ] .the equality form of landauer s principle ( theorem [ landauereqntheorem ] ) allows us to investigate how tight the landauer bound is ( see eq .( [ lineqineqthm ] ) ) .the basic result here is that landauer s bound holds with equality iff , roughly speaking , the process does not do anything : [ correqualityl]consider a process as described in theorem [ landauereqntheorem ] .then , landauer s bound holds with equality iff there exists a unitary such that equivalently , landauer s bound holds with equality iff by the equality version ( [ landauereqnshort ] ) of landauer s principle and due to the non - negativity of the mutual information and the relative entropy , one has equality in landauer s bound iff .this is equivalent to being a product state and , i.e. to the first and third condition in ( [ conditionsforequality1 ] ) .this then already implies the second condition in ( [ conditionsforequality1 ] ) as follows . by the assumptions on the process , the states and before and after the processare related by a unitary transformation , , and thus have the same spectra ( as multisets , i.e. including multiplicities ) : . 
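the equality version discussed above can be checked numerically. written with entropies in nats, we read eq. ([landauereqnshort]) as beta*DeltaQ = DeltaS + I(S':R') + D(rho'_R || rho_R); the sketch below verifies this identity for a random unitary acting on a small system-plus-reservoir pair. the dimensions, the reservoir hamiltonian and the inverse temperature are arbitrary illustrative choices.

```python
# Numerical check of the equality form of Landauer's principle,
#   beta*DeltaQ = DeltaS + I(S':R') + D(rho'_R || rho_R),
# for a random unitary process on a small system+reservoir (entropies in nats).
import numpy as np

def vn_entropy(rho):                        # von Neumann entropy in nats
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def rel_entropy(rho, sigma):                # D(rho || sigma), supports assumed compatible
    es, Vs = np.linalg.eigh(sigma)
    log_sigma = Vs @ np.diag(np.log(es)) @ Vs.conj().T
    er, Vr = np.linalg.eigh(rho)
    log_rho = Vr @ np.diag(np.log(np.clip(er, 1e-15, None))) @ Vr.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sigma))))

def partial_trace(rho, dS, dR, keep):       # keep = 'S' or 'R'
    rho = rho.reshape(dS, dR, dS, dR)
    return np.trace(rho, axis1=1, axis2=3) if keep == 'S' else np.trace(rho, axis1=0, axis2=2)

dS, dR, beta = 2, 3, 0.7
rng = np.random.default_rng(1)
H = np.diag(np.arange(dR, dtype=float))                      # reservoir Hamiltonian
rho_R = np.diag(np.exp(-beta * np.diag(H))); rho_R /= np.trace(rho_R)   # thermal state
p = rng.random(dS); p /= p.sum()
rho_S = np.diag(p)                                           # arbitrary initial system state

A = rng.normal(size=(dS*dR, dS*dR)) + 1j * rng.normal(size=(dS*dR, dS*dR))
U, _ = np.linalg.qr(A)                                       # random unitary
rho_in = np.kron(rho_S, rho_R)                               # uncorrelated initial state
rho_out = U @ rho_in @ U.conj().T
rhoS_out = partial_trace(rho_out, dS, dR, 'S')
rhoR_out = partial_trace(rho_out, dS, dR, 'R')

delta_S = vn_entropy(rho_S) - vn_entropy(rhoS_out)                    # entropy decrease of S
delta_Q = float(np.real(np.trace(H @ (rhoR_out - rho_R))))            # heat into reservoir
I = vn_entropy(rhoS_out) + vn_entropy(rhoR_out) - vn_entropy(rho_out) # mutual information
D = rel_entropy(rhoR_out, rho_R)                                      # free-energy increase
print(beta * delta_Q, delta_S + I + D)   # the two numbers agree up to numerical error
```

since the mutual information and the relative entropy are both non-negative, the printout also makes landauer s bound beta*DeltaQ >= DeltaS visible term by term.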
as the spectrum of a product state equals the pointwise product of the individual spectra , one has and since has a non - zero eigenvalue , this implies .so , and are two hermitian matrices with identical spectra , and are thus related by a unitary transformation , with . finally , note that the second and third condition in ( [ conditionsforequality1 ] ) imply and , respectively , and thus ( [ conditionsforequality2 ] ) .conversely , ( [ conditionsforequality2 ] ) obviously implies . by corollary [ correqualityl ] ,equality holds only if the process transforms the system in a unitary way and leaves the reservoir untouched , i.e. ( note , however , that possibly when has degenerate eigenvalues , as then the unitary transformation achieving is not unique ) .then there is no change in the information of the system and zero heat flow to the reservoir . in this sense ,only trivial processes satisfy ; this statement remains basically true in infinite dimensions as well ( appendix [ sectioninfinitedim ] ) .considering the converse implication of corollary [ correqualityl ] , landauer s bound is a _ strict _ inequality for any process with nonzero entropy decrease ( ) or nonzero heat flow ( ) . in section [ finitesizesect ]we will in fact derive such non - trivial lower bounds on the difference between the two sides of landauer s bound ( [ lineqineqthm ] ) .more precisely , we will look for a non - negative function satisfying , with for ; similarly , for a function such that , with for .when one fixes ( or puts upper bounds on ) both the system and reservoir dimensions and , then the existence of such functions and follows because the entropy , mutual information and relative entropy are sufficiently continuous and the space of all processes as well as the state space are compact .our functions and will indeed explicitly depend on the dimension of the reservoir .conversely , in section [ attainingsection ] we show that any non - trivial or actually _ has to _ depend on the reservoir dimension , since in the limit of large reservoir sizes we construct explicit processes coming arbitrarily close to attaining the bound . several discussions in the literature formulate landauer s principle for processes havinga _ pure _ final state , i.e. where the system is being brought into a _ definite _ microstate and all information has been `` erased '' .this assumption is for example made in the works aiming to derive landauer s principle .it is also implicit in landauer s original paper as well as in the many references that employ or `` derive '' the ubiquituous claim that an amount of heat has to be dissipated in the `` erasure of a ( qu-)bit '' ( see e.g. 
several papers reprinted in ) .the latter situation would correspond to on a system of dimension , which automatically forces the final system state to be pure , whereas the initial state must have been completely mixed .here we point out that a landauer process as described above can in general _ not _ reduce the rank of the system state .this is possible only with a reservoir at strictly zero temperature or with a reservoir hamiltonian having formally infinite energies ( see below ) .the following impossibility result thus shows in particular that some previous statements of landauer s principle in the literature are void .this issue is also related to the `` unattainability formulation '' of the `` third law of thermodynamics '' , see also the discussions in .we first analyze quantitatively how the smallest eigenvalue of the system state can change during the process ] ( the extension to negative is trivial ) . plugging back into ( [ lambdaminrhoprimes ] ) gives : [ propositionlambdamin]consider any process as described in theorem [ landauereqntheorem ] , with a reservoir at inverse temperature ] ( and not merely the norm ) is infinite in any process with , which necessarily happens in finite dimensions for any process achieving . at strictly zero temperature ( ), similar rank - decreasing processes can be constructed without infinite . in appendix [ purestateerasure ]we exhibit rank - decreasing processes at finite temperature and having finite heat flow ( and actually coming arbitrarily close to saturating the inequality ) ; such processes however need both an infinite - dimensional reservoir and formally infinite hamiltonian levels .note that the analysis leading up to ( [ boundonlambdamin ] ) and the rank considerations above are not meaningful for infinite - dimensional reservoirs : if , then the thermal state does not exist in infinite dimensions for ( cf .also appendix [ sectioninfinitedim ] ) ; and when , the bound ( [ boundonlambdamin ] ) becomes trivial . in cases where it is sufficient to reach a final state that is only -close to the desired final state , i.e. , the state can be chosen to be of full rank whenever .then , from section [ attainingsection ] ( proposition [ non - rank - decreasing - prop ] ) , one can explicitly construct a process with final state using a finite - dimensional reservoir and such that the heat dissipation is arbitrarily close to . note that , for given and , it is possible to minimize by analytical methods , i.e. to maximize subject to the constraint , using the kuhn - tucker conditions . as for the optimal , the heat expenditure in such a process can be made arbitrarily close to or smaller .note that our impossibility results differ from the one in , where it is investigated whether _ for all _ initial states the output } ] majorizes any other such state obtained by varying .the spectrum of this maximal ( `` purest '' ) state , which is unique up to unitary equivalence , is obtained by listing the eigenvalues of in increasing order and repeatedly summing successive ones , starting from the lowest .this state has also minimal entropy among all possible final system states , but its entropy is nonzero iff has more than nonzero eigenvalues ; in particular , it is nonzero whenever and .a few treatments of landauer s principle in the literature do not require a pure final system state , but do assume a product final state ( such a product state would of course be implied by a pure ) ; cf . e.g. 
some parts of ( see also section [ subsectpreworks ] ) .similar to the pure final state discussed above , also this product final state assumption is generally not achievable : a generic product state will admit only _ one _ tensor product decomposition ( _ two _ when the dimensions match ) .thus , the condition for generic , implies with unitaries , and so allows only trivial processes with no entropy change as ( or , additionally , in the case , with the swap operator ; cf .example [ swapexample ] ) .the strengthened form of landauer s principle ( theorem [ landauereqntheorem ] ) showed that landauer s bound is sharp only in quite trivial cases ( corollary [ correqualityl ] ) .it can therefore be improved in all interesting cases .of course , the tightest improvement is given by the equality version ( [ landauereqnshort ] ) , but this contains the quantities and which are usually not available as they would for example require knowledge of the full global state .in this section we derive improvements of landauer s bound that are _ explicit _ in the sense that they depend on the quantity that does already appear in the inequality .in fact , the new bounds have to depend on the reservoir dimension as well , because processes can approach landauer s bound in the limit ( see section [ attainingsection ] ) .the inequalities we prove in the present section thus constitute _ finite - size corrections _ to landauer s bound .our main result on finite - size improvements uses the following auxiliary quantities : where and ] in eq .( [ definefunctionm ] ) . for each fixed ,the function is strictly decreasing for and strictly increasing for , strictly convex in ] the initial energy of the reservoir , and denote by } ] . here , ] ( and consequently ) in case , all addends in ( [ firstlinewithaddends])([firsttimeintegralappears ] ) are finite ( even though the integrand can diverge at the boundaries when ) .treating this case with the usual conventions , we can continue : where in the last step we used eq .( [ dbetade ] ) .notice that always ] for to mean the interval ] . furthermore dropping the relative entropy termvar}}_\gamma(h)}\,de'\,de~=~\frac{(\delta q)^2}{2\max_{\gamma\in[\beta,\beta']}{{\rm var}}_\gamma(h)}~.\label{lowerboundondwithdeltaqsquared}\ ] ] we aim for a lower bound on that involves the quantity , which already appears in the usual landauer bound , rather than alone ; at the same time we would like to eliminate the complicated expression in the denominator , which resembles a heat capacity ( cf .( [ heatcapacitybeta ] ) and ( [ heatcapacityt ] ) in appendix [ thermodynamicappendix ] ) . to do this ,assume first to get }\beta^2{{\rm var}}_\gamma(h)}~.\label{aftererweiternmitbeta}\end{aligned}\ ] ] if , then , since the energy is strictly decreasing with the inverse temperature .thus , if and , the denominator in ( [ aftererweiternmitbeta ] ) can be upper bounded by }\gamma^2{{\rm var}}_\gamma(h) ] and the heat dissipation satisfy , then : where is defined in eq .( [ definendinproofpaper ] ) .the right inequality in ( [ inequalityindeltaqleq0thm ] ) is generally wrong if one does not demand , because for any it is easy to construct hamiltonians such that becomes arbitrarily large ( positive , but finite ) , so that the rhs in ( [ inequalityindeltaqleq0thm ] ) becomes arbitrarily negative , whereas is bounded from below .the derivation leading up to theorem [ sharpenlineqfordeltaqleq0tm ] shows how more detailed knowlege about the reservoir ( i.e. 
about the temperature , the hamiltonian , or its heat capacity ) could be exploited , when available , to obtain better bounds than ( [ simplelowerboundonbetadwithndanddeltaq ] ) or ( [ inequalityindeltaqleq0thm ] ) .with knowledge of only the reservoir dimension , however , the essential bound ( [ upperboundonvargamma ] ) is tight .bounds similar to ( [ simplelowerboundonbetadwithndanddeltaq ] ) or ( [ inequalityindeltaqleq0thm ] ) are possible also in the case if one for example has a lower bound on , i.e. if one knows by how much the temperature can rise at most by the addition of the heat amount .landauer s bound does not forbid values of close to ( see eq .( [ lineqineqthm ] ) ) . in the case it thus `` allows '' some negative values of .but then theorem [ sharpenlineqfordeltaqleq0tm ] gives new constraints and we will use these to prove ( [ inequalitymaintheoremfinited ] ) in the case . assume therefore a process with throughout this section .if , then the inequality ( [ inequalitymaintheoremfinited ] ) holds since ~=~n-\sqrt{n^2 - 2n\delta s}~\leq~0~\leq~\beta\delta q~,\end{aligned}\ ] ] due to and .assume therefore now ( as noted below ( [ simplelowerboundonbetadwithndanddeltaq ] ) , it is always ) .in this case , we use theorem [ sharpenlineqfordeltaqleq0tm ] , multiply this by , and rearrange to get this implies and , via due to lemma [ propsecondlaw ] , the last expression only decreases when is replaced by any , as one verifies easily .this finally proves inequality ( [ inequalitymaintheoremfinited ] ) in the case .more general than in section [ setupsubsection ] , we consider in section [ memorysection ] a setup where initial correlations may be used during the process . in section [ correlationsubsection ]we ask for thermodynamic constraints on the erasure of correlations themselves ( rather than entropy ) .further extensions of the basic setup are described in appendix [ extendednotionssectionapp ] .more generally than in the setup from section [ setupsubsection ] , the agent who aims to modify ( e.g. to `` erase '' ) the system s initial state may have some information about the actual microstate ( e.g. pure state ) of the system . in this case, the desired process may be accomplished with less heat expenditure than given by naive application of landauer s bound ( see e.g. ) .formally , this additional knowledge can be described through an additional memory system that may initially be correlated with the system , and such that the unitary may now act jointly on all three systems , , and .for example , when the system state is ( with orthonormal states ) and the agent had perfect classical knowledge about the microstate of , the situation would be described by , whereas perfect quantum correlation would correspond to a pure ( entangled ) initial state of system and memory . in both examples , if the process is a unitary acting non - trivally only on in such a way that ( with any fixed pure state ) , then one easily verifies i.e. the information from is completely erased ( ) , whereas remains unchanged ( in the first example above with initially perfect classical correlations also remains unchanged , ) ; in particular , no entropy or heat increase occurs in the reservoir , .this seems to contradict the second law lemma ( eq . ( [ shorteqnformof2law ] ) ) and landauer s bound ( eq . ( [ lineqineqthm ] ) ) , but is of course due to the initial correlations with that the process can access . 
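to see what the finite-size bound derived above buys relative to the plain landauer bound, the short sketch below evaluates the quantity N - sqrt(N**2 - 2*N*DeltaS) appearing in the derivation and compares it with DeltaS. the explicit definition of the dimension-dependent quantity N (eq. ([definendinproofpaper])) is garbled in this copy, so it is treated here simply as a given parameter; the numerical values are illustrative only.

```python
# Finite-size lower bound on beta*DeltaQ read off from the derivation above:
#   beta*DeltaQ >= N - sqrt(N**2 - 2*N*DeltaS),
# with N the dimension-dependent quantity of eq. ([definendinproofpaper]),
# treated here as a given parameter (its explicit form is not recoverable here).
import numpy as np

def finite_size_bound(delta_S, N):
    """Valid for 0 <= delta_S <= N/2, so that the square root is real."""
    return N - np.sqrt(N**2 - 2.0 * N * delta_S)

delta_S = 0.5   # entropy decrease in nats (illustrative value)
for N in (2.0, 5.0, 20.0, 1e3, 1e6):
    print(f"N={N:>9}:  bound={finite_size_bound(delta_S, N):.6f}  (Landauer: {delta_S})")
# The bound always exceeds delta_S and approaches it only as N grows,
# i.e. only in the limit of an effectively infinite reservoir.
```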
further extending the setup from section [ setupsubsection ] , instead of only unitary interactions ( eq . ( [ unitaryevolsetupsect ] ) ), one may allow for so - called `` noisy operations '' , i.e. unitaries using an additional completely mixed ancilla system , or more generally any unital quantum channel .for this , we use that a unital positive and trace - preserving map does not decrease the entropy .the above points motivate the following setup , which extends the one from section [ setupsubsection ] and to which we can easily generalize our treatment : 1 .the system , reservoir , and memory are initially in a joint quantum state , 2 .the initial reduced reservoir state ] , 3 .the process proceeds by a unital positive trace - preserving map , i.e. , 4 .the entropy and heat changes , , are defined on the marginal states as in fig .[ figsetup ] . a modified second law lemma ( cf .lemma [ propsecondlaw ] ) for this more general situation is then immediately verfied : -\left[s(srm)-s(r')\right]\nonumber\\ & \geq~\left[s(srm)-s(r)\right]-\left[s(s'r'm')-s(r')\right]\label{ineqduetounitaleqn}\\ & = ~\left[s(sm)-i(sm : r)\right]-\left[s(s'm')-i(s'm':r')\right]\nonumber\\ & = ~\left[s(s|m)-s(s'|m')\right]\,+\,i(s'm':r')\,+\,\left[s(m)-s(m')\right]\,-\,i(sm : r)~,\label{verygeneralsecondlawwithmemory}\end{aligned}\ ] ] where from ( [ defineconditionalentropy ] ) is the entropy of conditioned on .the inequality in ( [ ineqduetounitaleqn ] ) is due to and will be an equality if is unitary . if one only considers processes where the memory register is not being altered ( as e.g. in ) , implying , and where the reservoir was initially uncorrelated with the rest , ( see section [ setupsubsection ] ) , then one still has similar to ( [ shorteqnformof2law ] ) .intuitively it is clear that need not hold when either the memory takes on some of the entropy , i.e. when , or when the initial total entropy was reduced due to correlations with , i.e. when ; both possibilities constitute resources that may be exploited for more efficient processes .note that the second law lemma just outlined in ( [ verygeneralsecondlawwithmemory])([secondlawcondeqn ] ) does not require a thermal state nor a hamiltonian for the reservoir ; but when the reservoir is initially thermal ( see condition ( b ) above ) then it is natural to assume no initial reservoir correlations , , see section [ setupsubsection ] and . under the assumptions and ( in addition to ( a)(d ) above ) , one arrives thus at the following form of landauer s principle , generalizing eq .( [ landauereqnshort ] ) , the proof is as in ( [ firstlineinproofofleq ] ) , but now starting from ( [ secondlawcondeqn ] ) .all the finite - size improvements from section [ finitesizesect ] apply to this more general case with memory as well if only is replaced by .one can evaluate all above statements for the two examples given around eq .( [ changeofstates ] ) . in both cases , , , and .furthermore , for the classically correlated case the initial conditional entropy was and the state of the memory did not change , , whereas in the case of maximal quantum correlations is negative and the final memory state is pure .the latter case is the most interesting : the generalization ( [ landauersgeneralizedineq ] ) of landauer s principle is not tight in this case , since the memory state was purified at the expense of the quantum correlations between and ; a subsequent unitary interaction between and may however reduce the reservoir energy to give in the end and . 
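the role of the conditional entropy in the memory-assisted bound just discussed is easy to make concrete. the sketch below computes S(S|M) = S(SM) - S(M) for the two examples from the text: a perfectly classically correlated memory gives zero, while a maximally entangled memory gives a negative value, which is exactly the resource that permits "erasure" without the heat cost suggested by a naive application of landauer s bound.

```python
# Conditional entropy S(S|M) = S(SM) - S(M) for the two memory examples above.
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def cond_entropy_S_given_M(rho_SM, dS, dM):
    rho_M = np.trace(rho_SM.reshape(dS, dM, dS, dM), axis1=0, axis2=2)
    return vn_entropy(rho_SM) - vn_entropy(rho_M)

# perfectly classically correlated:  (|00><00| + |11><11|) / 2
rho_cl = np.zeros((4, 4)); rho_cl[0, 0] = rho_cl[3, 3] = 0.5

# maximally entangled:  |phi+><phi+|  with  |phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
rho_q = np.outer(phi, phi)

print(cond_entropy_S_given_M(rho_cl, 2, 2))   # ~ 0       (classical knowledge)
print(cond_entropy_S_given_M(rho_q, 2, 2))    # ~ -ln 2   (quantum correlations)
```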
as a final remark ,if there is no memory system but possibly initial correlations in , then ( [ verygeneralsecondlawwithmemory ] ) can be written as now one can formulate a landauer principle in terms of the difference rather than as above ; or alternatively , one can bound the mutual information term , which appears with the `` wrong '' sign , by more traditional quantities like the trace distance , , and this gives corrections to the usual landauer bound ( similarly for the term in ( [ verygeneralsecondlawwithmemory ] ) ) . using similar processes as above with ( around eq .( [ changeofstates ] ) ) , one can see that for initially perfect classical or quantum correlations in , one can achieve while still ( due to ) ; this `` violation '' of landauer s bound is of course explained by , contrary to the assumption ( [ productstateassumptioninsetup ] ) .the common formulation of landauer s principle says that changing the information in a system ( e.g. by `` erasing information '' ) puts constraints on the heat dissipated during the process .this statement is consistent with the mathematical content of theorem [ landauereqntheorem ] when the _ entropy _ is interpreted as the _ amount of information _ in a system in state , and thus is interpreted as the decrease of information in .such an interpretation of entropy is substantiated by the fundamental theorems of asymptotic information theory .this interpretation of entropy also corresponds to the situation where the system has been prepared by someone in any one of the ( orthonormal , and thus perfectly distinguishable ) pure states according to the distribution , such that however the index is unknown to a second agent ( who thus describes the system state as ; see for further discussion ) . in this sense , data or informationis contained _ in _ the system and may be retrieved by the second agent through a measurement in the basis ; this measurement yields the information on average over many independent retrievals .mathematically , is the minimum ( over all complete measurements ) of the averaged measurement outcome information on a state with eigendecomposition .in contrast to the information stored _ in _ a system , which was just quantified by the entropy , one can instead consider the information someone has _ about _ a system .the information that an agent ( with memory register ) has about the state of system is simply the _ correlations between and _ , described by the joint state of the combined system ( see also section [ memorysection ] ) . andthe _ amount _ of correlations between and is quantified by the mutual information ( again , for an averaged or asymptotic scenario ) .this makes sense since is equivalent to , meaning that the agent s memory does not hold any information about the microstate of , whereas iff , such that the agent has ( on average ) perfect classical knowledge about the state of .one may now wonder whether a version of landauer s bound also holds for the change of _ information about _ a system .we show here that a straightforward analogy does not work . 
forthe setup assume that , besides an initially thermal reservoir that is uncorrelated with the other systems ( see sections [ setupsubsection ] and [ memorysection ] ) , there are a system and a memory register , which may be correlated : }}~.\end{aligned}\ ] ] the information about is thus initially .then the system and reservoir is subjected to a joint unitary process as described in section [ setupsubsection ] , and we examine how the information of the memory about the system changes : the process imagined does not affect the memory ; if it were allowed to , then can be virtually independent of the heat change , so that no version of landauer s bound ( such as possibly ) can hold .note further that it is always in such processes due to the data processing inequality ; this corresponds to `` information erasure '' , whereas the entropy change in sections [ setupsubsection ] and [ memorysection ] could have either sign .but even so , there can not be a straightforward version of landauer s bound involving . to see this ,take any state , and consider a reservoir of the same size as and with initial state ] may assume either sign .namely , becomes negative for example when is any pure state and with any , since ( cf .remark 5 in ) }~=~s(\rho_s)-s(\rho_r)+d(\rho_s\|\rho_r)\\ & = ~-s\left((1-\lambda){|\psi\rangle}{\langle\psi|}+\lambda{\mathbbm{1}}/d\right)+d\left({|\psi\rangle}{\langle\psi|}\,\|\,(1-\lambda){|\psi\rangle}{\langle\psi|}+\lambda{\mathbbm{1}}/d\right)\nonumber\\ & <~-(1-\lambda)s\left({|\psi\rangle}{\langle\psi|}\right)-\lambda s\left({\mathbbm{1}}/d\right)+(1-\lambda)d\left({|\psi\rangle}{\langle\psi|}\,\|\,{|\psi\rangle}{\langle\psi|}\right)+\lambda d\left({|\psi\rangle}{\langle\psi|}\,\|\,{\mathbbm{1}}/d\right)\nonumber\\ & = ~\lambda d\left({|\psi\rangle}{\langle\psi|}\,\|\,{\mathbbm{1}}/d\right)\,-\,\lambda s\left({\mathbbm{1}}/d\right)~=~\lambda\left(\log d-\log d\right)~=~0\nonumber\end{aligned}\ ] ] due to strict concavity of the entropy and convexity of the relative entropy ; one can actually find such that , whereas for any , .again , the inequality does not hold here .on the other hand , is positive by theorem [ landauereqntheorem ] for any , with and can be come arbitrarily big for any fixed ; thus , also a reversed inequality , such as tentatively , can not hold in general .other tentative notions of a landauer principle for correlations can be dismissed similarly .one may for example define _ complete erasure of information _ to mean any process , together with a thermal resource state , which satisfies ~=~\frac{{\mathbbm{1}}_s}{d_s}\otimes\frac{{\mathbbm{1}}_m}{d_s}\qquad\forall~\text{max.~entangled or class.~correlated}~\psi_{sm}~.\end{aligned}\ ] ] such a complete erasure process is necessarily a swap of with a -dimensional completely mixed subsystem of . butthis does not require any heat dissipation , as shown in the first example above where .theorem [ landauereqntheorem ] is a sharpened version of landauer s principle , and theorem [ maintheoremcombined ] makes the sharpening more explicit through dimension - dependent lower bounds on the improvement . given this , one may now wonder about the possibility for dimension - independent improvements of the landauer bound . 
to answer this, we construct here processes which , for a desired state transformation , approach landauer s bound arbitrarily closely .this is analogous to processes on single systems which come close to extracting the maximal amount of work allowed by the second law from a nonequilibrium system , see e.g. . by section [ boundonpureness ] ,a process is achievable with a finite - dimensional reservoir only if ; this is the case we treat below , formulating our construction as proposition [ non - rank - decreasing - prop ] .the following construction also illustrates that , for any , the reservoir dimension has to grow indefinitely as landauer s bound is approached ( see theorem [ maintheoremcombined ] ) .rank - decreasing processes are the subject of appendix [ purestateerasure ] .[ non - rank - decreasing - prop]let two quantum states be given with and , and let .then there exists a reservoir of finite dimension with hamiltonian and inverse temperature and a unitary , such that the resulting process ( see section [ setupsubsection ] ) satisfies that is , landauer s bound can be approached arbitrarily closely .denote .we construct the reservoir as consisting of subsystems , i.e. , and the reservoir hamiltonian as a sum of local hamiltonians , where each acts nontrivially only on subsystem .the initial thermal reservoir state is thus , where } ] ; this does not change any entropies or cause any heat flow . for any state on , denote by the -dimensional restriction onto the support of .we now define , , and choose intermediate states satisfying ={{\rm supp}}[\rho'_s] ] ) connecting and in the space of states in such a way that is supported on the full subspace ] of , see also fig .[ successiveswaps ] .after steps , the final system state is thus and the system entropy has changed by .the heat dissipation is ( denoting ) : }~=~{{\rm{tr}}\left[(\rho_r-\rho'_r)\log\rho_r\right]}\\ & = ~{{\rm{tr}}\left[\left(\otimes_{i=1}^k\rho_r^{(i)}\,-\,\otimes_{i=1}^k\rho_r^{(i-1)}\right)\log\left(\otimes_{i=1}^k\rho_r^{(i)}\right)\right]}\\ & = ~\sum_{i=1}^k{{\rm{tr}}\left[\left(\rho_r^{(i)}-\rho_r^{(i-1)}\right)\log\rho_r^{(i)}\right]}~=~\sum_{i=1}^k{{\rm{tr}}\left[(\rho_i-\rho_{i-1})\log\rho_i\right]}\label{laststepbeforeriemannsum}~.\end{aligned}\ ] ] we now take any fixed curve , as outlined above , and make the discretization finer as , as in the definition of the riemann integral ( e.g. as in ( [ andersgiovannettiprescription ] ) ). then ( [ laststepbeforeriemannsum ] ) equals }~,\end{aligned}\ ] ] which for converges to }~=~\int_0 ^ 1dt\,{{\rm{tr}}\left[\dot{\rho}(t)\,\log\rho(t)\right]}\\ & ~~=~-\int_0 ^ 1dt\frac{d}{dt}{{\rm{tr}}\left[\rho(t)-\rho(t)\log\rho(t)\right]}~=~-{{\rm{tr}}\left[\rho(1)-\rho(0)\right]}\,-\,s\left(\rho(1)\right)+s\left(\rho(0)\right)\\ & ~~=~s(\rho_s)-s(\rho'_s)~=~\delta s~.\end{aligned}\ ] ] thus , for any , there exists such that for the associated process . for any fixed value of in the preceding proof we can also write , by theorem [ landauereqntheorem ] ( eq .( [ landauereqnshort ] ) ) , since due to the swap processes ( cf .example [ swapexample ] ) . 
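as a small numerical companion to the construction just given, the sketch below evaluates the discretized heat sum beta*DeltaQ_k = sum_i Tr[(rho_i - rho_{i-1}) ln rho_i] for commuting (diagonal) states along a linear interpolation of the spectra. the example spectra are arbitrary choices; the point is only that the sum exceeds DeltaS for every finite number k of swap steps and converges to it as k grows.

```python
# Swap ("staircase") construction: beta*DeltaQ_k approaches DeltaS from above
# as the number k of reservoir subsystems grows. Diagonal states and a linear
# interpolation of the spectra are used for simplicity; spectra are examples.
import numpy as np

def shannon(p):
    return float(-np.sum(p * np.log(p)))

p0 = np.array([0.70, 0.20, 0.10])    # spectrum of the initial state rho_S
p1 = np.array([0.85, 0.10, 0.05])    # spectrum of the target state rho'_S (same rank)
delta_S = shannon(p0) - shannon(p1)

for k in (1, 5, 25, 125, 625):
    path = [p0 + (i / k) * (p1 - p0) for i in range(k + 1)]   # rho_0, ..., rho_k
    beta_dQ = sum(np.sum((path[i] - path[i - 1]) * np.log(path[i]))
                  for i in range(1, k + 1))
    print(f"k={k:>4}:  beta*DeltaQ = {beta_dQ:.6f}   (DeltaS = {delta_S:.6f})")
# beta*DeltaQ > DeltaS for every finite k and converges to DeltaS as k -> infinity.
```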
in an upper boundis derived for the sum in ( [ betadeltaqforksteps ] ) when using the prescription ( [ andersgiovannettiprescription ] ) : which is explicitly seen to converge to for when ={\rm rank}[\rho_s] ] , and for ] , the corresponding _ thermal state _ is }}~\qquad(\beta\in[-\infty,+\infty])~,\end{aligned}\ ] ] with the convention that denotes the maximally mixed state on the ground space of .( the latter convention is physically sensible , and furthermore ensures , so that is continuous in ] and its variance by .the _ energy of a thermal state _ is the thermal average of the hamiltonian : }~=~{{\rm{tr}}\left[h\frac{e^{-\beta h}}{{{\rm{tr}}\left[e^{-\beta h}\right]}}\right]}~\qquad(\beta\in[-\infty,+\infty])~.\end{aligned}\ ] ] obviously , is a continuous function of ] is strictly decreasing in ] is constant in ] since is of full rank .continuity gives then strict monotonicity in ] ( by ) , and smooth in the interior of its domain with first derivative ( after some elementary computation ) if , the temperature is a function of the energy by ( [ betafunctionofe ] ) , and ( with some common abuse of notation ) the entropy can also be viewed as a function of the energy of a thermal state .this function is well - defined even in the case , since then for all ] ( note that ( [ definebetaf ] ) need not equal for ) . applied to in eqs .( [ freeenergydiffasd])([writeasdifferenceoffreeenergies ] ) , klein s inequality ( see below eq .( [ definerelentinnotationsection ] ) ) gives several versions of the _ thermodynamic inequality _ :the thermal state is the unique maximizer of the entropy at fixed energy , and ( for ) is the unique minimizer of the energy at fixed entropy .equivalently , the functional is uniquely minimized by , which corresponds to the usual free energy minimization in thermodynamics ( for ) .see for more detailed discussions .here we investigate how tight our finite - size bounds from section [ finitesizesect ] are .let be any process as considered in theorem [ maintheoremcombined ] ( see also section [ setupsubsection ] ) , with a reservoir of dimension . before discussing the tightness of the bound eq .( [ inequalitymaintheoremfinited ] ) , we investigate the range of possible values of the quantity , on which the bound depends .when is fixed , one can put upper and lower bounds on the entropy change of the system . a lower bound is obtained by \\ & = ~s(s)-[s(s)+s(r)+i(s':r')-s(r')]~=~-s(r)-[i(s':r')-s(r')]~,\end{aligned}\ ] ] where we used by unitarity ( [ unitaryevolsetupsect ] ) and the product initial state assumption ( [ productstateassumptioninsetup ] ) .now , for quantum systems , whereas the stronger inequality holds for classical systems .lower bounds on are then obtained by noting : for the upper bounds we used by lemma [ propsecondlaw ] .all inequalities can be attained when only the reservoir dimension is fixed : a swap ( see example [ swapexample ] ) between a pure and a maximally mixed ( of dimension ) attains both upper bounds , whereas swapping a maximally mixed with a pure attains the classical lower bound . for the quantum lower bound , take the system to be composed of two -dimensional subsystems , in a maximally entangled initial state , and the reservoir again initially maximally mixed .then the process that swaps and creates the final state with a maximally entangled state .thus , so that . 
in this example, , which means there is no heat flow , .we now investigate how tight the inequality ( [ inequalitymaintheoremfinited ] ) from theorem [ maintheoremcombined ] is .specifically , for any given and ( which are the quantities appearing in the bound ) , does there exist a process such that the lower bound ( [ inequalitymaintheoremfinited ] ) on holds with equality ?the answer is in the affirmative when ( which by ( [ upperlowerboundsondeltas ] ) means ) , but not for . to see this ,consider a swap process ( example [ swapexample ] ) between a system with dimensions and the -dimensional reservoir .due to and , , eq .( [ landauereqnshort ] ) gives : now , by , for any given ( with ) and given ] .here we develop a modified version of landauer s principle , with an integral in place of the term from theorem [ landauereqntheorem ] .the derivation requires lemma [ lemmasebeta ] relating entropy , energy and inverse temperature , which is rigorously proven for finite dimensions in appendix [ thermodynamicappendix ] .[ landauerintegralthm]consider processes as described in theorem [ landauereqntheorem ] .denote the energy of the initial reservoir state by } ] .then : (see appendix [ thermodynamicappendix ] for the definition of , in particular eq .( [ betafunctionofe ] ) and lemma [ lemmasebeta ] . ) if , then necessarily and we define the integral to be even though is not well - defined in this case ( see appendix [ thermodynamicappendix ] ) .the statement then follows immediately from ( [ landauereqnshort ] ) since as all thermal states on such a reservoir agree .the proof in the general case starts again with the second law lemma ( lemma [ propsecondlaw ] ) : -\left[s(\rho'_{r , th})-s(\rho'_r)\right]~.\label{intermediateinintegralversion}\end{aligned}\ ] ] denoting by the inverse temperature of ( see beginning of section [ subsectdeltaqbounds ] ) , the last square brackets can , for , be rewritten as ( cf .also ( [ pythagoreantheoremford ] ) and following ) : ~ & = ~-s(\rho'_r)-{{\rm{tr}}\left[\rho'_{r , th}\log\frac{e^{-\beta'h}}{{{\rm{tr}}\left[e^{-\beta'h}\right]}}\right]}\\ & = ~-s(\rho'_r)-{{\rm{tr}}\left[\rho'_r\log\frac{e^{-\beta'h}}{{{\rm{tr}}\left[e^{-\beta'h}\right]}}\right]}~=~d(\rho'_r\|\rho'_{r , th})~.\label{secondintermstepinintegralversion}\end{aligned}\ ] ] for , one can explicitly verify ( [ secondintermstepinintegralversion ] ) , using that \subseteq{{\rm supp}}[\rho'_{r , th}] ] in ( [ intermediateinintegralversion ] ) equals the integral on the rhs of ( [ eqninintegralversion ] ) .this is exactly the same step made in eq .( [ firsttimeintegralappears ] ) , and finally proves ( [ eqninintegralversion ] ) .[ finiteintversionremark]note that statement ( [ eqninintegralversion ] ) of the integral version of landauer s principle and in particular the term is always finite due to \subseteq{{\rm supp}}[\rho'_{r , th}] ] using reservoirs that are initially in thermal states , with hamiltonians and all at the same inverse temperature . by landauer s bound ( [ lineqineqthm ] ), the total heat dissipated in all processes satisfies with and an obvious definition for ( cf .( [ deltaq ] ) ) .the question is now whether there exists a `` joint process '' , acting jointly on all systems and a large reservoir at inverse temperature , such that the total heat dissipation can be less than the lower bound from eq .( [ sumofheats ] ) . 
for this, we assume the systems to be initially uncorrelated , such that their joint state is furthermore , while the final state ] the thermal state exists and has finite energy ; the latter two conditions are , for , equivalent to }<\infty ] , and they imply that the entropy is finite as well .then , for any joint unitary on , the second law lemma ( lemma [ propsecondlaw ] ) holds as well ( all quantities remain finite , except that the case may occur ) .going through the derivation ( [ firstlineinproofofleq ] ) , one sees that always , since is semi - bounded and had finite energy by assumption .furthermore , implies , which one sees due to .thus , the equality form ( [ landauereqnshort ] ) of landauer s principle holds in the setup of the previous paragraph as well , when employing the usual rules of calculus with and when remembering that in the potentially ambiguous case one has .also , landauer s bound ( see ( [ lineqineqthm ] ) ) always holds .if one considers a process just as above , but now with infinite and finite ( such that an infinite amount of entropy is `` erased '' from the system ) , then one sees , so that landauer s bound also holds .since the conditions for vanishing relative entropy and mutual information ( where defined ) are as in the finite - dimensional case , one can check that the equality considerations from corollary [ correqualityl ] carry over to the above setup ( with either or finite ) in the following way : if , then and eq .( [ conditionsforequality1 ] ) holds with an isometry .but while , even for infinite - dimensional reservoirs , equality in landauer s bound can be attained only in trivial cases , one can approach the bound arbitrarily closely for any given with processes using an infinite - dimensional reservoir ( see section [ attainingsection ] , and also appendix [ purestateerasure ] ) . in appendix [ purestateerasure ]we use hamiltonians that are not merely unbounded but that have formally infinite ( ) energy levels ( see also section [ boundonpureness ] ) .this is done in order to have some unpopulated levels in the initial reservoir state .( thermal states at zero temperature , , may have such unpopulated levels as well , but they are necessarily completely mixed on their support space . ) the calculations involving these hamiltonians are formal . they can be understood as limiting processes , but exact purification as in appendix [ purestateerasure ] is achievable only at the limit ( the approach to the limit is quantified in section [ boundonpureness ] ) . this issue is similar to the case of exactly zero temperature ( ) , whose physical relevance may be questioned as well .in section [ boundonpureness ] we saw that , in finite dimensions , any rank - decreasing process necessarily has , i.e. requires either a zero - temperature reservoir ( ) or infinite heat flow ( via formally infinite hamiltonian levels , in particular implying ) .thus , landauer s bound can not be tight for finite - dimensional processes with . herewe show that rank - decreasing processes can come arbitrarily close to landauer s bound by using an infinite - dimensional reservoir ( with hilbert space ; cf .also appendix [ sectioninfinitedim ] ) . to keep the notation manageable, we assume that a mixed initial qubit state , with , is to be turned into a pure final state . 
from the argument leading up to proposition [ propositionlambdamin ], one can see that for such a process the initial reservoir state needs to have infinitely many unoccupied energy levels ( see also last paragraph in appendix [ sectioninfinitedim ] ) , the denote the initial eigenvalues of the ( potentially ) non - empty levels , which we will determine below . at finite temperature , , this means that the energy levels of the reservoir hamiltonian corresponding to the unoccupied levels have to be formally .we further choose a unitary that transforms to the final state with and from above .it is clear that such a unitary exists by just permuting the product basis states , since both and have the eigenvalues for , , in addition to countably many eigenvalues .we can now compute the heat flow .for this , denote the hamiltonian energy levels by corresponding to the eigenvalues in ( [ rhorinfinitedim ] ) , i.e. .thus : those computations are justfied since the will later be chosen such that and are normalized states with finite entropies .one can easily see from ( [ firstlinerhoprimerinfinite ] ) that .together with ( [ betadeltaqforinfintite ] ) this gives which coincides with our finite - dimensional equality version of landauer s principle ( theorem [ landauereqntheorem ] ) , considering that here due to a pure ( see also appendix [ sectioninfinitedim ] ) .it is now easy to choose the occupation numbers such that , and thus , is finite .to show moreover that the bound can be arbitrarily sharp , we have to find such that in ( [ samelawforinfinitedimensions ] ) is arbitrarily close to .one way to do this is the following : choose any and define one can see that the so defined is normalized with entropy .and the relative entropy term in ( [ samelawforinfinitedimensions ] ) is which indeed approaches as .note however that , even in infinite dimensions , no process with exists that makes the relative entropy term in ( [ samelawforinfinitedimensions ] ) vanish exactly ( cf .appendix [ sectioninfinitedim ] ) : would mean , which would imply that the state ( due to purity of ) would have to be unitarily equivalent to .but this is possible only when was already rank - deficient . note finally that some state which is -close to a given ( possibly pure ) state can be reached with a finite - dimensional reservoir and with arbitrarily close to , see section [ boundonpureness ] .h. s. leff , a. r. rex , `` maxwell s demon : entropy , infromation , computing '' , _ princeton university press _ ( 1990 ) ; 2nd edition : `` maxwell s demon : entropy , classical and quantum information , computing '' , _ institute of physics publishing _ ( 2003 ) .
landauer s principle relates entropy decrease and heat dissipation during logically irreversible processes . most theoretical justifications of landauer s principle either use thermodynamic reasoning or rely on specific models based on arguable assumptions . here , we aim at a general and minimal setup to formulate landauer s principle in precise terms . we provide a simple and rigorous proof of an improved version of the principle , which is formulated in terms of an equality rather than an inequality . the proof is based on quantum statistical mechanics concepts rather than on thermodynamic argumentation . from this equality version , we obtain explicit improvements of landauer s bound that depend on the effective size of the thermal reservoir and reduce to landauer s bound only for infinite - sized reservoirs .
helping students distinguish between velocity and acceleration has always been challenging for physics instructors . several instructors and researchers have analyzed the difficulties in teaching velocity and acceleration to introductory physics students in different contexts . however , teaching the concept of acceleration effectively still remains elusive .acceleration is the rate of change of velocity with time .since velocity is a vector , the velocity can change because its magnitude ( speed ) changes , its direction changes , or both .for example , if a car moves in a straight line but slows down or speeds up , its only the speed that is changing and not its direction . for uniform circular motion of an object , the speed is constant and it is only the direction of velocity that is changing . in this case , the velocity is tangent to the path , and acceleration , called centripetal acceleration , is towards the center of the circle at each instant .since the speed is not changing , the velocity and centripetal acceleration are always perpendicular to each other . in a more general motion of an object , both the magnitude and direction of velocity could change and we can have two perpendicular non - zero components of acceleration : the tangential component due to the changes in the speed , and the centripetal acceleration due to the changes in the direction of velocity . here , we discuss situations involving centripetal acceleration which are universally challenging . in a recent article , the following two multiple - choice questions about the acceleration of a rolling ball on a ramp of the shape shown in figure 1 were given : a ball rolls on a ramp as shown .as it rolls from a to b its velocity increases and its acceleration : ( a ) increases also , ( b ) decreases , ( c ) remains constant .and when the ball rolls from b to c its acceleration : ( d ) increases , ( e ) decreases , ( f ) remains constant . "the answers were given to be a and e respectively with the explanation acceleration depends on the slope of the ramp .the slope of the ramp gradually increases between points a and b , so acceleration there gradually increases .but between points b and c , the slope gradually decreases .this means acceleration beyond b gradually decreases also . " however , answer e for the second part is incorrect because it fails to account for centripetal acceleration , where is the speed and is the radius of curvature . the second question asks about the acceleration as the particle rolls from point b to c and not about the magnitude of the tangential acceleration alone .therefore , the correct answer should be the magnitude of acceleration from point b to c may increase or decrease depending upon the radius of curvature and the speed of the object at point c " .it should be noted that point c is still part of the curved surface , i.e. , at point c the ball is not rolling on a horizontal surface ( infinite curvature ) .as we shall see below , for the drawing used to illustrate the situation in figure 1 , the magnitude of acceleration actually increases from point b to point c.let s first calculate the magnitude of acceleration at point b based on the drawing in figure 1 .figure 1 shows the estimates obtained for the three critical dimensions in the drawing : cm , the height of the hill , , the angle that the tangent to the path makes with the horizontal at the point of inflection ( b ) , and cm , the radius of curvature from point b to point c. 
first we focus on calculating the acceleration magnitude at point b. to simplify the calculation , we will assume that the ball is simply sliding down the hill rather than rolling ( later we will restore the rolling motion ) .since there is no curvature at the point of inflection , the centripetal acceleration at point b is zero and the total acceleration is only due to the tangential component .this acceleration has a magnitude where is the magnitude of the acceleration due to gravity .next , we assume that at point c , the slope of the curve is so small that the tangential acceleration can be ignored and the total acceleration is simply the centripetal acceleration .the centripetal acceleration can be calculated as , where we must still determine the speed of the ball .we may assume that the ball starts from rest at point a and ignore air resistance , so that total mechanical energy is conserved along the way between points a and c. hence , and therefore , solving for centripetal acceleration .note that the answer only depends on the ratio h / r , so the answer is independent of the overall scale factor for the drawing .the ratio of the magnitudes of acceleration at points c and b is . because the factor for parameters in figure 1 , we see that regardless of the slope of the incline at point b , the acceleration at point c must be larger than that at point b. now we include the effects due to rolling motion . as it turns out , the rolling has no impact whatsoever on the above result .we assume that the ball has a moment of inertia , where k=2/5 for a uniform solid sphere rolling about its center and is the radius of the ball .analysis of the force and torque equations at point b yields a modified result : . a similar analysis at point c , taking into account both linear and rotational kinetic energies in the conservation of energy equation and using for rolling ( here is the angular speed and the rotational kinetic energy is ) , yields a similarly modified result : . both factors of 1/(1+k ) cancel if we take the ratio of the magnitudes of acceleration at points c and b similar to the case when the object was sliding rather than rolling .thus , the total acceleration at point c will be larger than that at point b if to include changes in direction of an object as being accelerations ( as opposed to just changes in speed ) is widespread among novices and experts alike . in a survey conducted by reif et al . related to acceleration at different points on the trajectory of a simple pendulum , similar difficulties were found .reif et al . asked several physics professors at the university of california , berkeley who had taught introductory physics recently to explain how the acceleration at various points on the trajectory of the simple pendulum changed .a surprisingly large fraction of professors incorrectly noted that the acceleration at the lowest point of the trajectory is zero because they did not account for the centripetal acceleration . when they were explicitly asked to reconsider their responses , approximately half of them noticed that they were forgetting to take into account the centripetal acceleration whereas the other half continued with their initial assertion that the acceleration is zero at the lowest point in the trajectory . 
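the comparison worked out above is easy to reproduce numerically. in the sketch below, a_B is the tangential acceleration of the rolling ball at the inflection point b, v_C is obtained from energy conservation starting from rest at a, and a_C is the centripetal acceleration at c; the values of h, r and the slope angle are placeholder assumptions, not the dimensions actually measured from figure 1 (those numbers are garbled in this copy).

```python
# Numerical check of the comparison between the accelerations at points B and C.
# h, r and theta below are assumed example values, not the figure's dimensions.
import numpy as np

g = 9.8                     # m/s^2
k = 2/5                     # moment-of-inertia factor for a uniform solid sphere
h = 0.05                    # m, drop in height from A to C (assumed)
r = 0.03                    # m, radius of curvature near C (assumed)
theta = np.radians(25.0)    # slope angle at the inflection point B (assumed)

a_B = g * np.sin(theta) / (1 + k)     # tangential acceleration at B (rolling)
v_C_sq = 2 * g * h / (1 + k)          # energy conservation with rolling included
a_C = v_C_sq / r                      # centripetal acceleration at C

print(f"a_B = {a_B:.2f} m/s^2,  a_C = {a_C:.2f} m/s^2,  a_C/a_B = {a_C/a_B:.2f}")
print("ratio 2*h/(r*sin(theta)) =", 2 * h / (r * np.sin(theta)))
# The factor 1/(1+k) cancels in the ratio, so the comparison is the same for a
# sliding block: a_C exceeds a_B whenever 2*h/r > sin(theta).
```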
in another study , many introductory physics students had similar difficulties when calculating the normal force on an object which is moving along a circular path and is at the highest point .there have been some interesting studies of students perception of motion while watching objects moving on a computer screen .however , further exploration is required to develop a more precise understanding of human perception of velocity and acceleration .one difficulty is that while humans can visually obtain a reasonably good sense of the speed of an object quickly , the magnitude of acceleration of an object is difficult to gauge visually . to illustrate , consider the following situation that occurs commonly at a traffic intersection ( figure 2 ) .car a and its passengers are travelling at a constant speed .car b and its passengers are travelling with speed toward the intersection .the speed of car b is such that if there is no deceleration , there will be a collision between the two cars .this situation is ordinarily disturbing , but only passengers in car a are worried .the reason is that car b is decelerating .passengers in car b can sense the deceleration and combine that information with visual cues about the speed to help determine that a collision is not imminent .passengers in car a receive no such inertial cues quickly , and must use higher - level cognitive tools such as watching car b over a period of time and calculating the change in velocity per unit time implicitly in their minds to deduce that there will not be a collision .the human body is an accelerometer and a person who is accelerating has a feel " for both the magnitude and direction of acceleration .for example , when our car takes off from rest in a straight line , the acceleration is in the same direction as the velocity but our body feels as if it is thrown backwards in a direction opposite to the acceleration .similarly , if the car comes to rest , the acceleration is opposite to the direction of velocity but the body lunges forward in a direction opposite to the acceleration .these are demonstrations of inertia. even a blind - folded person can tell that he / she is accelerating when going up and down the curves of a roller coaster .when making a turn in a circle , the acceleration is towards the center of the circle ( if the tangential component is zero ) but our body feels as if it is thrown outward opposite to the direction of the acceleration . if students are encouraged to imagine what they will feel if they were experiencing a particular motion , they may develop a better intuition for the concept of acceleration .the mass of an object multiplied by its centripetal acceleration is often called the centripetal force " .this jargon is unfortunate because one common misconception that students have about centripetal force is that it is a new physical force of nature rather than a component of the net force .for example , when we asked introductory physics students to find the magnitude of the normal force ( apparent weight ) on a person moving along a curved path at a time when the person is at the top of the curve , many students incorrectly used * equilibrium * application of newton s second law . several students who were systematic in their problem solving , and drew free body diagrams , wrote down the equation for the * equilibrium * application of newton s second law . 
in their free body diagram, they incorrectly included centripetal force when passing over the highest point on the curve as if it was a separate physical force in addition to the normal force and the gravitational force . using the equilibrium application of newton s second law, students incorrectly obtained that the magnitude of the normal force on the person of mass moving with a speed along a circular curve of radius is rather than the correct expression .hence , they concluded that the apparent weight of the person increases rather than decreases when passing over the bump .it is possible that if students had invoked their intuition and imagined how they will feel when moving over the circular bump , they may have realized that their apparent weight can not be larger than when moving over the bump .another difficulty is that students often consider the pseudo forces , e.g. , the centrifugal force , as though they were real forces acting in an inertial reference frame .it is advisable to avoid discussions of pseudo forces altogether because they are likely to confuse students further .we discussed examples to illustrate that centripetal acceleration is a challenging concept even for physics teachers .sometimes , veteran physics teachers miss centripetal acceleration when calculating the magnitude of total acceleration of an object moving along a curved path .students have additional difficulty with this concept and often believe that mass times centripetal acceleration , termed centripetal force , is a new physical force of nature rather than a net force . tutorials in introductory physics have been found to improve students understanding of concepts related to acceleration .another instructional strategy that can help students with the concept of acceleration is providing them with an opportunity for kinesthetic explorations . since human body can sense acceleration , such explorations can help them understand and remember related concepts and reason intuitively about problems involving acceleration . to derive the expression for the tangential acceleration at point b when the ball is rolling , we can draw a free body diagram on the incline with the gravitational force , normal force and the static frictional force and use newton s second law for linear motion to conclude that . similarly , using newton s second law for rotational motion , we have net torque due to friction about the center of the ball . using and for rolling, we have . plugging the value of static frictional force in ,we obtain as desired .figure 1 : using geometry , the angle that the tangent to the path at point b makes with the horizontal is .the difference in height between points a and c is cm and the radius cm .we note that it is only the ratio that is important for determining the centripetal acceleration at point c. figure 2 : a diagram of the intersection with two cars .car b has a large speed but is decelerating .the passengers in car a can quickly obtain a good feel for the speed of car b but not as quickly for its acceleration ( deceleration in this case ) .
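The force-torque argument for the tangential acceleration of the rolling ball can also be checked symbolically. The short sketch below uses sympy (an assumed tool choice; any computer algebra system would do) to solve Newton's second law for translation along the incline together with the torque equation about the center, recovering the 1/(1+k) reduction quoted above.

```python
import sympy as sp

m, g, r, k, theta = sp.symbols('m g r k theta', positive=True)
a, f = sp.symbols('a f')   # linear acceleration along the incline, static friction

# Translation along the incline: m*a = m*g*sin(theta) - f
eq_translation = sp.Eq(m * a, m * g * sp.sin(theta) - f)
# Rotation about the center: f*r = I*alpha = (k*m*r**2)*(a/r) for rolling without slipping
eq_rotation = sp.Eq(f * r, k * m * r**2 * (a / r))

sol = sp.solve([eq_translation, eq_rotation], [a, f], dict=True)[0]
print(sp.simplify(sol[a]))   # g*sin(theta)/(k + 1)
print(sp.simplify(sol[f]))   # k*m*g*sin(theta)/(k + 1)
```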
acceleration is a fundamental concept in physics which is taught in mechanics at all levels. here, we discuss some challenges in teaching this concept effectively when the path along which the object is moving has a curvature and centripetal acceleration is present. we discuss examples illustrating that both physics teachers and students have difficulty with this concept. we conclude with instructional strategies that may help students with this challenging concept.
reconstruction of a high dimensional low - rank matrix from a low dimensional measurement vector is a challenging problem .the low - rank matrix reconstruction ( lrmr ) problem is inherently under - determined and have been receiving attention due to its generality over popular sparse reconstruction problems along with many application scopes .here we consider the lrmr system model where is the measurement vector , is the linear measurement matrix , is the low - rank matrix , is additive noise ( typically assumed to be zero - mean gaussian with covariance ) and is the vectorization operator . with ,the setup is underdetermined and the task is the reconstruction ( or estimation ) of from . to deal with the underdetermined setup ,a typical and much used strategy is to use a regularization in the reconstruction cost function .regularization brings in the information about low rank priors . a typical type iestimator is where is a regularization parameter and is a fixed penalty function that promotes low rank in .common low - rank penalties in the literature are where denotes the matrix trace , denotes determinant , and and .we mention that the nuclear norm penalty is a convex function . in the literature ,lrmr algorithms can be categorized in three types : convex optimization , greedy solutions and bayesian learning .most of these existing algorithms are highly motivated from analogous algorithms used for standard sparse reconstruction problems , such as compressed sensing where in is replaced by a sparse vector .using convex optimization we can solve when is the nuclear norm , which is an analogue of using -norm in sparse reconstruction problems .further , greedy algorithms , such as iteratively reweighted least squares solves by using algebraic approximations . while convex optimization and greedy solutions are popular they often need more a - priori information than knowledge about structure of the signal under reconstruction ; for example , convex optimization algorithms need information about the strength of the measurement noise to fix the parameter , and greedy algorithms need information about rank . in absence of such a - priori information ,bayesian learning is a preferred strategy to use .bayesian learning is capable of estimating the necessary information from measurement data . in bayesian learningwe evaluate the posterior with the knowledge of prior .if has a prior distribution and the noise is distributed as , then the maximum - a - posteriori ( map ) estimate can be interpreted as the type i estimate in . as typei estimation requires more information ( such as ) , type ii estimators are often more useful .type ii estimation techniques use hyper - parameters in the form of latent variables with prior distributions . while for sparse reconstruction problems , bayesian learning via type ii estimation in the form of relevance vector machine and sparse bayesian learning have gained significant popularity , the endeavor to design type ii estimation algorithms for lrmr is found to be limited . in , direct use of sparse bayesian learning was used to realize an lrmr reconstruction algorithm .bayesian approaches were used in for a problem setup with a combination of low rank and sparse priors , called principal component pursuit . 
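As a point of reference before the Bayesian treatment, the nuclear-norm version of the type I estimator can be written in a few lines with an off-the-shelf convex solver. The sketch below assumes the cvxpy package and a hand-picked regularization parameter lam (which, as noted above, in practice requires knowledge of the measurement-noise strength); it is an illustration of the convex baseline, not the method developed in this paper.

```python
import numpy as np
import cvxpy as cp

def nuclear_norm_estimate(y, A, p, q, lam):
    """Type I estimate: minimize 0.5*||y - A vec(X)||_2^2 + lam*||X||_*."""
    X = cp.Variable((p, q))
    # cp.vec stacks columns, matching the column-major vec(.) in the measurement model
    objective = 0.5 * cp.sum_squares(y - A @ cp.vec(X)) + lam * cp.normNuc(X)
    cp.Problem(cp.Minimize(objective)).solve()
    return X.value

# toy usage: a rank-1 matrix observed through an underdetermined gaussian operator
rng = np.random.default_rng(0)
p, q, m = 8, 8, 40
X_true = np.outer(rng.standard_normal(p), rng.standard_normal(q))
A = rng.standard_normal((m, p * q))
y = A @ X_true.flatten(order='F') + 0.01 * rng.standard_normal(m)
X_hat = nuclear_norm_estimate(y, A, p, q, lam=1.0)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```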
in , gaussian and bernoulli variables was used and the parameters were estimated using markov chain monte carlo while in an empirical bayesian approach was used .type ii estimation methods are typically iterative where latent variables are usually treated via variational techniques , evidence approximation , expectation maximization and markov chain monte carlo .our objective in this paper is to develop new type ii estimation methods for lrmr .borrowing ideas from type ii estimation techniques for sparse reconstruction , such as the relevance vector machine and sparse bayesian learning algorithms , we model a low - rank matrix by a multiplication of precision matrices and an i.i.d . gaussian matrix .the use of precision matrices helps to realize low - rank structures .the precision matrices are characterized by hyper - parameters which are treated as latent variables .the main contributions of this paper are as follows . 1. we introduce one - sided and two - sided precision matrix based models . 2 .we show how the schatten s - norm and log - determinant penalty functions are related to latent variable models in the sense of map estimation via type i estimator .3 . for all new type ii estimation methods ,we derive update equations for all parameters in iterations .the methods are based on evidence approximation and expectation - maximization .the methods are compared numerically to existing methods , such as the bayesian learning method of and nuclear norm based convex optimization method .we are aware that evidence approximation and expectation - maximization are unable to provide globally optimal solutions .hence we are unable to provide performance guarantees for our methods .this paper is organized as follows .we discuss the preliminaries of sparse bayesian learning in section [ sec : preliminaries ] . in section [ sec : one_sided ] we introduce one - sided precisions for matrices and derive the relations to type i estimators .two - sided precisions are introduced in section [ sec : two_sided ] and in section [ sec : algorithms ] we derive the update equations for the parameters . in section [ sec : simulations ] we numerically compare the performance of the algorithms for matrix reconstruction and matrix completion . in this section, we explain the relevance vector machine ( rvm ) and sparse bayesian learning ( sbl ) methods for a standard sparse reconstruction problem .the setup is where is the sparse vector to be reconstructed from the measurement vector and is the additive measurement noise .the approach is to model the sparse signal ^{\top} ] and are latent variables that also need to be estimated .we find the posterior if is assumed sharply peaked around , ( this is version of the so - called laplace approximation described in appendix [ appendix : laplace ] ) .assuming the knowledge of and , the map estimate is ^{\top } \leftarrow \arg \max_{\mathbf{x } } p(\mathbf{x}|\mathbf{y } , \boldsymbol{\gamma } , \beta ) = \beta \boldsymbol{\sigma } \mathbf{a}^\top \mathbf{y } \label{eq : map_x_rvm}\end{aligned}\ ] ] where . in the notion of iterative updates , we use to denote the assignment operator .the precisions and are estimated by where and .gamma distributions are typically chosen as hyper - priors for and with the form with , and .the evaluation of leads to coupled equations and are therefore solved approximately as where is the diagonal element of .the parameters of the gamma distributions for and are typically chosen to be non - informative , i.e. . 
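Collecting the expressions above, a minimal numpy sketch of the evidence-approximation iteration for the sparse problem (with non-informative hyper-priors) is given below. Variable names are illustrative and the small constants guard against division by zero; this is a pedagogical sketch rather than an optimized rvm implementation.

```python
import numpy as np

def rvm_sparse(y, A, n_iter=100, tol=1e-6):
    """Relevance-vector-machine style updates for y = A x + n with sparse x."""
    m, n = A.shape
    gamma = np.ones(n)            # precisions of the coefficients x_i
    beta = 1.0 / np.var(y)        # noise precision, initial guess
    x_hat = np.zeros(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * A.T @ A + np.diag(gamma))   # posterior covariance
        x_new = beta * Sigma @ A.T @ y                           # posterior mean (MAP)
        well_det = 1.0 - gamma * np.diag(Sigma)   # how well each coefficient is determined
        gamma = well_det / (x_new**2 + 1e-12)                    # precision update
        beta = max(m - well_det.sum(), 1e-3) / (np.linalg.norm(y - A @ x_new)**2 + 1e-12)
        if np.linalg.norm(x_new - x_hat) < tol:
            x_hat = x_new
            break
        x_hat = x_new
    return x_hat, gamma, beta
```

Coefficients whose precision gamma_i grows without bound are effectively pruned, which is what produces the sparse estimate.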
the update solutions of and are repeated iteratively until convergence . in sparse bayesian learning algorithm a standard expectation - maximization framework is used to estimate and .finally we mention that the rvm and sbl methods have connection with type i estimation .if the precisions have arbitrary prior distributions , then the marginal distribution of becomes for some function .given and for a known , the map estimate is if is a gamma prior then is a student- distribution with .one rule of thumb is that a `` more '' concave gives a more sparsity promoting model , see some example functions in figure [ fig : illustration ] . in the figure , corresponds to a laplace distributed variable , to a student- and to a generalized normal distribution .the relation between the sparsity promoting penalty function and the corresponding prior of the latent variable was discussed in , see also and .( 3,0 ) node[right ] ; ( 0,0 ) ( 0,2 ) ; ( -2.5,2.5 ) ( 0,0 ) ( 2.5,2.5 ) node[right ] ; plot ( , ln(+ 1 ) ) node[right ] ; plot ( , sqrt(abs ( ) ) ) node[right ] ;the structure of a low - rank matrix is characterized by the dominant singular vectors and singular values . like the use of precisions in for the standard sparse reconstruction problem via inculcating dominance ,we propose to model the low - rank matrix as where the components of are i.i.d . and is a positive definite random matrix ( which distribution will be described later ) .this is equivalent to denoting and , we evaluate we note that must have the special form for to hold ( as is integrated out ) . as , the resulting map estimator can be interpreted as the type i estimator .next we investigate the relation between the priors and .the motivation is that the relations are necessary for designing practical learning algorithms .from , we note that is the laplace transform of , which establishes the relation .naturally , we can find by the inverse laplace transform as follows where the integral is taken over all symmetric matrices such that where is a real matrix so that the contour path of integration is in the region of convergence of the integrand . while the laplace transform characterizes the exact relation between priors , the computation is non - trivial and often analytically intractable . in practice ,a standard approach is to use the laplace approximation where typically the mode of the distribution under approximation is found first and then a gaussian distribution is modeled around that mode .let have the form ; then the laplace approximation becomes where is the hessian of evaluated at the minima ( which is assumed to exist ) .the derivation of the laplace approximation is shown in appendix [ appendix : laplace ] . denoting and assuming that the hessian is constant ( independent of ) we get that where we absorbed the constants terms into the normalization factor of .we find that is the concave conjugate of .hence , for a given we can recover as if is concave ( which holds under the assumption that is convex ) .further , we can find from followed by solving the prior .using the concave conjugate relation , we now deal with the task of finding appropriate for two example low - rank promoting penalty functions , as follows . 1 . _ for schatten -norm : _ the schatten -norm based penalty function is .we here use a regularized schatten -norm based penalty function as where the use of helps to bring numerical stability to the algorithms in section [ sec : algorithms ] . 
for the penalty function, we find the appropriate as where .the derivation of is given in appendix [ appendix2 ] .note that , for , becomes the regularized nuclear norm based penalty function 2 ._ log - determinant penalty : _ for the log - determinant based penalty function where is a real number , we find as as , we find that the prior is wishart distributed ( wishart is a conjugate prior the distribution ) . for a scalar instead of a matrix ,the prior distribution becomes a gamma distribution as used in the standard rvm and sbl .we have discussed a left - sided precision based model in this section , but the same strategy can be easily extended to form a right - sided precision based model. then a natural question arises , which model to use ?our hypothesis is that the user choice stems from minimizing the number of variables to estimate .if the low - rank matrix is fat then the left - sided model should be used , otherwise the right - sided model .a further question arises on the prospect of developing a two sided precision based model , which is described in the next section .in this section , we propose to use precision matrices on both sides to model a random low - rank matrix .we call this the two - sided precision based model .our hypothesis is that the two - sided precision helps to enhance dominance of a few singular vectors . for low - rank modeling ,we make the following ansatz where and are positive definite random matrices . using the relation , we find to promote low - rank , we use a prior distribution .the marginal distribution of is we have noticed that the use of in evaluating does not bring out suitable connections between the resulting functions and the usual low - rank promoting functions ( such as nuclear norm , schatten s - norm and log - determinant ) .thus it is non - trivial to establish a direct connection between of and the type i estimator of . instead of a direct connection we can establish an indirect connection by an approximation . for a given and by marginalizing over , we have and hence the corresponding type i estimator cost function is a similar cost function can be found for a given by marginalizing over .we discuss the roles of and in the next section . 
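To see how two-sided precisions encode low rank, the following hedged sketch draws a matrix from the ansatz X = alpha_L^{-1/2} G alpha_R^{-1/2} with i.i.d. gaussian G, using precision matrices whose inverses have only a few non-negligible eigenvalues. The eigenvalue profile is an assumption chosen purely for illustration; the singular values of the sample then decay sharply after roughly r dominant values.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, r = 20, 30, 3    # matrix size and target (approximate) rank

def precision_with_low_rank_inverse(dim, r, large=1.0, small=1e-4):
    """Random precision whose inverse has r dominant eigenvalues (assumed profile)."""
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    inv_eigs = np.concatenate([np.full(r, large), np.full(dim - r, small)])
    cov = Q @ np.diag(inv_eigs) @ Q.T                  # alpha^{-1}
    return np.linalg.inv(cov), cov

alpha_L, cov_L = precision_with_low_rank_inverse(p, r)
alpha_R, cov_R = precision_with_low_rank_inverse(q, r)

# X = alpha_L^{-1/2} G alpha_R^{-1/2}; any square root of the covariances will do
L_left, L_right = np.linalg.cholesky(cov_L), np.linalg.cholesky(cov_R)
G = rng.standard_normal((p, q))
X = L_left @ G @ L_right.T

print(np.round(np.linalg.svd(X, compute_uv=False)[:6], 4))  # ~r singular values dominate
```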
from, we can see that the column and row vectors of are in the range spaces of and , respectively .further let us interpret this in a statistical sense with the note that a skewed precision matrix comprises of correlated components .let us denote the component of by {ij} ] and {ii} ] between and that is comparably strong to the auto - correlations {ii } \ , \boldsymbol{\alpha}_l^{-1}$ ] .we mention that a low - rank property can be established in a qualitative statistical sense by the presence of columns having strong cross - correlation .the one - sided precision based model can be seen as the two - sided model where .hence the one - sided precision based model is unable to capture information about cross - correlation between columns of .a similar argument can be made for the right sided precision based model where .considering the potential of two - sided precision matrices , the optimal inference problem is which is the map estimator for amenable priors and often connected with the type i estimator in .direct handling of the optimal inference problem is limited due to lack of analytical tractability .therefore various approximations are used to design practical algorithms which are also type ii estimators .this section is dedicated to design new type ii estimators via evidence approximation ( as used by the rvm ) and expectation - maximization ( as used in sbl ) approaches . in the evidence approximation , we iteratively update the parameters as the solution of is the standard linear minimum mean square error estimator ( lmmse ) as using a standard approach ( see equations ( 45 ) and ( 46 ) of or ( 7.88 ) of ) , the solution of can be found as the standard rvm in uses the different update rule which often improves convergence .the update rule has the benefit over of having established convergence properties . in simulations we used the update rule since it improved the estimation accuracy. finally we deal with as follows . 1 ._ for schatten -norm : _ using the schatten -norm prior gives us the update equations where and the matrices and have elements {ij } = \mathrm{tr}(\boldsymbol{\sigma}(\boldsymbol{\alpha}_r \otimes \mathbf{e}_{ij}^{(l ) } ) ) , \\ & [ \tilde{\boldsymbol{\sigma}}_r]_{ij } = \mathrm{tr}(\boldsymbol{\sigma}(\mathbf{e}_{ij}^{(r ) } \otimes \boldsymbol{\alpha}_l ) ) , \end{aligned}\ ] ] and where and are matrices with ones in position and zeros otherwise .log - determinant penalty : _ for the log - determinant prior the update equations become we see that the update rule for the log - determinant penalty can be interpreted as in the limit .the derivations of and are shown in appendix [ appendix2 ] and [ appendix3 ] .the corresponding update equations for the one - sided precision based model are obtained by fixing the other precision matrix to be the identity matrix . in the spirit of the evidence approximation based relevance vector machine , we call the developed algorithms in this section as relevance singular vector machine ( rsvm ) . 
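A skeletal implementation of this two-sided iteration is sketched below. Only the LMMSE step follows directly from the model; the precision and noise updates shown are simplified stand-ins (plain maximum-likelihood updates for a matrix-normal prior with the posterior-covariance correction terms dropped, plus a crude rebalancing step), so they are not the exact update equations derived in this paper.

```python
import numpy as np

def rsvm_like_reconstruction(y, A, p, q, n_iter=50, eps=1e-6):
    """Sketch of a two-sided precision iteration: LMMSE step + simplified updates."""
    m = A.shape[0]
    alpha_L, alpha_R = np.eye(p), np.eye(q)
    beta = 1.0 / np.var(y)
    for _ in range(n_iter):
        # LMMSE / posterior mean of vec(X) under the Kronecker-structured prior
        prior_prec = np.kron(alpha_R, alpha_L)            # fine for small p, q
        Sigma = np.linalg.inv(beta * A.T @ A + prior_prec)
        x_vec = beta * Sigma @ A.T @ y
        X = x_vec.reshape((p, q), order='F')              # vec(.) is column-major stacking
        # simplified log-determinant style precision updates (illustrative only)
        alpha_L = q * np.linalg.inv(X @ alpha_R @ X.T + eps * np.eye(p))
        alpha_R = p * np.linalg.inv(X.T @ alpha_L @ X + eps * np.eye(q))
        # crude rebalancing of the two precisions (the paper uses a related rescaling)
        c = np.sqrt(np.trace(np.linalg.inv(alpha_L)) / np.trace(np.linalg.inv(alpha_R)))
        alpha_L, alpha_R = alpha_L * c, alpha_R / c
        # simplified noise-precision update from the residual
        beta = m / (np.linalg.norm(y - A @ x_vec)**2 + eps)
    return X
```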
for schatten - s norm and log - determinant priors ,the methods are named as rsvm - sn and rsvm - ld , respectively .[ [ em ] ] em ~~ in expectation - maximization , the value of the precisions are updated in each iteration by maximizing the cost ( em help function in map estimation ) where are the parameter values from the previous iteration .the function is defined as = \text{constant } \nonumber \\ & - \frac{\beta}{2 } ||\mathbf{y - a}\mathrm{vec}(\hat{\mathbf{x } } ) ||_2 ^ 2 - \frac{1}{2 } \mathrm{tr}(\boldsymbol{\alpha}_l \hat{\mathbf{x } } \boldsymbol{\alpha}_r \hat{\mathbf{x}}^\top ) -\frac{1}{2 } \mathrm{tr}(\boldsymbol{\sigma}^{-1 } \boldsymbol{\sigma } ' ) \nonumber \\ % & \,\,\,\ , & + \frac{q}{2 } \log |\boldsymbol{\alpha}_l| + \frac{p}{2 } \log |\boldsymbol{\alpha}_r| + \frac{m}{2 } \log \beta , \label{eq : lowrank : em_help_function}\end{aligned}\ ] ] where , and denotes the expectation operator .the maximization of leads to update equations which are identical to the update equations of evidence approximation .that means that for the schatten - s norm , the maximization leads to , and , and for log - determinant penalty , the maximization leads to , and . for the noise precision ,em reproduces the update equation .the derivation of and update equations are shown in appendix [ appendix : em ] . unlike evidence approximation , em has monotonic convergence properties and hence the derived update equations are bound to improve estimation performance in iterations .we have found that in practical algorithms , there is a chance that one of the two precisions becomes large and the other small over iterations .a small precision results in numerical instability in the kronecker covariance structure . to prevent the inbalance we rescale the matrix precisions in each iteration such that the a - priori and a - posteriori squared frobeniun norm of are equal , = \mathrm{tr}(\boldsymbol{\alpha}_l^{-1 } ) \mathrm{tr}(\boldsymbol{\alpha}_r^{-1 } ) \\ & = \mathcal{e}[||\mathbf{x}||_f^2 | \boldsymbol{\alpha}_l , \boldsymbol{\alpha}_r,\beta , \mathbf{y } ] = ||\hat{\mathbf{x}}||_f^2 + \mathrm{tr}(\boldsymbol{\sigma}),\end{aligned}\ ] ] and the contribution of the precisions to the norm is equal , the rescaling makes the algorithm more stable and often improves estimation performance .in this section we numerically verify our two hypotheses , and compare the new algorithms with relevant existing algorithms .our objectives are to verify : * the hypothesis that the left - sided precision is better than the right sided precision for a fat low - rank matrix , * the hypothesis that the two - sided precision based model performs better than one - sided precision based model . *the proposed methods perform better than a nuclear - norm minimization based convex algorithm and a variational bayes algorithm . in the simulations we considered low - rank matrix reconstruction and also matrix completion as a special case due to its popularity .to compare the algorithms , the performance measure is the normalized - mean - square - error /\mathcal{e}[||\mathbf{x}||_f^2].\end{aligned}\ ] ] in experiments we varied the value of one parameter while keeping the other parameters fixed . for given parameter values, we evaluated the nmse as follows . 1 . 
for lrmr ,the random measurement matrix was generated by independently drawing the elements from and normalizing the column vectors to unit norm .for low rank matrix completion , each row of contains a 1 in a random position and zero otherwise with the constraint that the rows are linearly independent .matrices and with elements drawn from were randomly generated and the matrix was formed as .note that is of rank ( with probability ) .3 . generate the measurement , where and is chosen such that the signal - to - measurement - noise ratio is }{\mathcal{e}[||\mathbf{n}||_2 ^ 2 ] } = \frac{rpq}{m\sigma_n^2}.\end{aligned}\ ] ] 4 .estimate using competing algorithms and calculate the error .repeat steps for each measurement matrix number of times .repeat steps for the same parameter values number of times .7 . then compute the nmse by averaging . in the simulations we chose , which means that the averaging was done over 625 realizations .we normalized the column vectors of to make the smnr expression realization independent .finally we describe competing algorithms . for comparison , we used the following nuclear norm based estimator where we used as proposed in .the cvx toolbox was used to implement the estimator . for matrix completionwe also compared with the variational bayesian ( vb ) developed by babacan et .al . . in vb ,the matrix is factorized as and ( block ) sparsity inducing priors are used for the column vectors of and .the vb algorithm was developed for matrix completion ( and robust pca ) , but not for matrix reconstruction .we note that , unlike rsvm and vb , the nuclear norm estimator requires a - priori knowledge of the noise power .we also compared the algorithms to the cramr - rao bound ( crb ) from ( as we know the rank a - priori in our experimental setup ) .we mention that the crb is not always a valid lower bound in this experimental setup because all technical conditions for computing a valid crb are not always fulfilled and the estimators are not always unbiased .the choice of crb is due to absence of any other relevant theoretical bound .our first experiment is for verification of the first two hypotheses . for the experiment, we considered lrmr and fixed , , , db and varied .the results are shown in figure [ fig : single_double_rec ] where nmse is plotted against normalized measurements .we note that rsvm - sn with left precision is better than right precision .same result also hold for rsvm - ld .this verifies the first hypothesis .further we see that rsvm - sn and rsvm - ld with two sided precisions are better than respective one - sided precisions .this result verifies the second hypothesis . in the experiments we used for rsvm - sn as it was found to be the best ( empirically ) .henceforth we fix for rsvm - sn . for low - rank matrix reconstruction . ]the second experiment considers comparison with nuclear - norm based algorithm and the crb for lrmr .the objective is robustness study by varying number of measurements and measurement noise power .we used , and . in figure[ fig : alpha_snr_rec ] ( a ) we show the performance against varying ; the smnr = 20 db was fixed .the performance improvement of rsvm - sn is more pronounced over the nuclear - norm based algorithm in the low measurement region .now we fix and vary the smnr .the results are shown in figure [ fig : alpha_snr_rec ] ( b ) which confirms robustness against measurement noise . andsmnr for low - rank matrix reconstruction .( a ) smnr = 20 db and is varied .( b ) and smnr is varied . 
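The experimental pipeline in steps 1-7 above condenses into a short script. The sketch below generates one realization of the low-rank matrix reconstruction experiment and returns the squared errors that are later averaged into the nmse; the reconstruction routine is left as a placeholder to be swapped for any of the compared estimators.

```python
import numpy as np

def one_lrmr_realization(p, q, r, m, smnr_db, reconstruct, rng):
    """One draw of the LRMR experiment: build A, X, y and return squared errors."""
    # measurement matrix with i.i.d. gaussian entries and unit-norm columns
    A = rng.standard_normal((m, p * q))
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    # rank-r matrix X = X_L X_R^T with N(0,1) factors
    X = rng.standard_normal((p, r)) @ rng.standard_normal((q, r)).T
    x_vec = X.flatten(order='F')
    # noise power set from SMNR = E||X||_F^2 / E||n||_2^2 = r*p*q/(m*sigma_n^2)
    smnr = 10.0 ** (smnr_db / 10.0)
    sigma_n = np.sqrt(r * p * q / (m * smnr))
    y = A @ x_vec + sigma_n * rng.standard_normal(m)
    X_hat = reconstruct(y, A, p, q)
    return np.linalg.norm(X_hat - X, 'fro')**2, np.linalg.norm(X, 'fro')**2

# NMSE over repeated realizations; the zero estimator is a placeholder for illustration
rng = np.random.default_rng(0)
placeholder = lambda y, A, p, q: np.zeros((p, q))
errors = [one_lrmr_realization(10, 10, 2, 60, 20.0, placeholder, rng) for _ in range(25)]
num, den = map(sum, zip(*errors))
print("NMSE =", num / den)
```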
]next we deal with matrix completion where the measurement matrix has a special structure and considered to be inferior to hold information about than the same dimensional random measurement matrix used in lrmr .therefore matrix completion requires more measurements and higher smnr .we performed similar experiments as in our second experiment and the results are shown in figure [ fig : alpha_snr_comp ] . in the experiments the performance of the vb algorithm is included .it can be seen that rsvm - sn is typically better than the other algorithms .we find that the the vb algorithm is pessimistic . and smnr for low - rank matrix completion .( a ) smnr = 20 db and is varied .( b ) and smnr is varied . ] finally in our last experiment we investigated the vb algorithm to find conditions for its improvement and compared it with rsvm - sn . for this experiment , we fixed , , and smnr = 20 db , and varied .the results are shown in figure [ fig : q_completion ] and we see that vb provides good performance when .the result may be attributed to an aspect that vb is highly prone to a large number of model parameters which arises in case is away from a square matrix . for low - rank matrix completion .in this paper we developed bayesian learning algorithms for low - rank matrix reconstruction .the framework relates low - rank penalty functions ( type i estimators ) to the latent variable models ( type ii estimators ) with either left- or right - sided precisions through the matrix laplace transform and the concave conjugate formula .the model was further extended to the two - sided precision based model .using evidence approximation and expectation maximization , we derived the update equations for the parameters .the resulting algorithm was named the relevance singular vector machine ( rsvm ) due to its similarity with the relevance vector machine for sparse vectors .especially we derived the update equations for the estimators corresponding to the log - determinant penalty and the schatten -norm penalty , we named the algorithms rsvm - ld and rsvm - sn , respectively . through simulations , we showed that the two - sided precision based model performs better than the one - sided model for matrix reconstruction .the algorithm also outperformed a nuclear - norm based estimator , even though the nuclear - norm based estimator knew the noise power .the proposed methods also outperformed a variational bayes method for matrix completion when the matrix is not square .the laplace approximation is an approximation of the integral where the integral is over .the function is approximated by a second order polynomial around its minima as where is the hessian of at . the term linear in vanishes and at since we expand around a minima . with this approximation ,the integral becomes in , the integral is given by } d \boldsymbol{\alpha}.\end{aligned}\ ] ] set , where .let denote the minima of and the hessian at .assuming that and are `` large '' in the sense that the integral over can be approximated by the integral over we find that where . the em help function is given by = c + \frac{m}{2 } \log \ , \beta \\ & - \frac{\beta}{2 } \mathcal{e}[||\mathbf{y - a}\mathrm{vec}(\mathbf{x})||_2 ^ 2 - \frac{1}{2 } \mathcal{e}[\mathrm{tr}(\boldsymbol{\alpha}_l \mathbf{x } \boldsymbol{\alpha}_r \mathbf{x}^\top ) ] \\ & + \frac{q}{2 } \log |\boldsymbol{\alpha}_l| + \frac{p}{2 } \log |\boldsymbol{\alpha}_r|,\end{aligned}\ ] ] where is a constant . 
using that = ||\mathbf{y}||_2 ^ 2 - 2\mathbf{y}^\top \mathbf{a } \mathrm{vec}(\hat{\mathbf{x } } ) \\ & + \mathrm{tr}(\mathbf{a^\top a}(\mathrm{vec}(\hat{\mathbf{x } } ) \mathrm{vec}(\hat{\mathbf{x}})^\top + \boldsymbol{\sigma } ' ) ) \\ & = ||\mathbf{y - a}\mathrm{vec}(\hat{\mathbf{x}})||_2 ^ 2 + \mathrm{tr}(\mathbf{a}^\top\mathbf{a}\boldsymbol{\sigma } ' ) , \end{aligned}\ ] ] and \\ & = \mathrm{tr}((\boldsymbol{\alpha}_r \otimes \boldsymbol{\alpha}_l)(\mathrm{vec}(\hat{\mathbf{x } } ) \mathrm{vec}(\hat{\mathbf{x}})^\top + \boldsymbol{\sigma } ' ) ) \\ & = \mathrm{tr}(\boldsymbol{\alpha}_l \hat{\mathbf{x } } \boldsymbol{\alpha}_r \hat{\mathbf{x}}^\top ) + \mathrm{tr}((\boldsymbol{\alpha}_r \otimes \boldsymbol{\alpha}_l ) \boldsymbol{\sigma}'),\end{aligned}\ ] ] we recover the expression for the em help function .we here set to keep the derivation more general .the regularized schatten -norm penalty is given by for the concave conjugate formula we find that the minimum over occurs when solving for gives us that which results in .the log - determinant penalty is given by for the concave conjugate formula we find that the minimum over occurs when solving for gives by removing the constants we recover . using , we find that the minimum of with respect to for the log - determinant penalty occurs when solving for gives us for . the derivation of the update equation for is found in a similar way .k. yu , j. lafferty , s. zhu and y. gong , `` large - scale collaborative prediction using a nonparametric random effects model , '' proceedings of the 26th annual international conference on machine learning .acm , 2009 .m. tipping and a. faul , `` fast marginal likelihood maximisation for sparse bayesian models , '' proceedings of the ninth international workshop on artificial intelligence and statistics , vol . 1 , no . 3 , 2003 . z. zhang and b.d . rao , `` extension of sbl algorithms for the recovery of block sparse signals with intra - block correlation , '' ieee transactions on signal processing , vol . 61 , no . 8 , pp .2009 - 2015 , april 2013 .z. zhang and b.d .rao , `` sparse signal recovery with temporally correlated source vectors using sparse bayesian learning , '' ieee journal of selected topics in signal processing , vol .912 - 926 , sept . 2011 .m. sundin , s. chatterjee , m. jansson and c.r .rojas , `` relevance singular vector machine for low - rank matrix sensing '' , international conference on signal processing and communications ( spcom ) , indian institute of science , bangalore , india , july 2014 .avaliable online from http://arxiv.org/abs/1407.0013 .
we develop latent variable models for bayesian learning based low - rank matrix completion and reconstruction from linear measurements . for under - determined systems , the developed methods are shown to reconstruct low - rank matrices when neither the rank nor the noise power is known a - priori . we derive relations between the latent variable models and several low - rank promoting penalty functions . the relations justify the use of kronecker structured covariance matrices in a gaussian based prior . in the methods , we use evidence approximation and expectation - maximization to learn the model parameters . the performance of the methods is evaluated through extensive numerical simulations .
many dynamic systems generate outputs with fluctuations characterized by -like scaling of the power spectra , , where is the frequency .these fluctuations are often associated with nonequilibrium dynamic systems possessing multiple degrees of freedom , rather than being the output of a classic `` homeostatic '' process .it is generally assumed that the presence of many components interacting over a wide range of time or space scales could be the reason for the spectrum in the fluctuations .fluctuations exhibiting -like behavior are often termed `` complex '' , since they obey a scaling law indicating a hierarchical fractal organization of their frequency ( time scale ) components rather than being dominated by a single frequency . behavior is common in a variety of physical , biological and social systems .the ubiquity of the scale - invariant phenomenon has triggered in recent years the development of generic mechanisms describing complex systems , independent of their particular context , in order to understand the `` unifying '' features of these systems . to answer the question whether fluctuations in signals generated by integrated physiological systems exhibit the same level of complexity , we analyze and compare the time series generated by two physiologic control systems under multiple - component integrated neural control the human gait and the human heartbeat .we chose these two particular examples because human gait and heartbeat control share certain fundamental properties , e.g. , both originate in oscillatory centers . in the case of the heart , the pacemaker is located in the sinus node in the right atrium . for gait , pacemakers called central pattern generators are thought to be located in the spinal cord . however , these two systems are distinct , suggesting possible dynamical differences in their output .for example , heartbeat fluctuations are primarily controlled by the involuntary ( autonomic ) nervous system .in contrast , while the spontaneous walking rhythm is an automatic - like process , voluntary inputs play a major role .further , gait control resides in the basal ganglia and related motor areas of the central nervous system , while the heartbeat is controlled by the sympathetic and parasympathetic branches of the autonomic nervous system .previous studies show comparable two - point linear correlations and power spectra in heart rate and human gait , suggesting that differences in physiologic control may not be manifested in beat - to - beat and interstride interval fluctuations .recent studies focusing on higher order correlations and nonlinear properties show that the human heartbeat exhibits not only fractal but also multifractal properties .since multifractal signals require many scaling indices to fully characterize their scaling properties , they may be considered to be more complex than those characterized by a single fractal dimension , such as classical noise .although the origins of the multifractal features in heartbeat dynamics are not yet understood , there is evidence that they relate to the complex intrinsic neuroautonomic regulation of the heart .human gait , e.g. , free unconstrained walking , is also a physiological process regulated by complex hierarchical feedback mechanisms involving supra - spinal inputs .moreover , recent findings indicate that the scaling properties of gait fluctuations relate to neural centers on the higher supra - spinal level rather than to lower motor neurons or environmental inputs . 
thus it would be natural to hypothesize that the fluctuations in healthy unconstrained human gait exhibit similar fractal and multifractal features , and that human gait dynamics may belong to the same `` complexity class '' as cardiac dynamics .we employ two techniques magnitude and sign decomposition analysis and multifractal analysis to probe long - term nonlinear features , and to compare the levels of complexity in heartbeat and interstride interval fluctuations . to this end, we analyze interstride interval time series from 10 young healthy men ( mean age 22 years ) with no history of neuromascular disorders .subjects walked continuously for 1 hour at a self - selected usual pace on level ground around a flat , obstacle - free , approximately oval , 400 m long path .the interstride interval was measured using a ground reaction force sensor ultra - thin force - sensitive switches were taped inside one shoe and data were recorded on an ambulatory recorder using a previously validated method .we compare the results of our gait analysis with results we have previously obtained from 6-hour long heartbeat interval records from 18 healthy individuals ( 13 female and 5 male , mean age 34 years ) during daily activity ( 12:00 to 18:00 ) .as described below , we systematically compare the scaling properties of the fluctuations in human gait with those in the human heartbeat using power spectral analysis , detrended fluctuation analysis ( dfa ) , magnitude and sign decomposition analysis , and wavelet - based multifractal analysis , and we quantify linear and nonlinear features in the data over a range of time scales .the dfa method was developed because conventional fluctuation analyses , such as power spectral , r / s and hurst analysis can not be reliably used to study nonstationary data .one advantage of the dfa method is that it allows the detection of long - range power - law correlations in noisy signals with embedded polynomial trends that can mask the true correlations in the fluctuations of a signal .the dfa method has been successfully applied to a wide range of research fields in physics , biology , and physiology .the dfa method involves the following steps : ( _ i _ ) given the original signal , where and is the length of the signal , we first form the profile function $ ] , where is the mean. one can consider the profile as the position of a random walk in one dimension after steps .( _ ii _ ) we divide the profile into non - overlapping segments of equal length . ( _ iii _ ) in each segment of length ,we fit , using a polynomial function of order which represents the polynomial _ trend _ in that segment .the coordinate of the fit line in each segment is denoted by .since we use a polynomial fit of order , we denote the algorithm as dfa- .( _ iv _ ) the profile function is detrended by subtracting the local trend in each segment of length . in dfa- , trends of order in the original signal are eliminated .thus , comparison of the results for different orders of dfa- allows us to estimate the type of polynomial trends in the time series .( _ v _ ) for a given segment of length , the root - mean - square ( r.m.s . )fluctuation for this integrated and detrended signal is calculated : ^ 2}. 
\label{f2}\ ] ] ( _ vi _ ) since we are interested in how depends on the segment length , the above computation is repeated for a broad range of scales .a power - law relation between the average root - mean - square fluctuation function and the segment length indicates the presence of scaling : thus , the dfa method can quantify the temporal organization of the fluctuations in a given signal by a single scaling exponent a self - similarity parameter which represents the long - range power - law correlation properties of the signal .if , there is no correlation and the signal is uncorrelated ( white noise ) ; if , the signal is anti - correlated ; if , the signal is correlated .the larger the value of , the stronger the correlations in the signal . for stationary signals with scale - invariant temporal organization, is related to the fourier power spectrum and to the autocorrelation function . for such signals , \label{f4}\ ] ] and is the dfa scaling exponent ( eq . [ f3 ] ) .thus signals with scaling in the power spectrum ( i.e. ) are characterized by dfa exponent . if , the correlation exponent describes the decay of the autocorrelation function : .\label{f5}\ ] ] fluctuations in the dynamical output of physical and physiological systems can be characterized by their magnitude ( absolute value ) and their direction ( sign ) .these two quantities reflect the underlying interactions in a given system the resulting `` force '' of these interactions at each moment determines the magnitude and the direction of the fluctuations .recent studies have shown that signals with identical long - range correlations can differ in the time organization of the magnitude and sign of the fluctuations . to assess the information contained in these fluctuations , the magnitude and sign decomposition method was introduced .this method involves the following steps : ( _ i _ ) given the original signal we generate the increment series , .( _ ii _ ) we decompose the increment series into a magnitude series and a sign series .( _ iii _ ) to avoid artificial trends we subtract from the magnitude and sign series their average .( _ iv _ ) we then integrate both magnitude and sign series , because of limitations in the accuracy of the detrended fluctuation analysis method ( dfa ) for estimating the scaling exponents of anticorrelated signals ( ) .( _ v _ ) we perform a scaling analysis using 2nd order detrended fluctuation analysis ( dfa-2 ) on the integrated magnitude and sign series .( _ vi _ ) to obtain the scaling exponents for the magnitude and sign series we measure the slope of on a log - log plot , where is the root - mean - square fluctuation function obtained using dfa-2 , and is the scale .fluctuations following an identical scaling law can exhibit different types of correlations for the magnitude and the sign e.g. 
, a signal with anticorrelated fluctuations can exhibit positive correlations in the magnitude .positive correlations in the magnitude series indicate that an increment with large magnitude is more likely to be followed by an increment with large magnitude .anticorrelations in the sign series indicate that a positive increment in the original signal is more likely to be followed by a negative increment .further , positive power - law correlations in the magnitude series indicate the presence of long - term _ nonlinear _ features in the original signal , and relate to the width of multifractal spectrum .in contrast the sign series relates to the _ linear _ properties of the original signal .the magnitude and sign decomposition method is suitable to probe nonlinear properties in short nonstationary signals , such as 1-hour interstride interval time series .previously , analyses of the fractal properties of physiologic fluctuations revealed that the behavior of healthy , free - running physiologic systems may often be characterized as -like .monofractal signals ( such as classical noise ) are homogeneous , i.e. , they have the same scaling properties throughout the entire signal .monofractal signals can therefore be indexed by a single exponent : the hurst exponent . on the other hand ,multifractal signals are nonlinear and inhomogeneous with local properties changing with time .multifractal signals can be decomposed into many subsets characterized by different _ local _hurst exponents , which quantify the local singular behavior and relate to the local scaling of the time series .thus , multifractal signals require many exponents to fully characterize their properties .the multifractal approach , a concept introduced in the context of multi - affine functions , has the potential to describe a wide class of signals more complex than those characterized by a single fractal dimension . the singular behavior of a signal at time for is characterized by the local hurst exponent where and is a polynomial fit of order .to avoid an _ad hoc _ choice of the range of time scales over which the local hurst exponent is estimated , and to filter out possible polynomial trends in the data which can mask local singularities , we implement a wavelet - based algorithm .wavelets are designed to probe time series over a broad range of scales and have recently been successfully used in the analysis of physiological signals . in particular, recent studies have shown that the wavelet decomposition reveals a robust self - similar hierarchical organization in heartbeat fluctuations , with bifurcations propagating from large to small scales . to quantify hierarchical cascades in gait dynamics and to avoid inherent numerical instability in the estimate of the local hurst exponent, we employ a `` mean - field '' approach a concept introduced in statistical physics which allows us to probe the collective behavior of local singularities throughout an entire signal and over a broad range of time scales .we study the multifractal properties of interstride interval time series by applying the _ wavelet transform modulus maxima _ ( wtmm ) method that has been proposed as a mean - field generalized multifractal formalism for fractal signals .we first obtain the wavelet coefficient at time from the continuous wavelet transform defined as : where is the analyzed time series , is the analyzing wavelet function , is the wavelet scale ( i.e. , time scale of the analysis ) , and is the number of data points in the time series . 
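A minimal numpy sketch of this wavelet-coefficient computation is shown below, using the third derivative of a gaussian as the analyzing wavelet (the choice adopted for the analysis here). The overall normalization of the transform is an assumption of the sketch; normalization conventions vary and only the relative magnitudes of the coefficients matter for locating the modulus maxima.

```python
import numpy as np

def gauss3_wavelet(t):
    """Third derivative of a gaussian, up to a constant factor."""
    return (3.0 * t - t**3) * np.exp(-t**2 / 2.0)

def cwt_coefficients(x, scale):
    """Wavelet coefficients at one scale: sum_t x(t)*psi((t - t0)/scale), for every shift t0."""
    n = len(x)
    t = np.arange(n)
    W = np.empty(n)
    for t0 in range(n):
        W[t0] = np.sum(x * gauss3_wavelet((t - t0) / scale)) / n   # 1/N normalization assumed
    return W

# example: coefficients of a toy series at a single scale
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1024))      # random-walk-like test series
print(cwt_coefficients(x, scale=16.0)[:5])
```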
for we use the third derivative of the gaussian , thus filtering out up to second order polynomial trends in the data .we then choose the modulus of the wavelet coefficients at each point in the time series for a fixed wavelet scale .next , we estimate the partition function where the sum is only over the maxima values of , and the powers take on real values . by not summing over the entire set of wavelet transform coefficients along the time series at a given scale butonly over the wavelet transform modulus maxima , we focus on the fractal structure of the temporal organization of the singularities in the signal .we repeat the procedure for different values of the wavelet scale to estimate the scaling behavior in analogy with what occurs in scale - free physical systems , in which phenomena controlled by the same mechanism over multiple time scales are characterized by scale - independent measures , we assume that the scale - independent measures , , depend only on the underlying mechanism controlling the system . thus by studying the scaling behavior of we may obtain information about the self - similar ( fractal ) properties of the mechanism underlying gait control . for certain values of the powers , the exponents have familiar meanings . in particular , is related to the scaling exponent of the fourier power spectra , , as . for positive , reflects the scaling of the large fluctuations and strong singularities in the signal , while for negative , reflects the scaling of the small fluctuations and weak singularities .thus , the scaling exponents can reveal different aspects of the underlying dynamics . in the framework of this wavelet - based multifractal formalism , is the legendre transform of the singularity spectrum defined as the hausdorff dimension of the set of points in the signal where the local hurst exponent is .homogeneous monofractal signals i.e. , signals with a single local hurst exponent are characterized by linear spectrum : where is the global hurst exponent . on the contrary , a nonlinear curve is the signature of nonhomogeneous signals that display multifractal properties i.e ., is a varying quantity that depends upon .in fig . [ data ]we show two example time series : ( i ) an interstride interval time series from a typical healthy subject during hour ( steps ) of unconstrained normal walking on a level , obstacle - free surface ( fig .[ data]a ) ; ( ii ) consecutive heartbeat intervals from hour ( beats ) record of a typical healthy subject during daily activity ( fig .[ data]b ) .both time series exhibit irregular fluctuations and nonstationary behavior characterized by different local trends ; in fact it is difficult to differentiate between the two time series by visual inspection . = 0.95 we first examine the two - point correlations and scale - invariant behavior of the time series shown in fig . [ data ] .power spectra of the gait and heartbeat time series ( fig . [ correlations]a ) indicate that both processes are described by a power - law relation over more than 2 decades , with exponent .this scaling behavior indicates self - similar ( fractal ) properties of the data suggestive of an identical level of complexity as quantified by this linear measure .we obtain similar results for the interstride interval times series from all subjects in our gait database : ( group mean std . dev . 
) in agreement with previous results .next , to quantify the degree of correlation in the interstride and heartbeat fluctuations we apply the dfa method , which also provides a linear measure : plots of the root - mean - square fluctuation function _ vs. _ time scale ( measured in stride or beat number ) from a second - order dfa analysis ( dfa-2 ) indicate the presence of long - range power - law correlations in both gait and heartbeat fluctuations ( fig .[ correlations]b ) .the scaling exponent for the heartbeat signal is very close to the exponent for the interstride interval signal , estimated over the scaling range .we obtain similar results for the remaining subjects : ( group mean std . dev . ) for the gait data and for the heartbeat data , in agreement with .the results of both power spectral analysis and the dfa method indicate that gait and heartbeat time series have similar scale - invariant properties suggesting parallels in the underlying mechanisms of neural regulation .= 0.63 = 0.63 = 0.63 to probe for long - term nonlinear features in the dynamics of interstride intervals we employ the magnitude and sign decomposition analysis .previous studies have demonstrated that information about the nonlinear properties of heartbeat dynamics can be quantified by long - range power - law correlations in the magnitude of the increments in heartbeat intervals .further , correlations in the magnitude are associated with nonlinear features in the underlying dynamics , while linear signals are characterized by an absence of correlations ( random behavior ) in the magnitude series . to quantify the correlations in the magnitude of the interstride incrementswe apply the dfa-2 method to the data displayed in fig .our results show that the magnitude series of the interstride increments exhibits close to random behavior with correlation exponent ( denoted by ( ) in fig .[ correlations]c ) .in contrast , the magnitude series of the heartbeat increments ( fig . [ data]b ) exhibits strong positive correlations over more than two decades characterized by exponent ( denoted by ( ) in fig .[ correlations]c ) . a surrogate test eliminating the nonlinearity in the heartbeat time series by randomizing the fourier phases but preserving the power spectrum leads to random behavior ( ) in the magnitude series .thus the striking difference in the magnitude correlations of gait and heartbeat dynamics ( both of which are under multilevel neural control ) raises the possibility that these two physiologic processes belong to different classes of complexity whereby the neural regulation of the heartbeat is inherently more nonlinear , over a range of time scales , than the neural mechanism of gait control .our observation of a low degree of nonlinearity in the gait time series is supported by the remaining subjects in the group : over time scales , we obtain exponent ( group mean std . dev . ) for the gait time series , which is significantly lower than the corresponding exponent obtained for the heartbeat data ( , by the student s t - test ) . 
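For completeness, a compact numerical sketch of the dfa-2 and magnitude/sign pipeline used to obtain these exponents is given below. It follows the steps listed earlier; the scale grid, the fitting range, and the white-noise surrogate series are assumptions of the sketch, and real interstride or heartbeat interval data would be loaded in their place.

```python
import numpy as np

def dfa(x, scales, order=2):
    """Root-mean-square fluctuation F(n) for each scale n with polynomial detrending."""
    y = np.cumsum(x - np.mean(x))                       # profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        sq = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

def scaling_exponent(x, scales, order=2):
    F = dfa(x, scales, order)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope = alpha

def magnitude_sign_exponents(intervals, scales):
    inc = np.diff(intervals)
    mag = np.abs(inc) - np.mean(np.abs(inc))
    sgn = np.sign(inc) - np.mean(np.sign(inc))
    # integrate before DFA-2 and subtract 1 from the slope to compensate
    # for the extra integration (equivalent to fitting F(n)/n)
    return (scaling_exponent(np.cumsum(mag), scales) - 1.0,
            scaling_exponent(np.cumsum(sgn), scales) - 1.0)

# toy usage with a white-noise surrogate standing in for an interstride interval series
rng = np.random.default_rng(0)
series = 1.05 + 0.05 * rng.standard_normal(4000)
scales = np.unique(np.logspace(np.log10(6), np.log10(400), 20).astype(int))
print("alpha:", scaling_exponent(series, scales))
print("(alpha_mag, alpha_sign):", magnitude_sign_exponents(series, scales))
```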
to further test the long - term nonlinear features in gait dynamics we study the multifractal properties of interstride time series .we apply the wavelet transform modulus maxima ( wtmm ) method a `` mean - field '' type approach to quantify the fractal organization of singularities in the signal .we characterize the multifractal properties of a signal over a broad range of time scales by the multifractal spectrum .= 0.66 = 0.66 = 0.66 we first examine the time series shown in fig . [for the gait time series , we obtain a spectrum which is practically a linear function of the moment , suggesting that the gait dynamics exhibit _ monofractal _ properties ( fig . [ multifractal]a ) .this is in contrast with the nonlinear spectrum for the heartbeat signal ( fig .[ multifractal]a ) which is indicative of multifractal behavior .further when analyzing the remaining interstride interval recordings we find close to linear spectra for all subjects in the gait group ( fig .[ multifractal]b ) . calculating the groupaveraged spectra we find clear differences : multifractal behavior for the heartbeat dynamics and practically monofractal behavior for the gait dynamics ( fig .[ multifractal]c ) .specifically we find significant differences between the gait and heartbeat spectra for negative values of the moment ; for positive values of , the scaling exponents take on similar values .this is in agreement with the similarity in power spectral and dfa scaling exponents for gait and heartbeat data , which correspond to ( fig .[ correlations ] ) .however , the heartbeart spectrum is visibly more curved for all moments compared with the gait spectrum which may be approximately fit by a straight line , indicative of a low degree of nonlinearity in the interstride time series .thus our results show consistent differences between the nonlinear and multifractal properties of gait and heartbeat time series .previous studies have shown that reducing the level of physical activity under a constant routine protocol does not change the multifractal features of heartbeat dynamics , while blocking the sympathetic or parasympathetic tone of the neuro - autonomic regulation of the heart dramatically changes the multifractal spectrum , thus suggesting that the observed features in cardiac dynamics arise from the intrinsic mechanisms of control .similarly , by eliminating polynomial trends in the interstride interval time series corresponding to changes in the gait pace using dfa and wavelet analyses , we find scaling features which remain invariant among individuals . 
therefore ,since different individuals experience different extrinsic factors , the observed lower degree of nonlinearity as measured by the magnitude scaling exponent and the close - to - monofractal behaviour characterized by practically linear spectrum appear to be a result of the intrinsic mechanisms of gait regulation .these observations suggest that while both gait and heartbeat dynamics arise from layers of neural control with multiple component interactions , and exhibit temporal organization over multiple time scales , they nonetheless belong to different complexity classes .while both gait and heartbeat dynamics may be a result of competing inputs interacting through multiple feedback loops , differences in the nature of these interactions may be imprinted in their nonlinear and multifractal features : our findings suggest that while these interactions in heartbeat dynamics are of a nonlinear character and are represented by fourier phase interactions encoded in the magnitude scaling and the multifractal spectrum , feedback mechanisms of gait dynamics lead to decreased interactions among the fourier phases .these new findings are supported by our analysis of a second group of gait subjects .we analyze interstride intervals from an additional group of 7 young healthy subjects ( 6 male , 1 female , mean age 28 years ) recorded using a portable accelerometer .subjects walked continuously for hour at a self - selected pace on an unconstrained outdoor walking track in a park environment allowing for slight changes in elevation and obstacles related to pedestrian traffic .the stride interval time series in this case were obtained from peak - to - peak intervals in the accelerometer signal output in the direction of the subjects vertical axis .compatibility of the ground reaction force sensor used for the gait recordings of the first group with the accelerometer device , and strong correlation between outputs of the two devices was reported in ref . .we find that for this second group the two - point correlation exponent , as measured by the dfa method ( group mean std . dev . )is similar to the group average exponent of the first gait group ( ) and also the heartbeat data ( ) .in contrast , we find again a significantly lower degree of nonlinearity , as measured by the magnitude exponent and the spectrum , compared with heartbeat dynamics ( , by the student s t - test ) ( fig .[ correlations]c and fig . [ multifractal]c ) .on the other hand , the group averaged value of is slightly higher compared with the first gait group ( ) , and this is associated with slightly stronger curvature in the spectrum for the second gait group .this may be attributed to the fact that the second group walked in a natural park environment where obstacles , changes in elevation and pedestrian traffic may possibly require the activation of higher neural centers .the present results are related to a physiologically - based model of gait control where specific interactions between neural centers are considered . 
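The stride intervals of the second group were obtained from peak-to-peak intervals in the vertical accelerometer output. A minimal SciPy sketch of this step is shown below; the sampling frequency, file name, and peak-detection thresholds are placeholders rather than the authors' recorded values.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                     # sampling frequency in Hz (placeholder)
acc_vertical = np.loadtxt("acc_vertical.txt")  # hypothetical recording of vertical acceleration

# Heel strikes appear as clear peaks in the vertical acceleration; `height` and
# `distance` are illustrative thresholds, not the values used in the study.
peaks, _ = find_peaks(acc_vertical, height=1.0, distance=int(0.4 * fs))

stride_intervals = np.diff(peaks) / fs         # peak-to-peak intervals in seconds
```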
in this modela lower degree of nonlinearity ( and close - to - linear monofractal spectrum ) reflects increased connectivity between neural centers , typically associated with maturation of gait dynamics in adults .the present results are also consistent with studies that used a different approach to quantify the dynamics of giat , based on estimates of the local hurst exponents , and reported only weak multifractality in gait dynamics .in summary , we find that while the fluctuations in the output of both gait and heartbeat processes are characterized by similar two - point correlation properties and -like spectra , they belong to different classes of complexity human gait fluctuations exhibit linear and close to monofractal properties characterized by a single scaling exponent , while heartbeat fluctuations exhibit nonlinear multifractal properties which in physical systems have been connected with turbulence and related multiscale phenomena .these findings are of interest because they underscore the limitations of traditional two - point correlation methods in characterizing physiologic and physical time series .in addition , these results suggest that feedback on multiple time scales is not sufficient to explain different types of scaling and scale - invariance , and highlight the need for the development of new models that could account for the scale - invariant outputs of different types of feedback systems .we thank y. ashkenazy , a.l .goldberger , z. chen , k. hu and a. yuen for helpful discussions and technical assistance .this work was supported by grants from nih / national center for research resources ( p41 rr13622 ) , nsf , us - israel binational science foundation and mitsubishi chemical co. , yokahama , japan .the accelerometer device we used ( , weight ) was developed by sharp co. the device , attached to subjects back , measures the vertical and anteroposterier acceleration profile during walking .the output signals are digitized at a sampling frequency of , , and are stored on a memory card .when the subjects heel strikes the ground a clear peak in the acceleration along the vertical axis is recorded .the positions of these peaks in time are also controlled and verified independently through matching steepest points in the anteroposterier acceleration signal output .ivanov , l.a.n amaral , a.l .goldberger , h.e .stanley , europhys . lett . * 43 * , 363 ( 1998 ) .lin , r.l .hughson , phys .lett . * 86 * , 1650 ( 2001 ) .mcclintock , a. stefanovska , physica a * 314 * , 69 ( 2002 ) .r. yulmetyev , p. hanggi , f. gafarov , phys .e * 65 * , 046107 ( 2002 ) .
Many physical and physiological signals exhibit complex scale-invariant features characterized by scaling and long-range power-law correlations, suggesting a possibly common control mechanism. Specifically, it has been suggested that dynamical processes influenced by inputs and feedback on multiple time scales may be sufficient to give rise to scaling and scale invariance. Two examples of physiologic signals that are the output of hierarchical, multi-scale physiologic systems under neural control are the human heartbeat and human gait. Here we show that while both cardiac interbeat interval and gait interstride interval time series under healthy conditions have comparable scaling, they still may belong to different complexity classes. Our analysis of the magnitude series correlations and multifractal scaling exponents of the fluctuations in these two signals demonstrates that, in contrast with the nonlinear multifractal behavior found in healthy heartbeat dynamics, gait time series exhibit less complex, close to monofractal behavior and a low degree of nonlinearity. These findings underscore the limitations of traditional two-point correlation methods in fully characterizing physiologic and physical dynamics. In addition, these results suggest that different mechanisms of control may be responsible for varying levels of complexity observed in physiological systems under neural regulation and in physical systems that possess similar scaling.
key establishment protocols are one of the most important cryptographic primitives that have been used in our society . the first unauthenticated key agreement protocol based on asymmetric cryptographic techniques were proposed by diffie and hellman . since this seminal result ,many authenticated key agreement protocols have been proposed and the security properties of key agreement protocols have been extensively studied . in order to implement these authenticated key agreement protocols , one needs to get the corresponding party s authenticated public key .for example , in order for alice and bob to execute the nist recommended mqv key agreement protocol , alice needs to get an authenticated public key for bob and bob needs to get an authenticated public key for alice first , where and are alice and bob s private keys respectively .one potential approach for implementing these schemes is to deploy a public key infrastructure ( pki ) system , which has proven to be difficult .thus it is preferred to design easy to deploy authenticated key agreement systems .identity based key agreement system is such an example . in 1984 , shamir proposed identity based cryptosystems where user s identities ( such as email address , phone numbers , office locations , etc . )could be used as the public keys .several identity based key agreement protocols ( see , e.g. , ) have been proposed since then .most of them are not practical or do not have all required security properties .joux proposed a one - round tripartite non - identity based key agreement protocol using weil pairing .then feasible identity based encryption schemes based on weil or tate paring were introduced by sakai , ohgishi , and kasahara and later by boneh and franklin independently .based on weil and tate pairing techniques , smart , chen - kudla , scott , shim , and mccullagh - barreto designed identity based and authenticated key agreement protocols .chen - kudla showed that smart s protocol is not secure in several aspects .cheng et al . pointed out that chen - kudla s protocol is not secure againt unknown key share attacks .scott s protocol is not secure against man in the middle attacks .sun and hsieh showed that shim s protocol is insecure against key compromise impersonation attacks or man in the middle attacks .choo showed that mccullagh and barreto s protocol is insecure against key revealing attacks .mccullagh and barreto revised their protocol .but the revised protocol does not achieve weak perfect forward secrecy property . in this paper, we propose an efficient identity based and authenticated key agreement protocol achieving all security properties that an authenticated key agreement protocol should have .the advantage of identity based key agreement is that non - pki system is required .the only prerequisite for executing identity based key agreement protocols is the deployment of authenticated system - wide parameters .thus , it is easy to implement these protocols in relatively closed environments such as government organizations and commercial entities .the remainder of this paper is organized as follows . in [ bilinear ]we briefly describe bilinear maps , bilinear diffie - hellman problem , and its variants . in [ idakprotocol ] , we describe our identity based and authenticated key agreement protocol idak . [ securitymodel ] describes a security model for identity based key agreement . in section [ securityproof ] , we prove the security of idak key agreement protocol . 
in sections [ pfsidak ] and [ kcridak ] , we discuss key compromise impersonation resilience and perfect forward secrecy properties of idak key agreement protocol .in the following , we briefly describe the bilinear maps and bilinear map groups . the details could be found in joux and boneh and franklin .1 . and are two ( multiplicative ) cyclic groups of prime order .2 . is a generator of .3 . is a bilinear map .a bilinear map is a map with the following properties : 1 .bilinear : for all , and , we have .non - degenerate : .we say that is a bilinear group if the group action in can be computed efficiently and there exists a group and an efficiently computable bilinear map as above .concrete examples of bilinear groups are given in .for convenience , throughout the paper , we view both and as multiplicative groups though the concrete implementation of could be additive elliptic curve groups . throughout the paper _ efficient _means probabilistic polynomial - time , _ negligible _ refers to a function which is smaller than for all and sufficiently large , and _ overwhelming _ refers to a function for some negligible .consequently , a function is _ non - negligible _ if there exists a constant and there are infinitely many such that .we first formally define the notion of a bilinear group family and computational indistinguishable distributions ( some of our terminologies are adapted from boneh ) .* bilinear group families * a _ bilinear group family _ is a set of bilinear groups where ranges over an infinite index set , and are two groups of prime order , and is a bilinear map .we denote by the length of the binary representation of .we assume that group and bilinear operations in are efficient in .unless specified otherwise , we will abuse our notations by using as the group order instead of in the remaining part of this paper .* instance generator * an _ instance generator _ , , for a bilinear group family is a randomized algorithm that given an integer ( in unary , that is , ) , runs in polynomial - time in and outputs some random index for , and a generator of , where and are groups of prime order .note that for each , the instance generator induces a distribution on the set of indices .the following bilinear diffie - hellman assumption ( bdh ) has been used by boneh and franklin to show security of their identity - based encryption scheme .* bilinear diffie - hellman problem * let be a bilinear group family and be a generator for , where .the bdh problem in is as follows : given for some , compute .a cbdh algorithm for is a probabilistic polynomial - time algorithm that can compute the function in with a non - negligible probability .that is , for some fixed we have \ge \frac{1}{k^{c}}\ ] ] where the probability is over the random choices of in , the index , the random choice of , and the random bits of .* cbdh assumption*. the bilinear group family _ satisfies _ the cbdh - assumption if there is no cbdh algorithm for .a perfect - cbdh algorithm for is a probabilistic polynomial - time algorithm that can compute the function in with overwhelming probability . _ satisfies _ the perfect - cbdh - assumption if there is no perfect - cbdh algorithm for .[ perfectcbdh ] a bilinear group family satisfies the cbdh - assumption if and only if it satisfies the perfect - cbdh - assumption . * proof . *see appendix . consider joux s tripartite key agreement protocol : alice , bob , and carol fix a bilinear group .they select and exchange , , and .their shared secret is . 
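Several of the displayed definitions in this section lost their symbols during extraction. For reference, the standard bilinear-map and CBDH definitions that the paragraph describes can be restated in LaTeX as follows; the symbol names are ours.

```latex
% Bilinear map and the computational bilinear Diffie-Hellman (CBDH) problem
\begin{itemize}
  \item $\mathbb{G}_1,\mathbb{G}_2$: cyclic groups of prime order $q$, with generator $g\in\mathbb{G}_1$.
  \item $\hat{e}:\mathbb{G}_1\times\mathbb{G}_1\to\mathbb{G}_2$ is bilinear:
        $\hat{e}(g^a,g^b)=\hat{e}(g,g)^{ab}$ for all $a,b\in\mathbb{Z}_q$,
        and non-degenerate: $\hat{e}(g,g)\neq 1$.
  \item CBDH problem: given $(g,\,g^a,\,g^b,\,g^c)$ for random $a,b,c\in\mathbb{Z}_q$,
        compute $\hat{e}(g,g)^{abc}$.
\end{itemize}
```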
to _totally break _the protocol a passive eavesdropper , eve , must compute the bdh function : .cbdh - assumption by itself is not sufficient to prove that joux s protocol is useful for practical cryptographic purposes . even though eve may be unable to recover the entire secret, she may still be able to predict quite a few bits ( less than bits for some constant ; otherwise , cbdh assumption is violated ) of information for with some confidence .if is to be the basis of a shared secret key , one must bound the amount of information eve is able to deduce about it , given , , and .this is formally captured by the , much stronger , decisional bilinear diffie - hellman assumption ( dbdh - assumption ) [ dis ] let and be two ensembles of probability distributions , where for each both and are defined over the same domain .we say that the two ensembles are _ computationally indistinguishable _ if for any probabilistic polynomial - time algorithm , and any we have - { { \rm pr}}\left[{\mathcal{d}}\left({\mathcal{y}}_{\mathcal{\rho}}\right)=1\right ] \right|<\frac{1}{k^{c}}\ ] ] for all sufficiently large , where the probability is taken over all , , and internal coin tosses of . in the remainder of the paper, we will say in short that the two distributions and are computationally indistinguishable .let be a bilinear group family .we consider the following two ensembles of distributions : * of random tuples , where is a random generator of ( ) and .* of tuples , where is a random generator of and .an algorithm that solves the bilinear diffie - hellman decision problem is a polynomial time probabilistic algorithm that can effectively distinguish these two distributions .that is , given a tuple coming from one of the two distributions , it should output 0 or 1 , and there should be a non - negligible difference between ( a ) the probability that it outputs a 1 given an input from , and ( b ) the probability that it outputs a 1 given an input from .the bilinear group family _ satisfies the dbdh - assumption _ if the two distributions are computationally indistinguishable .* the dbdh - assumption is implied by a slightly weaker assumption : _ perfect_-dbdh - assumption .a perfect - dbdh statistical test for distinguishes the inputs from the above and with overwhelming probability .the bilinear group family _ satisfies the perfect - dbdh - assumption _ if there is no such probabilistic polynomial - time statistical test .in this section , we describe our identity - based and authenticated key agreement scheme idak .let be the security parameter given to the setup algorithm and be a bilinear group parameter generator .we present the scheme by describing the three algorithms : * setup * , * extract * , and * exchange*. * setup * : for the input , the algorithm proceeds as follows : 1 .run on to generate a bilinear group and the prime order of the two groups and .2 . pick a random master secret .3 . choose cryptographic hash functions and .in the security analysis , we view and as random oracles . in practice , we take as a random oracle ( secure hash function ) from to ( see appendix for details ) .the system parameter is and the master secret key is . *extract * : for a given identification string , the algorithm computes a generator , and sets the private key where is the master secret key . * exchange * : for two participants alice and bob whose identification strings are and respectively , the algorithm proceeds as follows . 1. alice selects , computes , and sends it to bob .2 . 
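The two ensembles compared in the DBDH assumption were also dropped from the displayed equations. The standard formulation being referred to is:

```latex
% The two ensembles distinguished in the decisional BDH (DBDH) assumption
\begin{align*}
  \mathcal{R} &= \bigl\{\,(g,\;g^{a},\;g^{b},\;g^{c},\;\hat{e}(g,g)^{d}) : a,b,c,d \in_R \mathbb{Z}_q\,\bigr\}
      && \text{(random tuples)}\\
  \mathcal{D} &= \bigl\{\,(g,\;g^{a},\;g^{b},\;g^{c},\;\hat{e}(g,g)^{abc}) : a,b,c \in_R \mathbb{Z}_q\,\bigr\}
      && \text{(BDH tuples)}
\end{align*}
```

The DBDH assumption then states that no probabilistic polynomial-time distinguisher has a non-negligible advantage in telling these two distributions apart.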
bob selects , computes , and sends it to alice .alice computes , , and the shared secret as 4 .bob computes , , and the shared secret as in the next section , we will show that idak protocol is secure in bellare and rogaway model with random oracle plus dbdh - assumption .we conclude this section with a theorem which says that the shared secret established by the idak key agreement protocol is computationally indistinguishable from a random value .[ passiverandom ] let be a bilinear group family , , and be random generators of .assume that dbdh - assumption holds for .then the distributions , , and are computationally indistinguishable , where are selected from uniformly .before we give a proof for theorem [ passiverandom ] , we first prove two lemmas that will be used in the proof of the theorem .[ firstlemma ] ( naor and reingold ) let be a bilinear group family , , be a constant , be a random generator of , and .assume that the dbdh - assumption holds for .then the two distributions and are computationally indistinguishable . here denotes the tuple and .* proof . * using a random reduction , naor and reingold ( * ? ? ?* lemma 4.4 ) ( see also shoup showed that the two distributions and are computationally indistinguishable . the proof can be directly modified to obtain a proof for this lemma .the details are omitted . [ secondlemma ]let be a bilinear group family , , be a random generator of , , and and be two polynomial - time computable functions . if the two distributions and are computationally indistinguishable , then the two distributions and are computationally indistinguishable , where , , and . * proof .* see appendix . * proof of theorem [ passiverandom ] * let lemma [ firstlemma ] , the two distributions are computationally indistinguishable assuming that dbdh - assumption holds for , where is a random generator of and , , , , . since is a fixed function from to and is a prime , it is straightforward to verify that for any , , , and are uniformly ( and independently of each other ) distributed over .it follows that the distribution is computationally indistinguishable from the distribution , where .thus and are computationally indistinguishable .the theorem now follows from lemma [ secondlemma ] .our security model is based on bellare and rogaway security models for key agreement protocols with several modifications . in our model, we assume that we have at most protocol participants ( principals ) : , where is the security parameter .the protocol determines how principals behave in response to input signals from their environment .each principal may execute the protocol multiple times with the same or different partners .this is modelled by allowing each principal to have different instances that execute the protocol .an oracle models the behavior of the principal carrying out a protocol session in the belief that it is communicating with the principal for the time .one given instance is used only for one time .each maintains a variable _ view _ ( or _ transcript _ ) consisting of the protocol run transcripts so far .the adversary is modelled by a probabilistic polynomial time turing machine that is assumed to have complete control over all communication links in the network and to interact with the principals via oracle accesses to .the adversary is allowed to execute any of the following queries : * .this allows the adversary to get the long term private key for a new principal whose identity string is . 
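The pairing expressions in the Exchange step were lost in extraction. The reconstruction below is consistent with the operation counts discussed in the performance section and with the published IDAK protocol, but the exact exponents should be read as our assumption rather than a verbatim quotation; here $g_A = H_1(\mathrm{ID}_A)$, $d_A = g_A^{\alpha}$ is Alice's private key, and $h$ is the hash into $\mathbb{Z}_q$.

```latex
% Hedged reconstruction of the IDAK exchange
\begin{align*}
  \text{Alice}&:\; x\in_R \mathbb{Z}_q^*,\quad R_A = g_A^{x}\;\longrightarrow\;\text{Bob}\\
  \text{Bob}&:\; y\in_R \mathbb{Z}_q^*,\quad R_B = g_B^{y}\;\longrightarrow\;\text{Alice}\\
  s_A &= h(R_A,R_B),\qquad s_B = h(R_B,R_A)\\
  \text{Alice}&:\; sk = \hat{e}\bigl(d_A^{\,x+s_A},\;R_B\,g_B^{\,s_B}\bigr)\\
  \text{Bob}&:\; sk = \hat{e}\bigl(R_A\,g_A^{\,s_A},\;d_B^{\,y+s_B}\bigr)
      \;=\;\hat{e}(g_A,g_B)^{\alpha\,(x+s_A)(y+s_B)}
\end{align*}
```

Under this reading, Alice's cost is two exponentiations and one multiplication in $\mathbb{G}_1$ plus one pairing, matching the count given in the performance discussion.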
*this sends message to the oracle .the output of is given to the adversary .the adversary can ask the principal to initiate a session with by a query where is the empty string . * .this asks the oracle to reveal whatever session key it currently holds .* .this asks to reveal the long term private key .the difference between the queries * extract * and * corrupt * is that the adversary can use * extract * to get the private key for an identity string of her choice while * corrupt * can only be used to get the private key of existing principals .let be an initiator oracle ( that is , it has received a message at the beginning ) and be a responder oracle . if every message that sends out is subsequently delivered to , with the response to this message being returned to as the next message on its transcript , then we say the oracle matches .similarly , if every message that receives was previously generated by , and each message that sends out is subsequently delivered to , with the response to this message being returned to as the next message on its transcript , then we say the oracle matches .the details for an exact definition of matching oracles could be found in . for the definition of matching oracles, the reader should be aware the following scenarios : even though the oracle thinks that its matching oracle is , the real matching oracle for could be .for example , if sends a message to and replies with .the adversary decides not to forward the message to .instead , the adversary sends the message to initiate another oracle and does not know the existence of this new oracle .the oracle replies with and the adversary forwards this to as the responding message for . in this case , the transcript of matches the transcript of .thus we consider and as matching oracles . in another word ,the matching oracles are mainly based the message transcripts . in order to define the notion of a secure session key exchange ,the adversary is given an additional experiment .that is , in addition to the above regular queries , the adversary can choose , at any time during its run , a query to a completed oracle with the following properties : * the adversary has never issued , at any time during its run , the query or .* the adversary has never issued , at any time during its run , the query or . *the adversary has never issued , at any time during its run , the query . *the adversary has never issued , at any time during its run , the query if the matching oracle for exists ( note that such an oracle may not exist if the adversary is impersonating the to the oracle ) .the value of may be different from the value of since the adversary may run fake sessions to impersonate any principals without victims knowledge .let be the value of the session key held by the oracle that has been established between and .the oracle tosses a coin . if , the adversary is given .otherwise , the adversary is given a value randomly chosen from the probability distribution of keys generated by the protocol . in the end, the attacker outputs a bit .the advantage that the adversary has for the above guess is defined as -\frac{1}{2}\right|.\ ] ] now we are ready to give the exact definition for a secure key agreement protocol .[ keysecuredef ] a key agreement protocol is br - secure if the following conditions are satisfied for any adversary : 1 . if two uncorrupted oracles and have matching conversations ( e.g. 
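The displayed definition of the adversary's guessing advantage lost its left-hand side in extraction; the standard form intended here is:

```latex
% Advantage of the adversary A in the Test experiment
\mathrm{Adv}(\mathcal{A}) \;=\; \Bigl|\,\Pr[\,b'=b\,]-\tfrac{1}{2}\,\Bigr|
```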
, the adversary is passive ) and both of them are complete according to the protocol , then both oracles will always accept and hold the same session key which is uniformly distributed over the key space . is negligible . in the following ,we briefly discuss the attributes that a br - secure key agreement protocol achieves . ** known session keys*. the adversary may use * reveal* query before or after the query * test* .thus in a secure key agreement model , the adversary learns zero information about a fresh key for session even if she has learnt keys for other sessions . ** impersonation attack*. if the adversary impersonates to , then she still learns zero information about the session key that the oracle holds for this impersonated since there is no matching oracle for in this scenario .thus can use * test * query to test this session key that holds . * * unknown key share*. if establishes a session key with though he believes that he is talking to , then there is an oracle that holds this session key . at the same time, there is an oracle that holds this session key , for some ( normally ) . during an unknown key share attack ,the user may not know this session key .since and are not matching oracles , the adversary can make the query to learn this session key before the query .thus the adversary will succeed for this * test * query challenge if the unknown key share attack is possible .however , the following important security properties that a secure key agreement scheme should have are not implied from the original br - security model . ** perfect forward secrecy*. this property requires that previously agreed session keys should remain secret , even if both parties long - term private key materials are compromised .bellare - rogaway model does not capture this property .canetti and krawczyk s model use the session - key expiration primitive to capture this property .similar modification to bellare - rogaway model are required to capture this property also .we will give a separate proof that the idak key agreement protocol achieves weak perfect forward secrecy .note that as pointed out in , no two - message key - exchange protocol authenticated with public keys and with no secure shared state can achieve perfect forward secrecy .* * key compromise impersonation resilience*. if the entity s long term private key is compromised , then the adversary could impersonate to others , but it should not be able to impersonate others to .similar to wpfs property , bellare - rogaway model does not capture this property .we will give a separate proof that the idak key agreement protocol has this property .before we present the security proof for the idak key agreement protocol , we first prove some preliminary results that will be used in the security proof .[ feedbackbdh ] let be a bilinear group family , , be a random generator of , and be a random oracle .assume dbdh - assumption holds for and let and be two distributions defined as then we have 1 .the two distributions and are computationally indistinguishable if is defined as are chosen from uniformly , or is either chosen from uniformly , and are chosen from within polynomial time according to a fixed distribution given the view without violating dbdh - assumption .2 . 
for any constant ,the two distributions and are computationally indistinguishable if is defined as : where are uniformly chosen from , are either chosen from uniformly or , and is chosen within polynomial time according to a fixed distribution given the view , , , without violating dbdh - assumption .3 . for any constant ,the two distributions and are computationally indistinguishable if , where is defined as the in the item 2 , and is defined as : where are either chosen from uniformly or , and are chosen within polynomial time according to a fixed distribution given the view , , , without violating dbdh - assumption and with the condition that `` or '' .note that and could have different distributions .* proof . *see appendix. [ securityprooftheorem ] suppose that the functions and are random oracles and the bilinear group family satisfies dbdh - assumption . then the idak scheme is a br - secure key agreement protocol .* proof . *in this section , we show that the protocol idak achieves weak perfect forward secrecy property . perfect forward secrecy property requires that even if alice and bob lose their private keys and , the session keys established by alice and bob in the previous sessions are still secure .krawczyk pointed out that no two - message key - exchange protocol authenticated with public keys and with no secure shared state can achieve perfect forward secrecy .weak perfect forward secrecy ( wpfs ) property for key agreement protocols sates as follows : any session key established by uncorrupted parties without active intervention by the adversary is guaranteed to remain secure even if the parties to the exchange are corrupted after the session key was erased from the parties memory ( for a formal definition , the reader is referred to ) . in the following ,we show the idak achieves wpfs property . using the similar primitive of `` session - key expiration '' as in canetti andkrawczyk s model , we can revise bellare - rogaway model so that wpfs property is provable also . in bellare - rogaway model, the query is allowed only if the four properties in section [ securitymodel ] are satisfied . we can replace the property `` the adversary has never issued , at any time during its run , the query or '' with the property `` the adversary has never issued , before the session is complete , the query or '' .we call this model the wpfsbr model . in the final version of this paper, we will show that the protocol idak is secure in the wpfsbr model .thus idak achieves wpfs property . in the following , we present the essential technique used in the proofit is essentially sufficient to show that the two distributions and are computationally indistinguishable for and uniform at random chosen , , .consequently , it is sufficient to prove the following theorem .[ pfsthm ] let be a bilinear group family , .assume that dbdh - assumption holds for .then the two distributions are computationally indistinguishable for random chosen .* we use a random reduction . for a contradiction ,assume that there is a polynomial time probabilistic algorithm that distinguishes and with a non - negligible probability .we construct a polynomial time probabilistic algorithm that distinguishes and with , where and are uniformly at random in .let the input of be , where is either or uniformly at random in .we construct as follows . 
chooses random and sets , , , , , , and .let note that if , then are uniform in ( and independent of each other and of ) and .otherwise , are uniform in and independent of each other and of .therefore , by the definitions , = \pr\left[{\mathcal{d}}({\mathcal{x}})=1\right]\\ \mbox{and}\quad\quad&\pr\left[{\mathcal{a}}\left({\mathcal{r}},{\hat{e}}(g , g)^{t}\right)=1\right ] = \pr\left[{\mathcal{d}}({\mathcal{y}})=1\right ] \end{array}\ ] ] thus distinguishes and with .this is a contradiction .though theorem [ pfsthm ] shows that the protocol idak achieves weak perfect forward secrecy even if both participating parties long term private keys were corrupted , idak does not have perfect forward secrecy when the master secret were leaked .the perfect forward secrecy against the corruption of could be achieved by requiring bob ( the responder in the idak protocol ) to send in addition to the value and by requiring both parties to compute the shared secret as where is the shared secret established by the idak protocol .in this section , we informally show that the protocol idak has the key compromise impersonation resilience property .that is , if alice loses her private key , then the adversary still could not impersonate bob to alice . for a formaly proof of kci, we still need to consider the information obtained by the adversary by * reveal * , * extract * , * send * , * corrupt * queries in other sessions .this will be done in the final version of this paper . in order to show kci for idak ,it is ( informally ) sufficient to show that the two distributions and are computationally indistinguishable for , where are chosen uniform at random , and is chosen according to some probabilistic polynomial time distribution . since the value is known , it is sufficient to prove the following theorem .[ kcrthm ] let be a bilinear group family , .assume that dbdh - assumption holds for .then the two distributions are computationally indistinguishable for random chosen , where is chosen according to some probabilistic polynomial time distribution .* proof . * since is chosen uniform at random , and is a random oracle , we may assume that is uniformly distributed over when is chosen according to any probabilistic polynomial time distribution .thus the proof is similar to the proof of theorem [ pfsthm ] and the details are omitted .the theorem could also be proved using the splitting lemma which was used to prove the fork lemma .briefly , the splitting lemma translates the fact that when a subset is `` large '' in a product space , it has many large sections . using the splitting lemma, one can show that if can distinguish and , then by replaying with different random oracle , one can get sufficient many tuples such that ( 1 ) ; ( 2 ) distinguishes and ( respectively and ) when is uniformly chosen but other values takes the values from the above tuple with ( respectively ) . since .thus , for the above tuple , we can distinguish from for random chosen .this is a contradiction with the dbdh - assumption . 99 m. bellare , r. canetti , and h. krawczyk .keying hash functions for message authentication . in : _ advances in cryptology ,crypto 96 _ , pages 115 , 1996 .m. bellare , r. canetti , and h. krawczyk . a modular approach to the design and analysis of authentication and key exchange protocols . in : _30th annual acm symposium on theory of computing _ , 1998 .m. bellare and p. rogaway .random oracles are practical : a paradigms for designing efficient protocols . 
in : _ proc .1st acm conference on computer communication security _ , pages 6273 , acm press , 1993 .m. bellare and p. rogaway . entity authentication and key distribution . in : _ advances in cryptology ,crypto 93 _ , lncs 773 ( 1993 ) , 232249 .m. blum and s.micali . how to generate cryptographically strong sequence of pseudo - random bits ._ siam j. comput . _ * 13*:850864 , 1984 . d. boneh .the decision diffie - hellman problem . in : _ ants - iii _ , lncs 1423 ( 1998 ) , 4863 .d. boneh and m. franklin .identity - based encryption from the weil pairing ._ siam j. computing _ * 32*(3):586615 , 2003 .r. canetti .universally composable security : a new paradigm for cryptographic protocols . in : _42nd focs _ , 2001 .r. canetti and h. krawczyk .analysis of key - exchange protocols and their use for building secure channels . in : _ advances in cryptology ,eurocrypt 01 _ , lncs 2045 ( 2001 ) , 453474 .full version available from cryptology eprint archive 2001 - 040 ( http://eprint.iacr.org/ ) .r. canetti and h. krawczyk .universally composable notions of key exchange and secure channels . in : _eurocrypt 02_.l. chen and c. kudla .identity based authenticated key agreement protocols from pairing . in : _16th ieee security foundations workshop _ , pages 219233 .ieee computer society press , 2003 .z. cheng and l. chen . on the security proof of mccullagh - barreto s key agreement protocol and its variants .http://eprint.iacr.org/2005/201.pdf z. cheng , m. nistazakis , r. comley , and l. vasiu .on indistinguishability - based security model of key agreement protocols - simple cases . in _ proc . of acns 04_ , june 2004 .k. choo .revisit of mccullagh - barreto two party id - based authentication key agreement protocols .w. diffie and m. hellman .new directions in cryptography . _ ieee transactions on information theory _ , *6*(1976 ) , 644654 .a. fiat and a. shamir .how to prove yourself : practical solutions of identification and signature problems . in : _ advances in cryptology ,crypto 86 _ , lncs 263 ( 1987 ) , 186194 .m. girault and j. pailles .an identity - based scheme providing zero - knowledge authentication and authenticated key exchange . in : _ proc .esorics 90 _ , pages 173184 .a. joux .a one round protocol for tripartite diffie - hellman . in : _algorithmic number theory symposium , ants - iv _ , lncs 1838 , pages 385394 , 2000 .h. krawczyk .hmqv : a high - performance secure diffie - hellman protocol . in : _ proc .crypto 05 _ , springer , 2005 .l. law , a. menezes , m. qu , j. solinas , and s. vanstone .an efficient protocol for authenticated key agreement ._ designs , codes and cryptography _ , * 28*(2):119134 .s. li , q. yuan , and j. li . towards security two - part authenticated key agreement protocols .http://eprint.iacr.org/2005/300.pdf .p. mccullagh and p. barreto .a new two - party identity - based authenticated key agreement ._ proc . of ct - rsa 2005_ , pages 262 - 274 , lncs 3376 , springer verlag , 2005 .p. mccullagh and p. barreto . a new two - party identity - based authenticated key agreement .m. naor and o. reingold .number - theoretic constructions of efficient pseudo - random functions . in : _38th annual symposium on foundations of computer science _ , ieee press , 1998 .v. nechaev .complexity of a determinate algorithm for the discrete logarithm ._ mathematical notes _ , * 55*(1994 ) , 165172 .nist special publication 800 - 56 : recommendation on key establishment schemes , draft 2.0 , 2003 .http://csrc.nist.gov/cryptotoolkit/kms/keyschemes-jan03.pdf .e. 
okamoto .proposal for identity - based key distribution system . _ electronics letters _ * 22*:12831284 , 1986 .d. pointcheval and j. stern .security arguments for digital signatures and blind signatures ._ j. cryptology _ * 13*(3):361396 , 2000 .e. ryu , e. yoon , and k. yoo .an efficient id - based authenticated key agreement protocol from pairing . in : _ networking 2004 _ , pages 14581463 , lncs 3042 , springer verlag , 2004 .r. sakai , k. ohgishi , and m. kasahara .cryptosystems based on pairing . in : _ 2000 symp . on cryptography and information security ( scis 2000 ) _ , okinawa , japan 2000 .m. scott .authenticated id - based key exchange and remote log - in with insecure token and pin number .http://eprint.iacr.org/2002/164.pdf a. shamir .identity - based cryptosystems and signature schemes . in : _ advances in cryptology ,crypto 84 _ , lncs 196 , pages 4753 , springer verlag 1984 .k. shim .efficient id - based authenticated key agreement protocol based on the weil pairing ._ electronics letters _* 39*(8):653654 , 2003 .v. shoup .lower bounds for discrete logarithms and related problems . in : _ advances in cryptology ,eurocrypt 97 _ , lncs 1233 ( 1997 ) , 256266 . v. shoup . on formal models for secure key exchange .ibm technical report rz 3120 , 1999 .n. p. smart .identity - based authenticated key agreement protocol based on weil pairing ._ electronics letters _* 38*(13):630632 , 2002 .s. sun and b. hsieh .security analysis of shim s authenticated key agreement protocols from pairing . http://eprint.iacr.org/2003/113.pdf k. tanaka and e. okamoto .key distribution system for mail systems using id - related information directory ._ computers and security _ * 10*:2533 , 1991 .cryptanalysis of noel mccullagh and paulo s. l. m. barreto s two - party identity - based key agreemenet .http://eprint.iacr.org/2004/308.pdf g. xie .an id - based key agreement scheme from pairing .the fact that the cbdh - assumption implies the perfect - cbdh - assumption is trivial .the converse is proved by the self - random - reduction technique ( see ) .let be a cbdh oracle .that is , there exists a such that ( [ cbdhe ] ) holds with replaced with .we construct a perfect - cbdh algorithm which makes use of the oracle .given , algorithm must compute with overwhelming probability .consider the following algorithm : select ( unless stated explicitly , we use to denote that is randomly chosen from in the remainder of this paper ) and output one can easily verify that if , then . consequently , standard amplification techniques can be used to construct the algorithm .the details are omitted . fora contradiction , assume that there is a probabilistic polynomial - time algorithm that distinguishes the two distributions and with non - negligible probability .in the following we construct a probabilistic polynomial - time algorithm to distinguish the two distributions and . is defined by letting for all , and . 
by this definition, we have = { { \rm pr}}\left[{\mathcal{d}}_r({\mathcal{x}}_2)=1|{\mathcal{r}},r\right] ] .thus we have -{{\rm pr}}\left[{\mathcal{d}}^\prime({\mathcal{y}}_1)=1\right]\right|\\ = & \left|\sum_{{\mathcal{r}},r}{{\rm pr}}[{\mathcal{r}},r]\cdot\left ( { { \rm pr}}\left[{\mathcal{d}}^\prime_r({\mathcal{x}}_1)=1|{\mathcal{r}},r\right]- { { \rm pr}}\left[{\mathcal{d}}^\prime_r({\mathcal{y}}_1)=1|{\mathcal{r}},r\right]\right)\right|\\ = & \left|\sum_{{\mathcal{r}},r}{{\rm pr}}[{\mathcal{r}},r]\cdot\left ( { { \rm pr}}\left[{\mathcal{d}}_r({\mathcal{x}}_2)=1|{\mathcal{r}},r\right]- { { \rm pr}}\left[{\mathcal{d}}_r({\mathcal{y}}_2)=1|{\mathcal{r}},r\right]\right)\right|\\ = & \left|{{\rm pr}}\left[{\mathcal{d}}({\mathcal{x}}_2)=1\right]- { { \rm pr}}\left[{\mathcal{d}}({\mathcal{y}}_2)=1\right]\right|\\ > & \delta_k .\end{array}\ ] ] hence , distinguishes the distributions and with non - negligible probability .this contradicts the assumption of the lemma .the lemma could be proved using complicated version of the splitting lemma by pointcheval - stern ( see the proof of theorem [ kcridak ] ) . in the following ,we use the random reduction to prove the lemma . \1 . fora contradiction , assume that there is a polynomial time probabilistic algorithm that distinguishes and .we construct a polynomial time probabilistic algorithm that distinguishes , and with , where are uniformly at random in .let the input of be , where is either or uniformly at random in . chooses uniformly at random , sets , , , chooses uniformly at random or lets , chooses within polynomial time according to any distribution given the view ( the distributions for and could be different ) . since and are uniformly chosen from , we may assume that the values of and are unknown yet .without loss of generality , we may assume that and take values and respectively , where and are uniformly chosen from .in a summary , the value of could be computed from efficiently . then sets can compute using the values of , , , .let , where is obtained from by replacing with and taking the remaining values as defined above . note that if , then , and is distributed according to the distribution .that is , are uniform in and independent of each other and of , ( , , ) is chosen according to the specified distributions without violating dbdh - assumption .otherwise , is distributed according to the distribution , and is uniform in and independent of .therefore , by definitions , = \pr\left[{\mathcal{d}}({\mathcal{x}})=1\right]\\ \mbox{and}\quad\quad & \pr\left[{\mathcal{a}}\left(g , g^u , g^v , g^w,{\hat{e}}(g , g)^{a}\right)=1\right ] = \pr\left[{\mathcal{d}}({\mathcal{y}})=1\right ] \end{array}\ ] ] thus distinguishes and with , where is uniform at random in .this is a contradiction .this part of the lemma could be proved in the same way .the details are omitted .\3 . since `` or '', we may assume that the values of and are unknown yet . by the random oracle property of , this part of the lemmacould be proved in the same way as in item 1 .the details are omitted .* proof . * by theorem [ passiverandom ] , the condition 1 in the definition [ keysecuredef ] is satisfied for the idak key agreement protocol . 
in the following , we show that the condition 2 is also satisfied .for a contradiction , assume that the adversary has non - negligible advantage in guessing the value of after the * test * query .we show how to construct a simulator that uses as an oracle to distinguish the distributions and in the item 3 of lemma [ feedbackbdh ] with non - negligible advantage , where denotes the number of distinct -*queries * that the algorithm has made .the game between the challenger and the simulator starts with the challenger first generating bilinear groups by running the algorithm * instance generator*. the challenger then chooses and .the challenger gives the tuple to the algorithm where if and otherwise . during the simulation, the algorithm can ask the challenger to provide randomly chosen . may then choose ( with the help of perhaps ) within polynomial time according to any distribution given the view and sends to the challenger .the challenger responds with . at the end of the simulation ,the algorithm is supposed to output its guess for .it should be noted that if , then the output of the challenger together with the values selected by the simulator is the tuple of lemma [ feedbackbdh ] , and is the tuple of lemma [ feedbackbdh ] if .thus the simulator could be used to distinguish and of lemma [ feedbackbdh ] .the algorithm selects two integers randomly and works by interacting with as follows : * setup : * algorithm gives the idak system parameters where are parameters from the challenger , and are random oracles controlled by as follows .-*queries * : at any time algorithm can query the random oracle using the queries or . to respond to these queriesalgorithm maintains an that contains a list of tuples .the list is initially empty .when queries the oracle at a point , responds as follows : 1 .if the query appears on the in a tuple , then responds with .2 . otherwise , if this is the -th new query of the random oracle , responds with , and adds the tuple to the .if this is the -th new query of the random oracle , responds with , and adds the tuple to the .3 . in the remaining case, selects a random , responds with , and adds the tuple to the .-*queries * : at any time the challenger , the algorithm , and the algorithm can query the random oracle . to respond to these queriesalgorithm maintains a that contains a list of tuples .the list is initially empty .when queries the oracle at a point , responds as follows : if the query appears on the in a tuple , then responds with .otherwise , selects a random , responds with , and adds the tuple to the .technically , the random oracle could be held by an independent third party to avoid the confusion that the challenger also needs to access this random oracle also .* query phase : * responds to s queries as follows . for a query , runs the -*queries * to obtain a such that , and responds with . for an query for the long term private key , if or , then reports failure and terminatesotherwise , runs the -*queries * to obtain , and responds . for a query ,we distinguish the following three cases : 1 . .if or , asks the challenger for a random ( note that does not know the discrete logarithm of with base ) , otherwise chooses a random and sets . lets reply with .that is , we assume that is carrying out an idak key agreement protocol with and sends the first message to .2 . and the transcript of the oracle is empty . 
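The simulator's bookkeeping for the H1-queries (lazy sampling, consistent answers for repeated queries, and embedding of the challenge at the I-th and J-th distinct queries) can be sketched as follows. The group interface and the embedded challenge elements are placeholders, since the exact embedded values were elided in the text and we do not fix a concrete pairing library.

```python
import secrets

class H1Oracle:
    """Sketch of the simulator's lazily-sampled random oracle H1: ID -> G1,
    with the challenge embedded at the I-th and J-th distinct queries."""

    def __init__(self, group, challenge_u, challenge_v, I, J):
        self.group = group                                    # hypothetical interface: .g, .order
        self.challenge = {I: challenge_u, J: challenge_v}     # elements supplied by the challenger
        self.h_list = {}                                      # ID -> (group element, known exponent or None)
        self.count = 0

    def query(self, identity):
        if identity in self.h_list:                           # repeated query: answer consistently
            return self.h_list[identity][0]
        self.count += 1
        if self.count in self.challenge:                      # I-th / J-th new query: embed challenge
            value, exponent = self.challenge[self.count], None
        else:                                                 # otherwise pick a known exponent t, answer g^t
            exponent = secrets.randbelow(self.group.order - 1) + 1
            value = self.group.g ** exponent
        self.h_list[identity] = (value, exponent)
        return value
```

Because the exponent is recorded whenever it is known, the simulator can later answer Extract and Reveal queries for all identities except the two on which the challenge was embedded, which is exactly where it must abort.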
in this case, is the responder to the protocol and has not sent out any message yet .if or , asks the challenger for a random , otherwise chooses a random and sets . lets reply with and marks the oracle as completed .3 . and the transcript of the oracle is not empty . in this case , is the protocol initiator and should have sent out the first message already .thus does not need to respond anything .after processing the query , marks the oracle as completed . for a query ,if and , computes the session key , and responds with , here is the message received by .note that the message may not necessarily be sent by the oracle for some since it could have been a bogus message from . otherwise , or . without loss of generality , we assume that . in this case , the oracle dose not know its private key .thus it needs help from the challenger to compute the shared session key .let and be the messages that has sent out and received respectively . gives these two values to the challenger and the challenger computes the shared session key . then responds with . for a query , if or , then reports failure and terminatesotherwise , responds with . for the * test* query , if or , then reports failure and terminates .otherwise , assume that and .let be the message that sends out ( note that the challenger generated this message ) and be the message that receives ( note that could be the message that the challenger generated or could be generated by the algorithm ) . gives the messages and to the challenger .the challenger computes and gives to . responds with .note that if , then is the session key . otherwise , is a uniformly distributed group element . * guess : * after the * test* query , the algorithm may issue other queries before finally outputs its guess .algorithm outputs as its guess to the challenger .* claim : * if does not abort during the simulation then s view is identical to its view in the real attack . furthermore ,if does not abort , then -\frac{1}{2}\right| > \delta_k ] . suppose makes a total of -queries .we next calculate the probability that does not abort during the simulation .the probability that does not abort for * extract * queries is .the probability that does not abort for * corrupt * queries is .the probability that does not abort for * test * queries is .therefore , the probability that does not abort during the simulation is .this shows that s advantage in distinguishing the distributions and in lemma [ feedbackbdh ] is at least which is non - negligible . to complete the proof of theorem [ securityprooftheorem ], it remains to show that the communications between and the challenger are carried out according to the distributions and of lemma [ feedbackbdh ] . for a query ,the challenger outputs to the algorithm .let , , and .then is chosen uniform at random from , is chosen uniform at random from when or when , and the value of is chosen by the algorithm or by the algorithm or by the challenger in probabilistic polynomial time according to the current views .for example , if is chosen by the algorithm , then may generate as the combination ( e.g. , multiplication ) of some previously observed messages / values or generate it randomly .thus the communication between the challenger and the algorithm during queries is carried out according to the distributions and of lemma [ feedbackbdh ] . 
the case for queriesis the same .for the * test* query , the challenger outputs to the algorithm , where and .let and .then is chosen uniform at random from and the value of is chosen by the algorithm or by the challenger in probabilistic polynomial time according to the current views .similarly , may choose as the combination ( e.g. , multiplication ) of some previously observed messages / values .the communication between the challenger and the algorithm during the query is carried out according to the distributions and of lemma [ feedbackbdh ] .it should be noted that after the * test* query , the adversary may create bogus oracles for the participants and and send bogus messages that may depend on all existing communicated messages ( including messages held by the oracle ) and then reveal session keys from these oracles .in particular , the adversary may play a man in the middle attack by modifying the messages sent from to and modifying the messages sent from to .then the oracles and are not matching oracles .thus can reveal the session key held by the oracle before the guess . in the part in the distributions and of lemma [ feedbackbdh ], we have the condition `` or '' ( this condition holds since the algorithm has not revealed the matching oracles for ) .if both and , then the oracle is a matching oracle for and is not allowed to reveal the session key held by the oracle .thus the communication between the challenger and the algorithm during these query is carried out according to the distributions and of lemma [ feedbackbdh ] . in the summary ,all communications between the challenger and are carried out according to the distributions and of lemma [ feedbackbdh ] .this completes the proof of the theorem .* is a random oracle ( secure hash function ) from to ( e.g. , ) . *if are points on an elliptic curve , then let where .that is , is the exclusive - or of the second half parts of the first coordinates of the elliptic curve points and .* is a random oracle that the output only depends on the the first input variable or any of the above function restricted in such a way that the output only depends on the the first input variable . in another word , .it should be noted any function , for which lemma [ feedbackbdh ] holds , can be used in the idak protocol .though we do not know whether lemma [ feedbackbdh ] holds for functions that we have listed above , we have strong evidence that this is true .first , if we assume that the group is a generic group in the sense of nechaev and shoup . then we can prove that lemma [ feedbackbdh ] holds for the above functions .secondly , if the distribution in lemma [ feedbackbdh ] is restricted to the distribution : then we can prove that lemma [ feedbackbdh ] holds for the above functions .we may conjecture that the adversary algorithm can only generate and according to the above distribution unless cdh - assumption fails for .thus , under this conjecture ( without the condition that is a generic group ) , the above list of functions can be used in idak protocol securely .our analysis in this section will be based on the assumption that is a random oracle ( secure hash function ) from to . since the computational cost for alice is the same as that for bob . 
in the following, we will only analyze alice s computation .first , alice needs to choose a random number and compute in the group .in order for alice to compute , she needs to do exponentiation in , one multiplication in , and one pairing .thus in total , she needs to do exponentiation in , one multiplication in , and one pairing .alternatively , alice can compute the shared secret as .thus for the entire idak protocol , alice needs to do exponentiation in ( one for and for ) , one multiplication in , one pairing , and one exponentiation in .the idak protocol could be sped up by letting each participant do some pre - computation .for example , alice can compute the values of and before the protocol session . during the idak session, alice can compute the shared secret as which needs exponentiation in ( for and for ) , multiplications in , and one pairing .alternatively , alice can compute the shared secret as which needs exponentiation in , one multiplication in , one pairing , and one exponentiation in . in a summary ,figure [ performancefigure ] lists the computational cost for alice ( an analysis of all other identity based key agreement protocols shows idak is the most efficient one , details will be given in the final version of this paper ) .
Several identity-based and implicitly authenticated key agreement protocols have been proposed in recent years, and none of them has achieved all required security properties. In this paper, we propose an efficient identity-based and authenticated key agreement protocol IDAK using Weil/Tate pairing. The security of IDAK is proved in the Bellare-Rogaway model. Several required properties for key agreement protocols are not implied by the Bellare-Rogaway model; we prove these properties for IDAK separately.
this paper studies the multi - agent average consensus problem , where a group of agents seek to agree on the average of their initial states . due to its numerous applications in networked systems ,many algorithmic solutions exist to this problem ; however , a majority of them rely on agents having continuous or periodic availability of information from other agents .unfortunately , this assumption leads to inefficient implementations in terms of energy consumption , communication bandwidth , congestion , and processor usage .motivated by these observations , our main goal here is the design of a provably correct distributed event - triggered strategy that prescribes when communication and control updates should occur so that the resulting asynchronous network executions still achieve average consensus ._ literature review : _ triggered control seeks to understand the trade - offs between computation , communication , sensing , and actuator effort in achieving a desired task with a guaranteed level of performance .early works consider tuning controller executions to the state evolution of a given system , but the ideas have since then been extended to consider other tasks , see and references therein for a recent overview . among the many references in the context of multi - agent systems , specifies the responsibility of each agent in updating the control signals , considers network scenarios with disturbances , communication delays , and packet drops , and studies decentralized event - based control that incorporates estimators of the interconnection signals among agents .several works have explored the application of event - triggered ideas to the acquisition of information by the agents . to this end , combine event - triggered controller updates with sampled data that allows for the periodic evaluation of the triggers . drop the need for periodic access to information by considering event - based broadcasts , where agents decide with local information only when to obtain further information about neighbors .self - triggered control relaxes the need for local information by deciding when a future sample of the state should be taken based on the available information from the last sampled state .team - triggered coordination combines the strengths of event- and self - triggered control into a unified approach for networked systems .the literature on multi - agent average consensus is vast , see e.g. , and references therein . introduce a continuous - time algorithm that achieves asymptotic convergence to average consensus for both undirected and weight - balanced directed graphs . build on this algorithm to propose a lyapunov - based event - triggered strategy that dictates when agents should update their control signals but its implementation relies on each agent having perfect information about their neighbors at all times . 
the work uses event - triggered broadcasting with time - dependent triggering functions to provide an algorithm where each agent only requires exact information about itself , rather than its neighbors .however , its implementation requires knowledge of the algebraic connectivity of the network .in addition , the strictly time - dependent nature of the thresholds makes the network executions decoupled from the actual state of the agents .closer to our treatment here , propose an event - triggered broadcasting law with state - dependent triggering functions where agents do not rely on the availability of continuous information about their neighbors ( under the assumption that all agents have initial access to a common parameter ) .this algorithm works for networks with undirected communication topologies and guarantees that all inter - event times are strictly positive , but does not discard the possibility of an infinite number of events happening in a finite time period .we consider here a more general class of communication topologies described by weight - balanced , directed graphs .the works present provably correct distributed strategies that , given a directed communication topology , allow a network of agents to find such weight edge assignments ._ statement of contributions : _ our main contribution is the design and analysis of novel event - triggered broadcasting and controller update strategies to solve the multi - agent average consensus problem over weight - balanced digraphs .with respect to the conference version of this work , the present manuscript introduces new trigger designs , extends the treatment from undirected graphs to weight - balanced digraphs , and provides a comprehensive technical treatment .our proposed law does not require individual agents to have continuous access to information about the state of their neighbors and is fully distributed in the sense that it does not require any a priori knowledge by agents of global network parameters to execute the algorithm .our lyapunov - based design builds on the evolution of the network disagreement to synthesize triggers that agents can evaluate using locally available information to make decisions about when to broadcast their current state to neighbors . in our design ,we carefully take into account the discontinuities in the information available to the agents caused by broadcasts received from neighbors and their effect on the feasibility of the resulting implementation .our analysis shows that the resulting asynchronous network executions are free from zeno behavior , i.e. 
, only a finite number of events are triggered in any finite time period , and exponentially converge to agreement on the average of all agents initial states over weight - balanced , strongly connected digraphs .we also provide a lower bound on the exponential convergence rate and characterize the asymptotic convergence of the network under switching topologies that remain weight - balanced and are jointly strongly connected .lastly , we propose a periodic implementation of our event - triggered design that has agents check the triggers periodically and characterize the sampling period that guarantees correctness .various simulations illustrate our results .this section introduces some notational conventions and notions on graph theory .let , , , and denote the set of real , positive real , nonnegative real , and positive integer numbers , respectively .we denote by and the column vectors with entries all equal to one and zero , respectively .we let denote the euclidean norm on .we let . for a finite set ,we let denote its cardinality . given , young s inequality states that , for any , a weighted directed graph ( or weighted digraph ) is comprised of a set of vertices , directed edges and weighted adjacency matrix . given an edge , we refer to as an out - neighbor of and as an in - neighbor of .the sets of out- and in - neighbors of a given node are and , respectively .the weighted adjacency matrix satisfies if and otherwise . a path from vertex to is an ordered sequence of vertices such that each intermediate pair of vertices is an edge .a digraph is strongly connected if there exists a path from all to all .the out- and in - degree matrices and are diagonal matrices where respectively .a digraph is weight - balanced if .the ( weighted ) laplacian matrix is .based on the structure of , at least one of its eigenvalues is zero and the rest of them have nonnegative real parts . if the digraph is strongly connected , is simple with associated eigenvector .the digraph is weight - balanced if and only if if and only if is positive semidefinite .for a strongly connected and weight - balanced digraph , zero is a simple eigenvalue of . in this case, we order its eigenvalues as , and note the inequality for all .the following property will also be of use later , this can be seen by noting that is diagonalizable and rewriting , where is a diagonal matrix containing the eigenvalues of .we consider the multi - agent average consensus problem for a network of agents .we let denote the weight - balanced , strongly connected digraph describing the communication topology of the network . without loss of generality, we use the convention that an agent is able to receive information from neighbors in and send information to neighbors in .we denote by the state of agent .we consider single - integrator dynamics for all .it is well known that the distributed continuous control law drives each agent of the system to asymptotically converge to the average of the agents initial conditions . in compact form , this can be expressed by where is the column vector of all agent states and is the laplacian of .however , in order to be implemented , this control law requires each agent to continuously access state information about its neighbors and continuously update its control law . here , we are interested in controller implementations that relax both of these requirements by having agents decide in an opportunistic fashion when to perform these actions . 
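as a concrete illustration of the preliminaries and of the idealized continuous law just described , the following minimal numpy sketch builds the laplacian of a small weight - balanced , strongly connected digraph , checks the weight - balance condition , and integrates the continuous - time dynamics ; the example weights and the index convention are ours , not taken from the paper .

```python
import numpy as np

# Small weight-balanced, strongly connected digraph on 4 agents.
# a[i, j] is the weight agent i places on information received from agent j
# (one common convention; the paper's index convention may differ).
a = np.array([[0.0, 0.0, 0.5, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

L = np.diag(a.sum(axis=1)) - a                      # weighted Laplacian, L @ ones = 0
assert np.allclose(a.sum(axis=1), a.sum(axis=0)), "digraph is not weight-balanced"

# Idealized continuous-time law xdot = -L x (forward-Euler integration);
# it needs continuous neighbor information but preserves the average exactly.
x = np.array([1.0, -2.0, 4.0, 0.5])
avg0 = x.mean()
dt = 1e-3
for _ in range(20_000):
    x = x - dt * (L @ x)

print("final states :", np.round(x, 4))             # all close to the initial average
print("average preserved :", np.isclose(x.mean(), avg0))
```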
under this framework, neighbors of a given agent only receive state information from it when this agent decides to broadcast its state to them .equipped with this information , the neighbors update their respective control laws .we denote by the last broadcast state of agent at any given time .we assume that each agent has continuous access to its own state .we then utilize an event - triggered implementation of the controller given by letting and , we write as note that although agent has access to its own state , the controller uses the last broadcast state .this is to ensure that the average of the agents initial states is preserved throughout the evolution of the system .more specifically , using this controller , one has where we have used the fact that is weight - balanced .our aim is to identify triggers that prescribe in an opportunistic fashion when agents should broadcast their state to their neighbors so that the network converges to the average of the initial agents states . given that the average is conserved by , all the triggers should enforce is that the agents states ultimately agree .in this section we synthesize a distributed triggering strategy that prescribes when agents should broadcast state information and update their control signals .our design builds on the analysis of the evolution of the network disagreement characterized by the following candidate lyapunov function , where corresponds to agreement at the average of the states of all agents .the next result characterizes a local condition for all agents in the network such that this candidate lyapunov function is monotonically nonincreasing . [ pr : event ] for , let and denote by the error between agent s last broadcast state and its current state at time .then , .\end{aligned}\ ] ] note that , since the average is preserved , cf . , under the control law , .the function is continuous and piecewise continuously differentiable , with points of discontinuity of corresponding to instants of time where an agent broadcasts its state . whenever defined, this derivative takes the form where we have used that the graph is weight - balanced in the last equality .let be the vector of errors of all agents .we can then rewrite as expanding this out yields .\end{aligned}\ ] ] using young s inequality for each product with yields , \\ & = - \frac{1}{2 } \sum_{i=1}^n \sum_{j \in { \mathcal{n}_i^{\operatorname{out } } } } w_{ij } \left [ ( 1-a_i)({\widehat}{x}_i-{\widehat}{x}_j)^2 - \frac{e_i^2}{a_i } \right ] , \end{aligned}\ ] ] which concludes the proof . from proposition [ pr : event ] , a sufficient condition to ensure that the proposed candidate lyapunov function is monotonically decreasing is to maintain \geq 0,\end{aligned}\ ] ] for all at all times .this is accomplished by ensuring for all .the maximum of the function in the domain is attained at , so we have each agent select this value to optimize the trigger design . as a consequence of the above discussion , we have the following result .[ co : trigger ] for each , let and define if each agent enforces the condition at all times , then ( note that the latter quantity is strictly negative for all because the graph is strongly connected ) . for each , we refer to the function defined in corollary [ co : trigger ] as the _ triggering function _ and to the condition as the _ trigger_. 
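a minimal discrete - time sketch of this event - triggered implementation is given below . since the exact triggering thresholds are elided in this extract , the rule used here ( broadcast when the squared local error exceeds a fixed fraction of the local disagreement measured with last - broadcast states ) is only a plausible stand - in with the structure suggested by proposition [ pr : event ] and corollary [ co : trigger ] , and the design parameter value is arbitrary .

```python
import numpy as np

# Event-triggered implementation with last-broadcast states xhat.  Each agent
# integrates u_i = -sum_j a_ij (xhat_i - xhat_j) and broadcasts (resetting its
# error e_i = xhat_i - x_i) when a local, state-dependent trigger fires.  The
# threshold below is a plausible stand-in, not the paper's exact design.
# Checking the trigger at every integration step sidesteps the continuous-time
# subtleties (missed triggers, Zeno behavior) analyzed in the text.

a = np.array([[0.0, 0.0, 0.5, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])               # weight-balanced digraph
n = a.shape[0]

x = np.array([1.0, -2.0, 4.0, 0.5])                # true states (known locally)
xhat = x.copy()                                    # last-broadcast states
avg0 = x.mean()
sigma = 0.8                                        # design parameter in (0, 1), illustrative
dt, T = 1e-3, 15.0
events = np.zeros(n, dtype=int)

for _ in range(int(T / dt)):
    diff = xhat[:, None] - xhat[None, :]           # xhat_i - xhat_j
    x = x + dt * (-(a * diff).sum(axis=1))         # controller uses broadcast states only
    disagreement = (a * diff**2).sum(axis=1)
    fire = (xhat - x) ** 2 > 0.25 * sigma * disagreement
    xhat = np.where(fire, x, xhat)                 # broadcast resets the error to zero
    events += fire

print("spread of final states :", np.ptp(x))       # close to zero -> consensus
print("average preserved :", np.isclose(x.mean(), avg0))
print("broadcasts per agent :", events)
```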
note that the design parameter affects how flexible the trigger is : as the value of is selected closer to , the trigger is enabled less frequently at the cost of agent contributing less to the decrease of the lyapunov function .an important observation is that , since the triggering function depends on the last broadcast states , a broadcast from a neighbor of might cause a discontinuity in the evaluation of , where just before the update was received , , and immediately after , .such event would make agent miss the trigger .thus , rather than prescribing agent to broadcast its state when , we instead define an event by either or where for convenience , we use the shorthand notation we note the useful equality .the reasoning behind these triggers is the following .the inequality makes sure that the discontinuities of do not make the agent miss an event .the trigger makes sure that the agent is not required to continuously broadcast its state to neighbors when its last broadcast state is in agreement with the states received from them .the triggers and are a generalization of the ones proposed in . however , it is unknown whether they are sufficient to exclude the possibility of zeno behavior in the resulting executions . to address this issue , we prescribe the following additional trigger .let be the last time at which agent broadcast its information to its neighbors .if at some time , agent receives new information from a neighbor , then immediately broadcasts its state if here , is a design parameter selected so that where .our analysis in section [ se : guarantees ] will expand on the role of this bound and the additional trigger in preventing the occurrence of zeno behavior . in conclusion ,the triggers - form the basis of the event - triggered communication and control law , which is formally presented in table [ tab : algorithm ] . each timean event is triggered by an agent , say , that agent broadcasts its current state to its out - neighbors and updates its control signal , while its in - neighbors update their control signal .this is in contrast to other event - triggered designs , see e.g. , , where events only correspond to updates of control signals because exact information is available to the agents at all times .here we analyze the properties of the control law in conjunction with the event - triggered communication and control lawof section [ se : design ] .our first result shows that the network executions are guaranteed not to exhibit zeno behavior .its proof illustrates the role played by the additional trigger in facilitating the analysis to establish this property . [prop : zeno ] given the system with control law executing the event - triggered communication and control lawover a weight - balanced , strongly connected digraph , the agents will not be required to communicate an infinite number of times in any finite time period .we are interested in showing here that no agent will broadcast its state an infinite number of times in any finite time period .our first step consists of showing that , if an agent does not receive new information from neighbors , its inter - event times are lower bounded by a positive constant .assume agent has just broadcast its state at time , and thus . 
for , while no new information is received , and remain constant .given that , the evolution of the error is simply where , for convenience , we use the shorthand notation .since we are considering the case when no neighbors of broadcast information , the trigger is irrelevant .we are then interested in finding the time when occurs , triggering a broadcast of agent s state .if , no broadcasts will ever happen ( ) because for all .hence , consider the case when , which in turn implies .using , the trigger prescribes a broadcast at the time satisfying or , equivalently , using the fact that for any and ( which readily follows from the cauchy - schwarz inequality ) , we obtain therefore , we can lower bound the inter - event time by ( incidentally , this explains our choice in ) .our second step builds on this fact to show that messages can not be sent an infinite number of times between agents in a finite time period .let time be the time at which agent has broadcast its information to neighbors and thus .if no information is received by time , there is no problem since , so we now consider the case that at least one neighbor of broadcasts its information at some time . in this caseit means that at least one neighbor has broadcast new information , thus agent would also rebroadcast its information at time due to trigger .let denote the set of all agents who have broadcast information at time ( we refer to these agents as synchronized ) .this means that , as long as no agent sends new information to any agent in , the agents in will not broadcast new information for at least seconds , which includes the original agent .as before , if no new information is received by any agent in by time there is no problem , so we now consider the case that at least one agent sends new information to some agent at time . by trigger, this would require all agents in to also broadcast their state information at time and agent will now be added to the set .reasoning repeatedly in this way , the only way for infinite communications to occur in a finite time period is for an infinite number of agents to be added to the set , which is not possible because there are only a finite number of agents .we note here that the introduction of the trigger is sufficient to ensure zeno behavior does not occur but it is an open problem to determine whether it is also necessary . the design in ( * ? ? ?* corollary 2 ) specifies triggers of a nature similar to - for undirected graphs and guarantees that no agent undergoes an infinite number of updates at any given instant of time , but does not discard the possibility of an infinite number of updates in a finite time period , as proposition [ prop : zeno ] does .next , we establish global exponential convergence .[ th : exp - convergence ] given the system with control law executing the event - triggered communication and control lawover a weight - balanced strongly connected digraph , all agents exponentially converge to the average of the initial states , i.e. , . by design , we know that the event - triggers - ensure that , cf. corollary [ co : trigger ] , we show that convergence is exponential by establishing that the evolution of towards is exponential .define to further bounding by given this inequality , our next step is to relate the value of with .note that where we have used in the inequality .now , where and we have used in the second inequality . on the other hand , where we have used in the second inequality .putting these bounds together , we obtain with . 
using this expression in the bound for the lie derivative , we get this , together with the fact that is continuous and piecewise differentiable implies , using the comparison lemma , cf . , that and hence the exponential convergence of the network trajectories to the average state .the lyapunov function used in the proof of theorem [ th : exp - convergence ] does not depend on the specific network topology .therefore , when the communication digraph is time - varying , this function can be used as a common lyapunov function to establish asymptotic convergence to average consensus .this observation is key to establish the next result , whose proof we omit for reasons of space .let be the set of weight - balanced digraphs over vertices .denote the communication digraph at time by .consider the system with control law executing the event - triggered communication and control lawover a switching digraph , where is piecewise constant and such that there exists an infinite sequence of contiguous , nonempty , uniformly bounded time intervals over which the union of communication graphs is strongly connected .then , assuming all agents are aware of who its neighbors are at each time and agents broadcast their state if their neighbors change , all agents asymptotically converge to the average of the initial states .here we propose an alternative strategy , termed periodic event - triggered communication and control law , where agents only evaluate triggers and periodically , instead of continuously . specifically , given a sampling period , we let , where , denote the sequence of times at which agents evaluate the decision of whether to broadcast their state to their neighbors . this type of design is more in line with the constraints imposed by real - time implementations , where individual components work at some given frequency , rather than continuously .an inherent and convenient feature of this strategy is the lack of zeno behavior ( since inter - event times are naturally lower bounded by ) , making the need for the additional trigger superfluous .the strategy is formally presented in table [ tab : algorithm2 ] .each time an agent broadcasts , this resets the error to zero , .however , because triggers are not evaluated continuously , we no longer have the guarantee at all times but , instead , have for .the next result provides a condition on that guarantees the correctness of our design .let be such that where and .then , given the system with control law executing the periodic event - triggered communication and control lawover a weight - balanced strongly connected digraph , all agents exponentially converge to the average of the initial states . since is only guaranteed at the sampling times under the periodic event - triggered communication and control law , we analyze what happens to the lyapunov function in between them . for ,note that substituting this expression into , we obtain for all . for a simpler exposition, we drop all arguments referring to time in the sequel .following the same line of reasoning as in proposition [ pr : event ] yields using , we bound hence , for , under , a reasoning similar to the proof of theorem [ th : exp - convergence ] using leads to finding such that which implies the result .this section illustrates the performance of the proposed algorithms in simulation . 
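as a complement to the simulation results reported next , a minimal loop for the periodic variant ( triggers evaluated only every h seconds ) might look as follows ; the admissible sampling period of the theorem is elided in this extract , so the value of h is purely illustrative .

```python
import numpy as np

# Periodic variant: the triggers are only evaluated every h seconds, which by
# itself rules out Zeno behavior.  The admissible range for the sampling
# period stated in the theorem is elided here, so h is merely an illustrative
# value that is small relative to the chosen weights.

a = np.array([[0.0, 0.0, 0.5, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
x = np.array([1.0, -2.0, 4.0, 0.5])
xhat = x.copy()
sigma, dt, h, T = 0.8, 1e-3, 0.05, 15.0
check_every = int(round(h / dt))

for k in range(int(T / dt)):
    diff = xhat[:, None] - xhat[None, :]
    x = x + dt * (-(a * diff).sum(axis=1))
    if (k + 1) % check_every == 0:                 # sampled trigger evaluation
        disagreement = (a * diff**2).sum(axis=1)
        fire = (xhat - x) ** 2 > 0.25 * sigma * disagreement
        xhat = np.where(fire, x, xhat)

print("spread of final states :", np.ptp(x))
```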
figure [ fig : sim1 ] shows a comparison of the event - triggered communication and control law with the algorithm proposed in for undirected graphs over a network of agents . both algorithms operate under the dynamics with control law , and differ in the way events are triggered . the algorithm in requires all network agents to have knowledge of an a priori chosen common parameter , which we set here to . figure [ fig : sim1](a ) shows the evolution of the lyapunov function and figure [ fig : sim1](b ) shows the number of events triggered over time by each strategy . figure [ fig : sim2 ] shows an execution of the event - triggered communication and control law over a network of agents whose communication topology is described by a weight - balanced digraph . in this case , we do not compare it against the algorithm in because the latter is only designed to work for undirected graphs . we have also compared the periodic event - triggered communication and control law with a periodic implementation of laplacian consensus , cf. . for the latter , trajectories are guaranteed to converge if the timestep satisfies , where is the maximum degree of the graph . figure [ fig : sim3 ] shows this comparison using and also demonstrates the effect of on the executions of the periodic event - triggered communication and control law . for simplicity , we have used to be the same for all agents in each execution . one can observe the trade - off between communication and convergence rate for varying : higher results in less communication but slower convergence compared to smaller values of .

we have proposed novel event - triggered communication and control strategies for the multi - agent average consensus problem .
among the novelties of our first design , we highlight that it works over weight - balanced directed communication topologies , does not require individual agents to continuously access information about the states of their neighbors , and does not necessitate a priori agent knowledge of global network parameters to execute the algorithm . we have shown that our algorithms exclude the possibility of zeno behavior and identified conditions such that the network state exponentially converges to agreement on the initial average of the agents states . we have also provided a lower bound on the convergence rate and characterized the network convergence when the topology is switching under a weaker form of connectivity . finally , we have developed a periodic implementation of our event - triggered law that relaxes the need for agents to evaluate the relevant triggering functions continuously and provided a sufficient condition on the sampling period that guarantees its asymptotic correctness . future work will explore scenarios with more general dynamics and physical sources of error such as communication delays or packet drops , the extension of our design and results to distributed convex optimization and other coordination tasks , and further analysis of trigger designs that rule out the possibility of zeno behavior . this research was supported in part by nsf award cns-1329619 .

k. j. åström and b. m. bernhardsson . comparison of riemann and lebesgue sampling for first order stochastic systems . in _ ieee conf . on decision and control _ , pages 2011 - 2016 , las vegas , nv , december 2002 .
this paper proposes a novel distributed event - triggered algorithmic solution to the multi - agent average consensus problem for networks whose communication topology is described by weight - balanced , strongly connected digraphs . the proposed event - triggered communication and control strategy does not rely on individual agents having continuous or periodic access to information about the state of their neighbors . in addition , it does not require the agents to have a priori knowledge of any global parameter to execute the algorithm . we show that , under the proposed law , events can not be triggered an infinite number of times in any finite period ( i.e. , no zeno behavior ) , and that the resulting network executions provably converge to the average of the initial agents states exponentially fast . we also provide weaker conditions on connectivity under which convergence is guaranteed when the communication topology is switching . finally , we also propose and analyze a periodic implementation of our algorithm where the relevant triggering functions do not need to be evaluated continuously . simulations illustrate our results and provide comparisons with other existing algorithms . discrete event systems , event - triggered control , average consensus , multi - agent systems , weight - balanced digraphs
to fully understand chemical dynamics phenomena it is necessary to know the underlying potential energy surfaces ( pes ) .surfaces can be obtained by two means : _ ab initio _calculations and the inversion of suitable laboratory data .this paper is concerned with an emerging class of laboratory data with special features for inversion purposes .traditional sources of laboratory data for inversion produce an indirect route to the potential requiring the solution of schrdinger s equation in the process. an alternative suggestion has been put forth to utilize ultrafast probability density data from diffraction observations or other means to extract adiabatic potential surfaces .such data consists of the absolute square of the wavefunction .although the phase of the overall wavefunction is not available , there is sufficient information in this data to extract the potential fully quantum mechanically _ without _ the solution of schrdinger s equation . instead , the proposed procedure rigorously reformulates the inversion algorithm as a linear integral equation utilizing ehrenfest s theorem for the position operator .additional attractive features of this algorithm are ( a ) the procedure may be operated non - iteratively , ( b ) no knowledge is required of the molecular excitation process leading to the data and ( c ) the regions where the potential may be reliably extracted are automatically revealed by the data .extensive efforts are under way to achieve the necessary temporal and spatial resolution of the probability density data necessary for inversion processes as well as for other applications . in anticipation of these developmentsa number of algorithmic challenges require attention to provide the means to invert such data .this paper aims to build on the previous work and address some of these needs . in particular this paper will consider ( i ) optimal choices for regularizing the inversion procedure , ( ii ) incorporation of multiple data sets and ( iii ) inclusion of data sampled at discrete time intervals .these concepts are developed and illustrated for the simulated inversion of a double well potential .the paper is organized as follows . the basic inversion procedure and the model systemare given in section [ sec : inversion_scheme ] .based on the inversion algorithm derived in ref . an extended regularization procedure is presented in section [ sec : regularization ] followed by a discussion of a modified time integration scheme applicable to different types of experimental data sampling .this development naturally leads to consideration of an optimal combination of data from different measurements . a proof on how to optimally combine the data is given in appendix [ sec : optimality_proof ] .the stability of this data combination procedure under the influence of noise is discussed as well .section [ sec : summary ] summarizes the findings of this paper .the algorithms developed in this paper will be illustrated for a one - dimensional system but the generalization to higher dimensions is straightforward : the major difference with higher dimensions is the additional computational effort involved .atomic units are used throughout this work . for a systemwhose dynamics is governed by the schrdinger equation \psi(x , t)\ ] ] the time evolution of the average position obeys ehrenfest s theorem where and . 
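the displayed relation itself is garbled in this extract ; for reference , the textbook form of ehrenfest s theorem for the position operator of a particle of mass m on a potential v(x ) , which is the relation this class of inversion schemes builds on , reads :

```latex
% textbook form of Ehrenfest's theorem for the position operator of a particle
% of mass m moving on a potential V(x) (no external field), with expectation
% values taken over the observed probability density rho(x,t) = |psi(x,t)|^2
\begin{aligned}
  m\,\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\langle x\rangle(t)
      &= -\Big\langle \frac{\partial V}{\partial x}\Big\rangle(t)
       = -\int \rho(x,t)\,\frac{\partial V(x)}{\partial x}\,\mathrm{d}x\,,\\[2pt]
  \langle x\rangle(t) &= \int x\,\rho(x,t)\,\mathrm{d}x\,.
\end{aligned}
```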
in this work the probability density is assumed to be observed in the laboratory and the goal is to determine the potential energy surface ( pes ) from the gradient .following , eq.([eq : ehrenfest ] ) can be used to construct a gaussian least squares minimization problem to determine the pes gradient ^ 2 { \text{d}}t\;. \label{eq : j0_def}\ ] ] the time averaging acts as a filtering process to increase inversion reliability by gathering together more data .this will generally increase reliability which in principle is only limited by the exploratory ability of the wavepacket . beyond some point in time little information on the potential may be gained by taking further temporal data starting from any potential initial condition .variation with respect to results in a fredholm integral equation of the first kind with righthand side ( rhs ) and symmetric , positive semidefinite kernel treated as an inverse problem , eq.([eq : orig_inverseproblem ] ) produces the desired pes gradient as its solution . for numerical implementationwe resort to the matrix version and its formal solution here the integral in eq.([eq : orig_inverseproblem ] ) is evaluated at points of equal spacing .this approach to seeking the pes has a number of attractive features .the formulation requires no knowledge of any preparatory steps to produce a specific which evolves freely to produce .the generation of and depends only on and begins when the observation process is started .moreover , although this is a fully quantum mechanical treatment there is no need to solve schrdinger s equation to extract the pes .the dominant entries of and automatically reveal the portions of the pes that may be reliably extracted .the linear nature of eq.([eq : orig_inverseproblem ] ) is very attractive from a practical perspective . notwithstanding these attractions ,a principal problem to manage is the generally singular nature of the kernel of the integral equation in eq.([eq : orig_inverseproblem ] ) .the kernel s nullspace makes it difficult to solve the inverse problem and leads to an unstable and ambiguous solution , two characteristics that generally define the ill - posedness of inverse problems .there are two major reasons for the ill - posedness of the inverse problem in eqs .( [ eq : orig_inverseproblem ] ) and ( [ eq : orig_inverseproblem_matrix ] ) .firstly , it is not possible to continuously monitor the wavepacket with arbitrary accuracy and information is lost due to discrete data sampling in space and time . secondly , the ill - posedness is due to the wavepacket only exploring a subspace of the pes . in regions untouched by the wavepacket with for all observation times kernel entries vanish as .hence these regions correspond to zero - entry rows and columns in the kernel matrix and constitute its nontrivial nullspace . in general , the solution will only be reliable in regions where has significant magnitude during its evolution. the inversion procedure can manage the null space with the help of a suitable regularization procedure .singular value decomposition and iterative solution schemes are available ( cf . for an overview ) , but here we will employ extended tikhonov regularization ( see section [ sec : regularization ] ) . the procedures developed in this paper are applied to a simulated inversion with a system taken to have a slightly asymmetric double well potential with parameters in the work of n. doli _ et al . 
_ this pes represents a one dimensional model for the intramolecular proton transfer in substituted malonaldehyde ( see fig . [fig : malonaldehyd ] ) .the particle mass is accordingly that of hydrogen .the wavepacket propagations to obtain the simulated data employed the split operator method ( cf . ) . for propagation as well as inversionwe used a grid with 8192 points over the range .a time step was chosen and total propagation time was .the small values of and ensured good convergence of the numerical propagation procedure .the initial wavefunctions were normalized gaussian wavepackets of width .as stated earlier , the inversion algorithm requires no knowledge of how these packets were formed , but generally one may assume that a suitable external laser field was applied for times .the initial packets were placed at the left ( l ) and right minimum ( r ) of the pes , on top of the barrier ( t ) , and at a location high on the potential ( h ) .the wavepacket positions are illustrated in fig .[ fig : malonaldehyd ] and their exact values , the associated average energies and the classical turning points at these energies are given table [ tab : energy ] .the inversion process employed a time step and grid spacing that differed from those used in the propagation , as high spatial and temporal resolution is difficult to attain in the laboratory .hence , we employed only a portion of all the available propagation data in time and space .we will present inversion results using every 16th propagation grid point ( i.e. , ) and every fifth available snapshot ( i.e. , ) ; even fewer snapshots could be used over a longer period of time with the criterion that roughly the same total amount of data is retained .the inversion results from these lower resolution data are very encouraging .the kernel matrices for condition h and t are shown in fig .[ fig : kernel ] ; similar plots apply to the cases l and r. the kernels are symmetric with respect to and their values cover a large dynamic range from down to on the plotted domain .significant entries are found predominantly on the matrix diagonal , close to the origin of the wavepacket , and also in the vicinity of the classical turning points . beyond the classical turning points at a distance of approximately the kernel values fall off very rapidly for both configurations . for configuration h in fig .[ fig : kernel]a the initial narrow gaussian is peaked at the hydrogen distance with corresponding large entries around .the wavepacket starts to spread and acquires momentum as it slides down the pes , which results in the broadening diagonal trace observed as the central structure in fig .[ fig : kernel]a . when the wavepacket reaches its lefthand turning point it spreads further ( star structure around ) before it returns .this pattern coincides with the motion of the average position displayed for configuration h in fig .[ fig : individual_reconstruction]a .even higher symmetry can be observed for configuration t s kernel matrix in fig .[ fig : kernel]b .the initial gaussian remains centered around and spreads to the left and righthand well only .this is further supported by the motionless average position in fig .[ fig : individual_reconstruction]a .hence large entries in result in the vicinity of and the wavepacket s symmetrical spread to the left and righthand side of the pes produces the spikes along the -axis for . 
due the kernel s symmetrythese spikes reappear as lines along the -axis for .large contributions for will again lead to a pronouced diagonal and add to the snowflake appearance of fig .[ fig : kernel]b . the features of the kernels in fig . [ fig : kernel ]coincide with the nature of the inverse problem mentioned earlier : symmetry , ill - posedness , and automatic identification of the range where the pes may be be reliably extractable ( i.e. , where the kernel entries are large ) . for configurationh the relevant range is and for configuration t only the vicinity of the barrier top should yield reliable pes information . in both caseswe can not expect reasonable solutions beyond , which coincides with the classical turning points given in table [ tab : energy ] .tikhonov regularization is straightforward to implement with simple control provided by suitable weight parameters .it provides a well defined means to stabilize the inversion and extract reliable pes information in those regions allowed by the data .this investigation goes beyond the initial work to carefully explore various regularization options .regularization has the goal of improving the accuracy of the solution , assuring stability and ease of use including computational simplicity .the functional was augmented by a regularization term involving a set of increasingly higher order differential operators acting on ^ 2 \,{\text{d}}x\ ; , \label{eq : j1_def}\ ] ] with real coefficients and a reference length . in practice be thought of as the spatial resolution of the data and in the present numerical simulation it was taken as . for a multidimensional system , and will become direction dependent tensors .the parameter acts to ensure that all the new terms added to have the same units as ^ 2 ] , with being the heaviside step function , will reduce to .variation of eq.([eq : weighted_j0 ] ) leads to a modified inverse problem with the new kernel and rhs the weight does not alter the regularization terms in eq.([eq : new_reg_inverseproblem ] ) .if is rewritten using partial integration over time , then the weight function must be considered in this process .the above equations were applied to two generic cases .first , we considered data gathered as snapshots in time i.e. , , and evaluated eqs .( [ eq : kernel_matrix_weighted ] ) and ( [ eq : rhs_weighted ] ) with this weight .this procedure simply reduced all time integrations to sums over the sampled data .next , we considered the case in which the measurement process has been divided into two continuous time intervals of length and separated by a period of time . a reasonable choice of weights would either be or the choice depends on the desired emphasis to be given to the two data intervals .here we chose to give the longer interval a larger contribution in than the shorter one , and this can be better achieved with using eq.([eq : equal_weighting_intervals ] ) ; this choice is reasonable , provided the measured data in both intervals are of comparable quality .clearly many other issues can be incorporated into the choice of dictated by what is known about the nature of the data and the information sought about the pes .the kernel is now and the rhs reads the interpretation of the weight in eq.([eq : equal_weighting_intervals ] ) is associated with performance of the inversion with an interrupted gathering of data from a _ single _ experiment . 
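a minimal sketch of the resulting discrete - time inversion step is given below : with snapshot weighting the time integrals reduce to sums over the sampled densities , the kernel and right - hand side follow from the ehrenfest - based least - squares functional described above , and a regularized linear solve yields the gradient . the paper s extended regularization operators and parameter values are elided in this extract , so a simple first - derivative penalty with an illustrative weight stands in for them , and the function and variable names are ours .

```python
import numpy as np

# Assemble the discretized inverse problem  A u = b  for the gradient
# u(x) = dV/dx from density snapshots rho[k, :] sampled at times t[k].  With
# snapshot weighting, the time integrals of the least-squares functional
# reduce to sums: A(x, x') ~ sum_k rho(x, t_k) rho(x', t_k) and
# b(x) ~ -m sum_k rho(x, t_k) d^2<x>/dt^2 (t_k).  The first-derivative penalty
# below is only a stand-in for the paper's extended regularization operators;
# mass defaults to the proton mass in atomic units (hydrogen-like particle).

def assemble(rho, x, t, mass=1836.0):
    """rho: (n_t, n_x) array of densities; x, t: equally spaced grids."""
    dx, dt = x[1] - x[0], t[1] - t[0]
    xmean = rho @ x * dx                                 # <x>(t_k)
    accel = np.gradient(np.gradient(xmean, dt), dt)      # second time derivative
    A = dt * dx * (rho.T @ rho)                          # symmetric, positive semidefinite
    b = -mass * dt * (rho.T @ accel)
    return A, b

def tikhonov_solve(A, b, x, alpha=1e-2):
    n = len(x)
    D = (np.eye(n, k=1) - np.eye(n)) / (x[1] - x[0])     # forward-difference operator
    return np.linalg.solve(A + alpha * (D.T @ D), b)     # regularized normal equations

# usage, with rho from any propagation or measurement:
#   A, b = assemble(rho, x, t)
#   dVdx = tikhonov_solve(A, b, x)
```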
to explore this point furtherit is useful to rewrite eqs.([eq : kernel_intervals ] ) and ( [ eq : rhs_intervals ] ) as u(x')\,{\text{d}}x ' \label{eq : combination } = b_1(x)+b_2(x)\;,\ ] ] where the indices `` 1 '' and `` 2 '' denote the evident two data time domains . in this formthe gathering of data from _one interrupted _ experiment can also be interpreted as finding the simultaneous solution to the inverse problem of _ two different _ experiments .these two experiments could possibly be prepared with distinct controls could , for example , explore different regions of the pes .we found that it is optimal to simply combine these sets of data by addition as indicated in eq.([eq : combination ] ) .this procedure will yield an inverse solution with accuracy greater than a linear combination of separate solutions to the individual problems `` 1 '' and `` 2 '' as explained below .consider two experiments that yield two different inverse solutions satisfying their respective system equation naturally there should be only a unique exact for the physical system .hence both system solutions in eq.([eq:2_experiments_ansatz ] ) can be decomposed into the exact solution and contamination pieces from the kernel s nullspace the functions and are associated with the nullspace of the two kernels with being the contamination from the common nullspace of and and the residual contribution unique to the respective kernel .the goal is to use the data to find an optimal solution with the smallest possible nullspace contribution . exploiting the linearity of the inverse problem, we may add the two pieces of eq.([eq:2_experiments_ansatz ] ) to get this does nt fully satisfy eq.([eq : combination ] ) and it is in general not possible to construct the optimal solution as a linear combination with constant coefficients . to elucidate this point, we insert into eq.([eq : combination ] ) and with the help of eqs.([eq:2_experiments_ansatz ] ) and ( [ eq : solution_decomposition ] ) we get the cross terms where the prefactors , have been omitted . 
hence is not an optimal solution of eq.([eq : combination ] ) since it leaves errors that can not be eliminated .however , by employing eq .( [ eq : combination ] ) and adding the kernels and rhss we can improve the quality of the inversion .no error terms like will appear since by construction the resulting can be decomposed as .a contribution from as in eq.([eq : solution_decomposition ] ) will not arise , as proved in appendix [ sec : optimality_proof ] .thus , the solution of the combined problem will gain in quality by virtue of the reduced nullspace of the new kernel .these optimality results are rigorous but it must be added that in general any combination of a finite amount of data will not fully eliminate the nullspace .however in the cases under comparison here the assumption that a similar degree of robustness can be attained certainly holds true .as argued above , we chose the weighting function in eq.([eq : equal_weighting_intervals ] ) to result in observation - duration proportional entries in and .hence it is quite natural to add .however , choosing the approach eq.([eq : weighting_intervals ] ) normalizes each data set independently .this logic naturally leads to considering the optimal combination of data to form where and are positive constants .this specially weighted form , or a positive definite combination with , might be useful especially in the presence of different degrees of noise in the two data sets .an iterative numerical scheme to optimize could then help to improve the solution by minimizing the effects of nullspace contamination .the optimal combination of data by addition of kernels and rhss presented above was applied to the double well system with results for the gradient and pes shown in fig .[ fig : combi_reconstruction ] .information was successively added to the kernel by combining the data sets to form lt , ltr , and ltrh with the notation based on the initial conditions shown in fig .[ fig : malonaldehyd ] . in each caseall configurations are weighted equally .the optimal values employed and defect measures are given in table [ tab : scan ] . while the individual inverse problem solutions based on l , t , r , and h reproduce the potential in their respective neighborhoods quite well , they fail to give adequate results for the other portions of the potential . on the other hand ,the reconstruction of large parts of the pes is successful if we optimally combine the data of the three experiments ltr .however , contrary to intuition , we observe that the solution is less satisfactory from combining all the data ltrh ; some additional oscillations appear along with a dip in the vicinity of the initial wavepacket for h. apparently the nullspace of the expanded domain can not be fully managed by regularization alone ; no attempt was made to simultaneously introduce and regularization .several other schemes for combining the raw density data can be envisioned , apart from the approach in section [ sec : optimal_rec ] .one candidate would be the direct combination of data from different experiments . 
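before turning to that alternative , the following sketch summarizes the kernel - addition scheme of section [ sec : optimal_rec ] in code ; the helper names and the regularization matrix r are ours , and the kernels are assumed to be assembled as in the earlier sketch .

```python
import numpy as np

# Combining two experiments at the level of the inverse problem, as argued
# above: add the kernels and right-hand sides and solve once, rather than
# averaging separately regularized solutions (whose individual nullspace
# contamination survives).  A1, b1 and A2, b2 are the discretized kernels and
# right-hand sides of the two experiments (e.g. assembled as in the earlier
# sketch); R is any regularization matrix and alpha an illustrative weight.

def solve_separately(A1, b1, A2, b2, R, alpha=1e-2):
    u1 = np.linalg.solve(A1 + alpha * R, b1)
    u2 = np.linalg.solve(A2 + alpha * R, b2)
    return 0.5 * (u1 + u2)              # nullspace pieces of each kernel survive

def solve_combined(A1, b1, A2, b2, R, alpha=1e-2):
    # reduced nullspace: for positive semidefinite kernels,
    # null(A1 + A2) = null(A1) intersected with null(A2)
    return np.linalg.solve(A1 + A2 + alpha * R, b1 + b2)
```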
as an illustration we will treat the case of two different s with and being a positive constant .this combination is physically acceptable , as ehrenfest s theorem in eq.([eq : ehrenfest ] ) is linear in the probability density .insertion of this sum into the functional and variation with respect to will yield a formulation analogous to the one describing inversion under the influence of noise in the data ( see section [ sec : noise ] ) in eq.([eq : noise_inverse ] ) upon comparison of eqs.([eq : noise_ansatz ] ) and ( [ eq : rho_ansatz ] ) .the terms proportional to and will exactly correspond to what was found earlier in eq.([eq : combination ] ) .however , the terms proportional to represent a cross correlation between and .these cross terms can be significant , and they act to introduce an element of undesirable structure , often oscillatory , in the equations determining . on physical groundsit is also artificial to directly correlate the independent experimental data and when seeking .hence , the scheme of adding together the bare -data is expected to produce unreliable results . to support this argument we present a test on such a -combination consisting of the sum of all four densities of the initial configurations l , t , r , and h the corresponding inverted gradient and pes respectivelyare shown in figs .[ fig : combi_reconstruction]a and [ fig : combi_reconstruction]b .the solution is rather poor and far worse than the ltrh combination using the same data .this result should not be taken to construe that other combinations of data might not give satisfactory results .however , the combination of and in section [ sec : optimal_rec ] is quite natural and produces excellent inversion results .any real -data will always be contaminated by some degree of noise . in an additive modelthis noise contaminated data can be represented as where is a ordering parameter and the noise is described by the spatio - temporal function .we assume that is a randomly varying function with vanishing average contribution and free from systematic error such that for any function of bounded norm over time that is not correlated with . inserting the ansatz in eq.([eq : noise_ansatz ] ) into the functional in eq.([eq : j0_def ] ) and taking the first variation ,the equation determining is obtained the terms proportional to recover the original unperturbed system in eqs.([eq : orig_inverseproblem]-[eq : a_def ] ) . assuming the data noise level to be small , the terms in on both sides of eq.([eq : noise_inverse ] ) can be neglected .we first turn to the kernel side of eq.([eq : noise_inverse ] ) and denote all terms in as the error kernel each term involves the computation of two - point spatial correlations between functions .however , the functions and are uncorrelated , and the temporal integral of their product is expected to result in only small random contributions to the kernel over and , especially for longer time integration as follows from eq .( [ eq : noise_condition ] ) . 
following similar logic , the terms proportional to on the rhs of eq.([eq : noise_inverse ] ) should be negligible , especially for long time integration .neglecting the terms finally leaves only the first term proportional to on the rhs .hence , the functional exhibits some inherent capability to deal with slightly noisy data .the time integration process averages out these noise effects so that they should have a decreasing impact on the inverse solution .longer periods of temporal data should make their behavior better .these results are also in accordance with the stability analysis presented in .resorting to the matrix version of the inverse problem ( cf ., eq.([eq : orig_reg_inverseproblem_matrix ] ) ) the authors proved ( eq.(25 ) in ref . ) that the relative error in the solution after regularization is bounded by the relative errors in the data and . moreover it was found ( eqs.(41 ) and ( 49 ) ) that small perturbations in the noise will result in small proportional perturbations in and , which is excellent behavior for any application with finite time integration .these results can now be extended to the long time integration limit where the terms in eq.([eq : a - error ] ) should further diminish in significance for .similar arguments apply to the rhs .equation ( [ eq : a - error ] ) also demonstrates why the direct combination of bare data discussed in section [ sec : rho_combi ] performs less satisfactory than the optimal combination scheme in section [ sec : optimal_rec ] .in contrast to the slightly perturbed system cross term above , the analogous term arising from directly combining the data will not vanish .this will introduce an undesirable error contribution to the inverse problem .in contrast , the optimal combination scheme for different sets of data in section [ sec : optimal_rec ] should profit from the inherent stability of the inversion procedure to deal with slightly noisy systems since this technique involves a sequence of separate time integrations .this paper presented new results that improve and extend a recently suggested procedure to extract potential energy surfaces ( pes ) from the emerging experimentally observable probability density data .the results of this paper should also be applicable to the more general case of extracting the dipole function from the additional observation of the applied laser electric field . 
an easy to implement regularization schemewas introduced , which increases the accuracy of the computed pes without loss of numerical stability .furthermore an optimal reconstruction method was presented which combines data from different measurements .this scheme was argued to be optimal in the sense of reducing the nullspace of the inverse problem and hence increasing the domain of the extracted pes .evidence was presented that this scheme is stable under the influence of noise , but further investigations will be necessary to fully confirm these results .we hope that the developments in this paper stimulate the generation of appropriate probability density data for inversion implementation .the authors would like to acknowledge karsten sundermann who shared interest in this subject from its inception .rdvr thanks `` fonds der chemischen industrie '' and hr would like to acknowledge the department of energy .lk acknowledges dfg s financial support through the project `` spp femtosekundenspektroskopie '' .he also would like to thank angelika hofmann for the propagation code and jens schneider as well as berthold - georg englert for discussions .this section presents the lemma and its proof underlying the optimal combination of data from different measurements .[ lemma ] given two hermitian , positive semidefinite operators acting on the hilbert space and their sum with coefficients , it then holds that for finite dimensional ranges this implies that in other words : adding two positive semidefinite , hermitian operators will reduce the nullspace of the combined operator to that of the intersection of both nullspaces . the generalization to a finite sum of operators with constant is evident .neither positivity nor hermiticity can be omitted . without the former criterion ,a counter example is , with . as an example , without the latter criterion , the two operators with ranks 3 , 2 , and 1 lead to the contradiction .proof : as both operators and are hermitian , they have diagonal representations with respect to their eigenvectors and . without loss of generalitywe choose the normalized eigenvectors as the basis of .clearly , can be decomposed in the following two ways into orthogonal subspaces and also in a similar fashion we can partition the spectrum of , and hence s basis , into all eigenvectors that form a basis of and those that generate . since is a complete linear space and are linear operators , it is sufficient to consider the basis states only . for any such state find where we define the mean .this quantity is always positive ( or zero ) by virtue of being positive semidefinite . 
in accordance with the decomposition in eqs.([eq : decompose_space_a ] ) and ( [ eq : decompose_space_b ] ) four different cases are to be distinguished : therefore only ( basis ) vectors that lie in _ both _ nullspaces will belong to the nullspace of , which proves the first part of the lemma . the second part follows from the linear algebraic dimension relation where `` + '' on the lefthand side denotes all linear combinations of the vectors in both ranges . now , any vector that lies either in or in will , with an argument similar to eq.([eq : fall_differentiation ] ) , always be in . we are thus allowed to replace which completes our proof . we note that the lemma s first part could have been proved without using a basis . the decomposition eq.([eq : decompose_space_a ] ) and the differentiation of eq.([eq : fall_differentiation ] ) into or for any suffices . however , the second part of the lemma requires the basis vectors . for , a related issue pointed out in is the stability of in view of the need to take the second time derivative of the probability density . an approach based on partial integration over time has been proposed calling for a first order time derivative only . however , a check of the inversion performance based on partial integration produced unsatisfactory results . it will always be extremely difficult to reliably compute the terms at only a few snapshots in time . one inevitably needs to work with one - sided derivatives at and , which significantly diminishes the accuracy .

\begin{tabular}{lcccc}
configuration index & initial position & average energy & \multicolumn{2}{c}{classical turning points} \\
 & & & left & right \\
h & 1.75 & 0.081 & -2.1563 & 2.1534 \\
r & 0.9977 & 0.055 & -2.0013 & 1.9978 \\
t & 0.0052 & 0.061 & -2.0403 & 2.0370 \\
l & -1.002 & 0.054 & -1.9996 & 1.9961 \\
\end{tabular}

the configuration indices h , r , t , and l corresponding to the locations of wavepacket initial positions are shown in fig . [ fig : malonaldehyd ] . all wavepackets start with equal width and are initially at rest centered at the respective starting position . the average energy of each packet as well as the corresponding turning points of an equivalent classical particle of the same energy are given . [ tab : energy ]

\begin{tabular}{lccccc}
configuration & regularization parameter & \multicolumn{2}{c}{inversion domain} & \multicolumn{2}{c}{defect measures} \\
h & 3.3 & -4.0 & 4.0 & 384.58 & 0.03 \\
h & 1.0 & -2.0 & 2.0 & 11.52 & 23.46 \\
r & 0.033 & -1.5 & 1.5 & 7.16 & 1.06 \\
t & 0.007 & -1.5 & 1.5 & 9.02 & 0.05 \\
l & 0.033 & -1.5 & 1.5 & 6.53 & 1.07 \\
 & 100.0 & -1.5 & 1.5 & 9.53 & 111.63 \\
ltrh & 0.333 & -1.5 & 1.5 & 3.83 & 12.42 \\
ltr & 0.01 & -1.5 & 1.5 & 2.78 & 0.70 \\
lt & 0.01 & -1.5 & 1.5 & 3.10 & 0.49 \\
\end{tabular}

in this numerical case study the optimal regularization parameter value was identified by scanning its effect on the solution defect . the inversion domains are . the system defect is . the first five rows apply to the individual pes reconstructions shown in fig . [ fig : individual_reconstruction ] , and the last four rows refer to measurement combinations shown in fig . [ fig : combi_reconstruction ] . see the text for details . [ tab : scan ]

( a ) configuration h and ( b ) configuration t. the numerical values for the matrix entries range from on the diagonal to on the boundaries . the contour levels correspond to : 1 ( outer line ) , 31 , 61 , , 211 . parameter scans performed with configuration h.
panels ( a ) and ( b ) display the solution defect with respect to two different inversion ranges : and , respectively . panel ( c ) shows the system defect for the entire domain .

fig . [ fig : individual_reconstruction ] . extractions of the potential under the conditions given in table [ tab : scan ] . ( a ) the time evolution of the position average accompanied by the left- and righthand variance ( i.e. , shaded regions bounded by eqs.([eq : left_variance ] ) and ( [ eq : right_variance ] ) ) to indicate the regions predominantly covered by the probability densities . the grey domains on the extreme left and right mark classically forbidden areas ( cf . table [ tab : energy ] ) . ( b ) the reconstructed and the corresponding potential in ( c ) with a suitably chosen additive constant . for comparison the exact solutions are included as dashed lines . the individual curves have been offset for graphical reasons and the detailed presentation of is restricted to since the boundary regions will not be extracted correctly due to lack of data sampling there .

-combined data . see the text and table [ tab : scan ] for details . the curves for the derivative in ( a ) and the pes in ( b ) have been offset for graphical clarity and exact solutions ( dashed lines ) added for comparison . for optimal combinations of the data the original and reconstructed pes are almost indistinguishable .
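as a quick numerical illustration of the lemma proved in the appendix , the following sketch checks the corresponding rank identity on random low - rank positive semidefinite matrices ( sizes and ranks are arbitrary ) .

```python
import numpy as np

# Numerical illustration of the appendix lemma: for Hermitian positive
# semidefinite A and B, null(A + B) = null(A) ∩ null(B), or equivalently
# rank(A + B) = dim( range(A) + range(B) ).  Random low-rank PSD matrices.

rng = np.random.default_rng(0)
n = 6
GA = rng.normal(size=(n, 2)); A = GA @ GA.T          # rank-2 PSD
GB = rng.normal(size=(n, 3)); B = GB @ GB.T          # rank-3 PSD

rank = lambda M: np.linalg.matrix_rank(M, tol=1e-10)
dim_range_sum = rank(np.hstack([GA, GB]))            # dim( range(A) + range(B) )

print(rank(A), rank(B), rank(A + B), dim_range_sum)  # generically: 2 3 5 5
```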
A novel algorithm was recently presented to utilize emerging time-dependent probability density data to extract molecular potential energy surfaces. This paper builds on the previous work and seeks to enhance the capabilities of the extraction algorithm: an improved method of removing the generally ill-posed nature of the inverse problem is introduced via an extended Tikhonov regularization, and methods for choosing the optimal regularization parameters are discussed. Several ways to incorporate multiple data sets are investigated, including the means to optimally combine data from many experiments exploring different portions of the potential. In addition, results are presented on the stability of the inversion procedure, including the optimal combination scheme, under the influence of data noise. The method is applied to the simulated inversion of a double-well system to illustrate the various points.
monte carlo simulations , and in particular markov chain based methods , have matured over the last decades into a highly versatile and powerful toolbox for studies of systems in statistical and condensed - matter physics , ranging from classical spin models over soft - matter problems to quantum systems .their competitiveness with other approaches such as , e.g. , field - theoretic expansions for the study of critical phenomena , is largely based on the development and refinement of a number of advanced simulation techniques such as cluster algorithms and generalized - ensemble methods .equally important to the generation of simulation data , however , is their correct and optimal analysis . in this field , a number of important advances over the techniques used in the early days have been achieved as well .these include , e.g. , the finite - size scaling ( fss ) approach , turning the limitation of simulational methods to finite system sizes into a systematic tool for accessing the thermodynamic limit , reweighting techniques , lifting the limitation of numerical techniques to the study of single points in parameter space to allow for continuous functions of estimates to be studied , as well as advanced statistical tools such as the jackknife and other resampling schemes of data analysis . of these techniques , the statistical data analysis appears to have received the least attention .hence , while fss analyses , even including correction terms , are quite standard in computer simulation studies , a proper analysis and reduction of statistical errors and bias appears to be much less common . here, resampling methods turn out to be very valuable .although such techniques offer a number of benefits over more traditional approaches of error estimation , their adoption by practitioners in the field of computer simulations has not yet been as universal as desirable .it is our understanding that this is , in part , due to a certain lack in broadly accessible presentations of the basic ideas which are , in fact , very simple and easy to implement in computer codes , as is demonstrated below .more specifically , data generated by a monte carlo ( mc ) simulation are subject to two types of correlation phenomena , namely ( a ) _ autocorrelations _ or temporal correlations for the case of markov chain mc ( mcmc ) simulations , which are directly related to the markovian nature of the underlying stochastic process and lead to an effective reduction of the number of independently sampled events and ( b ) _ cross correlations _ between different estimates extracted from the same set of original time series coming about by the origin of estimates in the same statistical data pool .the former can be most conveniently taken into account by a determination of the relevant autocorrelation times and a blocking or binning transformation resulting in an effectively uncorrelated auxiliary time series .such analyses are by now standard at least in seriously conducted simulational studies . on the contrary, the effects of cross correlations have been mostly neglected to date ( see , however , refs . ) , but are only systematically being discussed following our recent suggestion . 
in this article , we show how such cross correlations lead to systematically wrong estimates of statistical errors of averaged or otherwise combined quantities when a nave analysis is employed , and how a statistically correct analysis can be easily achieved within the framework of the jackknife method .furthermore , one can even take benefit from the presence of such correlation effects for significantly reducing the variance of estimates without substantial additional effort .we demonstrate the practical relevance of these considerations for a finite - size scaling study of the ising model in two and three dimensions . the rest of this article is organized as follows . in sec .ii we give a general recipe for a failsafe way of monte carlo data analysis , taking into account the effects of autocorrelations and cross correlations mentioned above . after discussing the complications for the more conventional analysis schemes ( but not the jackknife method ) introduced by histogram reweighting and generalized - ensemble simulation techniques in sec .iii , we outline the role of cross correlations in the process of averaging over a set of mc estimates in sec .iv and discuss the choice of an optimal averaging procedure . in sec .v , these ideas are applied to a simulational study of the critical points of the two- and three - dimensional ising models .finally , sec .vi contains our conclusions .compared to the task of estimating the uncertainty in the result of a lab experiment by simply repeating it several times , there are a number of complications in correctly determining and possibly even reducing statistical fluctuations in parameter estimates extracted from mcmc simulations .firstly , due to the memory of the markovian process , subsequent measurements in the time series are correlated , such that the fluctuations generically appear smaller than they are .this issue can be resolved by a _ blocking _ of the original time - series data .secondly , one often needs to know the precision of parameter estimates which are complicated ( and sometimes non - parametric ) functions of the measured observables .such problems are readily solved using resampling techniques such as the _jackknife_. consider a general monte carlo simulation with the possible values of a given observable appearing according to a probability distribution .this form , of course , implies that the system is in thermal equilibrium , i.e. , that the underlying stochastic process is stationary .the probability density could be identical to the boltzmann distribution of equilibrium thermodynamics as for the importance - sampling technique , but different situations are conceivable as well , see the discussion in sec .[ sec : histo ] below .if we assume ergodicity of the chain , the average for a time series of measurements is an unbiased estimator of the mean in contrast to , the estimator is a random number , which only coincides with in the limit . under these circumstances ,simulational results are only meaningful if in addition to the average we can also present an estimate of its variance .note that , although the distribution of individual measurements might be arbitrary , by virtue of the central limit theorem the distribution of the averages must become gaussian for .hence , the variance is the ( only ) relevant parameter describing the fluctuations of .if subsequent measurements , , are uncorrelated , we have which can be estimated without bias from i.e. 
the sample variance of the individual measurements divided by their number. This is what we do when estimating the statistical fluctuations from a series of independent lab experiments. Markov chain simulations entail the presence of temporal correlations, however, such that the connected autocorrelation function is non-zero in general (see, e.g., the standard references). Stationarity of the chain implies that the autocorrelation function depends only on the time lag. Then, the variance of the average picks up the summed autocorrelation contributions of Eq. ([eq:sigma_autocorr]). Monte Carlo correlations decline exponentially to leading order, which defines the _exponential autocorrelation time_. Due to this exponential decay, for sufficiently long time series the deviations of the factors of Eq. ([eq:sigma_autocorr]) from unity can be neglected, and, defining the _integrated autocorrelation time_ as in Eq. ([eq:tau_int]), one arrives at Eq. ([eq:tau_reduction]). In view of the reduction of variance of the average relative to a single measurement in Eq. ([eq:sigma_noautocorr]), Eq. ([eq:tau_reduction]) states that the _effective_ number of independent measurements in the presence of autocorrelations is reduced by a factor of twice the integrated autocorrelation time. The exponential and integrated autocorrelation times are not identical, but one can show that the integrated autocorrelation time is a lower bound of the exponential one.

[Figure [fig:block]: (a) in the blocking analysis, adjacent entries of the time series are combined into block averages; (b) in the jackknifing analysis, the blocks consist of the whole series _apart_ from the entries of a single block.]

As long as the autocorrelation time is finite, the distribution of averages still becomes Gaussian asymptotically, such that the variance remains the relevant quantity describing fluctuations. To practically determine it from Eq. ([eq:sigma_autocorr]), an estimate for the autocorrelation function is required. This can be found from the definition ([eq:autocorrelation_function]) by replacing expectation values with time averages. It turns out, however, that upon summing over the contributions of the autocorrelation function for different time lags in Eq. ([eq:sigma_autocorr]), divergent fluctuations are incurred, enforcing the introduction of a cut-off time. Several approximation schemes have been developed using such estimators, but they turn out to have severe drawbacks in being computationally expensive, hard to automatize, and in that estimating their statistical accuracy is tedious. A more efficient and very intuitive technique for dealing with autocorrelations results from a blocking transformation in the spirit of the renormalization group (in fact, this idea was already formulated by Wilson). Much like block spins are defined there, one combines adjacent entries of the time series and defines block averages, cf. Fig. [fig:block](a). This procedure results in a shorter effective time series. (We assume for simplicity that the series length is an integer multiple of the block length.) Obviously, the average and its variance are invariant under this transformation. Under the exponential decay ([eq:exponential_autocorr]) of autocorrelations of the original series it is clear (and can be shown explicitly), however, that subsequent block averages are less correlated than the original measurements. Furthermore, the remaining correlations must shrink as the block length is increased, such that asymptotically, for large enough blocks (while still keeping a sufficient number of blocks), an uncorrelated time series is produced. Consequently, the naïve estimator ([eq:variance_of_mean_notautocorr]) can be legally used in this limit to determine the variance of the average.
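A minimal sketch of the blocking transformation just described: the series is lumped into non-overlapping blocks, the naïve variance-of-the-mean estimator is evaluated on the block averages, and the plateau value is converted into an integrated autocorrelation time via the ratio to the naïve estimate (this ratio estimate is discussed next in the text). Variable names and the toy AR(1) series are illustrative only.

```python
import numpy as np

def block_averages(x, block_len):
    """Combine adjacent entries into non-overlapping block averages."""
    n_blocks = len(x) // block_len
    return x[:n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)

def variance_of_mean(x):
    """Naive estimator sigma^2(mean) = s^2 / N, valid for uncorrelated data."""
    return x.var(ddof=1) / len(x)

def blocking_analysis(x, block_lengths):
    """Blocked variance of the mean and implied tau_int for each block length."""
    naive = variance_of_mean(x)
    results = []
    for b in block_lengths:
        blocked = variance_of_mean(block_averages(x, b))
        tau_int = 0.5 * blocked / naive   # 2*tau_int = ratio of true to naive variance
        results.append((b, blocked, tau_int))
    return results

# Toy correlated series (AR(1) process) standing in for Monte Carlo measurements.
rng = np.random.default_rng(1)
rho, n = 0.9, 2**17
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

for b, var_mean, tau in blocking_analysis(x, [1, 4, 16, 64, 256, 1024]):
    print(f"block length {b:5d}: sigma^2(mean) = {var_mean:.3e}, tau_int ~ {tau:6.2f}")
```

For the AR(1) toy series the estimated tau_int rises with the block length and levels off at a plateau once the blocks are much longer than the autocorrelation time, mirroring the behaviour sketched in Fig. [fig:bins].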
For the finite time series encountered in practice, a finite block length well above the autocorrelation time, together with a sufficient number of blocks, must be used. This is illustrated in Figure [fig:bins], showing the estimate ([eq:variance_of_mean_notautocorr]) for a blocked time series with a given autocorrelation time as a function of the block length. It approaches the true variance from below, eventually reaching a plateau value where any remaining pre-asymptotic deviations become negligible compared to statistical fluctuations. If the available time series is long enough (as compared to the autocorrelation time), it is often sufficient to simply lump the data into as few as some hundred blocks and restrict the subsequent data analysis to those blocks. As a rule of thumb, in practical applications it turns out that a time series whose length is a large multiple of the autocorrelation time is required for a reliable determination of statistical errors as well as autocorrelation times. From Eqs. ([eq:variance_of_mean_notautocorr]) and ([eq:tau_int]) it follows that the integrated autocorrelation time can be estimated within this scheme from the ratio of the blocked to the unblocked variance estimate, where the block length needs to be chosen in the plateau regime of Fig. [fig:bins].

[Figure [fig:bins]: estimate of the variance of the average according to Eq. ([eq:variance_of_mean_notautocorr]) for a re-blocked time series as a function of the block length.]

Apart from providing an estimate of the variance for simple quantities, the blocking procedure has the advantage of resulting in an effectively uncorrelated auxiliary time series which can then be fed into further statistical machinery, much of which is restricted to the case of independent variables. Resampling schemes such as the jackknife provide error and bias estimates also for non-linear functions of observables without entailing truncation error or requiring assumptions about the underlying probability distributions. While the variance of a simple average can be directly computed from the blocked time series via the estimator ([eq:variance_of_mean_notautocorr]), this approach fails for non-linear functions of expectation values such as, e.g., susceptibilities or cumulants. A standard approach for such cases is the use of error-propagation formulas based on Taylor expansions,
$$\sigma^2\big[f(\langle a\rangle,\langle b\rangle,\ldots)\big] \approx \Big(\frac{\partial f}{\partial \langle a\rangle}\Big)^{2}\sigma^2(a) + \Big(\frac{\partial f}{\partial \langle b\rangle}\Big)^{2}\sigma^2(b) + \cdots \qquad ([eq:error\_propagation])$$
Apart from the truncation error resulting from the restriction to first order in the expansion, this entails a number of further problems: if the averages of a, b, etc. are correlated due to their origin in the same simulation, cross-correlation terms need to be included as well. Even worse, for the case of non-parametric parameter estimates, such as determining the maximum of some quantity by reweighting (see below) or extracting a critical exponent with a fitting procedure, error propagation cannot be easily used at all. Such problems are avoided by methods based on repeated sampling from the original data pool, using the properties of these meta samples to estimate variances, reduce bias, etc. These are modern techniques of mathematical statistics whose application only became feasible with the general availability of computers. Most straightforwardly applicable is the jackknife procedure, where the meta samples consist of all of the original time series apart from one data block, cf. Fig. [fig:block](b). Assume that a set of simulations resulted in a collection of time series for different observables, system sizes, temperatures, etc. Applying the blocking procedure described above, it is straightforward to divide the series into effectively uncorrelated blocks. It is often convenient to use the same number of blocks for all series (e.g., 100), which can easily be arranged for by the blocking transformation as long as the number of effectively independent measurements is larger than some minimum value for each simulation and observable. If one then denotes the blocks over all series according to Eq. ([eq:block_definition]), where for a constant number of blocks the block lengths might vary between the different series under consideration, one defines the corresponding _jackknife block_ as the complement of a single block, cf. Fig. [fig:block]. Considering now an estimator for some parameter depending on (some or all of) the different series, we define the corresponding estimates $\hat{\theta}_{J(s)}$ restricted to jackknife block $s$ ([eq:jackknife_blocks]). The variation between these estimates taken from the same original data can be used to infer the sample variance. If one denotes the average of the jackknife block estimators ([eq:jackknife_blocks]) as $\hat{\theta}_{J(\cdot)}$, an estimate for the sample variance of the estimator is given by
$$\hat{\sigma}^2(\hat{\theta}) = \frac{n-1}{n}\sum_{s=1}^{n}\big[\hat{\theta}_{J(s)}-\hat{\theta}_{J(\cdot)}\big]^2, \qquad ([eq:jackknife\_variance])$$
where $n$ denotes the number of jackknife blocks. This is very similar to the simple estimate ([eq:variance_of_mean_notautocorr]) for the variance of the average, but it comes with a different prefactor, which serves a twofold purpose: it reweights the result from the effective jackknife series length to the original length and takes care of the fact that all of the jackknife block estimates are strongly correlated due to them being based on (almost) the same data. The general Eq. ([eq:jackknife_variance]) forms a conservative and at most weakly biased estimate of the true variance, which lacks the truncation error of schemes based on Eq. ([eq:error_propagation]) and is applicable to non-parametric parameter estimates. In a slight generalization of Eq. ([eq:jackknife_variance]) it is possible to also estimate covariances.
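A compact sketch of the jackknife machinery just described, including the covariance generalization discussed next: each jackknife block estimate is computed on the series with one block left out, and the spread of these estimates, multiplied by the (n−1)/n prefactor, gives the variance. The estimator used here (a ratio of moments) is only a stand-in for whatever non-linear parameter estimate is of interest, and all names are illustrative.

```python
import numpy as np

def jackknife_blocks(series, n_blocks):
    """Split a 1D series into n_blocks equal blocks and return the jackknife
    samples, i.e. the series with one block left out at a time."""
    m = len(series) // n_blocks
    x = series[:m * n_blocks].reshape(n_blocks, m)
    return [np.delete(x, s, axis=0).ravel() for s in range(n_blocks)]

def jackknife(estimator, series, n_blocks=100):
    """Jackknife average, variance and block estimates for a scalar estimator."""
    theta_j = np.array([estimator(sample)
                        for sample in jackknife_blocks(series, n_blocks)])
    theta_mean = theta_j.mean()
    # prefactor (n-1)/n reweights to the full series length and accounts for
    # the strong correlation of the jackknife block estimates
    var = (n_blocks - 1) / n_blocks * np.sum((theta_j - theta_mean) ** 2)
    return theta_mean, var, theta_j

def jackknife_covariance(theta_blocks_i, theta_blocks_j):
    """Covariance of two estimates computed from the same jackknife blocks."""
    n = len(theta_blocks_i)
    di = theta_blocks_i - theta_blocks_i.mean()
    dj = theta_blocks_j - theta_blocks_j.mean()
    return (n - 1) / n * np.sum(di * dj)

# Example: a non-linear estimator (ratio of moments) on a toy series.
rng = np.random.default_rng(2)
m = rng.normal(loc=1.0, scale=0.3, size=100_000)
ratio = lambda x: np.mean(x**4) / np.mean(x**2) ** 2
est, var, blocks = jackknife(ratio, m)
print(f"ratio = {est:.4f} +/- {np.sqrt(var):.4f}")
```

Because the same jackknife blocks can be reused for every quantity extracted from the same data pool, the covariances needed later for optimal averaging come essentially for free.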
For a number of estimators $\hat{\theta}_i$, $i = 1,\ldots,k$, a robust jackknife estimator of the covariance matrix is given by
$$\Gamma_{ij}(\hat{\theta}) = \frac{n-1}{n}\sum_{s=1}^{n}\big[\hat{\theta}_{i,J(s)}-\hat{\theta}_{i,J(\cdot)}\big]\big[\hat{\theta}_{j,J(s)}-\hat{\theta}_{j,J(\cdot)}\big]. \qquad ([eq:jackknife\_covariance])$$
In a similar way the bias of estimators can be reduced, i.e., deviations between the mean of an observable and the expectation value of some estimator that disappear with increasing sample length. For a detailed discussion we refer the reader to the references. A general procedure for the analysis of simulation data based on blocking and jackknife techniques hence has the following form:

1. Decide on the number of jackknife blocks to be used. For most purposes, of the order of a hundred blocks are sufficient.
2. For each original time series recorded in a collection of simulations, examine the block averages ([eq:block_averages]) as a function of the block length. If the result for the chosen number of blocks is in the plateau regime of Fig. [fig:bins], everything is fine; otherwise, one needs to record a longer time series (and possibly take measurements less frequently to keep the amount of data manageable).
3. For each parameter to be estimated, compute the jackknife block estimates ([eq:jackknife_blocks]) as well as their average ([eq:jackknife_average]) and combine them to calculate the variance ([eq:jackknife_variance]). For a number of different parameter estimates, the jackknife block estimates can also be used to calculate the covariance ([eq:jackknife_covariance]).

An increasing number of successful Monte Carlo techniques rely on reweighting and the use of histograms. This includes the (multi-)histogram method as well as the plethora of generalized-ensemble techniques, ranging from multicanonical simulations to Wang-Landau sampling. Such methods are based on the fact that samples taken from a known probability distribution can always be translated into samples from another distribution over the same state space. Assume, for simplicity, that states are labeled as appropriate for a spin system. If a sequence of states was sampled from a stationary simulation with a known probability density, an estimator for the expectation value of an observable relative to the _equilibrium_ distribution is given by Eq. ([eq:general_estimate]). For a finite simulation this works as long as the sampled and the equilibrium distributions have sufficient _overlap_, such that the sampled configurations can be representative of the equilibrium average at hand. For simple sampling, configurations are generated with uniform probability, and hence one must weight the resulting time series with the Boltzmann factor, which involves the energy of the configuration and the partition function at the inverse temperature considered. For importance sampling, on the other hand, the sampling distribution coincides with the Boltzmann distribution, such that averages of time series are direct estimates of thermal expectation values. If samples from an importance-sampling simulation at one inverse temperature should be used to estimate parameters at another, Eq. ([eq:general_estimate]) yields the familiar (temperature) reweighting relation ([eq:temperature_reweighting]). Completely analogous equations can be written down, of course, for reweighting in parameters other than temperature. Similarly, canonical averages at a given inverse temperature are recovered from multicanonical simulations by using Eq. ([eq:general_estimate]) with the multicanonical sampling weights. Reliable error estimation (as well as bias reduction, covariance estimates, etc.
) for reweighted quantities is rather tedious with traditional statistical techniques such as error propagation .resampling methods , on the other hand , allow for a very straightforward and reliable way of tackling such problems .for the jackknife approach , for instance , one computes jackknife block estimates of the type ( [ eq : general_estimate ] ) by simply restricting the set of time series to the jackknife block . with the jackknife average ( [ eq : jackknife_average ] ) , e.g. ,the variance estimate ( [ eq : jackknife_variance ] ) with can be straightforwardly computed .similar considerations apply to covariance estimates or bias reduced estimators .extremal values of thermal averages can be determined to high precision from the continuous family of estimates ( [ eq : temperature_reweighting ] ) , where error estimates again follow straightforwardly from the jackknife prescription .temporal correlations resulting from the markovian nature of the sampling process have been discussed in sec .[ sec : autocorr ] above , and we assume that they have been effectively eliminated by an appropriate binning procedure . extracting a number of different parameter estimates , , , from the same number of original simulations it is clear , however , that also significant _ cross correlations _ between estimates and can occur .these have profound consequences for estimating statistical error and reducing it by making the best use of the available data .if a given parameter estimate depends on several observables of the underlying time series that exhibit cross correlations , this fact is _ automatically _ taken into account correctly by the jackknife error estimate ( [ eq : jackknife_variance ] ) .this is in contrast to error analysis schemes based on error propagation formulae of the type ( [ eq : error_propagation ] ) , where any cross correlations must be taken into account explicitly .insofar the outlined approach of data analysis is failsafe .we want to go beyond that , however , in trying to _ optimize _ statistical precision of estimates from the available data . if we attempt to estimate a parameter , we ought to construct an estimator which is a function of the underlying time series with the property that ( at least for ) .obviously , there usually will be a large number of such functions and it is not possible , in general , to find the estimator of minimal variance .we therefore concentrate on the tractable case where is a linear combination of other estimators , , there are different possibilities to ensure the condition : 1 .all estimators have the same expectation , , and .one estimator is singled out , say , , and the rest has vanishing expectation , , arbitrary , .3 . more complicated situations . the first type describes the case that we have several different estimators for the same quantity and want to take an average of minimum variance .the second case is tailored for situations where existing symmetries allow to _ construct _ estimators with vanishing expectation whose cross correlations might reduce variance . 
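As an aside before optimizing such linear combinations: the temperature-reweighting relation of Eq. ([eq:temperature_reweighting]) above is straightforward to apply to stored time series. The sketch below is a generic illustration with made-up variable names and toy data; it shifts the exponent before exponentiating to avoid numerical overflow, a standard precaution not spelled out in the text.

```python
import numpy as np

def reweight(observable, energy, beta_sim, beta_target):
    """Canonical reweighting of <O> from beta_sim to beta_target using the
    standard temperature-reweighting weights exp[-(beta_target-beta_sim)*E]."""
    dbeta = beta_target - beta_sim
    logw = -dbeta * energy
    logw -= logw.max()              # shift exponent to avoid overflow
    w = np.exp(logw)
    return np.sum(w * observable) / np.sum(w)

# Toy data standing in for the energy and magnetization series of one run.
rng = np.random.default_rng(3)
energy = rng.normal(loc=-1.6, scale=0.05, size=200_000)
magnet = 0.5 - 0.2 * energy + rng.normal(scale=0.02, size=energy.size)

beta_sim = 0.44
for beta in (0.43, 0.44, 0.45):
    print(f"beta = {beta:.2f}: <m> ~ {reweight(magnet, energy, beta_sim, beta):.4f}")
```

Evaluating such a reweighted estimate on each jackknife block, exactly as described above, immediately yields its statistical error without any error-propagation formula.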
to optimize the analysis , the parameters in ( [ eq : linear_combination ] )should be chosen such as to minimize the variance \equiv \sum_{i , j = 1}^k \alpha_i\alpha_j\gamma_{ij}(\hat{\theta})\ ] ] for case one above , we introduce a lagrange multiplier to enforce the constraint , and the optimal choice of is readily obtained as {ij } } { \sum_{i , j=1}^k [ \gamma(\hat{\theta})^{-1}]_{ij}},\ ] ] leading to a minimum variance of {ij}}.\ ] ] very similarly , case two leads to the choice {ij}\gamma(\hat{\theta})_{j1},\ ] ] where denotes the submatrix of {ij} ] . here, the second error estimate in square brackets refers to the sensitivity of the result for to the uncertainty in indicated above , which turns out to be symmetric with respect to upwards and downwards deviations of here . as an alternative to the two - step process of first determining from the relations ( [ eq : cumulant_scaling ] ) and ( [ eq : logmagnderiv_scaling ] ) and only afterwards estimating from eq .( [ eq : shift_exponent ] ) , one might consider direct fits of the form ( [ eq : shift_exponent ] ) to the maxima data of the 8 observables listed above determining and in one go . here, again , fits on the range neglecting any corrections to the leading scaling behavior are found to be sufficient .the results for the plain , error - weighted and covariance - weighted averages for both parameters , and , are collected in table [ tab : three_parameter ] .consistent with the previous results , it is seen that neglecting correlations in error estimation leads to a sizable underestimation of errors and , on the other hand , using the optimal weighting scheme of eq .( [ eq : covariance_weighted ] ) statistical errors are significantly reduced , an effect which is also nicely illustrated by the very good fit of the resulting parameter estimates with the exact values .clt4t4t4t4 & & & & & + & & 0.1219 & 0.0027 & 1.0085 & 0.0117 + & & & 0.0021 & & 0.0213 + & & 0.1261 & 0.0016 & 1.0048 & 0.0082 + & & & 0.0013 & & 0.0136 + & & 0.1250 & 0.0010 & 1.0030 & 0.0096 + exact & & 0.1250 & & 1.0000 & + crrt4t4t2@% t2t2rt4t4t4t4t4 & + & & & & & & & & & & & & & + & 8 & 128 & 0.6358 & 0.0127 & 0.91 & 0.45 & 0.61 & 5 & 1.0000 & 0.9809 & 0.9490 & 0.4401 & 0.4507 + & 8 & 128 & 0.6340 & 0.0086 & 0.63 & 0.46 & 0.71 & 5 & 0.9809 & 1.0000 & 0.9910 & 0.4357 & 0.4630 + & 8 & 128 & 0.6326 & 0.0062 & 0.39 & 0.40 & 0.77 & 5 & 0.9490 & 0.9910 & 1.0000 & 0.4363 & 0.4639 + & 32 & 128 & 0.6313 & 0.0020 & 0.20 & 0.62 & 0.54 & 3 & 0.4401 & 0.4357 & 0.4363 & 1.0000 & 0.9267 + & 32 & 128 & 0.6330 & 0.0024 & 0.46 & 1.20 & 0.77 & 3 & 0.4507 & 0.4630 & 0.4639 & 0.9267 & 1.0000 + & & 0.6334 & 0.0038 & 0.52 & 0.85 & & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 + & & & 0.0067 & 0.52 & 0.49 & & & & & & + & & 0.6322 & 0.0015 & 0.33 & 1.35 & & 0.0106 & 0.0254 & 0.0503 & 0.5315 & 0.3823 + & & & 0.0024 & 0.33 & 0.84 & & & & & & + & & 0.6300 & 0.0017 & -0.01 & -0.05 & & 0.2485 & -1.5805 & 1.6625 & 0.7948 & -0.1253 + finally , we turn to the determination of the remaining critical exponents . as outlined above, we do this by combining different estimates using covariance analysis to improve the results for the scaling dimensions , thus ensuring that the scaling relations are fulfilled exactly . 
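The covariance-weighted average of Eq. ([eq:covariance_weighted]) can be computed directly once the covariance matrix has been estimated, e.g. by the jackknife. The following minimal sketch implements the first case above (all estimators share the same expectation value); the weights are proportional to the row sums of the inverse covariance matrix, and the minimal variance is the inverse of its total sum. The numbers are made up for illustration and merely mimic the magnitude of the entries in the tables.

```python
import numpy as np

def covariance_weighted_average(theta, gamma):
    """Minimum-variance average of estimators with a common expectation value.
    Weights are proportional to the row sums of the inverse covariance matrix."""
    ginv = np.linalg.inv(gamma)
    weights = ginv.sum(axis=1) / ginv.sum()
    average = weights @ theta
    variance = 1.0 / ginv.sum()     # minimal achievable variance
    return average, variance, weights

# Made-up example: three strongly correlated estimates of the same parameter.
theta = np.array([0.636, 0.634, 0.633])
sigma = np.array([0.013, 0.009, 0.006])
corr = np.array([[1.00, 0.98, 0.95],
                 [0.98, 1.00, 0.99],
                 [0.95, 0.99, 1.00]])
gamma = corr * np.outer(sigma, sigma)

avg, var, w = covariance_weighted_average(theta, gamma)
print("weights:", np.round(w, 3))
print(f"average = {avg:.4f} +/- {np.sqrt(var):.4f}")
```

Note that some of the optimal weights can be negative or exceed unity for strongly correlated estimates, a point taken up again in the appendix for the two-estimator case.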
from a glance at eq .( [ eq : scaling_dimensions ] ) one reads off that the magnetic scaling dimension can be determined from and .we therefore determine from the fss of the ( modulus of the ) magnetization at its inflection point and estimate from the fss of the susceptibility maxima , resulting in and , respectively .as the correlation analysis reveals , the two resulting estimates of are _ anti-_correlated to a considerable degree with correlation coefficient . as a consequence , conventional error analysis neglecting correlations _over-_estimates statistical fluctuations .still , choosing optimal weights according to eq .( [ eq : covariance_weighted ] ) is able to reduce variance , resulting in a combined estimate right on top of the exact result , cf . the data collected in table [ tab : scaling_dimension ] .the energetic scaling dimension , on the other hand , might be computed from as well as .we therefore use the five individual estimates of listed in table [ tab:2d_nu ] as well as the fss of the maximum of the specific heat to estimate .the latter fits are somewhat problematic due to the logarithmic singularity of the specific heat corresponding to , and it turns out that a fit of the form including a scaling correction is necessary to describe the data . combining all individual estimates in an optimal way , we arrive at , well in agreement with the exact result , cf . the right hand side of table [ tab : scaling_dimension ] .cluster - update simulations of the ferromagnetic ising model in three dimensions ( 3d ) were performed for simple - cubic lattices of edge lengths , , , , , , , and .all simulations were performed at the coupling reported in a high - precision study as estimate for the transition point , since it turned out that the maxima of the various quantities under consideration were all within the reweighting range of this chosen simulation point for the system sizes and lengths of time series at hand . for determining the correlation - length exponent again considered the scaling of the logarithmic magnetization derivatives for , and and the derivatices of the cumulants and .we find scaling corrections to be somewhat more pronounced than for the two - dimensional model for the system sizes studied here .for the logarithmic magnetization derivatives we therefore performed fits of the form ( [ eq : logmagnderiv_scaling ] ) including the correction term on the full range , where the resulting values of the effective correction exponent were ( ) , ( ) and ( ) , respectively .for the cumulants and , on the other hand , corrections were too small to be fitted reliably with our data , such that they were effectively taken into account by dropping the small lattice sizes instead , while using fits of the form ( [ eq : cumulant_scaling ] ) with fixed .the corresponding fit data are collected in table [ tab:3d_nu ] .the estimated standard deviations of the individual estimates are again found to be very heterogeneous , but the correlations between the different estimates are somewhat smaller than in two dimensions , in particular between the magnetization derivatives and the cumulants , cf .table [ tab:3d_nu ] . comparing to the case of fits without corrections, it is seen that this latter effect is partially due to the use of two different fit forms for the two types of quantities .( the fits for and also include a reduced range of lattice sizes which could lead to a decorrelation , but this effect is found to be much less important than the difference in the fit forms . 
)considering the averages of individual estimates , as a result of these smaller correlations the underestimation of statistical errors in the nave approach as well as the reduction of variance through the optimized estimator ( [ eq : covariance_weighted ] ) is somewhat less dramatic than for the two - dimensional model , but the qualitative behavior appears to be very much the same . as our final estimate we quote , very well in agreement with the reference value taken from a survey of recent literature estimates compiled in ref . . in a second step we determined the transition coupling from fits of the functional form ( [ eq : shift_exponent ] ) to the maxima of the quantities listed in table [ tab:2d_betac ] . as forthe fits , however , the inclusion of an effective correction term as indicated in eq .( [ eq : shift_exponent ] ) turned out to be necessary for a faithful description of the scaling data .the plain , error - weighted and covariance - weighted averages of the corresponding estimates are listed in the first two data columns of table [ tab : various_exponents_3d ] together with their standard deviations , the results being consistent with the reference value .we also tried non - linear three - parameter fits of the form ( [ eq : shift_exponent ] ) to the data , determining and simultaneously . for this case , the precision of the data is not high enough to reliably include corrections to scaling .still , the improved results are well consistent with the reference values of refs . , cf .the middle columns of table [ tab : various_exponents_3d ] .clt8t8t4t4t7t7t4t4t4t4 & & & & + & & & & & & & & & & & + & & 0.22165681 & 0.00000108 & 0.6020 & 0.0105 & 0.2216530 & 0.0000025 & 0.51364 & 0.00401 & 1.4137 & 0.0138 + & & & 0.00000170 & & 0.0150 & & 0.0000032 & & 0.00435 & & 0.0184 + & & 0.22165741 & 0.00000059 & 0.6247 & 0.0062 & 0.2216550 & 0.0000008 & 0.51489 & 0.00381 & 1.4180 & 0.0038 + & & & 0.00000114 & & 0.0077 & & 0.0000016 & & 0.00413 & & 0.0061 + & & 0.22165703 & 0.00000085 & 0.6381 & 0.0044 & 0.2216552 & 0.0000011 & 0.51516 & 0.00412 & 1.4121 & 0.0043 + reference & & 0.22165459 & 0.00000006 & 0.6301 & 0.0004 & 0.22165459 & 0.00000006 & 0.51817 & 0.00058 & 1.4130 & 0.0010 + finally , we also considered the scaling dimensions and .for the magnetic scaling dimension , we find that the determinations from and are only very weakly correlated , such that the error - weighted and covariance - weighted averages are very similar , see the right hand side of table [ tab : various_exponents_3d ] .larger correlations are present again between the different estimates of the energetic scaling dimension from the various estimates of via and the scaling of the specific heat via , leading to a considerable improvement in precision of the optimal average over the plain and error - weighting schemes .the results for both scaling dimensions are well compatible with the values and extracted from the reference values of ref . .time series data from markov chain monte carlo simulations are usually analyzed in a variety of ways to extract estimates for the parameters of interest such as , e.g. 
, critical exponents , transition temperatures , latent heats etc .as long as at least some of these estimates are based on the same simulation data , a certain degree of cross correlations between estimators is unavoidable .we have shown for the case of a finite - size scaling analysis of the ferromagnetic nearest - neighbor ising model on square and cubic lattices that more often than not , such correlations are very strong , with correlation coefficients well above 0.8 . while such correlations , although their existence is rather obvious , have been traditionally mostly neglected even in high - precision numerical simulation studies , it was shown here that their presence is of importance at different steps of the process of data analysis , and neglecting them leads to systematically wrong estimates of statistical fluctuations as well as non - optimal combination of single estimates into final averages . as far as the general statistical analysis of simulation data is concerned , it has been discussed that traditional prescriptions such as error propagation have their shortcomings , in particular as soon as non - parametric steps such as the determination of a maximum via reweighting or fitting procedures come into play .these problems are circumvented by resorting to the class of non - parametric resampling schemes , of which we have discussed the jackknife technique as a conceptually and practically very simple representative . using this technique ,we have outlined a very general framework of data analysis for mcmc simulations consisting of ( a ) a transformation of the original set of time series into an auxiliary set of `` binned '' series , where successive samples are approximately uncorrelated in time and ( b ) a general jackknifing framework , where the required steps of computing a parameter estimate possibly including reweighting or fitting procedures etc . 
are performed on the full underlying time series apart from a small window cut out from the data stream allowing for a reliable and robust estimate of variances and covariances as well as bias effects without any non - stochastic approximations .while this technique of data analysis is not new , we feel that it still has not found the widespread use it deserves and hope that the gentle and detailed introduction given above will contribute to a broader adoption of this approach .a particular example of where the presence of cross correlations comes into play occurs when taking averages of different estimates for a parameter from the same data base .neglecting correlations there leads to ( a ) systematically wrong , most often too small , estimates of statistical errors of the resulting averages and ( b ) a sub - optimal weighting of individual values in the average leading to larger - than - necessary variances .correct variances can be estimated straightforwardly from the jackknifing approach , while optimal weighting involves knowledge of the covariance matrix which is a natural byproduct of the jackknife technique as well .we have discussed these concepts in some detail for the case of a finite - size scaling analysis of the critical points of the 2d and 3d ising models .it is seen there that the plain and error - weighted averages most oftenly used in fact can have larger fluctuations than the most precise single estimates entering them , but this flaw is not being detected by the conventional analysis due to the generic underestimation of variances .on the contrary , by using the truly optimal weighting of individual estimates an often substantial reduction of statistical fluctuations as compared to the error - weighting scheme can be achieved .for some of the considered examples , a threefold reduction in standard deviation , corresponding to saving an about tenfold increase in computer time necessary to achieve the same result with the conventional analysis , can be achieved with essentially no computational overhead . in view of these results , heuristic rules such as ,e.g. , taking an error - weighted average using the smallest single standard deviation as an error estimate are clearly found to be inadequate .we therefore see only two statistically acceptable ways of dealing with the existence of several estimates for the same quantity : ( a ) select the single most precise estimate and discard the rest or ( b ) combine all estimates in a statistically optimal way taking cross correlations into account .needless to say , the latter approach is generally preferable in that it leads to more precise results at very low costs .we suggest to use the existence of scaling relations between the critical exponents for the case of a continuous phase transition to improve the precision of estimates by considering the scaling dimensions as the parameters of primary interest . performing the corresponding analysis taking cross correlations into account , results in a set of critical exponents with reduced statistical fluctuations that fulfill the scaling relations exactly .an application of this type of approach initially suggested in ref . for using mean - value relations such as callen identities or schwinger - dyson equations instead of scaling relations has been discussed in ref . .while the examples discussed were specific , it should be clear that the method itself is rather generic , and should apply to all data sets generated from mcmc simulations . 
in particular , it is easy to envisage applications in the theory of critical phenomena , reaching from classical statistical mechanics over soft matter physics to quantum phase transitions , or for studying first - order phase transitions .the range of applications is not restricted to mcmc simulations , however , but applies with little or no modifications to other random sampling problems , such as , e.g.stochastic ground - state computations or the sampling of polymer configurations with chain - growth methods .m.w . acknowledges support by the dfg through the emmy noether programme under contract no .we4425/1 - 1 as well as computer time provided by nic jlich under grant no .( [ eq : app_optimal_variance ] ) as a function of the correlation coefficient .[ fig : two_variable ] ] consider a general average of two random variables and , where . according to eq .( [ eq : variance_of_average ] ) , the variance of is where and are the variances of and , respectively , and denotes the correlation coefficient of and , .( [ eq : app_variance ] ) is a quadratic form in , which has a minimum as long as which is almost always fulfilled since : equality holds only for and , in which case _ any _ choice of yields the same variance . in all other cases ,the optimal weights are and the resulting variance of the average is a number of observations are immediate * for the uncorrelated case , one arrives back at the error - weighted average of eqs .( [ eq : error_weighted ] ) and ( [ eq : uncorrelated_variance ] ) . * in the correlated case , and for fixed variances and , the variance smoothly depends on the correlation coefficient .it has maxima at and , only one of which is in the range .notably , the relevant maximum is always at non - negative values of . * for , the variance _ vanishes identically _, apart from the singular case and . the generic form of as a function of is depicted in fig .[ fig : two_variable ] . in the presence of moderate correlations , therefore , _ anti - correlations _ are preferable over correlations in terms of reducing the variance of the average .note that the result ( [ eq : app_optimal_variance ] ) is different from that of eq .( 8) in ref . , since the definition of correlation coefficient used there is different from that in our situation of taking an average .instead of measuring the correlation between and , their definition refers to the correlation of and .the weights and of eq .( [ eq : app_kappa ] ) are not restricted to be between zero and one .it is easy to see that for or , the average is in fact outside of the bracket $ ] .this seemingly paradoxical effect is easily understood from the optimal weights derived here . from eq .( [ eq : app_kappa ] ) one reads off that the weights and leave the range as soon as resp . , depending on whether or , that is , only for strong positive correlations to the right of the maximum in fig .[ fig : two_variable ] .thus , if the smaller of and has the smaller variance ( and both are strongly correlated ) , the average is below both values .if the larger value has the smaller variance , the optimal average is above both values .the asymmetry comes here from the difference in variance .to understand this intuitively , assume for instance that and with strong positive correlations .it is most likely , then , that and deviate in the _ same _ direction from the true mean . 
since ,the deviation of should be generically smaller than that of .for , however , this is only possible if .this is illustrated in fig .[ fig : fluctuation ] .
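A short numerical illustration of the two-estimator discussion above, using the optimal weight and minimal variance that follow from minimizing the quadratic form of Eq. ([eq:app_variance]); the closed-form expressions below are the standard result of that minimization and are consistent with the limiting behaviour described in the text, while the specific numbers are made up.

```python
# Two correlated estimates with sigma1 < sigma2 and strong positive correlation.
sigma1, sigma2, rho = 1.0, 2.0, 0.9

# Optimal weight kappa on the first estimate (minimizes the quadratic form in kappa).
kappa = (sigma2**2 - rho * sigma1 * sigma2) / (
    sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)
var_opt = sigma1**2 * sigma2**2 * (1 - rho**2) / (
    sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)

print(f"kappa = {kappa:.3f}  (weight on second estimate: {1 - kappa:.3f})")
print(f"optimal variance = {var_opt:.3f}  vs. best single variance = {sigma1**2:.3f}")
# kappa ~ 1.571 > 1: the average lies outside the bracket of the two estimates,
# yet its variance (~0.543) is smaller than that of the more precise estimate.
```

Here rho exceeds sigma1/sigma2 = 0.5, so the weights leave the interval [0, 1] exactly as stated above, and the combined variance still drops well below that of the better single estimate.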
besides the well - known effect of autocorrelations in time series of monte carlo simulation data resulting from the underlying markov process , using the same data pool for computing various estimates entails additional cross correlations . this effect , if not properly taken into account , leads to systematically wrong error estimates for combined quantities . using a straightforward recipe of data analysis employing the jackknife or similar resampling techniques , such problems can be avoided . in addition , a covariance analysis allows for the formulation of optimal estimators with often significantly reduced variance as compared to more conventional averages .
death is inevitable .it is usually preceded by a progressive deterioration of our bodies .this phenomenon is called aging or senescence and it is characterized by a decline in the physical capabilities of the individuals .although rare , some old people gazed at senescence with fine humour : `` old age is not so bad when you consider the alternative '' said m. chevalier ( french singer and actor ) ; `` it is good to be here . at 98 , it is good to be anywhere '' taught us g. burns ( us comedian and actor ). the new millennium , which is just beginning , will certainly be witness of a holy cruzade against aging .the principal battle will be fought in the biochemical and medicine fields .can physicist help in any way ?if we look at the progress made in the last decade , we believe that the answer is yes .indeed , physicists have brought new perpectives on the subject - the occam s razor principle .william of occam , a franciscan monk , philosopher and political writer who was born in england in the thirteenth century , believed that for every phenomena occurring in the universe we need to look at the simplest explanation first - complexity should not be assumed without necessity .this is the way physicists like to think of nature but this is not followed by biologists .they love to see differences and complexity where physicists love to see similarities and simplicity .a good model in physics means one with a small number of parameters . with the occam s razor principle in mind ,what kind of aging model can we propose ?there are two kinds of aging theories : biochemical and evolutionary .the first invokes damages in cells , tissues and organs , the existence of free radicals or the telomeric shortening , that is , it sees senescence as a natural consequence of biochemical processes .the second is the evolutionary theory , which explains the senescence as a competitive result of the reproductive rate , mutation , heredity and natural selection .evolutionary theories of aging are hypothetico - deductive in character , not inductive .they do not contain any specific genetic parameter , but only physiological factors and constraints imposed by the environment .there are two types : the optimality theory and the mutational theory . in the optimality theory ,senescence is a result of searching an optimal life history where survival late in life is sacrificed for the sake of early reproduction .a typical representative of such theories is the partridge - barton model . for the mutational theory ,on the other hand , aging is a process which comes from a balance between darwinian selection and accumulation of mutations .the natural selection efficiency to remove harmful alleles in a population depends on when in the lifespan they come to express .alleles responsible for lethal diseases that express late in life , escape from the natural selection and accumulate in the population , provoking senescence .however , if the natural selection is too strong then deleterious mutations might not accumulate .the most successful aging theory of the mutational type is the penna model . by the way , throughout this paper, aging simply means that the average survival probability of the population decreases with the age . here, in this paper , we analyse the heumann - htzel model . 
although released at the same year as the penna model it has remained in limbo .the achilles heel of the heumann - htzel model was its incapacity to treat populations with many age intervals ( which all we expect to be a free parameter in a reasonable model ) .last but not least , in its original formulation the model could not handle mutations exclusively deleterious ( harmful mutations are hundred times more frequent than the beneficial ones ) leading to population meltdown . with minor modifications we were able not only to repair those points but also to find some nice characteristics of the model : it is gompertzian , it exhibits catastrophic senescence and the effect `` later is better '' ( explained in the paper ) is present .in 1994 , dasgupta proposed an aging model very similar to the partridge - barton model , but without the antagonistic pleiotropy .the antagonistic pleiotropy arises when the same gene is responsible for multiple effects .for example , genes enhancing early survival by promotion of bone hardening might reduce later survival by promoting arterial hardening .reproduction is asexual . as in the partridge - barton model ,every individual in the dasgupta model can have only three ages .heumann and htzel generalized the dasgupta model to support an arbitrary number of ages. however , when they simulated a population with eleven ages , they found that ( in the final stationary state ) there is again only three ages , recovering the partridge - barton results .this fact put the heumann - htzel model in limbo .we will show later how some simple modifications can change drastically this scenario .let us now briefly describe the heumann - htzel model . at time , there is a population composed by individuals with age , . each individual carries a `` chronological genome '' of size with a survival probability per time step at age .there will be senescence if this genome , averaged over the whole population , has diminishing with . at each time step , _ every _ individual passes through the following stages : * the verhulst factor + the verhulst factor plays the role of the environment ( e.g. , food restrictions ) .it is given by where is the total population at time and is a chosen parameter .if an individual at age has then he survives to the next step , otherwise he is eliminated .actually , it is the verhulst factor which prevents the population to blow up . * the natural selection + a random number ] .
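The update stages listed above can be illustrated with a short sketch of one population step. The Verhulst factor is assumed here to take its standard form, survival with probability 1 − N(t)/N_max, and the age-dependent survival probabilities stand in for the "chronological genome"; reproduction and mutation are omitted for brevity, so this is a toy of the model family rather than a verbatim implementation of the Heumann-Hötzel rules.

```python
import random

def update_population(ages, survival, n_max, rng=random):
    """One synchronous step: Verhulst culling, age-dependent natural selection,
    then aging by one time step.  survival[a] is the survival probability at age a."""
    verhulst = 1.0 - len(ages) / n_max          # assumed standard Verhulst factor
    next_ages = []
    for a in ages:
        if rng.random() > verhulst:             # environmental death (food, space)
            continue
        if rng.random() > survival[min(a, len(survival) - 1)]:
            continue                            # death in the natural-selection stage
        next_ages.append(a + 1)                 # survivor grows one time step older
    return next_ages

# Toy run: survival probabilities declining with age, reproduction omitted.
random.seed(0)
survival = [0.95, 0.90, 0.80, 0.60, 0.30, 0.05]
population = [0] * 5000
for step in range(10):
    population = update_population(population, survival, n_max=20000)
    print(f"step {step:2d}: N = {len(population)}")
```

In the full model the surviving individuals would also reproduce and pass on (possibly mutated) survival probabilities, which is what drives the senescence patterns discussed in the text.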
Since its proposition in 1995, the Heumann-Hötzel model has remained an obscure model of biological aging. The main arguments used against it were its apparent inability to describe populations with many age intervals and its failure to prevent population extinction when only deleterious mutations are present. We find that with a simple and minor change in the model these difficulties can be surmounted. Our numerical simulations show a plethora of interesting features: catastrophic senescence, the Gompertz law, and the fact that postponing reproduction increases the survival probability, as has already been experimentally confirmed for the Drosophila fly. PACS numbers: 87.10.+e, 87.23.Kg, 87.23.Cc. Keywords: aging theories, senescence, population dynamics.
a large number of ontologies have been developed for the annotation of biological and biomedical data , commonly expressed in the web ontology language ( owl ) or an owl - compatible language such as the obo flatfile format .access to the full extent of knowledge contained in ontologies is facilitated by automated reasoners that can compute the ontologies underlying taxonomy and answer queries over the ontology content . while ontology repositories , such as bioportal and the ontology lookup service ( ols ) , provide web services and interfaces to access ontologies , including their metadata such as author names and licensing , the list of classes and asserted structure , they do not enable computational access to the semantic content of the ontologies and the inferences that can be drawn from them .access to the semantic content of ontologies usually requires further inferences to reveal the consequences of statements ( axioms ) asserted in an ontology ; these consequences may be automatically derived using an automated reasoner . tothe best of our knowledge , no reasoning infrastructure that supports semantically enabled access to biological and biomedical ontologies currently exists . here, we present aber - owl , a reasoning infrastructure over ontologies consisting of an ontology repository , web services that facilitate semantic queries over ontologies specified by a user or contained in aber - owl s repository , and a user interface .such an infrastructure can not only enable access to knowledge contained in ontologies , but crucially can also be used for semantic queries over data annotated with ontologies , including the large volumes of data that are increasingly becoming available through public sparql endpoints .allowing access to data through an ontology is known as the `` ontology - based data access '' paradigm , and can exploit formal information contained in ontologies to : * identify possible inconsistencies and incoherent descriptions , * enrich possibly incomplete data with background knowledge so as to obtain more complete answers to a query ( e.g. , if a data item referring to an organism has been characterized with findings of pulmonary stenosis , overriding aorta , ventricular septal defect , and right ventricular hypertrophy , and the ontology or the set of ontologies it imports contains enough information to allow , based on these four findings , the inference of a tetralogy of fallot condition , then the data item can be returned when querying for tetralogy of fallot even in the absence of it being explicitly declared in database ) , * enrich the data schema used to query data sources with additional information ( e.g. , by using a class in a query that is an inferred super - class of one or more classes that are used to annotate data items , but the class itself is never used to characterize data ) , and * provide a uniform view over multiple data sources with possibly heterogeneous , multi - modal data . to demonstrate how aber - owl can be used for ontology - based access to data , we provide a service that performs a semantic search over pubmed and pubmed central articles using the results of an aber - owl query , and a service that performs sparql query extension so that the results of aber - owl queries can be used to retrieve data accessible through public sparql endpoints . 
in aber - owl , following the ontology - based data access paradigm , we specify the features of the relevant information on the ontology- and knowledge level , and retrieve named classes in ontologies satisfying these condition using an automated reasoner , i.e. , a software program that can identify whether a class in an ontology satisfies certain conditions based on the axioms specified in an ontology .subsequently , we embed the resulting information in database , linked data or literature queries. aber - owl can be accessed at http://aber-owl.net .the aber - owl software is freely available at https://github.com/reality/sparqowl can be installed locally by users who want to provide semantic access to their own ontologies and support the use of their ontologies in semantic queries .the aber - owl software can be configured with a list of uris that contain ontology documents ( i.e. , owl files ) and employs the owl api to retrieve the ontologies that are to be included in the repository . for each ontology document included in the repository ,the labels and definitions of all classes contained within the ontology ( as well as of all the ontologies it imports ) are identified based on obo foundry standards and recommendations : we use the rdfs : label annotation property to identify class labels for each ontology ( as well as of all the ontologies it imports ) , and we employ the _ definition _ ( http://purl.obolibrary.org/obo/iao_0000115 ) annotation property , defined in the information artifact ontology , to identify the text definitions of a class . labels of the classes occurring in each ontology , as well as of all the ontologies it imports , are stored in a trie ( prefix tree ) .the use of a trie ensures that class labels can be searched efficiently , for example when providing term completion recommendations . upon initiating the aber - owl web services ,we classify each ontology using the elk reasoner , i.e. , we identify the most specific sub- and super - classes for each class contained in the ontology using the axioms contained within it .the elk reasoner supports the owl el profile and ignores ontology axioms that do not fall within the owl el subset .the benefit of using the owl el profile is the support for fast , polynomial - time reasoning , and the owl el subset is a suitable dialect for a large number of biomedical ontologies .while we currently use elk for the aber - owl infrastructure , it is possible for a user to install an aber - owl server that employs different owl reasoners , such as hermit or pellet , using the standard reasoner interface of the owl api . querying is performed by transforming a manchester owl syntax query string into an owl class expression using the owl api and then aber - owl s short - form provideris employed to provide the mappings of the owl class and the property uris to the class and property labels .if this transformation fails ( i.e. 
, when the query string provided is not a valid owl class expression within the ontology being queried ) , an empty set of results is returned .if the transformation succeeds , the elk reasoner is used to retrieve sub- , super- or equivalent classes of the resulting owl class expression .the type of query ( sub - class , super - class , or equivalent class ) is specified by the user and defaults to a sub - class query .queries in which the url of the ontology document is not specified are delegated to all ontologies in aber - owl s repository .consequently , results may be returned from different ontologies .if a url is specified as part of a query but the ontology it corresponds to is not available within aber - owl s repository , an attempt is made to retrieve the ontology from the url , which is then classified and then the query results over the classified ontology are returned to the user .should this process fail , an empty set of results is returned .the results of an aber - owl query are provided in json format and consist of an array of objects containing information about the ontology classes satisfying the query : the uri of the ontology document queried , the iri of the ontology class , the class label and the definition of the class .detailed documentation of the web services is available at the aber - owl web site .we implemented a web server that can be used to access the aber - owl s ontology repository and reasoning services .the web server features a jquery - based interface and uses ajax to retrieve data from the aber - owl web services .aber - owl : pubmed is built on top of the aber - owl reasoning infrastructure .it employes the aber - owl reasoning infrastructure to resolve a semantic query formulated in manchester owl syntax and retrieve a set of named classes that satisfy the query . in particular ,depending on the type of query , all subclasses , superclasses or equivalent classes that satisfy a class description in manchester owl syntax within one or all ontologies in aber - owl s repository , or within a user - specified ontology , are returned by aber - owl .the results of the aber - owl query is a set of class descriptions , including the class uri , the label and the definition of the class .we use the results to perform a boolean textual search over a corpus of articles .we use the apache lucene framework to create a fulltext index of all titles and abstracts in medline / pubmed 2014 , and all fulltext articles in pubmed central . before indexing, every text is processed using lucene s english language standard analyzer which tokenizes and normalises it to lower case as well as applies a list of stop words . for a user - specified query in manchester owl syntax, we construct a lucene query string from the set of class descriptions returned from the aber - owl services .in particular , we concatenate each class label using lucene s or operator . 
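The query construction just described amounts to quoting each class label returned by the reasoner and joining the phrases with Lucene's OR operator. A small sketch of that step is given below; the JSON field name 'label' is an assumption about the result objects described above, and the escaping helper is a simplified stand-in for Lucene's own query escaping.

```python
def escape_lucene(text):
    """Escape characters that are special in the Lucene query syntax
    (simplified stand-in for Lucene's own QueryParser escaping)."""
    special = '+-&|!(){}[]^"~*?:\\/'
    return "".join("\\" + c if c in special else c for c in text)

def build_lucene_query(aberowl_results, label_key="label"):
    """Join the labels of all classes satisfying the semantic query into a
    single boolean OR query, each label searched as a quoted phrase."""
    phrases = [f'"{escape_lucene(r[label_key])}"' for r in aberowl_results
               if r.get(label_key)]
    return " OR ".join(phrases)

# Hypothetical result set for a sub-class query on 'ventricular septal defect'.
results = [
    {"label": "ventricular septal defect"},
    {"label": "tetralogy of Fallot"},
    {"label": "muscular ventricular septal defect"},
]
print(build_lucene_query(results))
# "ventricular septal defect" OR "tetralogy of Fallot" OR "muscular ventricular septal defect"
```

Running several such queries and combining the resulting clauses with a boolean AND gives the conjunctive, multi-query searches mentioned below.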
as a result, the lucene query will match any article ( title , abstract or fulltext ) that contains a label of a class satisfying the semantic query .it is also possible to conjunctively perform multiple semantic queries by providing more than one query in manchester owl syntax .data in biology is commonly annotated to named classes in ontologies , identified through a uri or another form of identifier that usually directly maps to a uri .pieces of data may refer to genes and proteins , text passages , measurements and other observations , and can be presented in multi - modal form as text , formal statements , images , audio or video recordings .this information is increasingly being made available as linked data through publicly available sparql endpoints .to semantically access ontology - annotated data contained in datasets available through public sparql endpoints , we provide a service which extends the sparql language with syntax which allows the user to include aber - owl resultsets within the query .this comprises of a list of class uris returned by aber - owl , which can then be used to match data in the sparql endpoint .sparql query expansion is implemented using the php sparql library and is available both as a web service and through a web interface that can be accessed through aber - owl s main web site .the aber - owl framework can be used to retrieve all super - classes , equivalent classes or sub - classes resulting from a manchester owl syntax query .the classes are retrieved either from a specific ontology in aber - owl s ontology repository , from all ontologies in the repository , or from a user - specified ontology that can be downloaded from a specified uri . in our installation of aber - owl at http://aber-owl.net, the complete library of obo ontologies is imported as well as several user - requested ontologies . 
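for orientation , a minimal client for the json web services could look as follows . this is a sketch under stated assumptions : the endpoint path , parameter names and json field names below are placeholders chosen by us , and the authoritative interface is the web - service documentation on the aber - owl web site .

....
import json
import urllib.parse
import urllib.request

# NOTE: endpoint path, parameter names and JSON keys are illustrative
# assumptions, not the documented aber-owl interface.
SERVICE = "http://aber-owl.net/aber-owl/service/"

def query_aberowl(manchester_query, query_type="subclass", ontology=None):
    """Send a Manchester OWL syntax query and return the parsed JSON result:
    an array of objects describing the ontology classes that satisfy it."""
    params = {"query": manchester_query, "type": query_type}
    if ontology is not None:
        params["ontology"] = ontology   # restrict the query to one ontology document
    url = SERVICE + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# e.g. all inferred subclasses of a class expression in the Gene Ontology
# (commented out so the sketch runs without network access):
# for cls in query_aberowl("part_of some 'apoptotic process'",
#                          ontology="http://purl.obolibrary.org/obo/go.owl"):
#     print(cls["label"], "-", cls["definition"])
....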
using our web server , any ontology in aber - owl s ontology repository can be queried and the results subsequently displayed .furthermore , following execution of any aber - owl query , the web interface provides the means to use the query in aber - owl : pubmed so as to search and retrieve relevant biomedical literature , or in aber - owl : sparql to construct a query for data annotated to one of the resulting classes .aber - owl : pubmed enables ontology - based semantic access to biomedical literature .it combines the information in biomedical ontologies with automated reasoning to perform a literature query for all things that can be inferred from a class description within one or more ontologies .for example , a query for the class ventricular septal defect will return articles in which , among others , tetralogy of fallot is mentioned due to tetralogy of fallot being inferred to be a subclass of ventricular septal defect in the uberpheno and human phenotype ontologies .since aber - owl uses an automated reasoner to identify subclasses , this information does not have to be asserted in the ontology but rather is implied by the ontology s axioms .aber - owl : pubmed can also perform more complex queries , such as for articles containing mentions of subclasses of part_of some apoptotic process and part_of some regulation , and articles mentioning regulatory processes that are a part of apoptosis will be returned .such queries are only possible through the application of automated reasoning over the knowledge contained in the biomedical ontologies , and go beyond the state of the art in that they enable a genuinely _ semantic _ way of accessing biomedical literature based on the knowledge contained in the ontologies . finally , aber - owl: pubmed can also be used to identify co - occurrences of multiple aber - owl queries .for example , a conjunctive combination of two sub - class queries , one for ventricular septal defect and another for part_of some heart , will return articles that contain references to both parts of the heart ( such as the aorta ) and particular types of ventricular septal defects , e.g. , muscular or membranous defects , as well as complex phenotypes such as the tetralogy of fallot .aber - owl : pubmed is accessible through a basic web interface at aber-owl.net/aber-owl/pubmed/ in which queries can be executed , the articles satisfying the queries will be displayed , and matching text passages in the title , abstract or fulltext will be highlighted .furthermore , aber - owl : pubmed can be accessed through web services and thereby can be embedded in web - based applications .aber - owl : sparql provides semantic access to linked data by expanding sparql queries with the results returned by an aber - owl query .query expansion is performed based on sparql syntax extended by the following construct : .... owl [ querytype ] [ < aber - owl service uri > ] [ < ontology uri > ] { [ owl query ] } .... for example , the query .... owl subclass < http://aber-owl.net/aber-owl/service/> < http://purl.obolibrary.org/obo/go.owl > { part\_of some ' apoptotic process ' } .... 
will return a set of class uris that satisfy the query part_of some apoptotic process in the gene ontology ( go ) , and the results will be embedded in the sparql query .for this purpose , the owl statement is replaced by the aber - owl : sparql service with a set of class uris .there are two main forms in which the owl statement can be embedded within a sparql query .the first is the values form in which the results of the owl query are bound to a variable using the sparql 1.1 values statement .for example , .... values ?ontid { owl subclass < http://aber-owl.net/aber-owl/service/ > < > { part_of some ' apoptotic process ' } } .... will bind the ontology uris resulting from the owl query ( part_of some apoptotic process ) to the sparql variable ?the second form in which the owl statement is useful is in the form of a filter statement .for example , the query .... filter ( ? ontid in ( owl subclass< http://aber-owl.net/aber-owl/service/ > < > { part_of some ' apoptotic process ' } ) ) .... will filter the results of a sparql query such that the values of ?ontid must be in the result list of the owl query .as many sparql endpoints use different uris to refer to classes in ontologies , we have added the possibility to re - define prefixes for the resulting ontology classes such that they match the iri scheme used by a particular sparql endpoint .when this feature is used , the class iris resulting from an owl query will be transformed into a prefix form similar to the format used in the obo flatfile format , and the appropriate prefix definition will be added to the sparql query if it has not been defined in the query already .for example , the uniprot sparql endpoint ( http://beta.sparql.uniprot.org ) uses the uri pattern http://purl.uniprot.org/go/<id > to refer to gene ontology classes , the ebi biomodels endpoint uses http://identifiers.org/go/<id > , while the uri policy of the obo foundry specifies that the uri pattern http://purl.obolibrary.org/obo/go_<id > should be used .the latter uri scheme is the one employed by aber - owl since this is the authoritative uri provided in the ontology document . usingthe prefix format will transform the results of the aber - owl query from uris into strings of the type go:<id > and the appropriate prefix to the sparql query ( i.e. , prefix go : < http://purl.obolibrary.org/obo/go_ > will be added . changing this prefix definition statement to prefixgo : < http://purl.uniprot.org/go/ > will effectively rewrite the uris so that they can be used in conjunction with the uri scheme employed by the uniprot sparql endpoint . alternatively , the sparql query can employ a dedicated mapping service , possibly in the form of a sparql endpoint with access to sameas statements , to convert between uri schemes used in different places .we can demonstrate the possibilities of using the aber - owl : sparql query expansion service by retrieving all human proteins in uniprot annotated to part_of some apoptotic process. to achieve this goal , we use the sparql 1.1 values statement to bind the results to a variable ?ontid , and then we can use this variable in the sparql query to retrieve all human proteins with a gene ontology annotation in ?the query is shown in figure [ fig : query - uniprot ] .as uniprot uses different uris for go classes than those returned by aber - owl ( which are based on the officially endorsed uris by the obo foundry and the gene ontology consortium ) , the uris have to be rewritten for the query to succeed . 
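the uri rewriting can be illustrated with a few lines of python ; the helper names below are ours , and aber - owl : sparql performs the equivalent transformation internally when the prefix option is activated .

....
OBO_GO_PREFIX = "http://purl.obolibrary.org/obo/GO_"

def to_prefix_form(uri, prefix_label="GO"):
    """Turn an OBO-style GO class URI into prefix (CURIE) form,
    e.g. http://purl.obolibrary.org/obo/GO_0008150 -> GO:0008150."""
    if not uri.startswith(OBO_GO_PREFIX):
        raise ValueError("not an OBO GO class URI: " + uri)
    return prefix_label + ":" + uri[len(OBO_GO_PREFIX):]

def expand_with_prefix(curie, prefix_map):
    """Re-expand a prefix-form identifier under the URI scheme of a
    particular SPARQL endpoint (e.g. the UniProt scheme)."""
    prefix, local_id = curie.split(":", 1)
    return prefix_map[prefix] + local_id

uniprot_prefixes = {"GO": "http://purl.uniprot.org/go/"}
curie = to_prefix_form("http://purl.obolibrary.org/obo/GO_0008150")
print(curie)                                        # GO:0008150
print(expand_with_prefix(curie, uniprot_prefixes))  # http://purl.uniprot.org/go/0008150
....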
in particular , in aber - owl : sparql , an option must be activated to rewrite uris into a `` prefix form '' ( i.e. , uris of the type http://purl.obolibrary.org/obo/go_0008150 would be rewritten to go:0008150 ) , and the sparql prefix declaration will redefine the prefix to match the uri scheme used in the uniprot sparql endpoint ..... prefix go : < http://purl.uniprot.org/go/ > prefix taxon:<http://purl.uniprot.org / taxonomy/ > prefix up : < http://purl.uniprot.org/core/ > prefix skos : < http://www.w3.org/2004/02/skos/core # > select distinct ?ontid where { # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # binds ?ontid to the results of the owl query values ?ontid { owl subclass < http://aber-owl.net/aber-owl/service/ > < > { part\_of some ' apoptotic process ' } } .# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ?ontid is now bound to the set of class iris of the owl query ?protein a up : protein . ?protein up : organism taxon:9606 . ?protein up : mnemonic ?pname . ?protein up : classifiedwith ?ontid . ?ontid skos : preflabel ?label . } .... we can also utilize the aber - owl infrastructure for more powerful queries that use inference over the ontology structure and utilize the results in a sparql query . for example , we can use aber - owl : sparql to query gwas central for markers that have been identified in gwas studies as significant for ventricular septal defects .using the human phenotype ontology ( hpo ) and the definitions that have been developed for the hpo , we can identify that a tetralogy of fallot is a particular type of ventricular septal defect .in particular , according to the axioms contained in the hpo , a tetralogy of fallot condition can be inferred from the phenotypes _ ventricular septal defect _ , _ overriding aorta _ , _ pulmonary valve stenosis _ and _ right ventricular hypertrophy_. importantly , no explicit subclass relation between these four key phenotypes and _ tetralogy of fallot _ is asserted in the hpo .therefore , reasoning is required to retrieve _ tetralogy of fallot _ as a subclass of either of these four , or a combination of these four , phenotypes .similarly , owl reasoning over the ontology axioms is required to retrieve data annotated to tetralogy of fallot when querying for either of the four phenotypes .the queries can also be made more precise by explicitly asking for a condition in which all four of the tetralogy of fallot phenotypes must be satisfied : subclasses of overriding aorta and ventricular septal defect and pulmonic stenosis and right ventricular hypertrophy will specifically retrieve the tetralogy of fallot condition , including specific sub - types of tetralogy of fallot in the hpo ..... prefix rdf:<http://www.w3.org/1999/02/22-rdf - syntax - ns # > prefix gc:<http://purl.org / gwas / schema # > prefix xsd:<http://www.w3.org/2001/xmlschema # > prefix obo:<http://www.obofoundry.org / ro / ro.owl # > select ?ontid where { graph ? g { ?marker gc : associated ?phenotype ; gc : locatedingene ?gene ; gc : pvalue ?pvalue ; obo : hassynonym ?ext_marker_id . ?phenotype gc : hpoannotation ?ontid . } filter ( xsd : float(?pvalue ) < = 1e-10 ) .filter ( ? ontid in ( owl subclass < http://aber-owl.net/aber-owl/service/> < http://purl.obolibrary.org/obo/hp.owl > { ' ventricular septal defect ' } ) ) . 
} ....bioportal , the ontology lookup service ( ols ) and ontobee are amongst the most widely used ontology repositories in biology .these portals offer a user interface for browsing ontologies and searching for classes based on the class label ( or synonym ) .they also provide web services that enable programmatic access to the ontologies contained within them .however , neither bioportal , ontobee nor ols allow access to the knowledge that can be derived from the ontologies in the repositories .aber - owl , on the other hand , provides a reasoning infrastructure and services for ontologies , without aiming at replacing ontology repositories and the user experience they provide . in the future , we intend to integrate aber - owl more closely with other ontology repositories so that the additional information and user - interface widgets provided by these repositories can be combined with the reasoning infrastructure provided by aber - owl .another related software is ontoquery , which is a web - based query interface for ontologies that uses an owl reasoner .it can be used to provide an interface for a single ontology using an owl reasoner , but does not support use of multiple ontologies or access through web interfaces .the logical gene ontology annotations ( goal ) outlines an approach to access data annotated with ontologies through owl reasoning . for this purpose, goal constructs a custom knowledge base integrating both the ontology and the annotations , and then uses an owl reasoner to answer queries over this combined knowledge base .however , goal uses exactly one ontology , specifically built to incorporate the data queried ( mouse phenotypes ) as a part of the owl ontology so that a reasoner can be used to query both , the ontology and its annotations .aber - owl , on the other hand , is a general framework and does not require changes to existing ontologies . instead , aber - owl distinguishes between reasoning on the ontology level and retrieval of data annotated with ontologies .several tools and web servers utilize ontologies or structured vocabularies for the retrieval of articles from pubmed or pubmedcentral .for example , gopubmed classifies pubmed articles using the go and the medical subjects heading thesaurus .however , gopubmed uses only a limited number of ontologies , and while gopubmed uses the asserted structure of the ontologies , it does not use the knowledge contained within the ontologies axioms .aber - owl : pubmed , on the other hand , can utilize the knowledge contained in any ontology to perform basic searches in pubmed abstracts and fulltext articles in pubmed central . a main limitation of aber - owl : pubmed lies with the absence of a specialized entity recognition method to identify occurrences of ontology class labels in text . in particular , for ontologies such as the go that use long and complex class names , specialized named entity recognition approaches are required to identify mentions of the go terms in text .furthermore , aber - owl : pubmed currently uses only the rdfs : label property of classes and properties in ontologies to retrieve literature documents , but ignores possible synonyms , alternative spellings or acronyms that may be asserted for a class . 
in the future, we will investigate the possibility of adding more specialized named entity recognition algorithms to aber - owl : pubmed for specific ontologies .another limitation lies in aber - owl s interface .aber - owl : pubmed s web - based interface is not a complete text retrieval system but rather demonstrates the possibility of using ontology - based queries for retrieving text and can be used to aid in query construction .we envision the main use of aber - owl : pubmed in the form of its web services that can be incorporated in more complete and more complex text retrieval systems such as gopubmed or even pubmed itself . the use of aber - owl : sparql differs in three key points from the use of basic access to ontology - annotated data through sparql alone : 1 .aber - owl : sparql provides access to the semantic content of ontologies even when the ontologies are not available through the sparql endpoint that contains the ontology - annotated data .aber - owl : sparql provides access to the inferred ontology structure instead of the asserted structure , even when no owl entailment regime is activated in a sparql endpoint .aber - owl : sparql enables complex queries formulated in manchester owl syntax , and can perform these queries even when no owl entailment regime is activated in a sparql endpoint . in particular ,( 1 ) the ontologies used for annotation are not commonly accessible through the same sparql endpoint as the actual annotated data .if the sparql endpoint supports query federation ( using the sparql service block ) , this problem can usually be resolved if the ontology is available at some place ( such as bioportal ) through another sparql endpoint . however , in some application settings , a query expansion service may be more efficient than query federation . more importantly , however , ( 2 ) aber - owl : sparql provides access to the structure of an ontology as it is inferred by an owl reasoner . to achieve a similar outcome using plain sparql ,the sparql endpoint containing the ontology must have an owl entailment regime activated ; otherwise , only the asserted structure of an ontology is available for queries .we know of no sparql endpoint in the biomedical domain currently holding ontologies and simultaneously using an owl entailment regime ; in particular , neither bioportal nor ontobee or the ols currently make use of any kind of owl entailment . 
while the first two points can in principle be addressed by applying semantic web technologies , queries would still have to be formulated in sparql syntax .( 3 ) aber - owl : sparql uses the manchester owl syntax to formulate queries , and manchester owl syntax is widely used by ontology developers and users as it is closer to a human - readable sentence and therefore easier to access than other ways of expressing owl .the full benefit of a reasoning infrastructure over multiple ontologies can be realized when these ontologies are `` interoperable '' .while interoperability between biomedical ontologies has been extensively discussed , we can nevertheless identify several shortcomings through the use of aber - owl .firstly , ontology class names and relation names are not standardized .for example , the current library of ontologies included in aber - owl uses several different names ( and uris ) for the part - of relation , including part_of , part - of , part of and partof .while each relation is usually consistently applied within a single ontology , the use of different uris and labels for the same relation leads to difficulties when utilizing more than one ontology .the non - standardized use of relation names is particularly surprising as the obo relation ontology aimed to achieve the goal of using standard relations and common relation names almost 10 years ago .one possible explanation for the observed heterogeneity is that the lack of tools and an infrastructure that could efficiently utilize the information in one or more ontology has made it less of a priority for ontology developers to focus on these aspects of interoperability .furthermore , using the aber - owl infrastructure , potential problems in ontologies can be identified .for example , we could identify , and subsequently correct , three unsatisfiable classes in the neuro behavior ontology resulting from changes in the ontologies it imports .these problems are not easily detectable ; moreover , they require the use of reasoning over more than one ontology , as well as frequent re - classifications .these tasks are vital for the effects that a change in one ontology has on other ontologies to be detected . 
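returning to the relation - name heterogeneity noted above , a client that composes queries across several ontologies could normalise relation labels before use ; the canonical spelling chosen in the sketch below is our own convention , not something aber - owl enforces .

....
def normalise_relation(label):
    """Map spelling variants of a relation label to one canonical token
    (illustrative client-side normalisation only)."""
    key = label.strip().lower().replace("-", "").replace("_", "").replace(" ", "")
    canonical = {"partof": "part_of"}
    return canonical.get(key, label)

for variant in ["part_of", "part-of", "part of", "partOf"]:
    print(variant, "->", normalise_relation(variant))
# every variant above maps to part_of
....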
with the aber - owl services , we propose to separate the processing of knowledge in ontologies and the retrieval of data annotated with these ontologies . aber - owl provides a reasoning infrastructure that can be queried either through its web interface or its web services , and a set of classes that satisfy a specified condition is returned . these sets of classes can then be used to retrieve data annotated with them , to search a corpus of text for passages that contain their labels , or to query a formal data resource that references them . as such , aber - owl provides a framework for automatically accessing information that is annotated with ontologies or contains terms used to label classes in ontologies . when using aber - owl , access to the information in ontologies is not merely based on class names or identifiers but rather on the knowledge the ontologies contain and the inferences that can be drawn from it . this also enables the use of knowledge- and ontology - based access to data : data of interest is specified on the knowledge- or ontology - level , and all possible classes that satisfy such a specification are inferred using an automated reasoner . the results of this inference process are then used to actually retrieve the data without the need to apply further inference .
many ontologies have been developed in biology and these ontologies increasingly contain large volumes of formalized knowledge commonly expressed in the web ontology language ( owl ) . computational access to the knowledge contained within these ontologies relies on the use of automated reasoning . we have developed the aber - owl infrastructure that provides reasoning services for bio - ontologies . aber - owl consists of an ontology repository , a set of web services and web interfaces that enable ontology - based semantic access to biological data and literature . aber - owl is freely available at http://aber-owl.net .
one of the most famous results in queueing theory is _ burke s theorem _ . consider a queue in which available services occur as a poisson process of rate ( a so - called queueing server ) .if the arrival process is a poisson process of rate ( independent of the service process ) , then the departure process is also a poisson process of rate .we may say that the arrival process is a _ fixed point _ for the server .in this paper we consider the question of fixed points for queues with two or more classes of customer ( with different levels of priority ) .when a service occurs in such a queue , it is used by a customer whose priority is highest out of those currently present in the queue .we will see that a two - type fixed point can be constructed using the output processes ( consisting of departures and unused services ) from a one - type queue . then in a recursive way , a fixed point with classes of customer can be constructed using the output of a queue whose arrival process is itself a fixed point with classes .except in the familiar one - type case , the fixed points are not markovian .in particular , one observes _ clustering _ of the lower - priority customers . in the paper we work with a queueing model which is somewhat more general than the queue described above .our basic model is of a discrete - time queue with batch arrivals and services .let be the amount of service offered at time .we obtain fixed - point results for the case where are i.i.d .and each has so - called `` bernoulli - geometric '' distribution , i.e. is equal to the product of a geometric random variable and an independent bernoulli random variable . by taking appropriate limits where necessary, this model covers a variety of previously considered queueing servers , for example discrete - time queues , discrete - time queues with geometric or exponential service batches , continuous - time queues as described above , continuous - time queues with geometric or exponential service batches occurring at times of a poisson process , and brownian queues .versions of burke s theorem and related reversibility results were proved for this bernoulli - geometric model in .some such fixed - point processes were already constructed in certain cases ( queues in continuous or discrete time ) in in the context of stationary distributions for certain multi - type interacting particle systems . in this paperwe give a more direct proof of the fixed - point property , which relies on properties of _ interchangeability _ for queueing servers .weber showed that for a tandem queueing system consisting of two independent servers with service rates and , and an arbitrary arrival process , the distribution of the departure process is unchanged if and are exchanged .this interchangeability result was subsequently proved in a number of different ways , for example in , , and .the coupling proof given by tsoucas and walrand in is important for our purposes , since we can use their approach to extend the interchangeability result to multi - type queues . 
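since burke s theorem is the starting point for everything that follows , a short simulation may help fix ideas . the sketch below is our own illustration ( not code from this paper ) : it simulates a stable single - server queue with poisson arrivals of rate lam and exponential services of rate mu > lam , and checks that the inter - departure times again look exponential with rate lam , as the theorem predicts .

....
import random

def mm1_departures(lam, mu, n_arrivals, seed=1):
    """Simulate a FIFO single-server queue with Poisson arrivals (rate lam)
    and exponential services (rate mu); return the departure times."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)          # Poisson arrival process
        arrivals.append(t)
    departures, free_at = [], 0.0
    for a in arrivals:
        start = max(a, free_at)            # wait for the server if necessary
        free_at = start + rng.expovariate(mu)
        departures.append(free_at)
    return departures

deps = mm1_departures(lam=1.0, mu=2.0, n_arrivals=200000)
gaps = [b - a for a, b in zip(deps, deps[1:])]
mean = sum(gaps) / len(gaps)
var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
# for an exponential(lam) gap distribution: mean ~ 1/lam and variance ~ 1/lam**2
print(mean, var)    # both close to 1.0 for lam = 1.0
....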
before developing the general batch queueing model ,we begin in section [ dotm1section ] by giving a guide to the main results and methods of proof in the particular case of the continuous - time queue .since this model is already rather familiar , we give an informal account without introducing too much notation .( everything is developed rigorously in later sections ) .in addition , certain aspects are simpler in the case ; for example , the service process has only one parameter , so all such service processes are interchangeable , and given any vector of arrival rates ( corresponding to customers of classes respectively ) , there is a unique fixed - point arrival process which is common to all queues for .our general model is introduced in section [ model ] , which describes the set - up of a discrete - time batch queue .multi - class systems are introduced in section [ multiclass ] .the bernoulli - geometric distribution , and corresponding queueing servers , are described in section [ bergeom ] .interchangeability results are given in section [ interchangeability ] .these extend the results for one - type queues described above , to cover multi - type systems and to the more general queueing server model . in section [ mainresultsection ] we give the construction of multi - type fixed points , and prove the fixed - point property using the interchangeability results .the main result is given in theorem [ fixedpointtheorem ] ( the corresponding results in the case are theorem [ 2typemm1thm ] and theorem [ mtypemm1thm ] ) . the proof of the interchangeability result itself is given in section [ interchangeabilityproof ] . in section [ examplesection ]we give examples of the application of the results to several of the particular queueing systems described above .the final example is that of the brownian queue . herethe lower - priority work in the fixed - point process corresponds to the local - time process of a reflecting brownian motion ; this process is non - decreasing and continuous but is constant except on a set of measure 0 .this is an extreme case of the `` clustering of lower - priority customers '' referred to above .the connections with interacting particle systems are discussed in section [ particlesection ] .the fixed points for servers in discrete time and in continuous time correspond to stationary distributions for multi - type versions of the tasep and of hammersley s process , respectively .time in the queueing systems corresponds to space in the particle systems ; with this identification , questions of fixed points for queues and stationary distributions for particle systems are closely analogous .finally in section [ continuoussection ] we mention a limit as the number of classes goes to infinity , with the density of each class going to 0 . in this limit ,the class - label of each customer becomes , for example , a real number in gj ] denotes ) .let be the number departing from the queue at the time of the service .so finally let be the unused service at the time of the service .see figure [ slotfig ] for a representation of the evolution of the queue along with its inputs and outputs . 
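the slot - by - slot dynamics are easy to prototype . the sketch below uses the standard lindley - type recursion for a single - class discrete - time batch queue , with variable names of our own choosing ; the paper s precise conventions ( e.g. whether work arriving in a slot can be served within the same slot ) are fixed by the definitions referred to above , so the code should be read as an illustration rather than a transcription .

....
def step(q, a, s):
    """One time slot of a single-class discrete-time batch queue
    (standard Lindley-type recursion; names are ours, not the paper's).

    q : work in the queue at the start of the slot
    a : work arriving during the slot (added before service is applied here)
    s : amount of service offered in the slot
    returns (queue content at the next slot, departures, unused service)
    """
    available = q + a
    d = min(available, s)       # departures cannot exceed what is present
    u = s - d                   # service left unused
    q_next = available - d      # = max(q + a - s, 0)
    return q_next, d, u

# arrivals exceed the offered service on average, so the queue content
# drifts upwards (the saturated regime discussed in the text)
q = 0
for a, s in [(3, 2), (0, 2), (4, 1), (2, 2)]:
    q, d, u = step(q, a, s)
    print(f"arrivals={a} service={s} -> departures={d} unused={u} queue={q}")
....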
since , , are all functions of the data and , we sometimes write , and so on .note that we allow the possibility that .indeed , we do nt impose any stability condition on the queue ; so , for example , if the average rate of arrivals exceeds the average rate of service , then the queue will become saturated .in fact , the following simple observation will be useful later : [ saturationlemma ] suppose the arrival and service processes are independent , and are i.i.d .with mean and are i.i.d . with mean .then with probability 1 , the lemma follows immediately from the definition ( [ xdef ] ) , since the random walk whose increment at step is is either recurrent ( if ) or escapes to with probability 1 ( if ) ; in either case it attains arbitrarily high values .if the and take integer values , then it is natural to talk in terms of `` number of customers '' arriving , or departing , or in the queue , and so on .however we will also consider cases where the values are more general , in which case one could talk of `` amount of work '' rather than `` number of customers '' .we will now define a _ multi - class _ batch queue .the system can now contain different types of customers ( or work ) with different priorities .when service occurs at the queue , it is first available to first - class customers .if there is more service available than there are first - class customers present in the queue , the remaining service ( unused by the first - class customers ) is then offered to customers of lower class , starting with second - class customers , then third - class and so on .for example , suppose that at the start of time - slot , there are 7 customers in the queue , of whom 3 are first - class , 1 is third - class and 3 are fourth - class .suppose that 5 units of service are available at time .then the departures at time will be 3 first - class customers , 1 third - class customer and 1 fourth - class customer , leaving 2 fourth - class customers remaining in the queue .let be the total number of classes .we will have a collection of arrival processes , where is the number of - class customers arriving at time .we will also denote , for .let be the service process of the queue .similarly we will write for the number of - class customers departing at time , and for the number of - class customers present in the queue at the beginning of time - slot .write also and .there are two natural ways to construct the multi - class queue ( which are equivalent ) .one way would be to look at the queueing process of - class customers for each .this is a queue with arrival process and service process ; the services available to - class customers are those that have not been used by any higher - priority customer .alternatively , we will consider , for each , the queueing process of customers of classes combined .this is a queue with arrival process and service process .so in particular we define this second description turns out to be more useful since it describes the -class queue as a coupling of single - class queues , each with the same service process .once we come to consider interchangeability of queues , the fact that we are working with a common service process becomes crucial .[ bergeomqueue ] we define a _ bernoulli - geometric distribution _ , with parameters and .a random variable with this distribution has the distribution of the product of two independent random variables , one with distribution and the other with distribution .that is , if we have .we will consider a queue where is an i.i.d . 
sequence with , and is an i.i.d . sequence ( independent of ) with .( we will say that and are _ bernoulli - geometric processes _ ) .queues of this type are investigated in , in particular regarding their reversibility properties .for stability we will assume that , i.e. that , and further we assume that where for g_mj_m ] , and similarly as the amount of work arriving in ] , is the amount of unused service in ] with the following properties : * the distribution is stationary and ergodic .* $ ] .* let .define an arrival process as follows : if , then a customer with label arrives at time , while if , then no customer arrives at time .this arrival process is a fixed point for the queue for all .property ( ii ) is included just as a normalization ( since the mechanics of the queue are unchanged if the labels of all the customers are transformed by an increasing function ) .then in property ( iii ) , the condition ensures that the queue is not saturated .the distribution of is also a stationary distribution for the tasep .see for an investigation of this process , including in particular an interpretation as the `` speed process '' for a multi - type tasep started out of equilibrium .one interesting property of the process is a manifestation of the `` clustering '' effect described above .although has a continuous distribution for each , nonetheless one has that for any , .( for example , ) .in fact , with probability one there exist infinitely many such that .hence clustering occurs in the following sense : although any class - label has probability 0 of being seen a priori , if one sees the label at any particular time the same label has high probability of being seen nearby , and will be seen infinitely often in the process .jbm thanks pablo ferrari for many valuable conversations related to this work , and mike harrison and ilkka norros for discussions about the results on brownian queues .harrison , j. m. , ( 1985 ) _ brownian motion and stochastic flow systems_. wiley series in probability and mathematical statistics : probability and mathematical statistics .john wiley & sons inc ., new york .williams , r. j. , ( 1996 ) on the approximation of queueing networks in heavy traffic . in f.p. kelly , s. zachary and i. ziedins , eds . , _stochastic networks : theory and applications _ , pages 3556 .clarendon press , oxford .
burke s theorem can be seen as a fixed - point result for an exponential single - server queue ; when the arrival process is poisson , the departure process has the same distribution as the arrival process . we consider extensions of this result to multi - type queues , in which different types of customer have different levels of priority . we work with a model of a queueing server which includes discrete - time and continuous - time queues as well as queues with exponential or geometric service batches occurring in discrete time or at points of a poisson process . the fixed - point results are proved using _ interchangeability _ properties for queues in tandem , which have previously been established for one - type systems . some of the fixed - point results have previously been derived as a consequence of the construction of stationary distributions for multi - type interacting particle systems , and we explain the links between the two frameworks . the fixed points have interesting `` clustering '' properties for lower - priority customers . an extreme case is an example of a brownian queue , in which lower - priority work only occurs at a set of times of measure 0 ( and corresponds to a local time process for the queue - length process of higher priority work ) .
biomagnetometry is a rapidly growing field of noninvasive medical diagnostics . in particular , the magnetic fields generated by the human heart and brain carry valuable information about the underlying electrophysiological processes . since the 1970s superconducting quantum interference devices ( squids ) have been used to detect these generally very weak biomagnetic fields .the magnetic field of the human heart is the strongest biomagnetic signal , with a peak amplitude of 100 , but since this is still orders of magnitude weaker than typical stray field interference the measurement of such signals could initially only be performed inside expensive magnetically - shielded rooms ( msr ) .progress in medical research in the past decade has motivated a need for more affordable cardiomagnetic sensors .recently , multichannel squids were developed that no longer require shielding due to the use of gradiometric configurations .such devices are commercially available but are still quite expensive in both capital and operational costs .optical pumping magnetometers ( opm ) have been widely known since the 1960s , and offer both high sensitivity and reliable operation for research and applications like geomagnetometry .since opms usually work with a near room - temperature thermal alkali metal vapor , they avoid the need for the cryogenic cooling that makes squids so costly and maintenance intensive .our goal was to develop an affordable , maintenance - free device that is both sensitive and fast enough to measure the magnetic field of the human heart . in order to be competitive with the well - established squids, a cardiomagnetic sensor has to offer a magnetic field sensitivity of at least 1 with a bandwidth of about 100 .furthermore , the spatial resolution of the sensor has to be better than 4 , the standard separation of grid points during mapping .since the cardiomagnetometry community is mainly interested in one of the components of the magnetic field vector , one might think of using vector - type opms like the hanle magnetometer or the faraday magnetometer , devices which operate in zero fields only .however , these devices lose their sensitivity in the presence of even tiny field components in directions perpendicular to the field of interest .the broadening caused by such transverse field components must be kept well below the width of the magnetometer resonance , thus limiting those components to values below a few tenths of .accordingly , optical vector magnetometers can not be used for cardiomagnetometry in a straightforward way since the heart field features time - varying transverse field components on the order of 100 .we have therefore concentrated on the opm , which exhibits a fast response and which has been shown to be sufficiently sensitive in an unshielded environment .furthermore , lamp - pumped opms were used for the first biomagnetic measurements with optical magnetometers in the early 1980s , although that work was discontinued . instead of lamps , we use diode lasers as a light source in order to build a device that will scale to the many channels needed for fast mapping of the cardiomagnetic field .optically pumped magnetometers operate on the principle that the optical properties of a suitable atomic medium are coupled to its magnetic properties via the atomic spin .the ensemble average of the magnetic moments associated with the spins can be treated as a classical magnetization vector in space . 
here with is the total angular momentum of atoms in an optical hyperfine level where is the density matrix and the land factor of the state .optical magnetometers detect changes of the medium s optical properties induced by the precession of in a magnetic field .the frequency of this precession , the larmor frequency , is proportional to the modulus of : for cs the constant of proportionality , , has a value of .all atomic vapor magnetometers measure the magnetic field via a direct or indirect measurement of the larmor frequency .-magnetometer setup : the laser ( la ) emits a beam that traverses the sample ( sa ) at angle with respect to the magnetic field .the transmitted power is detected by a photodetector ( pd ) .the static magnetic field is aligned along the -direction .the oscillating magnetic field is aligned along the -direction . ] in the case of the magnetometer , a magnetic - resonance technique is used to measure the larmor frequency directly , by employing two perpendicular magnetic fields and .the static magnetic field is aligned along the -direction . as fig .[ fig : setup3d ] shows , the -vector of the laser beam lies in the -plane and is oriented at an angle with respect to the -direction .the magnetometer is sensitive to the modulus of .the oscillating magnetic field is aligned along the -direction with an amplitude much smaller than . in order to introduce the basic concepts we discuss the simplest case of an state .the motion of under the influence of and is then given by the bloch equations : the first term describes the precession of around the magnetic fields .the second term describes the longitudinal ( ) and transverse ( ) relaxation of .the third term represents the effect of optical pumping with circularly polarized light that creates the magnetization .it can be treated as an additional relaxation leading to an equilibrium orientation aligned with the -vector of the incoming light at the pumping rate .both relaxations add up to the effective relaxation rates . in the case of small amplitudes ,( [ eq : bewgl ] ) can be solved using the rotating wave approximation which leads to a steady - state solution where rotates around at the driving frequency .the optical property used in the magnetometer is the optical absorption coefficient which determines the power , , of the light transmitted through the medium . for circularly polarized light, the transmitted power is proportional to the projection of on the -vector of the incoming light .therefore , the precessing magnetization results in a modulation of the absorption index measurable as an oscillation of . the in - phase and quadrature components of with respect to the driving field can be obtained from eq .( [ eq : bewgl ] ) : here is the rabi frequency and the detuning of the oscillating field from the larmor frequency .the constant combines all factors such as the initial light power , the number of atoms in the sample , and the cross section for light - atom interactions determining the absolute amplitude of the signal .the components can be measured using phase - sensitive detection .the signals are strongest for , which was used in all experiments . 
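to make the lineshapes concrete , the following sketch evaluates the textbook steady - state in - phase , quadrature and phase response of a driven magnetic resonance as a function of detuning . it is our own illustration : the expressions are the standard rotating - wave results with equal longitudinal and transverse relaxation rates , the amplitudes are in arbitrary units , and which demodulated component appears absorptive or dispersive depends on the demodulation phase convention .

....
import math

def resonance_signals(delta, gamma2, omega1):
    """Textbook steady-state response of a driven magnetic resonance
    (rotating-wave approximation, equal relaxation rates, arbitrary units).

    delta  : detuning of the rf frequency from the Larmor frequency
    gamma2 : transverse relaxation rate (half-width of the resonance)
    omega1 : Rabi frequency of the rf field
    """
    denom = gamma2**2 + delta**2 + omega1**2
    dispersive = omega1 * delta / denom       # in-phase component
    absorptive = omega1 * gamma2 / denom      # quadrature component
    # oscillation phase relative to the drive, shifted by 90 degrees so that
    # it crosses zero at resonance; omega1 cancels in the ratio, so the phase
    # lineshape is not power-broadened
    phase = math.atan2(dispersive, absorptive)
    return dispersive, absorptive, phase

gamma2, omega1 = 1.0, 0.5
for delta in (-3.0, -1.0, 0.0, 1.0, 3.0):
    d, a, p = resonance_signals(delta, gamma2, omega1)
    print(f"delta={delta:+.1f}  dispersive={d:+.3f}  absorptive={a:.3f}  "
          f"phase={math.degrees(p):+6.1f} deg")
....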
) and quadrature signals ( ) , measured in a single sweep of 20 s with the cardiomagnetometer placed in a poorly shielded room .magnetic 50 line interference was suppressed using a order lowpass filter ( time constant 10 ms ) .the half - width , derived from a fit , was .( b ) magnetic resonance line shape of the oscillation phase measured with respect to the driving field .the data was obtained in real time using a digital lock - in amplifier .the fitted half - width is : . ]both and show resonant behavior near . has an absorptive lorentzian line shape , and has a dispersive lorentzian line shape with the same half width expressed as here is the saturation parameter of the rf field .figure [ fig : res](a ) shows measured line shapes under conditions optimized for maximal magnetometric sensitivity ( see sec . [ sec : optimization ] for details ) . signal is of particular interest because it has a dispersive shape , featuring a steep linear zero - crossing at . in this region can be used to measure the deviation of from the value that corresponds to .the same is true for the deviation of the phase difference between the measured oscillation and the driving field from -90 ( see fig . [fig : res](b ) ) .the phase difference can be calculated from and , yielding the phase signal changes from at low frequencies to at high frequencies .for practical reasons it is preferable to shift the phase by 90 so that it passes through zero in the center of the resonance ( ) .this can easily be done by shifting the reference signal by using the corresponding feature of the phase detector . in mathematical terms that 90 is equivalent to the transformation and , yielding the width of the phase signal is smaller than it is not affected by rf power broadening , i.e. , it is independent of : the narrower lineshape of the phase signal is exactly compensated by a better s / n ratio of [ see eqs .( [ eq : deltaomega ] ) , ( [ eq : dr ] ) , and ( [ eq : di ] ) ] resulting in a statistically equivalent magnetic field resolution for both signals .however , since the lineshape of the phase signal depends only on , it is easier to calibrate in absolute field units .furthermore , light amplitude noise , for instance caused by fluctuating laser intensities , does not directly affect the phase signal , since both and scale in the same way with light intensity .only the much weaker coupling via the light shift can cause the phase signal to reflect light amplitude noise . considering those practical advantages of the phase signal we concentrate in the following sections on the sensitivity of the phase signal to magnetic field changes . .the solid line is for , the dashed lines are for ( nearly round ) and ( elliptical ) , respectively .if is scanned towards increasing values the system evolves clockwise through the nyquist plot . ]the lineshapes , and of the magnetic resonance have a major influence on the magnetometric sensitivity .the magnetic resonances in eqs .( [ eq : pip ] ) and ( [ eq : pqu ] ) can be interpreted as a complex transfer function connecting the current that drives the rf - coils , , and the photocurrent of the photodiode . by setting the effective transverse relaxation rate as the unit of frequency and using the normalized detuning , can be written in dimensionless units as : a parametric plot of in the complex plane called a nyquist plot was found to be useful for the inspection of experimental data . 
in this representation as an ellipse with diameters and for the real ( in - phase ) and imaginary ( quadrature ) components respectively ( see fig . [fig : circrf ] ) : the saturation parameter of the rf transition , , can be extracted from the ratio of the two diameters : when an interfering sine wave of amplitude and phase is added .a phase offset of in the demodulation due to a poorly adjusted lock - in phase leads to a rotated ellipse . ]figure [ fig : circ_phase ] shows a nyquist plot of a resonance for a situation in which an interfering sine wave is added to the photocurrent , leading to a shifted ellipse .the amplitude and the phase of the interference can be easily extracted from the nyquist plot .a phase shift in the demodulation leads to a rotated ellipse .in this situation the spectra of in - phase and quadrature components as a function of rf detuning appear asymmetric . by means of nyquist plotsit is easy to distinguish between an asymmetry caused by improper adjustment of the lock - in phase and one caused by inhomogeneous broadening .the latter causes a deviation from the elliptical shape .one model for inhomogeneous broadening is to assume a gradient in the static magnetic field .since we use buffer - gas cells the atoms do not move over large distances during their spin coherence lifetime so that inhomogeneous magnetic fields are not averaged out .instead , atoms at different locations in the cell see different magnetic fields , resulting in an inhomogeneous broadening of the magnetic resonance line . and , respectively .part ( b ) shows the deviation from circular for linear field distributions .the ( outer ) circular trace is for an unperturbed resonance .the other two are calculated for distribution widths of and , respectively . ]figure [ fig : circ_grads ] shows calculated nyquist plots for different gradients of the static field .the simplest model for such an inhomogeneity is a constant gradient over the length of the cell .this is expressed by a convolution of the theoretical magnetic resonance signals [ see eq .( [ eq : t ] ) ] with the normalized distribution of magnetic fields which , in this case , is a constant over the interval since vanishes everywhere except for the convoluted resonance is given by which can be evaluated analytically \nonumber \\ & & - \ , \frac{i}{2\,x_g } \sqrt{\frac{s}{1+s } } \left\ { \arctan \left(\frac{x_g - x}{\sqrt{1+s } } \right ) \right .\nonumber \\ & & + \left .\arctan \left(\frac{x_g+x}{\sqrt{1+s } } \right ) \right\}\ , .\label{eq : tgrad}\end{aligned}\ ] ] the main effect of the constant magnetic field distribution is to broaden the resonance , to decrease the amplitude , and to make the line shape differ from a lorentzian . in the nyquist plotthis is seen by a deformation of the elliptical trace towards a rectangular trace as shown in fig .[ fig : circ_grads](a ) .the effect is clearly visible in fig .[ fig : circ_grads](a ) for rather large widths of the magnetic field distribution ; in the experiment , however , the effect can be detected for much smaller inhomogeneities due to the large signal / noise ratio .) 
providing circular polarized light to the glass cell ( gs ) that contains the atomic medium .photodiode 2 ( pd2 ) measures the transmitted light intensity .its signal is amplified by a current amplifier and fed to the lock - in amplifier ( lia ) .the reference output of the lia drives the radio frequency coils ( rfc ) .the reference frequency of the lia is controlled by a sweep generator ( sg ) .automatic control and data aquisition is done by a pc via the gpib bus . ]the magnetometer described here was part of the device used by us to measure the magnetic field of the human heart .the setup was designed so that a volunteer could be placed under the sensor , with his heart close to the glass cell containing the cs sample . for moving the volunteer with respect to the sensor necessary for mapping the heart magnetic field a bed on a low friction support was used .the magnetometer sensor head itself was placed in a room with moderate magnetic shielding .the room was in volume shielded by a 1 mm -metal layer and an 8 mm copper - coated aluminum layer . for low frequencies , the shielding factor was as low as 5 to 10 , whereas 50 interference was suppressed by a factor of 150 . inside the shielded room , surrounding the sensor itself , three coil pairs were placed for the three dimensional control of the magnetic field . in the -direction ( vertical ) two round 1 m diameter coils were used .to make room for the patient , the spacing between the coils had to be 62 cm , far away from the helmholtz optimum of 50 cm .the two coil pairs for the transverse magnetic fields ( and directions ) formed four of the faces of a cube 62 cm on a side .all six coils were driven independently by current sources so that the sum and the difference of the currents in each coil pair could be chosen independently .this allowed us to control not only the magnetic field amplitudes in all three directions , but also the gradients .the field components and gradients were adjusted to produce a homogeneous field of 5 in the direction .an extended - cavity diode laser outside the shielded room was used as a light source .the laser frequency was actively stabilized to the transition of the doppler broadened cs line ( 894 nm ) using davll spectroscopy in an auxiliary cell .the light was delivered to the magnetometer sensor proper by a multimode fiber ( 800 core diameter ) . after being collimated ,the light was circularly polarized by a combination of a polarizing beam - splitter and a multiple - order quarter - wave plate .the circularly polarized light then passed through a glass cell containing the cs vapor and a buffer gas to prevent the atoms from being depolarized by wall collisions .the cell could be heated to 65 c using hot air which flowed through silicon tubes wrapped around the cell holder .the light power , , transmitted through the glass cell was detected by a photodiode specially selected to contain no magnetic materials . 
a current amplifier ( femto messtechnik , model dlpca-200 ) converted the photocurrent into a voltage that was fed to the input of the lock - in amplifier .the detection method resulted in a noise level 5 to 20% above the electron shot noise in the photodiode ( fig .[ fig : sn ] ) .the digital lock - in amplifier ( stanford research systems , model sr830 ) demodulated the oscillation of with reference to the applied oscillating magnetic field .that field was generated by two extra windings on each of the coils and was powered by the analog output of the reference function generator contained within the lock - in amplifier .the built - in function generator has the advantage that it delivers a very pure sine wave ( phase locked to the synchronization input ) and its amplitude can be controlled via the gpib interface of the lock - in amplifier .c under conditions optimized for maximum magnetometric sensitivity with a resolution bandwidth of 1 ( sampling time 1 s ) .the amplitude measured by the lock - in amplifier corresponds to the upper horizontal line .the amplitude of the central peak is depressed , since it is slightly broadened by the hanning window used by the fft spectrum analyzer ( see text ) .the level is the shot - noise level calculated from the dc - photocurrent .the dashed line marks the rms noise measured at 23 k .the with respect to the calculated shot - noise level is .the rms noise is a factor of 1.55 higher than resulting in a of . ] in order to record magnetic resonance lineshapes the lock - in amplifier was synchronized to a reference frequency supplied by a scanning function generator .the data measured by the lock - in ( amplitudes of the in - phase and quadrature signals ) were transmitted in digital form to a pc , thus avoiding additional noise .although the theory of optical magnetometry is well known , predictions about the real performance of a magnetometer , especially when it is operating in weakly shielded environments , are difficult to make .the performance depends on laser power , rf power , cell size , laser beam profile , buffer - gas pressure , and the temperature - dependent density of cs atoms .the size of the cells and the buffer gas pressure were dictated by the available cells : we used 20 mm long cells with 20 mm diameter including 45 mbar ne and 8 mbar ar with a saturated cs vapor . 
since the cell is oriented at 45 with respect to , the transverse spatial resolution was 28 mm .the cross section of the laser beam was limited by the 8 mm apertures of the optical components ( polarizers and quarter - wave plates ) .our magnetometer produces a signal which was proportional to the magnetic field changes .the noise of the signal in a perfectly stable field therefore determines the smallest measurable magnetic field change , called the noise equivalent magnetic field ( nem ) .the nem is given by the square root , , of the power spectral density , , of the magnetometer signal , expressed in .the rms noise , , of the magnetometer in a given bandwidth is then a straightforward way to measure the intrinsic sensitivity would be to extract the noise level from a sampled magnetometer time series via a fourier transformation .however , that process requires very good magnetic shielding since the measured noise is the sum of the magnetic field noise and the intrinsic noise of the magnetometer .many studies under well - shielded conditions have been carried out in our laboratory , leading to the result that optical magnetometers are in principle sensitive enough to measure the magnetic field of the human heart . however , the shielding cylinders used in these investigations were too small to accommodate a person .the present study investigates which level of performance can be obtained in a weakly shielded environment with a volume large enough to perform biomagnetic measurements on adults . inthe walk - in shielding chamber available in our laboratory the magnetic noise level was about one order of magnitude larger than the strongest magnetic field generated by the heart . in order to compensate for thisthe actual cardiomagnetic measurements were done with two magnetometers in a gradiometric configuration . however ,the optimal working parameters where determined for a single magnetometer channel only . since all time series recorded in this environment are dominated by magnetic field noise , the straightforward way of measuring the intrinsic noise could not be applied . as an alternative approacha lower limit for the intrinsic noise can be calculated using information theory .the so - called cramr rao lower bounds gives a lower limit on how precisely parameters , such as phase or frequency , can be extracted from a signal in the presence of a certain noise level . for the following discussionwe assume that the signal is a pure sine wave affected by white noise with a power spectral density of .we define the signal - to - noise ratio as the rms amplitude , , of the sinusoidal signal divided by the noise amplitude , , for the measurement bandwidth , : for a magnetometer generating a larmor frequency proportional to the magnetic field , eq .( [ eq : gf ] ) , the ultimate magnetic sensitivity is limited by the frequency measurement process . the cramr rao lower bound for the variance , , of the frequency measurement is used ( appendix [ sec : crf ] ) to calculate for cardiac measurements a bandwidth of is required . this together with a typical value for of results in a magnetic field resolution of . in order to be competitive with squid - based cardiomagnetometers that feature an intrinsic noise of 5 20 level of performance is not sufficient .for that reason we have concentrated on a different mode of operation where the phase signal is measured by digital lock - in detection . 
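to give a feeling for the numbers involved , the following back - of - the - envelope sketch uses the common rule of thumb that the noise - equivalent magnetic field is roughly the resonance half - width divided by the gyromagnetic ratio and the signal - to - noise ratio in a 1 hz bandwidth . both the rule of thumb and the cs gyromagnetic ratio of about 3.5 hz / nt are standard values supplied by us here ; they are not transcribed from the cramer - rao expressions above , and the numerical inputs are purely illustrative .

....
GAMMA_CS = 3.5     # Hz per nT (gamma/2pi for the Cs ground state, g_F = 1/4)

def nem_estimate(half_width_hz, snr_per_root_hz):
    """Noise-equivalent magnetic field in T/sqrt(Hz), using the rule of
    thumb  NEM ~ half-width / (gyromagnetic ratio * S/N in 1 Hz)."""
    resolvable_shift = half_width_hz / snr_per_root_hz   # Hz / sqrt(Hz)
    return resolvable_shift / GAMMA_CS * 1e-9            # nT -> T

# e.g. a 3 Hz half-width and a signal-to-noise ratio of 3e4 per sqrt(Hz)
print(nem_estimate(3.0, 3e4))   # ~2.9e-14 T/sqrt(Hz), i.e. a few tens of fT/sqrt(Hz)
....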
in this mode of operation has a fixed value near the larmor frequency .the information about the magnetic field is obtained from the phase shift of the magnetometer response at that frequency .the cramr rao bound for a phase measurement of a signal with known frequency is used in appendix [ sec : crphi ] to calculate the nem for that case : equations ( [ eq : rhof ] ) and ( [ eq : rhophi ] ) define the bandwidth : for which both approaches yield the same magnetometric sensitivity . for bandwidths larger than ,a phase measurement is more advantageous whereas for bandwidths smaller than a frequency measurement gives the higher sensitivity .order low - pass filter of the lock - in amplifier ( time constant ) .( c ) measured frequency response in the phase - stabilized mode . ] in addition to the sensitivity , the bandwidth , i.e. , the speed with which the magnetometer signal follows magnetic field changes , is an important feature of a magnetometer .the steady - state solutions of the bloch equations , and , follow small field changes at a characteristic rate , corresponding to a delay time . since the steady state is only reached exponentially , the frequency response is that of a first order low - pass filter [ see fig .[ fig : bw](a ) ] with a ( -3 db ) cut - off frequency given by and hence a bandwidth of where is the half width of the phase signal measured in . to achieve maximum sensitivity, atomic magnetometers aim at a maximum , at the cost of a reduced bandwidth of typically a few tenths of .a large bandwidth can be obtained by increasing the light power since that leads to shorter and therefore to higher bandwidth .larger light powers also increase the s / n ratio but the effect can be overcompensated by magnetic resonance broadening , resulting in a degradation of the magnetometric resolution .using feedback to stabilize the magnetic resonance conditions is another way to increase the bandwidth .figure [ fig : bw](c ) shows the frequency response of the opm in both the free - running ( without feedback ) mode and in the phase - stabilized mode where the phase signal is used to stabilize to the larmor frequency . for large loop gain the bandwidth is mainly limited by loop delays .a third method to achieve large bandwidths is the so - called self - oscillating mode . in this modethe oscillating signal measured by the photodiode is not demodulated but rather phase - shifted and fed back to the rf - coils . for a 90 phase shift the system then oscillates at the larmor frequency . in order to measure the magnetic field, the frequency of this oscillation has to be measured .magnetic field changes then show up at least theoretically as instantaneous frequency changes . in practice ,reaction times smaller than a single larmor period have been observed .of the three modes outlined above , the latter two both rely on frequency measurements .the self - oscillating magnetometer provides a frequency that has to be measured .the phase - stabilized magnetometer measures the frequency via a reference frequency locked to the larmor frequency . 
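the bandwidth argument can be made concrete with a small numerical sketch of the first - order low - pass response : the resonance half - width ( in hz ) sets the cut - off frequency and the corresponding delay time . the half - width used below is an assumed round number , not a measured value .

```python
import numpy as np

# First-order low-pass model of the magnetometer response: small field
# changes are followed with a characteristic delay tau, giving a -3 dB
# cut-off at f_c = 1/(2*pi*tau).  Following the text, f_c is identified
# with the half-width (in Hz) of the phase signal; the numerical value
# below is an assumption for illustration.
half_width_hz = 100.0           # assumed half-width of the magnetic resonance (Hz)
f_c = half_width_hz             # cut-off frequency of the response (Hz)
tau = 1.0 / (2 * np.pi * f_c)   # characteristic delay time (s)

f = np.logspace(0, 4, 401)                     # probe frequencies (Hz)
response = 1.0 / np.sqrt(1.0 + (f / f_c)**2)   # amplitude transfer function

# verify the -3 dB point numerically
i3db = np.argmin(np.abs(response - 1.0 / np.sqrt(2.0)))
print(f"tau = {tau*1e3:.2f} ms, -3 dB point near {f[i3db]:.1f} Hz")
```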
as a consequence , both methods suffer from the reduced magnetometric resolution predicted by eq .( [ eq : rhof ] ) .therefore , we have concentrated on the free - running mode of operation for which the magnetometric resolution is given by eq .( [ eq : rhophi ] ) and the bandwidth by eq .( [ eq : bw ] ) .thanks to the rather high light power required for optimal magnetometric resolution at higher cell temperatures , the cut - off frequency of the magnetometer was 95 .the bandwidth of the device under these conditions can be extracted from the transfer function ( fig .[ fig : bw](b ) ) and is about 140 . because of the time constant of the lock - in amplifier, the measured bandwidth is 10 than the one would expect for a first order low - pass filter [ eq . ( [ eq : bw ] ) ] . ) with added offset and phase rotation to the measured data .the fit model assumes a constant magnetic field distribution .the offset is indicated by the dot close to the origin .the short diameter of the ellipse is drawn in order to illustrate the phase rotation of 2.4 . ]figure [ fig : circ ] shows a nyquist plot with experimental data and a model simultaneously fit to the in - phase and quadrature components of the data .the data show a certain asymmetry that can not be reproduced by the model . the nyquist plots for different magnetic field distributions ( fig .[ fig : circ_grads ] ) suggest that the asymmetry is caused by inhomogeneous magnetic fields . unfortunately the models discussed in sec .[ sec : nyquist ] do not fit the data correctly , implying that higher - order gradients cause the deformation of the measured lineshape .the fact that the asymmetry is more pronounced for high rf amplitudes indicates that inhomogeneous rf - fields causing the different parts of the ensemble to contribute with different widths have to be considered .unfortunately , models for such inhomogeneities do not lead to analytic line shapes .an empirical model which assumes the measured resonance consists of a sum of several resonances , each at a different position and with a different width , can be fit to the data .the data can be fit perfectly if the number of subresonances is high enough .however , such fits have a slow convergence and do not provide the needed information about the width and amplitude of the resonance in single fit parameters . for practical reasons ( during the optimization more than 2000 spectra were fit ) we decided to use the constant magnetic field distribution model for fitting data similar to the ones in fig .[ fig : circ ] .magnetic field inhomogeneities have much less influence on the shape of the phase signal resulting in more reliable values for .the phase signal represents the speed with which the resonance evolves through the nyquist plot . 
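as an illustration of the line fitting used here , the sketch below performs a simultaneous least - squares fit of idealized dispersive ( in - phase ) and absorptive ( quadrature ) lorentzian lineshapes , i.e. the homogeneous - field limit of the constant field distribution model ; the synthetic data , noise level and starting values are assumptions that stand in for the measured spectra .

```python
import numpy as np
from scipy.optimize import least_squares

# Simultaneous fit of idealized in-phase (dispersive) and quadrature
# (absorptive) Lorentzian lineshapes, the homogeneous-field limit of the
# model discussed in the text.  Synthetic data replace the measured
# spectra; amplitude, half-width and center are the fit parameters.
def lineshapes(params, delta):
    amp, gamma, delta0 = params
    d = delta - delta0
    x = amp * d / (d**2 + gamma**2)       # in-phase (dispersive)
    y = amp * gamma / (d**2 + gamma**2)   # quadrature (absorptive)
    return x, y

def residuals(params, delta, x_data, y_data):
    x, y = lineshapes(params, delta)
    return np.concatenate([x - x_data, y - y_data])

rng = np.random.default_rng(1)
delta = np.linspace(-500.0, 500.0, 201)          # detuning (Hz), assumed range
x_true, y_true = lineshapes((1000.0, 80.0, 10.0), delta)
x_data = x_true + 0.05 * rng.standard_normal(delta.size)
y_data = y_true + 0.05 * rng.standard_normal(delta.size)

fit = least_squares(residuals, x0=(500.0, 50.0, 0.0),
                    args=(delta, x_data, y_data))
print("fitted amplitude, half-width, center:", np.round(fit.x, 2))
```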
using both the phase signal and the nyquist plot ,the in - phase and quadrature components of the resonance were reconstructed , however , the frequency scaling were given by the phase signal only .for the optimization of the nem given by eq .( [ eq : rhophi ] ) the s / n ratio of the lock - in input signal and the linewidth have to be measured .figure [ fig : sn ] shows a frequency spectrum recorded at the input of the lock - in amplifier using a fft spectrum analyzer .the frequency was tuned to the center of the magnetic resonance so that the modulation of the photocurrent was at its maximum amplitude .the power spectrum shows a narrow peak at surrounded by noise peaks that characterize the magnetic field noise .monochromatic magnetic field fluctuations , e.g. , line frequency interference , modulate the phase of the measured sine wave and show up in the power spectrum as sidebands .the low frequency flicker noise of the magnetic field thus generates a continuum of sidebands that sum up to the background structure surrounding the peak in fig .[ fig : sn ] .the estimation of the intrinsic sensitivity is based on the assumption that those sidebands would disappear in a perfectly constant magnetic field .the amplitude noise of the signal is mainly due to the electron shot noise in the photodiode , which generates a white noise spectrum . for frequencies which aremore than 1 k away from the resonance , the noise level drops to the white noise floor .the electron shot noise is the fundamental noise level that can not be avoided .the noise spectral density can be calculated from the dc current flowing through the photodiode : at room - temperature the measured rms noise in the spectrum was 5% to 20% above the shot - noise level , depending on induced noise on the photocurrent and the laser frequency stabilization that could cause excess noise in the light intensity .the rms noise rose rapidly for higher temperatures because of the increasing leakage current in the photodiode .unfortunately , in the experimental setup the photodiodes were in good thermal contact with the cs cell and , given that the optimal operating temperature of the cs cells turned out to be in the range of 50 c to 60 c , the photodiode produced an excess noise larger than the shot noise of the photocurrent .figure [ fig : sn ] shows a spectrum recorded under conditions optimized for maximal magnetometric resolution . at 53 cthe measured rms noise was higher than the shot noise by a factor of 1.55 .however , this limitation can be overcome easily since the photodiodes do not need to be close to the cs cell and thus can be operated at room - temperature . 
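the shot - noise estimate can be sketched as follows : the one - sided current noise spectral density of a dc photocurrent is the square root of twice the elementary charge times the current , and the signal - to - noise ratio follows from the rms signal amplitude in the chosen resolution bandwidth . the photocurrent and signal amplitude below are assumed values ; only the excess - noise factor of 1.55 is taken from the measurement described above .

```python
import numpy as np

# Photodiode shot-noise floor and resulting signal-to-noise ratio.
# The white current-noise spectral density of a DC photocurrent I_dc is
# sqrt(2*e*I_dc) (one-sided).  I_dc and the signal amplitude are assumed
# values; the excess-noise factor 1.55 is quoted from the text.
e = 1.602e-19                 # elementary charge (C)
i_dc = 10e-6                  # assumed DC photocurrent (A)
i_signal_rms = 1e-6           # assumed rms amplitude of the modulated photocurrent (A)
excess = 1.55                 # measured rms noise relative to shot noise (factor)
bandwidth = 1.0               # resolution bandwidth (Hz)

shot_density = np.sqrt(2 * e * i_dc)              # A / sqrt(Hz)
noise_rms = excess * shot_density * np.sqrt(bandwidth)
snr = i_signal_rms / noise_rms
print(f"shot-noise density: {shot_density:.2e} A/sqrt(Hz)")
print(f"SNR in {bandwidth:.0f} Hz bandwidth: {snr:.2e} ({20*np.log10(snr):.1f} dB)")
```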
in order to avoid the problem of drifting values of during the optimization of , the theoretical shot noise level was used for in eq .[ eq : snr ] instead of the measured noise .the amplitude of the signal can be extracted from the fft - spectrum by integrating the spectrum over three points ( ) around the center frequency .the procedure was needed since the hanning window used by the spectrum analyzer to reconstruct the spectrum causes a slight broadening of the central peak .the values calculated in that way are in good agreement with those measured by the lock - in amplifier .the third parameter needed to calculate the intrinsic sensitivity is the half - width of the magnetic resonance .the value was extracted from a magnetic resonance spectrum recorded by the lock - in amplifier during a frequency sweep of the applied oscillating magnetic field .as discussed in section [ sec : circle ] a constant - gradient model was fit to the data in order to extract .for optimizing in a three - dimensional parameter space , the time for one measurement had to be kept as short as possible .when the lock - in amplifier signal was used as a measure for [ see eq .( [ eq : snr ] ) ] and the noise was calculated from the dc current it was not necessary to record a fft spectrum for each set of parameters of the optimization procedure . in that way the time for a single nem measurement was reduced to the 20 s sweep time of plus the time needed to measure the dc current and the temperature of the cell .the measurement was controlled by a pc running dedicated software for recording and fitting the magnetic resonance signals .the amplitude of the rf field , , was changed automatically by the software , resulting in series of typically ten nems as a function of .a typical optimization run was made by recording many such series while the system slowly heated up . repeatingthose runs for different light powers finally resulted in data for the whole parameter space .the points are extracted from measured magnetic resonance spectra by least squares fitting of model eq .( [ eq : tgrad ] ) .the phase signal ( a ) has a constant linewidth whereas the common widths of the in - phase and quadrature signals ( b ) increase rapidly with rf amplitude .the solid line represents a model fitted to the data that assumed an additional broadening caused by inhomogeneous magnetic fields . ]the first study made with the magnetometer examined the dependence of the magnetic resonance on the rf amplitude measured a series of spectra recorded at room temperature .figure [ fig : xywidth ] shows the dependence of the magnetic resonance signal width on the rf amplitude measured by the coil voltage .the width of the phase signal ( see fig . [fig : xywidth ] ) was fit with a constant , whereas the common width of the in - phase and quadrature components were given by eq .( [ eq : deltaomega ] ) . to fit the widths correctly ,a constant width had to be added to eq .( [ eq : deltaomega ] ) . the additional constant width can be interpreted as a residual broadening caused by magnetic field inhomogeneities of higher order than the one considered in the line fitting model . the nyquist plot ( see fig . [fig : circ ] ) shows that higher order gradients are present and the excellent agreement in fig .[ fig : xywidth ] suggests that they can be treated as an additional broadening . 
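the width - versus - rf - amplitude fit can be sketched with synthetic data as below . the quadrature - sum broadening law used in the sketch is a commonly used stand - in and is not claimed to be identical to eq . ( [ eq : deltaomega ] ) ; the additive constant plays the role of the residual inhomogeneous broadening , and all numerical values are invented .

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the resonance width versus rf amplitude.  The quadrature-sum
# form below is a common stand-in for the rf-broadening law (not the
# exact expression from the text); the additive constant mimics the
# residual broadening attributed to field inhomogeneities.  The data
# points are synthetic.
def width_model(v_rf, gamma0, slope, residual):
    return np.sqrt(gamma0**2 + (slope * v_rf)**2) + residual

rng = np.random.default_rng(2)
v_rf = np.linspace(0.1, 2.0, 15)                        # coil voltage (a.u.)
true = width_model(v_rf, 20.0, 40.0, 5.0)               # widths in Hz (assumed)
data = true + 1.0 * rng.standard_normal(v_rf.size)

popt, pcov = curve_fit(width_model, v_rf, data, p0=(10.0, 10.0, 0.0))
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("gamma0", "slope", "residual"), popt, perr):
    print(f"{name}: {val:.1f} +/- {err:.1f}")
```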
) and quadrature ( ) signals as a function of rf amplitude .the points represent values extracted from measured magnetic resonance spectra .the solid lines show a model fit to the data points ( see text ) .the quadrature amplitude is equal to the amplitude of the incoming sine wave on resonance ( ) . ]figure [ fig : xyampl ] shows the amplitudes of the in - phase and quadrature magnetic resonance signals . the amplitudes where extracted from the same spectra used for fig .[ fig : xywidth ] .the fit model used to explain the amplitudes ( solid lines in fig .[ fig : xyampl ] ) was based on eqs .( [ eq : dr ] ) and ( [ eq : di ] ) with a background proportional to .the origin of the background was an inductive pick up of the field by the photocurrent loop which caused an additional phase - shifted sine wave to be superposed on the photocurrent .as discussed in the theory part ( see fig . [fig : circ_phase ] ) that lead to an offset in the measured amplitudes of the magnetic resonance signal .the nem as a function of rf amplitude is inversely proportional to the quadrature amplitude ( in fig .[ fig : xyampl ] ) , since the linewidth of the phase signal and the shot noise do not change with rf amplitude .the optimal rf amplitude was determined from the data shown in fig .[ fig : xyampl ] and corresponds to . .the points of minimal nem for each optimization run are indicated by points .the connecting line is a cut along which the data of fig .[ fig:4plots ] are obtained . including the variation of the rf amplitude 970 parameter sets were recorded and analyzed to produce the map . ] as described in section [ sec : optmes ] the dependence of the nem on the temperature was recorded while the system was slowly heated .the rf amplitude was automatically scanned so that for every temperature the optimal rf amplidude could be determined .figure [ fig : map ] shows a contour plot of the nem as a function of temperature and light power . if the light power is increased , the temperature ( and hence cs atom density ) has also to be increased to maintain optimal resonance conditions .figure [ fig:4plots](b ) shows the power transmitted through the cell relative to the incident light power .a relative transmission of 0.37 corresponds to an absorption length which matches the cell length .taking into account losses at the windows , a density corresponding to 1.4 absorption lengths was found to be optimal . .] for each light power the optimal temperature is indicated by a dot in fig .[ fig : map ] . plotting the nem along the optimum temperature power curve , i.e. , the curve connecting the dots , results in the plot shown in fig . [ fig:4plots](d ) . for light powers below 15 and the corresponding temperatures the sensitivity rapidly degrades .the loss in sensitivity is less pronounced if the power and temperature are chosen above the optimum .values for of up to 500 000 ( 114 db ) were measured at a resolution bandwidth of 1 .the optimal magnetic field resolution of our magnetometer is reached at a light power of 15 and a temperature of 53 c. with that set of parameters , the usable bandwidth of the magnetometer was determined by a cut - off frequency of about 80 ( see fig .[ fig:4plots](a ) ) . in order to meet the required 100 bandwidtha slightly larger light power can be used . all characterizing measurements ( cf . 
figs .[ fig : res ] , [ fig : bw ] , [ fig : circ ] , and [ fig : sn ] ) were therefore performed with a light power of 20 at 54 c.optimizing the performance of the magnetometer has led to a set of parameters for which the device offers a large sensitivity and a large bandwidth .both requirements can be met at the same time because of rather large linewidths that turned out to be optimal . under these conditionsthe high magnetometric sensitivity relies on the achieved very high signal / noise ratios .the system has the potential to operate at a of 500000 ( fig .[ fig : sn ] ) and we hope to be able to demonstrate this once the photodiodes can be removed from the heated cs cell .however , even using the measured of 320000 , the intrinsic sensitivity of is good enough for less demanding cardiomagnetic measurements .the magnetometer bandwidth of 140 in the free - running phase - detecting mode ( fig .[ fig : bw ] ) is high enough for cardiac measurements .the phase - detecting mode avoids the fundamental limitations associated with frequency measurements using short integration times .an important open experimental question is whether the predicted intrinsic sensitivity can be reached using several of the present opms in a higher order gradiometer geometry . with gradiometric squid sensors it is possible to achieve nem value on the order of in unshielded environments .in future we plan to use cells with spin - preserving wall coatings rather than buffer - gas cells as sensing elements .coated cells have the advantage that the atoms traverse the volume many times during the spin coherence lifetime , therefore averaging out field inhomogeneities .we are therefore confident that the present limit from field gradients can be overcome and that optical magnetometers can reach an operation mode limited by their intrinsic sensitivity .for the measurement of the frequency of a sine wave with a rms amplitude sampled at points separated by time intervals the cramr rao lower bound for the variance of in the presence of white gaussian amplitude noise of variance is given by : where is the total time interval for one frequency determination .the bandwidth on the input side of the lock - in amplifier is therefore , that at the output is . with the definition of the signal - to - noise ratio , eq .( [ eq : snr ] ) , can be expressed independently of the number of samples : ideal measuring processes are limited by that condition only .frequency measurements by a fft with peak interpolation is a cramr rao bound limited measuring process . from that bound a lower limit for the performance of a frequency measuring magnetometer can be derived .the so - called self - oscillating magnetometer is of this type since it supplies an oscillating signal with a frequency proportional to the magnetic field . with eq .( [ eq : gf ] ) it follows that the root spectral density of the measurement noise is given by : cramr rao lower bound for the measurement of the phase of a signal with known frequency is given by : an example of a measurement process limited only by that condition is the lock - in phase detection where the phase is calculated from the in - phase and quadrature outputs of the lock - in amplifier [ see eq .( [ eq : phase ] ) ] . in order to calculate the variance of the phase measurement we assume a white amplitude noise spectrum with a power spectral density : using this expression and the definition of [ eq . ( [ eq : snr ] ) ] , eq . 
( [ eq : vphi ] ) can be written as from the measured phase , the detuning can be derived . for , eq .( [ eq : phase ] ) leads to . using eq .( [ eq : gf ] ) , the detuning can be expressed as a magnetic field difference which leads , together with eq .( [ eq : sigmaphi ] ) , to the magnetic field resolution : the root spectral density of the noise in the measurement , , is thus given by : work was supported by grants from the schweizerischer nationalfonds and the deutsche forschungsgemeinschaft .the authors wish to thank martin rebetez for efficient help in understanding the frequency response of the magnetometer and paul knowles for useful discussions and a critical reading of the manuscript .m. n. livanov , a. n. kozlov , s. e. sinelnikova , j. a. kholodov , v. p. markin , a. m. gorbach , and a. v. korinewsky , `` record of the human magnetocardiogram by the quantum gradiometer with optical pumping , '' adv . cardiol .* 28 * , 78 ( 1981 ) .i. tavarozzi , s. comani , c. d. gratta , g. l. romani , s. d. luzio , d. brisinda , s. gallina , m. zimarino , r. fenici , and r. d. caterina , `` magnetocardiography : current status and perspectives .part i : physical principals and instrumentation , '' ital .heart j. * 3 * , 75 ( 2002 ) .
cardiomagnetometry is a growing field of noninvasive medical diagnostics that has triggered a need for affordable high - sensitivity magnetometers . optical pumping magnetometers are promising candidates for satisfying that need , since it has been demonstrated that they can map the magnetic field of the heart . for the optimization of such devices , theoretical limits on the performance as well as an experimental optimization approach are presented . the promising result is an intrinsic magnetometric sensitivity of 63 ft at a measurement bandwidth of 140 hz and a spatial resolution of 28 mm .
the bacterial min - proteins are a well studied example of a pattern - forming protein system that gives rise to rich spatiotemporal oscillations .it was discovered as a spatial regulator in bacterial cell division , where it ensures symmetric division by precise localization of the divisome to midcell .the dynamic nature of this protein system was demonstrated by live cell imaging in _e. coli _ bacteria , where these proteins oscillate along the longitudinal axis between the cell poles of the rod - shaped bacterium , forming so - called _ polar zones _ .most bacteria use a cytoskeletal structure , a so - called _ z - ring _ , for the completion of bacterial cytokinesis .this z - ring self - assembles from filaments of polymerized ftsz - proteins , the prokaryotic homolog of the eukaryotic protein tubulin , which serve as a scaffold structure for midcell constriction and the eventual septum formation in the midplane .if successful , this process creates two equally sized daughter cells with an identical set of genetic information .a necessary prerequisite for successful symmetric cell division is hence the targeted assembly of ftsz towards midcell . in _cells this is mediated by two independent mechanisms , nucleoid occlusion and the dynamic oscillation of the mincde proteins . while nucleoid occlusion prevents division near the chromosome , the min - system actively keeps the divisome away from the cell poles through the minc - protein acting as ftsz - polymerization inhibitor .the characteristic pole - to - pole oscillations create a time - averaged concentration gradient with a minimal inhibitor concentration of minc at midcell , suppressing z - ring assembly at the cell poles .although minc is indispensable for correct division site placement , it acts only as a passenger molecule , passively following the oscillatory dynamics of mind and mine . on the molecular level ,the oscillations emerge from the cycling of the atpase mind between a freely diffusing state in the cytosolic bulk and a membrane - bound state , induced by its activator mine under continuous consumption of chemical energy by atp - hydrolysis , as shown schematically in figure [ cycle ] . in its atp - bound form mindhomodimerizes and can subsequently attach to the inner bacterial membrane as atp - bound dimers using a membrane targeting sequence in form of a c - terminal amphipathic helix .despite the fact that the physicochemical details of mind membrane binding are not yet fully understood , it has been demonstrated that mind membrane binding is a cooperative process . when being bound , mind diffuses along the membrane .it also recruits mine which in turn triggers the atpase activity of mind , breaking the complex apart and releasing all constituents back into the cytosol .mind then freely diffuses in the bulk and can , after renewed loading of atp and dimerization , rebind to the membrane at a new position .this cycling of mind between two states is the core mechanism for wave propagation of the min - proteins . for a comprehensive overview on the underlying molecular processes, we refer to recent reviews on this topic .one of the most intriguing aspects of the min - system is the impact of geometry on the spatiotemporal patterns . while initial experiments in wild - type cells showed characteristic _pole - to - pole oscillations _ , growing _e. coli _ cells , which roughly double in length before division , can also give rise to stable oscillations in both daughter cells even before full septum closure . 
in very long filamentous mutantsthe pole - to - pole pattern vanishes and several minde localization zones emerge in a stripe - like manner ( _ striped oscillations _ ) , with a characteristic distance of , strongly reminiscent of standing waves .no stable oscillation patterns emerge in spherical cells where minde localization appears to be random without stable oscillation axes .strikingly , the min - system can be reconstituted outside the cellular context using purified components on supported lipid bilayers . using only fluorescently labeled mind and mine and atp as energy source, traveling surface waves were observed in the form of turning spirals and traveling stripes on flat homogeneous substrates , where mind proteins form a moving wave - front , that is consumed by mine at the trailing edge , demonstrating that mind and mine alone are indeed sufficient to induce dynamic patterning .interestingly , these assays work for different lipid species , demonstrating the robustness of the min - oscillations with respect to the detailed values of the binding rates . combining this reconstitution approach with membrane patterning, it was shown that the min - system is capable of orienting its oscillation axis along the longest path in the patch and hence in principle capable of sensing the surrounding geometry .more recently the gap between the traveling _ in vitro _min - waves and the standing min - waves in live cells was closed , using microfabricated pdms compartments mimicking the shape of _cells . in these biomimetic compartments , which confine the reaction space in 3d ,pole - to - pole oscillations were observed , reminiscent of the paradigmatic _ in vivo _ oscillation mode .later it was shown that the min - oscillations are indeed sufficient to spatially direct ftsz - polymerization to midcell , linking two key elements of bacterial cell division in a synthetic bottom - up approach . in order to study the effect of geometry in the physiological context of the cell, one can place growing cells in microfabricated chambers of custom shape .this _ cell sculpting _approach allowed the authors to systematically analyze the adaptation of the min - oscillations to compartment geometry and demonstrated experimentally that different oscillation patterns can be stable for the same cell geometry . using image processing , it was possible to measure the relative frequency of the different modes for a large range of interesting geometries .figure [ cycle ] summarizes the different geometries that have been used before in experiments and that are considered here with computer simulations .while geometry a uses a flat membrane patch , similar to flat patterned substrates , geometry b corresponds to microfabricated chambers with an open upper side .geometry c corresponds to the cell sculpting approach . like for other pattern forming systems ,the theory of reaction - diffusion processes offers a suitable framework to address the min - oscillations from a theoretical point of view .many theoretical models have been proposed to unravel the physical principles behind this intriguing self - organizing protein system and to explain the origin of its rich spatiotemporal dynamics . 
while all of them rely on a reaction - diffusion mechanism similar to the turing model , they differ severely in their details .the first class of mathematical models used an effective one - dimensional pde - approach and relied strongly on phenomenological non - linearities in the reaction terms .although all of them successfully gave rise to pole - to - pole oscillations , they did not allow a clear interpretation of the underlying biomolecular processes and were not in agreement with all experimental observations , such as mine - ring formation and the dependence of the oscillation frequency on biological parameters .the next advance in model building was the focus on the decisive role of mind aggregation and the relevance of mind being present in two states ( adp- and atp - bound ) .very importantly , this highlighted the interplay between unhindered diffusion with a nucleotide exchange reaction in the bulk as a delay element for mind reattachment .subsequent models shared a common core framework but still differed strongly in the functional form of the protein binding kinetics and the transport properties of membrane - bound molecules .the main difference between the more recent models was the dimensionality , ranging from one dimension to two and three .moreover , the models can be classified as deterministic pde - models or using stochastic simulation frameworks , and whether they neglected membrane diffusion or not . while some models contained higher than second order non - linearities in concentrations of the reaction terms , it is the prevailing opinion to rely on at most second order non - linearities , allowing for a clear interpretation in terms of bimolecular reactions .following the same line of thought , a strong effort was made to distill a minimal system that explains the oscillation mechanism without the necessity of spatial templates or prelocalized determinants and neglecting secondary processes like filament formation .the most influential minimal model for the min - system has been suggested by huang and coworkers .it has been further simplified by discarding cooperative mind recruitment by minde complexes on the membrane , allowing a clear view on the core mechanisms : the cycling of mind between bulk and membrane , cooperativity of mind - recruitment and diffusion in bulk and along the membrane . using the minimal model, it has been shown that the canalized transfer of proteins from one polar zone to the other underlies the robustness of the min - oscillations .because the deterministic variants of the minimal model do not allow us to address the role of stochastic fluctuations , a stochastic and fully three - dimensional version has been introduced to study the effect of stochastic fluctuations in patterned environments . for rectangular patterns of , it was found that the system can be bistable , with transverse pole - to - pole oscillations along the minor and longitudinal striped oscillations along the major axis , respectively . 
in this early work, it was observed that the stable phase emerged depending on the initial conditions and that sometimes switching occurred , but the statistics was not sufficiently good to observe switching in quantitative detail .indeed such multistability has been observed experimentally in sculptured cells over a large range of cell shapes and the deterministic minimal model has been used to explain the relative frequency of the different oscillations patterns for a given shape using a perturbation scheme .however , as a deterministic model , this approach was not able to address the rate with which one pattern stochastically switches into another . herewe address this important subject by using particle - based brownian dynamics computer simulations .compared to earlier work along these lines , we have developed new methods to efficiently simulate and analyze the switching process .we find excellent agreement with experimental data and measure for the first time the switching time of multistable oscillation patterns .we also use our model to confirm that it contains the minimal ingredients for the emergence of min - oscillations .in addition , we use our stochastic model to investigate the three - dimensional concentration profiles in different geometries and in particular the role of edges in membrane - covered compartments .we identify novel oscillation patterns in compartments with membrane - covered walls and find a surprisingly simple ( linear ) relation between the bound min - protein densities and the volume - to - surface ratio , which might be relevant for geometry sensing by _ e. coli _ cells .for the particle - based simulation , we use the reaction scheme of the minimal model for cooperative attachment .the model uses the following interactions between min - proteins and the inner bacterial membrane ( schematically shown in figure [ cycle ] ) .freely diffusing cytoplasmic can bind to the membrane with a rate constant binds preferably to high density regions ( cooperative mind binding ) membrane - bound mind also recruits cytoplasmic mine to the membrane with rate , creating a complex all membrane - bound proteins diffuse in the plane of the membrane , but with a much smaller diffusion constant than in the bulk . parameter & set *a * & set * b * & set * c * & unit & description ( reaction type ) + & & & & & bulk diffusion coefficient of mind + & & & & & bulk diffusion coefficient of mine + & & & & & membrane diffusion coefficient + & & & & & first order , unimolecular + & & & & & first order , membrane attachment + & & & & & second order , bimolecular + & & & & & second order , bimolecular + & & & & & first order , unimolecular + & & & & m & total mind concentration + & & & & m & total mine concentration + mine attachment activates atp hydrolysis of mind .the hydrolysis of atp to adp breaks up the membrane - bound complex and releases and mine back into the cytoplasmic bulk with rate constant finally exchanges adp by another atp molecule ( nucleotide exchange ) with the rate this completes the reaction cycle . in table[ tbl : parameterset ] we list our parameter values as set a. for comparison , we also list parameter values used in other studies ( set b in and set c in ) .we use custom - written code to simulate the stochastic dynamics of the min - system with very good statistics . for all simulations we use a fixed discrete time step of . during every time step each particle is first propagated in space . 
thereafter every particle can react according to the previously introduced min reaction scheme [ eq : reactionschemefirst ] [ eq : reactionschemelast ] .the movement of both free and membrane - bound particles is realized through brownian dynamics .individual molecules are treated as point - like particles without orientation .therefore we can monitor the propagation separately for each cartesian coordinate . during a simulation step of displacements of the diffusing particles with diffusion constant are drawn from a gaussian distribution with standard deviation such that where is the probability distribution of .the same update step is used for the and direction .free particles in the bulk of the simulated volume undergo three - dimensional diffusion with reflective boundary conditions at the borders of the simulation volume .the membrane - bound particles perform a two - dimensional diffusion on the membrane with a much smaller diffusion constant ( compare table [ tbl : parameterset ] ) .membrane - bound particles are allowed to diffuse between different membrane areas that are in contact with each other .the different reactions in the min reaction scheme [ eq : reactionschemefirst ] [ eq : reactionschemelast ] can be classified into three different types ( more details on the corresponding implementations are given in the supplementary information ) .the first type considered here are first order reactions without explicit spatial dependence .the conversion of to and the unbinding of the complex from the membrane are of this type .such reactions are treated as a simple poisson process . for a reaction rate ,the probability to react during a time step is given by the second type is also a first order reaction , but with confinement to a reactive area at a border of the simulated volume .the membrane attachment of proteins is a reaction of this type . for a given reaction rate , we implement these reactions by allowing particles that are closer to the membrane than to attempt membrane attachment with a poisson rate .this results in a reaction probability of the last reaction type is a second order reaction between free and membrane - bound particles .the cooperative recruitment of cytosolic and mine to membrane - bound mind are reactions of this third type . in our simulationwe adopt the algorithm implemented in the software package smoldyn , which has been used earlier to simulate the min - system .this algorithm is based on the smoluchowski framework in which two particles react upon collision .however , the classical treatment by smoluchowski only considers diffusion - limited reactions and therefore assumes instantaneous reactions upon collision .in order to take finite reaction rates into account , one imposes a radiation boundary condition . from the diffusion constant , the reaction rate and the simulation time step , a reaction radius calculated . whenever a freely diffusing particle comes within the distance of to a membrane - bound particle , the free particle reacts .for intermediate values of ( such as the time step of that we use for the min - system ) the value of is obtained numerically .those numerical values are taken from the smoldyn software .for example , for parameter set a the reaction radius for the rate is , and for it is . in our simulationswe use rectangular reaction compartments .we considered three different membrane setups as illustrated in figure [ cycle ] . 
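a minimal sketch of one simulation step ( propagation by gaussian displacements with reflective walls , followed by a first - order reaction treated as a poisson process ) is given below . the compartment size , diffusion coefficient and rate constant are illustrative values only and do not reproduce the entries of the parameter sets in table 1 ; the bimolecular recruitment reactions and the membrane - bound species are omitted for brevity .

```python
import numpy as np

# One Brownian-dynamics step for freely diffusing bulk particles with a
# first-order conversion reaction (e.g. nucleotide exchange
# MinD:ADP -> MinD:ATP), following the update rules described above.
# All numbers are illustrative, not the actual parameter-set values.
rng = np.random.default_rng(3)

box = np.array([5.0, 1.0, 1.0])     # compartment dimensions (micrometres), assumed
d_bulk = 12.5                       # bulk diffusion coefficient (um^2/s), assumed
k_exchange = 6.0                    # first-order rate constant (1/s), assumed
dt = 1e-4                           # time step (s)

n = 2000
positions = rng.uniform(0.0, 1.0, size=(n, 3)) * box
is_adp = np.ones(n, dtype=bool)     # True: MinD:ADP, False: MinD:ATP

def step(positions, is_adp):
    # 1) diffusive propagation: independent Gaussian displacement per axis
    positions = positions + rng.normal(0.0, np.sqrt(2 * d_bulk * dt), positions.shape)
    # reflective boundary conditions at the compartment walls
    positions = np.abs(positions)                 # reflect at the origin walls
    positions = box - np.abs(box - positions)     # reflect at the far walls
    # 2) first-order reaction treated as a Poisson process
    p_react = 1.0 - np.exp(-k_exchange * dt)
    converts = is_adp & (rng.random(is_adp.size) < p_react)
    is_adp = is_adp & ~converts
    return positions, is_adp

for _ in range(1000):               # simulate 0.1 s
    positions, is_adp = step(positions, is_adp)
print("fraction still ADP-bound:", is_adp.mean())
```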
to mimic _ in vitro _experiments , where substrates or open compartments are functionalized with a membrane layer , we place the reactive membrane at the bottom ( geometry a ) or at the side walls and the bottom of the simulation compartment ( geometry b ) . to simulate rectangular shaped _e. coli _ cells , inspired by the cell sculpting approach from , fully membrane - covered volumes are used ( geometry c ) .we refer to the long side of the lateral extension as the major or the -axis , and the smaller side as minor or -axis , and accordingly consider the compartment height to extend in the -direction , aligning the rectangular geometry perpendicular with the coordinate frame . in our simulationswe investigate a wide range of compartment dimensions . for a simulation box with a length of , width of and height of, we use particles , particles and mine particles as initial condition .these particle numbers amount to a total mind concentration of m and a mine concentration of m . for other simulation compartment sizeswe scale the particle numbers linear with the volume , since in experiments _ e. coli _ bacteria typically have a constant min - protein concentration . in our simulations of the min - systemdifferent oscillation patterns emerge along the major or minor axis of the simulation compartment . in order to analyze the frequency of different modes and the stability of the oscillations in the large amount of simulation data ,an oscillation mode recognition algorithm is needed .therefore we monitor the mind protein densities at the poles of the different axes over time . to determine the axis along which the oscillation takes place, we compare the fourier transformation of the normalized densities over time ( where i denotes the discretized time resolution ) .if an oscillation takes place , there is a dominant peak in the fourier spectrum and the overall maximal amplitude of the fourier spectrum is significantly higher than the one from the non - oscillating axis . the same fourier spectrum is also used to determine the oscillation period . to differentiate between pole - to - pole oscillations and striped oscillations of a given axis in the system, we extract the phase difference between the density oscillation at the poles of the cell .when identifying switching events , one has to be more careful because stochastic fluctuations might lead to temporal changes that might be mistaken to be mode switches .for this purpose , we therefore smoothen the data . in detail, we calculate the convolution between the densities over time and a gaussian time window here is the time between successive density measurements and we set as width of the time window . the current oscillation mode is now determined from the convoluted densities and assigned to the time . in this way , only switches are identified that persist for a sufficiently long time .first we investigated the oscillations that emerge in geometry a with parameter set a using a rectangular simulation volume with dimensions ( ) . with this particular choicethe width of the system approximately matches the typical length of wild - type _e. coli _ cells and the length of the system corresponds to the length of a grown _ e. coli _ cell which can roughly double in length before septum formation and division . as shown by the kymographs in figure [ switch ] and in agreement with the results of hoffmann _ et al . 
_ , in our simulations two different oscillation modes occur ( compare also supplemental movie s1 ) .note from the color legend that dark and light colors correspond to low and high concentrations , respectively , as used throughout this work . in the first modethe min - proteins oscillate along the minor -axis from one pole to the other ( pole - to - pole oscillation ) . in the second modethe proteins oscillate along the major -axis between the poles and the middle of the compartment ( striped oscillation ) .the system stochastically switches between the two modes , sometimes via a short oscillation along the diagonal of the compartment .the mode switching behavior of the min - system in large volumes is in agreement with the experimental results of wu _ et al ._ and can not be analyzed completely with conventional pde - models of the min - oscillations because they do not account for the noise in the system leading to the stochastic switch .detailed analyses of the pole - to - pole and the striped oscillations from figure [ switch ] are shown in figure [ all ] and [ all ] , respectively ( compare also supplemental movies s2 and s3 , respectively ) .first , we consider the pole - to - pole oscillations in figure [ all ] . in the kymographs of the bound mind and mine proteins ( top row figures ) we see clusters of bound proteins that detach from the membrane beginning in the middle and from there move towards one of the poles of the compartment in an alternating way .the shapes of the bound mind protein density clusters in the kymographs have a triangular form , in contrast to the line - like structures of the bound mine proteins . those density lines in the bound mine kymograph show that the mine proteins form a high density cluster in the middle of the cell which propagates to one of the compartment poles .this behavior is similar to the experimentally observed ringlike structures of mine proteins in _ e. coli _ bacteria that travel from the middle to the poles of the cell , leading to the dissociation of mind proteins from the membrane .the kymographs of the free particles ( middle row in figure [ all ] ) are averaged over all heights and have the inverse shape of the corresponding kymographs of the bound particles ( top row in figure [ all ] ) . where the density of bound particles is high , the density of free particles is low and vice versa . during the simulations the spatial density differences of both mind and mine are higher for membrane - bound particles than for free particles in the bulk .therefore in the two bottom kymographs in figure [ all ] , which are showing total particle densities , the pattern of membrane - bound particles is dominant . using a compartment of as in figures [ switch ] and [ all ] . *b * density change of non - bound mind and mine in time at different heights above the membrane for a bottom area and compartment height .[ cut ] ] , and , respectively .* b * density kymographs along the major axis for , and . * c * time - averaged density profiles for different membrane diffusion - coefficients . 
*d * time - averaged density profiles for different cooperative mind membrane - recruitment rates .[ sensitivity ] ] the kymographs of the striped oscillation in figure [ all ] have the same structure as the ones of the pole - to - pole oscillations .however , the edges of the bound min - protein clusters in the kymographs , that indicate the traveling min - waves , are curved , in contrast to the straight lines that we see for the pole - to - pole oscillations as shown in figure [ all ] .the time - averaged density profiles of mind proteins for the pole - to - pole and striped oscillations are shown in figure [ all ] and , respectively .as expected the density of the proteins is minimal between the oscillation nodes of the emerging standing wave patterns . ) .below and above the projected density profile along the side walls of the compartment is shown . ]it is highly instructive to compare the time evolution of the mind and mine protein densities . in figure [ cut ]we plot the time evolution of the particle densities of the transverse pole - to - pole oscillation at a fixed position , which is at the edge of the minor axis along which the oscillation takes place .the shape of the transient density profiles is similar to experimentally observed density profiles of traveling min - protein waves on flat membrane surfaces .the period of both oscillations modes was {33.8(1)}{\second} ] & ] .these kind of reactions have no dependence on the spatial coordinates of the system . in the min reaction schemethe nucleotide exchange reaction ( ) in the bulk and the membrane detachment ( ) are of this type . *simple membrane attachment reactions are also first - order unimolecular reactions of the above type {\,\ , k_{1,m}\,\ , } b ] after the diffusive step .this procedure is repeated iteratively until convergence is achieved . by inverting the tabulated relation between the s and the s one solves the inverse problem andcan thus obtain as a function of the diffusion coefficient , the time step and the forward rate constant .the radial distribution functions one obtains using this algorithm reduce to the radial distribution function of the smoluchowski model in the limit of small time steps , as to be expected for infinitely detailed brownian motion . for irreversible bimolecular reactionsthe rdf in the smoluchowski model reads in general however , the rdfs qualitatively resemble the functional form of the radial distribution function according to the collins and kimball model . for a quantitative test of our algorithm with the corresponding mean - field results we solve equation numerically . in order to doso we make further simplifying assumptions . for arbitrary distributions of the bound particles the differential equation can not be reduced to one dimension as before , however , it is possible to do so for evenly distributed bound particles .we numerically solved this pde using the min - system parameter set a as introduced in table 1 of the main text and assuming evenly distributed bound particles on the bottom surface of a rectangular simulation box of volume . 
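a toy version of the consistency check described here and detailed in the following paragraphs can be sketched as follows : freely diffusing particles are removed when they react close to the bottom surface , and the surviving particle number is compared with a mean - field prediction . unlike the full test , which solves the mean - field pde numerically , the sketch uses the well - mixed ( reaction - limited ) limit so that the prediction is a simple exponential with effective rate k delta / h ; all numbers are assumptions for illustration .

```python
import numpy as np

# Toy consistency check: particles diffusing along z are removed when
# they react inside a thin layer above the bottom surface.  In the
# well-mixed, reaction-limited regime the mean-field prediction is an
# exponential decay with effective rate k * delta / height.
rng = np.random.default_rng(4)

height = 1.0        # compartment height (um), assumed
delta = 0.01        # reactive layer thickness above the membrane (um), assumed
d_bulk = 50.0       # large diffusion coefficient -> well mixed (um^2/s), assumed
k = 50.0            # reaction attempt rate inside the layer (1/s), assumed
dt = 1e-4           # time step (s)
t_end = 1.0

z = rng.uniform(0.0, height, size=5000)
alive = np.ones(z.size, dtype=bool)
times, survivors = [], []

for i in range(int(t_end / dt)):
    z += rng.normal(0.0, np.sqrt(2 * d_bulk * dt), z.size)
    z = np.abs(z)                       # reflect at the bottom
    z = height - np.abs(height - z)     # reflect at the top
    in_layer = alive & (z < delta)
    react = in_layer & (rng.random(z.size) < 1.0 - np.exp(-k * dt))
    alive &= ~react
    if i % 1000 == 0:
        times.append((i + 1) * dt)
        survivors.append(int(alive.sum()))

times = np.array(times)
mean_field = alive.size * np.exp(-k * delta / height * times)
print("simulated :", survivors)
print("mean field:", np.round(mean_field).astype(int).tolist())
```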
to match this scenario of the mean - field model in the particle - based framework , we consider a molecular species of freely diffusing particles in a simulation volume which can react with membrane - bound particles of another species .for the sake of simplicity we remove particles of species after a successful reaction step with a membrane - bound particle , while the bound - particles are not removed after the reaction step to keep .the results of this comparison using the min - systems parameters and several different values for are shown in figure [ ib10 ] .free particles that are removed upon reaction with evenly distributed bound particles on the bottom surface of a rectangular simulation volume . denotes number of remaining free particles in the system .blue rectangles depict the values from the numerical solution of equation and red triangles represent the results from the particle - based simulation algorithm .the parameters from the main text are used .those are , and . in the four figuresdifferent values for the density of the bound particles were used : top left , top right , bottom left and bottom right .,title="fig : " ] free particles that are removed upon reaction with evenly distributed bound particles on the bottom surface of a rectangular simulation volume . denotes number of remaining free particles in the system .blue rectangles depict the values from the numerical solution of equation and red triangles represent the results from the particle - based simulation algorithm .the parameters from the main text are used .those are , and . in the four figuresdifferent values for the density of the bound particles were used : top left , top right , bottom left and bottom right .,title="fig : " ] + free particles that are removed upon reaction with evenly distributed bound particles on the bottom surface of a rectangular simulation volume . denotes number of remaining free particles in the system .blue rectangles depict the values from the numerical solution of equation and red triangles represent the results from the particle - based simulation algorithm .the parameters from the main text are used .those are , and . in the four figuresdifferent values for the density of the bound particles were used : top left , top right , bottom left and bottom right .,title="fig : " ] free particles that are removed upon reaction with evenly distributed bound particles on the bottom surface of a rectangular simulation volume . denotes number of remaining free particles in the system .blue rectangles depict the values from the numerical solution of equation and red triangles represent the results from the particle - based simulation algorithm .the parameters from the main text are used .those are , and . 
in the four figures different values for the density of the bound particles were used : top left , top right , bottom left and bottom right . ]
the spatiotemporal oscillation patterns of the proteins mind and mine are used by the bacterium _ e. coli _ to sense its own geometry . strikingly , both computer simulations and experiments have recently shown that for the same geometry of the reaction volume , different oscillation patterns can be stable , with stochastic switching between them . here we use particle - based brownian dynamics simulations to predict the relative frequency of different oscillation patterns over a large range of three - dimensional compartment geometries , in excellent agreement with experimental results . fourier analyses as well as pattern recognition algorithms are used to automatically identify the different oscillation patterns and the switching rates between them . we also identify novel oscillation patterns in three - dimensional compartments with membrane - covered walls and identify a linear relation between the bound min - protein densities and the volume - to - surface ratio . in general , our work shows how geometry sensing is limited by multistability and stochastic fluctuations .
positron emission tomography ( pet ) is currently one of the most prominent and promising techniques in the field of medical imaging . it plays a unique role both in medical diagnostics and in monitoring effects of therapy , in particular in oncology , cardiology and neurology .therefore , notable efforts are devoted to improve this imaging technique .the best way so far is to determine the annihilation point along the line of response ( lor ) based on measurement of the time difference between the arrival of the gamma quanta at the detectors , referred to as the time of flight ( tof ) difference . as it was shown in ref . , even with the tof resolution of about 400 ps that is achievable with non - organic crystals , a signal - to - noise ratio can be improved substantially in reconstruction of clinical pet images .in the articles , a new concept of the tof - pet scanner , named j - pet , was introduced .the j - pet detector offers improvement of the tof resolution due to the use of fast plastic scintillators .a single detection unit of the newly proposed tof - pet detector is built out of a long scintillator strip .light pulses produced in the strip propagate to its edges where they are converted via photomultipliers into electric signals .there are two main reasons why the tof resolution may be improved in j - pet scanner : i ) a very short rise - time and duration of the signals and ii ) a relation between the shape and amplitude of the signals and the hit position .the latter feature usually distorts the time resolution but , when the waveform of the signal is registered , the information about a change of the shape with the position may increase the position resolution and indirectly improve also the resolution of the time determination . however , to probe the signals , with duration times of few nanoseconds , a sampling time of order of picoseconds is required .this can be done well with the oscilloscopes during the laboratory studies on the prototype , but in the final multimodular devices with hundreds of photomultipliers , probing with oscilloscopes is not feasible .therefore , sampling in the voltage domain using a predefined number of voltage levels is needed .an electronic system for probing these signals in a voltage domain was developed and successfully tested . in recent papers have investigated the performance of a single unit of a j - pet scanner . sampling in the voltage domain at four thresholds was simulated and each pair of waveforms was represented by a 15-dimensional vector holding information about the relative time values of a signal s arrival at both scintillator ends . in that scenario ,the spatial and time resolutions of the hit position and event time for annihilation quanta were determined to be 1.05 cm and 80 ps ( ) , respectively .it is evident that the spatial and time resolutions can be further improved primarily by an increase in the number of threshold levels , as was also concluded e.g. in article .however , the number of channels in the electronic devices is a very important factor in determining the cost of the pet scanner .therefore , the question arises : is it possible to recover the whole signal based on only a few samples ?equivalently , how many threshold levels have to be applied to achieve a reasonable estimation error ? 
in this article we propose a novel signal recovery scheme based on ideas from the tikhonov regularization and compressive sensing methods that is compatible with the signal processing scenario in j - pet devices .we investigate the quality of signal recovery based on the scheme with a single scintillator strip module introduced in ref .the two most important aspects of our work involve i ) a development of fast recovery algorithms and ii ) a statistical analysis of an error level . in practicethe algorithm needs to work in real - time scenarios : during a single pet examination more than 10 million signals are acquired in just 10 - 15 minutes .moreover , only results for realistic scenarios with noisy measurements are considered . in particular , as was mentioned , the most important part of our investigations is to determine a dependence of the signal recovery error on the number of samples taken in the voltage domain . in this paperthe formula for calculations of the recovery error will be introduced and proven .this article is organized as follows .we will define the problem of signal recovery and show briefly the tikhonov regularization and compressive sensing methods in sec . 2 .in the last part of this section we will introduce the theorem enabling the determination of the signal recovery error as a function of the number of samples .the experimental setup of the simplified pet device with a single scintillator strip that enables us to acquire real signals as well as the results of their analysis are presented in sec .a detailed analysis of the experimental characteristic of signal recovery error as a function of the number of samples , as well as the explanation of the specificity of the signal recovery method in the application to the j - pet measurement are provided in sec .3.2 . in sec .3.3 we have discussed the limitations of the method of signal recovery . in particular , we have presented how the quality of the information needed to recover the signals , and therefore to estimate the recovery error , vary with size of training set of fully acquired signals .moreover , we have demonstrated that using the recovered waveform of the signals , instead of samples at four voltage levels alone , improves the spatial resolution of the hit position reconstruction .a detailed description of this study is given in sec .the conclusions and directions for future work are presented in sec .we wish to recover a finite signal in a situation where the number of available samples , denoted as measurement , is much smaller than the signal dimension ( is sampled on some partial subset , where the cardinality ) . in the compressive sensing ( cs ) method , a sparse expansion of signal , evaluated via linear and orthonormal transformation , is considered . in the following we assume weare given a contaminated measurement and then one may write : where is a matrix modeling the sampling system , constructed from rows of matrix that corresponds to the indexes of described in the subset , and is an error term .therefore , during the recovery process the information about the measurement may be included in the form of the linear system of equations : it should be stressed that in the case of presence of noise , represented by signal , instead of an exact recovery of signal we will consider the solution and by the analogy , instead of signal we will consider the solution . 
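To make the under-sampled measurement model concrete, the following sketch (an illustration, not the authors' code) builds a length-N signal from a few coefficients of an orthonormal transform and takes a small number of noisy samples of it. The variable names (`N`, `M`, `sigma_noise`) and the choice of a DCT basis as a stand-in for the PCA basis introduced later are assumptions made for the example.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)

N, M = 300, 7                 # signal length and number of acquired samples (assumed values)
sigma_noise = 0.015           # measurement noise standard deviation, in volts (assumed)

# Orthonormal synthesis matrix Psi: column j is the j-th DCT basis function in the time domain.
Psi = idct(np.eye(N), axis=0, norm='ortho')

# A synthetic "signal": a smooth waveform expressed through a few low-order transform coefficients.
x_true = np.zeros(N)
x_true[:10] = rng.normal(size=10)
y_full = Psi @ x_true

# Partial, noisy measurement on a subset T of sample indices (|T| = M << N).
T = np.sort(rng.choice(N, size=M, replace=False))
A = Psi[T, :]                                   # measurement matrix: selected rows of Psi
y_meas = A @ x_true + rng.normal(0.0, sigma_noise, size=M)

print(y_meas.shape, A.shape)                    # (7,) (7, 300): far fewer equations than unknowns
```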
before we look in more detail we may state that the evaluation of requires two steps : i ) recovery of the sparse expansion and ii ) calculation of based on the .the first step of the procedure is crucial . in the situation where an exact solution can not be found ,cs method provide an attempt to recover by solving optimization problem of the form where is the size of the error term .the minimization approach provides a powerful framework for recovering sparse signals .moreover , the use of minimization leads to a convex optimization problems for which there exist a variety of greedy approaches like orthogonal matching pursuit or basis pursuit .other insights provided by cs are related to the construction of measurement matrices ( ) that satisfy the restricted isometry property .for an extensive review of cs the reader is referred to refs .. we will incorporate from the cs framework to our scheme only the idea of conducting the experiment , formulated by a linear system of equations as given explicitly in eq .( [ linearsyst ] ) . the problem formulated by eq .( [ linearsyst ] ) alone is essentially underdetermined , and is so - called ill - posed . as in the case of the cs method , it is necessary to incorporate further assumptions or information about the desired solution in order to stabilize the problem .as an alternative to the cs theory , one may use the regularization methods .the tikhonov regularization ( tr ) method is the most suitable for our problem . here , the idea is to define the regularized solution as the minimizer of the following expression : in eq .( [ tikmin ] ) , both signals and are assumed to be given with multivariate normal ( mvn ) distributions , where and are the mean value and covariance matrix of a measured signal , respectively , and are the mean value and covariance matrix of a prior distribution of , respectively .the covariance matrix in eq .( [ ydist ] ) is diagonal with the values on the diagonal equal to the measurement error variances ( as explained in sec .2.4 ) . with the introduction of the second term to the optimization problem in eq .( [ tikmin ] ) , an additional information from a training set of fully sampled signals is provided .the prior distribution of sparse representation ( see eq .( [ xdist ] ) ) is evaluated based on the linear transformation of the training set of signals by using the principal component analysis ( pca ) decomposition .thus , in order to find the sparse representation of a given measurement , as a solution of eq .( [ tikmin ] ) , one needs to specify first the prior distribution of . beside the advantage of including the additional information from training signals ,a further benefit of the tr approach is that the problem in eq .( 2 ) has an optimal solution which can be determined explicitly . in sec .2.2 we will evaluate the orthonormal matrix , as well as the parameters of the prior distribution of signal , see eq .( [ xdist ] ) , via the pca decomposition of training signals .it should be stressed that these parameters are calculated only once , at the preparation stage of the procedure .thus , the same matrix and vector , are used to recover a signal for each measurement .an example of the idea of using the pca decomposition of a training data set in a similar problem may be found in ref . . in sec .2.3 the solution of the tr formula described in eq .( [ tikmin ] ) , as well as its properties , will be provided . 
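As a point of reference for the l1/greedy route mentioned above, here is a minimal orthogonal matching pursuit sketch. It is not the recovery method ultimately adopted in this work (which is the Tikhonov-regularized solution described next), and the sparsity level `K` is a hypothetical parameter.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily select K columns of A to explain y."""
    M, N = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the current support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        residual = y - A[:, support] @ coef
    return x_hat

# Usage with the toy measurement from the previous sketch (A, y_meas assumed defined):
# x_omp = omp(A, y_meas, K=5)
```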
finally in sec .2.4 we will introduce the theorem enabling the determination of the signal recovery error as a function of the number of samples .pca is a statistical study , based on the orthogonal transformation , to convert a set of signals into a set of linearly independent variables , such that the variance of the projected data is maximized . for the training data matrix of fully sampled signals , \label{datay}\ ] ] where is the mean of the aligned training signals , the pca coordinates ] in eq .( [ xay ] ) is calculated in such a way that the projection of the data matrix with successive basis vectors inherits the greatest possible variance in the data set .thus , the first basis vector has to satisfy : where ( the orthonormality is restricted ) .the component can be found by subtracting the first principal components from data set : and then finding the basis vector which extracts the maximum variance from this new data matrix where . in the case discussed in this paper , the matrix evaluated based on the pca decomposition of the training set of signals and therefore the parameters of the mvn distribution of ( ) are estimated based on data matrix , constructed according to eq .( [ xay ] ) .the empirical covariance matrix of data set may be evaluated as : .\label{covp}\ ] ] the covariance matrix is diagonal , with values sorted in non - increasing order . since the mean of the signals in data set is equal to 0 , see eq . ( [ datay ] ) , the mean . in the previous sectionwe have shown how the prior information from a training set of signals i.e. the orthonormal matrix , and the parameters of the prior distribution of signal , may be introduced to the tr framework . in this sectionwe will derive a sparse solution of eq .( [ tikmin ] ) , and its covariance matrix , denoted hereafter as , for a particular measurement , based on the tr assumptions .the posterior probability density function ( pdf ) of the signal conditional on measurement , namely , can be computed after combining the prior distribution of , , likelihood of measurement , and via the well - known bayesian rule : to describe the mvn distribution in eq .( [ bayes ] ) we will use the following notion : where is an variable with mean value and covariance matrix .hence , the marginal and conditional densities of and from eq .( [ bayes ] ) are given as follows : equations ( [ probx ] ) and ( [ probyx ] ) result directly from the previously described eq .( [ xdist ] ) and ( [ ydist ] ) , respectively . equation ( [ proby ] ) shows that the probability is independent of , and therefore serves as a normalization constant . the posterior probability in eq .( [ probxy ] ) can be described exclusively by its first two moments ( ) because a gaussian pdf is self - conjugate and the pdfs on the right hand side of eq .( [ bayes ] ) are gaussian . after some simple calculations the equations for and are given by : it is worth noting that the solutions in eq .( [ estx ] ) and ( [ ests ] ) are analogous to kalman filter update equations ( cf ) . 
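The preparation-and-recovery pipeline described above can be sketched end to end as follows. The training matrix, noise level and sample subset are placeholders, and the posterior-mean update mirrors the Kalman-type equations referenced in the text; details such as the normalization of the empirical variances are simplifications of the example.

```python
import numpy as np

def fit_pca_prior(Y_train):
    """PCA decomposition of mean-removed training signals (rows = signals of length N)."""
    y_mean = Y_train.mean(axis=0)
    Yc = Y_train - y_mean
    # Columns of B are the orthonormal principal directions, sorted by decreasing variance.
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    B = Vt.T
    var_x = (s ** 2) / Yc.shape[0]           # diagonal of the prior covariance P of x
    return y_mean, B, var_x

def recover(y_meas, T, y_mean, B, var_x, sigma_noise):
    """Posterior (Tikhonov/MAP) estimate of the full signal from samples at indices T."""
    A = B[T, :]                               # measurement matrix: selected rows of B
    P = np.diag(var_x)                        # prior covariance of the sparse representation x
    R = (sigma_noise ** 2) * np.eye(len(T))   # measurement noise covariance
    # Kalman-type update with prior mean 0 (the training signals were mean-removed).
    S = A @ P @ A.T + R
    K = P @ A.T @ np.linalg.solve(S, np.eye(len(T)))
    x_hat = K @ (y_meas - y_mean[T])
    P_post = P - K @ A @ P
    y_hat = y_mean + B @ x_hat                # recovered waveform
    return y_hat, P_post

# Example shapes: Y_train of shape (n_signals, 300); T a list of acquired sample indices.
```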
it can be easily shown that is not only the minimum mean square error ( mse ) estimator ( see eq .( [ tikmin ] ) ) but also the maximum a posteriori ( map ) estimator , that is it should be stressed that all the information from the training set of signals ( matrix and vector ) and from the oscilloscope specification ( matrix ) are evaluated only once , at the preparation stage .thus , the sparse signal may be found , according to eq .( [ estx ] ) , as a linear combination of the previously defined parameters and a given measurement . however , the evaluation of the covariance matrix , according to eq .( [ ests ] ) , does not require the information about the measurement , and may be provided at the preparation stage .this fact opens a possibility for an estimation of the theoretical value of the recovery error .this idea will be presented in the next section . as mentioned at the beginning of the sec . 2 ,the evaluation of the recovered signal requires two steps : i ) recovery of the compact representation via eq .( [ estx ] ) and ii ) calculation of as the solution where and are derived by pca decomposition .one of the benefit of using the tr approach is that it provides an easy way to obtain the error term of the recovered signal .we assume for the sake of simplicity that in eq .( [ esty ] ) is known exactly . since the matrix is orthonormal, we have , and therefore we may focus on the recovered signal error . in multivariate statistics ,the trace of the covariance matrix is considered as the total variance .we will denote the trace of covariance matrix as .it is worth noting that is the mean value of the recovery error squared norm .let be the diagonal element of covariance matrix ( see eq .( [ covp ] ) ) .find the smallest value , and largest value ( with constraints and ) such that for each : from eq .( [ pk ] ) one may see that controls the decrease rate of : the greater , the faster the decreasing of and better the compressibility of signal .the characteristics and of the prior distribution of signal and a standard deviation of noise ( ) enable us to provide the formula for average value of the recovery error .for this purpose we formulate the following theorem : suppose that and describe the decrease rate of variances of signal according to eq .( [ pk ] ) . the signal may be recovered as the solution to eq .( [ estx ] ) with an average value of error equation ( [ sigmax ] ) enables us to estimate the number of required samples of signal to achieve a preselected mean recovery error .intuitively , the is also closely related to the compressibility of signal , and from eq . ( [ sigmax ] )one may observe that an average recovery error is inversely proportional to the constant value .the proof of the theorem is given in the appendix .in this section , we present results illustrating the proposed approach and demonstrating that the number of samples ( ) required to sense the data can be considerably less than the total number of time samples ( ) in the reference signal . we investigate the performance of the algorithm using a data set of reference signals registered in single module scintillator strip ej-230 of j - pet device . 
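Because the posterior covariance does not depend on the measured values, the expected recovery error can be tabulated at the preparation stage, for instance as in the sketch below. The random index subsets used here are placeholders for the optimized voltage-level sampling pattern; `B`, `var_x` and `sigma_noise` are assumed to come from the fitting step sketched above.

```python
import numpy as np

def expected_error(var_x, B, T, sigma_noise):
    """Square root of the trace of the posterior covariance for a given sample subset T."""
    A = B[T, :]
    P = np.diag(var_x)
    R = (sigma_noise ** 2) * np.eye(len(T))
    S = A @ P @ A.T + R
    K = P @ A.T @ np.linalg.inv(S)
    P_post = P - K @ A @ P
    return np.sqrt(np.trace(P_post))

# Tabulate the expected error as a function of the number of samples M
# (random subsets for illustration only):
# rng = np.random.default_rng(1)
# N = B.shape[0]
# for M in (3, 5, 7, 10, 15, 20):
#     T = np.sort(rng.choice(N, size=M, replace=False))
#     print(M, expected_error(var_x, B, T, sigma_noise))
```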
the scheme of the experimental setup is presented in fig .[ exper_setup ] .the 30 cm long strip was connected on two sides to the r4998 hamamatsu photomultipliers denoted as pm1(2 ) .a series of measurements was performed using collimated gamma quanta from a source placed between the scintillator strip and the reference detector .the collimator was located on a dedicated mechanical platform allowing it to be shifted along the line parallel to the scintillator strip with a submillimeter precision ., scaledwidth=70.0% ] the source was moved from the first to the second end in steps of 6 mm . at each position , about 5 000 pairs of signals from pm1 and pm2 were registered in coincidence .these signals were sampled using the serial data analyzer ( lecroy sda6000a ) with a probing interval of 50 ps . to demonstrate the recovery performance only signals from pm1 were investigated ( the procedure with signals from pm2 would be the same ) .the length of a signal was set to 15 ns , which corresponds to samples ( see fig .[ examp_008]-[examp_082 ] for details ) .we wish to make one comment about the data acquisition .the signal captured by an oscilloscope is length , where each sample is contaminated with white noise with 0 mean and variance .the simulation of measurement is then based on selecting samples according to the subset . however , in order to extract the reference , noise - free signal , the acquired samples have to be subjected to low pass filtering . in the following procedure we will need the signals and as well . since the absolute registration time has no physical meaning , we synchronize the signals in data set in such way that the fixed index number 20 corresponds to the amplitude of -0.06 v on the rising slope of each signal ( see fig .[ examp_008 ] , [ examp_026 ] , [ examp_082 ] ) .the complete data set contains more than 200 000 signal examples and was divided into two disjoint subsets : training and testing part , with a ratio 9 to 1 , respectively . in the training data set only the signals stored , while in the testing one both signals and are required .the training data set was transformed via pca into a new space according to the scheme shown in sec .the evaluated matrix , as well as the mean value signal , were saved and used in the further analysis during the signal recovery from the testing data set . in order to find the theoretical value of mean recovery error , introduced in eq .( [ sigmax ] ) , one needs to specify additionally the following parameters : ( we will investigate the error as a function of the number of samples ) . the standard deviation of the noise ( ) was estimated based on the training data set to c.a .0.015 v , which is consistent with the oscilloscope specification .the unknown parameters were found after the analysis of diagonal elements of the covariance matrix of the training data set . 
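The synchronization step, which fixes the sample at which the leading (rising) slope crosses -0.06 V to a common index, can be sketched as follows. The trigger level, target index and negative-pulse convention follow the description above; the integer, interpolation-free shift is an assumption of the sketch.

```python
import numpy as np

def align_signal(y, trigger_level=-0.06, target_index=20):
    """Shift a sampled waveform so its first crossing of trigger_level lands at target_index.

    The pulses are negative-going, so the leading edge crosses the level from a higher voltage.
    np.roll is used for simplicity; the wrap-around is harmless when the pulse sits well
    inside the 15 ns acquisition window.
    """
    below = np.where(y <= trigger_level)[0]
    if below.size == 0:
        raise ValueError("signal never reaches the trigger level")
    crossing = int(below[0])
    return np.roll(y, target_index - crossing)

# Example: y is one 300-sample waveform (50 ps spacing, 15 ns window).
```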
the smallest value and the largest value for which the condition from eq .( [ pk ] ) was met , are equal to 4.2 v and 0.33 , respectively .it should be stressed that , for a given number of samples ( ) , the expected value of in j - pet scenario would be slightly greater than for the one described by eq .( [ sigmax ] ) .the reason is that in the j - pet scenario the signals are probed in the voltage domain and hence in the case when the amplitude of the signal is smaller than the threshold level , not all the samples of the signal are acquired ( see fig .[ examp_082 ] for example ) .therefore , in order to evaluate the theoretical function of mean recovery error in the j - pet scenario , both , the values of the threshold levels as well as the distribution of signal amplitides have to be specified first . in the first step of the analysisthe distribution of signal amplitudes was investigated .the experimental cumulative distribution function ( cdf ) , based on the signals registered at all the positions along the scintillator strip , is presented in fig .the amplitudes of the signals are in the range from -0.3 v to -1.0 v. , scaledwidth=70.0% ] in order to suppress events when gamma quanta were scattered inside the patient s body , in the current pet scanners ( detecting gamma quanta based on photoelectric effect ) the energy window , typically in the range from 350 kev to 650 kev , is applied .such window suppress scattering under angles larger than 60 degrees .the j - pet detector is made of plastic scintillators which are composed of carbon and hydrogen .due to the low atomic number of these elements the interaction of gamma quanta with energy of 511 kev is predominantly due to the compton effect whereas the interaction via photoelectric effect is negligible . in order to suppress scattering in the patient through angles larger than 60 degrees , in the j - pet scanner only the signals with energy deposition larger than 200 kevwill be accepted .therefore , the signals with amplitude smaller than a -0.3 v are filtered out , and a sharp edge of the spectrum for this value is seen in fig . [ cdf ] . in the next step , based on the fully sampled signals stored in testing data set , we simulate a front - end electronic device that probes the signals at preselected number of voltage levels , both on the rising and falling slope .we carried out the experiments for different numbers of voltage levels from 2 to 15 . in each case , the level of -0.06 v on the rising slope was applied for triggering purposes , as was mentioned in sec .the remaining amplitude levels were adjusted after a simple optimization process , where the goal was to minimize the experimental mean recovery error . at each step of the optimization process , for a fixed number ( ) and values of voltage levels ,signal recovery was conducted in the following way .for each 300 signal samples from testing data set , all samples at preselected voltage levels were selected to simulate the measurement . since the amplitude of the signal may be less than certain voltage levels , not all samples had to be registered .therefore , for each processed signal , the number of acquired samples would be smaller or equal to . 
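The front-end probing in the voltage domain can be emulated on a fully sampled waveform as in the sketch below: for each threshold, at most one sample is taken on the leading edge and one on the trailing edge, and thresholds deeper than the pulse amplitude simply contribute no samples. The threshold values are those quoted in the text; everything else is illustrative.

```python
import numpy as np

def voltage_domain_samples(y, thresholds=(-0.06, -0.20, -0.35, -0.60)):
    """Indices at which a negative pulse crosses each threshold on its leading and trailing edges."""
    peak = int(np.argmin(y))                      # most negative sample = pulse maximum amplitude
    indices = []
    for level in thresholds:
        lead = np.where(y[:peak + 1] <= level)[0]
        trail = np.where(y[peak:] <= level)[0]
        if lead.size:                             # threshold reached: one sample per edge
            indices.append(int(lead[0]))
        if trail.size:
            indices.append(int(peak + trail[-1]))
    return sorted(set(indices))

# For a pulse of -0.5 V amplitude the -0.60 V threshold yields no samples,
# so the effective number of samples is smaller than 2 * len(thresholds).
```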
in order to remove the mean value from the measurement ,the corresponding values of signal were subtracted from signal samples from the oscilloscope .the measurement matrix was formed from the proper rows of matrix .the signal was recovered using eq .( [ estx ] ) , and finally the signal was derived as the linear solution of eq .( [ esty ] ) . with optimized values of voltage levels ,theoretical and experimental curves describing the mean recovery error as a function of the number of samples ( ) in the j - pet scenario are evaluated and shown in fig .[ theor_error ] . as a function of the acquired samples ( ) .meaning of the curves is described in the text .[ theor_error ] , scaledwidth=70.0% ] an empirical mean value of is marked with a solid blue line in fig .[ theor_error ] and is very similar to the expected , theoretical characteristic that takes into account the distribution of amplitudes and optimized values of voltage levels ( solid green line ) .the difference between those two functions is larger for small values of ( about of ) and almost negligible for greater numbers of samples .however , both of these functions differ significantly from the theoretical characteristic of , calculated according to eq .( [ sigmax ] ) , marked with dashed green line in fig .[ theor_error ] . in the followingwe will investigate only the case with a four - level measurement , which is of most importance since the currently developed front - end electronic allows one to probe the signals at four fixed - voltage levels .it is evident that this comparison of results may be performed in the same way for all values of .the optimized values of the four voltage levels are : -0.06 , -0.20 , -0.35 and -0.60 v. since , the index of the sample taken at the voltage level of -0.06 v at the rising slope is common for all signals , the effective number of simulated samples at rising and falling edge is equal to . from fig .[ theor_error ] , the theoretical value of the average recovery error for is c.a .0.173 v ( dashed green line ) .however , based on the experimental distribution of amplitudes of the signals , presented in fig .[ cdf ] , only for about of signals would all samples from four thresholds be available ( amplitudes larger than -0.60 v ) .moreover , for signals with amplitudes in the range from -0.35 v to -0.60 v ( about of signals ) , the effective number of samples is equal to 5 and the theoretical value of increases to 0.228 v . 
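The way the J-PET expectation is assembled above, as an amplitude-weighted average of the error for different effective sample counts, can be written down directly. The fractions below are hypothetical placeholders, not the measured ones; the per-class errors are the theoretical values quoted in the text.

```python
# Expected recovery error when the effective number of samples depends on the pulse amplitude.
# The "fraction" entries are illustrative placeholders; in the analysis they come from the
# measured amplitude distribution, while "sigma" comes from the theoretical sigma(M) curve.
amplitude_classes = [
    {"fraction": 0.60, "effective_samples": 7, "sigma": 0.173},  # amplitude beyond all thresholds
    {"fraction": 0.30, "effective_samples": 5, "sigma": 0.228},  # deepest threshold not reached
    {"fraction": 0.10, "effective_samples": 3, "sigma": 0.346},  # only two thresholds reached
]

expected_sigma = sum(c["fraction"] * c["sigma"] for c in amplitude_classes)
print(round(expected_sigma, 3))   # a single number comparable with the experimental mean error
```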
for the rest of the considered signals , with amplitudes in the range from -0.30 v to -0.35 v ( about of signals ) , the effective number of samples is equal to 3 and the theoretical value of is 0.346 v .finally , the expected mean value of in the j - pet scenario for four voltage levels is equal to c.a .0.227 v and is much more comparable with the experimental value ( equal to c.a .0.264 v ) than the theoretical value for 7 samples .the analysis of the characteristic of allows us to indicate the proper number of samples needed .the function is approximately proportional to but , due to the logaritmic factor ( see eq .( [ sigmax ] ) ) , it drops rapidly until reaches the value of about 10 .further increase in the number of samples does not provide any significant improvement in the signal recovery .this is very important information since the currently developed front - end electronic enable one to probe the signals at four fixed - voltage levels , providing eight time values for each signal .the distribution of the recovery error evaluated using all signals from the testing data set for optimized values of four voltage levels is shown in fig .[ dist_error ] ., scaledwidth=70.0% ] from the empirical characteristics of one may see that the recovery error is concentrated between 0 and 0.4 v with the tail reaching the value 1.5 v . as was shown in fig .[ theor_error ] , the mean value in the experiment is equal to c.a .0.264 v .in addition , the standard deviation and the median of a probability distribution of a recovery error are equal to c.a . 0.192 v and 0.206 v , respectively .the three signal recovery examples , with small , medium and large recovery error , are shown in fig .[ examp_008 ] , [ examp_026 ] and [ examp_082 ] , respectively . .[ examp_008 ] , scaledwidth=70.0% ] .[ examp_026 ] , scaledwidth=70.0% ] .[ examp_082 ] , scaledwidth=70.0% ] the values of the signal recovery errors in fig .[ examp_008 ] to [ examp_082 ] are as follow : 0.082 , 0.266 , 0.814 v . as expected , the worst situation takes a place when the amplitude of the signal is slightly below the selected threshold level ( see fig .[ examp_082 ] ) or where it is much larger than the highest sampling voltage . in our sampling schemethe highest recovery error occurs for signal amplitudes in the range from -0.55 to -0.6 v and from -0.95 to -1 v ( where -1 v corresponds to the maximum amplitude , see fig .[ cdf ] ) .unfortunately , there is no possibility to overcome these phenomena when only a few samples of the signal are measured . on the other hand, it can be seen that the mean value of the error is on an acceptable level . in a typical situationthe signal is recovered quite accurately ( see fig . [ examp_026 ] ) . although , the experimental and theoretical functions describing the recovery errors in the j - pet scenario , presented in fig . [ theor_error ] , are largely consistent , there are at least two aspects of the method that need to be investigated : \1 ) the assumption about the prior mvn distribution of signals ( see eq .( [ xdist ] ) ) , which has an impact on the difference bewteen the values of the errors , \2 ) the evaluation of the empirical values of as a function of the size of training set of signals .in order to verify the assumption about the normality of signals , we have used the kolmogorov - smirnov test on each of principal components in the training data set , evaluated according to eq .( [ xay ] ) . 
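The normality check on the principal components can be reproduced with a standard Kolmogorov-Smirnov test, as in the hedged sketch below; the coefficient matrix `X_train` and the number of tested components are assumptions of the example.

```python
import numpy as np
from scipy.stats import kstest

def test_component_normality(X_train, n_components=20, alpha=0.05):
    """KS test of normality for each PCA coefficient, with mean/std estimated from the sample."""
    results = []
    for k in range(min(n_components, X_train.shape[1])):
        col = X_train[:, k]
        mu, sd = col.mean(), col.std(ddof=1)
        stat, p_value = kstest(col, 'norm', args=(mu, sd))
        results.append((k, stat, p_value, p_value < alpha))   # True = normality rejected
    return results

# X_train: matrix of PCA coefficients of the training signals (rows = signals, columns = components).
```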
in each dimension , the mean value as well as the standard deviation were estimated for all the samples .the significance level used in this study was 0.05 .the hypothesis , regarding the normal distribution form , was rejected only for the first principal component that holds about of the signal energy . however , in that case the calculated value from the statistical test was not significantly higher than the critical value . from this analysis one infers that the signals stored in matrix are not exactly normally distributed .this fact may contribute to the difference between the theoretical and empirical values of .it should be stressed that all the informations needed to recover the signal ( matrix and vector - see sec . 2.2 for details ) , and therefore the empirical values of , were evaluated based on large set of about 200 000 training signals . in the followingwe will analyze the influence of the size of the training set of signals on the value of the signal recovery error .we conduct the experiment for a wide range of number of signals in the training set from 50 to 200 000 . in each casewe investigate only the four - level measurement ( ) .the results of the analysis of the empirical values of as a function of the size of the training set of signals are shown in fig .( [ trainingset ] ) . on the average recovery error .[ trainingset ] , scaledwidth=70.0% ] the parameters of the model that were used for the signal recovery in the study described in sec .3.2 were based on 200 000 training signals , which corresponds to the value of the empirical recovery error of 0.264 v . from fig .( [ trainingset ] ) one may observe that reducing the number of training signals down to about 10 000 does not influence the quality of the recovery of signal ; the error is almost constant in that range .however , for smaller number of traning signals , the error increases rapidly and the recovery of the signal becomes increasingly less accurate . in this section, we will incorporate the method for hit - position reconstruction , described in ref . , in order to evaluate a position resolution of the j - pet scanner with fully recovered signals .we will compare the spatial resolutions obtained from the original raw - signal ( 300 samples ) to those from the compressed signal ( e.g. 8 samples ) .we have carried out experiments with numbers of voltage levels from 2 to 15 , which corresponds to the number of samples from 3 to 29 . for a single event of gamma quantum interaction along the scintillator strip , a pair of signals at two photomultipliers is measured in a voltage domain .next , the signals are recovered according to the description in sec . 2 , andfinally , an event is represented by a 600-dimensional vector . for a fixed number of voltage levels a two - step procedure of the position reconstructionwas performed .first , the scintillator s volume was discretized and for each bin a high statistics set of reference 600-dimensional vectors was created .the objective of the second part of the procedure is to classify the new event to one of the given sets and hence determine the hit position . for more details about conducting the experiment of hit position reconstruction ,the reader is referred to ref . .we have conducted the test on the same data set and under the same conditions as described in ref . 
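The classification step of the hit-position reconstruction is described only by reference to earlier work; as a stand-in, the sketch below assigns an event to the bin whose reference vectors are, on average, closest to it. This simple nearest-mean rule is an assumption of the example and is not necessarily the measure used in the cited reconstruction method.

```python
import numpy as np

def reconstruct_hit_bin(event_vector, reference_sets):
    """Assign a 600-dimensional event vector to the spatial bin with the closest reference mean.

    reference_sets: dict mapping bin position (e.g. in cm) -> array of reference vectors
                    of shape (n_ref, 600) collected for that bin.
    """
    best_bin, best_dist = None, np.inf
    for position, refs in reference_sets.items():
        centroid = refs.mean(axis=0)
        dist = np.linalg.norm(event_vector - centroid)
        if dist < best_dist:
            best_bin, best_dist = position, dist
    return best_bin

# The spatial resolution is then estimated from the spread of
# (reconstructed bin position - true collimator position) over the test events.
```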
, where the spatial resolution was reported to be equal to 1.05 cm ( ) .the spatial resolutions derived from the recovered signals as a function of the number of samples included in the recovery process are shown in fig .[ spatial ] . ) . [ spatial ] , scaledwidth=70.0% ] in fig .[ spatial ] only the region for small , from 3 to 29 , is shown , but it has to be stressed that the spatial resolution derived from the original raw - signal ( 300 samples ) is equal to 0.933 cm ( ) and is almost the same as for =29 . for the most interesting case , with four voltage levels ( =7 ) , the spatial resolution is slightly worse than for the fully sampled signal and is equal to 0.943 cm ( ) . on the other hand , even in that casethe spatial resolution is about 0.1 cm better in comparison to the one evaluated based on signals in the voltage domain alone .in this paper a novel scheme of recovery of signals generated in plastic scintillator detectors in the j - pet scanner was introduced .the idea of signal recovery is based on the tikhonov regularization theory , that uses the training data set of signals . in these studies we assumed that training signals come from a mvn distribution .the compact representation of these signals was provided by the pca decomposition .one of the most important aspect of our work considers a statistical analysis of an error level of recovered signals . in this work a dependence of the signal recovery error on the number of samples taken in the voltage domain was determined .it has been proven that an average recovery error is approximately inversely proportional to the number of samples and inversely proportional to the decrease rate of variances in the covariance matrix . in the experimental section, the method was tested using signals registered by means of the single detection module of the j - pet detector .it was shown that the pca basis offers high level of information compression and an accurate recovery may be achieved with just 8 samples for each signal waveform .it is worth noting that the developed recovery scheme is general and may be incorporated in any other investigation where a prior knowledge about the signals of interest may be utilized . in the experimental sectionwe have demonstrated that using the recovered signals improves the hit - position reconstruction . in order to evaluate a position resolution of the j - pet scanner with fully recovered signals ,we have incorporated the method for hit - position reconstruction , described in ref . . 
in the cited work , the spatial resolution evaluated on the same data set and under the same conditions , based on 8 samples in voltage domain , without a recovery of the waveform of the signal ,was reported to be equal to about 1.05 cm ( ) .our experiment shows that the application of an information from four voltage levels to the recovery of the signal waveform can improve the spatial resolution to about 0.94 cm ( ) .moreover , the obtained result is only slightly worse than the one evaluated based on all 300 samples of the signals waveform .the spatial resolution calculated under these conditions is equal to about 0.93 cm ( ) .it is very important information since , limiting the number of threshold levels in the electronic devices to four , leads to a reduction in the cost of the pet scanner .future work will address a development of the more advanced method to define the hit - position and event time for annihilation quanta in the j - pet detector based on the recovered information .we believe that , with fully recovered signals , there is still scope for improvement in the time and position resolution of the j - pet scanner . in order to prove the theorem we assume , for the sake of simplicity , that the matrix has normally distributed elements with zero means and variances .these values of the parameters of normal distribution ensure that the matrix is orthonormal .hence , based on eq .( [ ests ] ) , the matrix is given by : the is equal to the trace of the matrix and hence : the sum in the last term in eq . ( [ approxs ] ) may be approximated by a definite integral . in the followingwe will use for the calculations a basic , rectangle rule , and : where . at the verybeginning we assumed that the function has the form : ( see eq .( [ pk ] ) ) .we will perform the integration using the substitution . without any significant loss of precision, we change the integration limits from ] .the calculations of the integral will be as follow : and thus acknowledge technical and administrative support of t. gucwa - ry , a. heczko , m. kajetanowicz , g. konopka - cupia , j. majewski , w. migda , a. misiak , and the financial support by the polish national center for development and research through grant innotech - k1/in1/64/159174/ncbr/12 , the foundation for polish science through mpd programme , the eu and mshe grant no .poig.02.03.00 - 161 00 - 013/09 , doctus - the lesser poland phd scholarship fund , and marian smoluchowski krakw research consortium `` matter - energy - future '' .we are grateful to prof .colin wilkin for correction of the manuscript .99 g. l. brownell , w. h. sweet , nucleonics 11 ( 1953 ) 40 j. s. robertson et al . , tomographic imaging in nuclear medicine ( 1973 ) 142 d. l. bailey , positron emission tomography : basic sciences .springer - verlag , nj ( 2005 ) .j. s. karp et al . , journal of nuclear medicine 49 ( 2008 ) 462 .d. j. kardmas et al . ,journal of nuclear medicine 50 ( 2009 ) 1315 .w. w. moses , s. e. derenzo , ieee transactions on nuclear science ns-46 ( 1999 ) 474 .p. moskal et al . , bio - algorithms and med - systems 7 ( 2011 ) 73 ; [ arxiv:1305.5187 [ physics.med-ph ] ] p. moskal et al ., nuclear medicine review 15 ( 2012 ) c68 ; [ arxiv:1305.5562 [ physics.ins-det ] ] .p. moskal et al ., nuclear medicine review 15 ( 2012 ) c81 ; [ arxiv:1305.5559 [ physics.ins-det ] ] .p. moskal et al . , radiotheraphy and oncology 110 ( 2014 ) s69 .q. xie et al ., ieee transactions on nuclear science 56 ( 2009 ) 2607 .h. kim et al . 
,nuclear instruments and methods in physics research section a 602 ( 2009 ) 618 .d. xi et al . , ieee nuclear science symposium and medical imaging conference ( ieee nss / mic ) ( 2013 ) 1 .m. paka et al . ,bio - algorithms and med - systems 10 ( 2014 ) 41 ; [ arxiv:1311.6127 [ physics.ins-det ] ] .l. raczyski et al ., nuclear instruments and methods in physics research section a 764 ( 2014 ) 186 ; [ arxiv:1407.8293 [ physics.ins-det ] ] .p. moskal et al ., nuclear instruments and methods in physics research section a 764 ( 2014 ) 317 ; [ arxiv:1407.7395 [ physics.ins-det ] ] .p. moskal et al ., nuclear instruments and methods in physics research section a 775 ( 2015 ) 54 ; [ arxiv:1412.6963 [ physics.ins-det ] ] .a. tikhonov , soviet math. dokl . 4 ( 1963 ) 1035 .a. tikhonov , v. arsenin , solutions of ill - posed problems , winston and sons , washington , d.c .e. candes , j. romberg , t. tao , ieee transaction on information theory 52 ( 2006 ) 489 .d. donoho , ieee transaction on information theory 52 ( 2006 ) 1289 .s. mallat , z. zhang , ieee transaction on signal processing 41 ( 1993 ) 3397 .s. chen , d. donoho , m. saunders , siam journal of scientific computing 20 ( 1998 ) 33 .e. candes , t. tao , ieee tranaction on information theory 51 ( 2005 ) 4203 .e. candes , j. romberg , t. tao , communication on pure and applied mathematics 59 ( 2006 ) 1207 .j. hadamard , lectures on cauchy s problem in linear partial differential equations , yale university press , new haven ( 1923 ) .p. hansen , siam journal sci .. comput . 11 ( 1990 ) 503 .p. hansen , t. sekii , h. shibahashi , siam journal sci .comput . 13( 1992 ) 1142 .d. p. berrar et al . , a practical approach to microarray data analysis , kluwer academic publishers , boston ( 2002 ) .a. mahalanobis , journal of physics : conference series 139 ( 2008 ) 012031 .p. hansen , rank - deficient and discrete ill - posed problems .numerical aspects of linear inversion , siam , philadelphia ( 1997 ) .r. kalman , transaction of the asme journal of basic engineering ( 1960 ) 35 .h. sorenson , ieee spectrum 7 ( 1970 ) 63 .eljen technology http://www.eljentechnology.com .hamamatsu http://www.hamamatsu.com .humm et al . , european journal of nuclear medicine and molecular imaging 30 ( 2003 ) 1574 .
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The discussed detector offers improvement of the time-of-flight (TOF) resolution due to the use of fast plastic scintillators and dedicated electronics allowing for sampling, in the voltage domain, of signals with durations of a few nanoseconds. In this paper we show that recovery of the whole signal, based on only a few samples, is possible. In order to do that, we incorporate the training signals into the Tikhonov regularization framework and perform the principal component analysis decomposition, which is well known for its compaction properties. The method yields a simple, closed-form analytical solution that does not require iterative processing. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be derived easily. This is the key to introducing and proving the formula for calculating the signal recovery error. We show that the average recovery error is approximately inversely proportional to the number of acquired samples. `Tikhonov regularization`, `compressed sensing`, `positron emission tomography`, `J-PET`
This report is a collection of comments on the read paper of , to appear in the Journal of the Royal Statistical Society Series B, along with a reply from the authors.
this note summarizes the statistical methods used so far by the atlas collaboration for setting upper limits or establishing a discovery , and includes the recommended approaches for future analyses , as recently agreed in the context of the atlas statistics forum .the recommendations aim at achieving a better uniformity across different physics analyses and their ultimate goal is to improve the sensitivity to new phenomena , while keeping robustness as a fundamental request .the best way to be safe against false discoveries is to compare the results obtained using at least two different methods , at least when one is very near the `` five sigma '' threshold which is required in high - energy physics ( hep ) to claim a discovery .one recommended method is explained in this paper ( section [ sec - recomm ] ) .we focus here on the searches for some type of `` signal '' in a sample of events dominated by other ( `` background '' ) physical sources .the events are the output of a particle detector , filtered by reconstruction algorithms which compute high - level features like an `` electron '' or a `` jet '' .large use of simulated samples is required to tune calibrations , characterize the event reconstruction , and compare the outcome of an experiment with the theoretical models .a typical simulation consists of few different steps .first , one needs to simulate the result of the primary particle interaction with the help of an `` event generator '' .usually , only one specific process of interest is considered ( e.g. higgs production with a specific channel ) and saved to disk , allowing the physicist to study a well defined type of `` signal '' .different monte carlo ( mc ) productions are then organized to obtain a set of processes which , depending on the analysis , can be considered either signal or background .the next step is to simulate the effects of the passage of the produced particles ( and their decay products ) through the detector .this requires the knowledge of the ways energy is deposited in each material and defines the `` tracking '' of the simulated particles up to the point in which they decay , leave the detector or stop .finally , the detector response is simulated : for each energy deposition into an active material , another mc process produces the electronic signal .the latter is processed in a way which closely follows the design of the front - end electronics , obtaining the simulated detector output in the same format as the data coming from the real detector .statistical uncertainties arise from fluctuations in the energy deposition in the active materials and from the electronic noise .systematics due to the limited knowledge of the real detector performance and to the details of the offline reconstruction also contribute to the final uncertainty and need to be addressed case by case .finally , theoretical uncertainties in the physical models need also to be accounted for . 
in general , the differences among the event generators can not be treated as standard deviations , because one usually has just two or three available generators .hence they should not be summed in quadrature but treated separately .section [ sec - notation ] summarizes the statistical aspects relevant to our problems and defines some notation .the methods applied in past atlas publications are reviewed in section [ sec - past - methods ] , while section [ sec - recomm ] focuses on the methods which can be used in future analyses .in hep we deal with hypothesis testing when making inferences about the `` true physical model '' : one has to take a decision ( e.g. exclusion , discovery ) given the experimental data . in the classical approach proposed by fisher, one may decide to reject the hypothesis if the _ -value _ , which is the probability of observing a result at least as extreme as the test statistic in the assumption that the null hypothesis is true , is lower than some threshold . in the search for new phenomena ,the -value is interpreted as the probability to observe at least as many events as the outcome of our experiment in the hypothesis of no new physics .alternatively , one may convert the -value into the _ significance _ , which is the number of gaussian standard deviations which correspond to the same right - tail probability : .the function is the quantile of the normal distribution , expressed in terms of the inverse error function .a -value threshold of 0.05 corresponds to and is commonly used in hep for setting upper limits ( or one - sided confidence limits ) with 95% confidence level . on the other hand , it is customary to require at least a `` five sigma '' level ( i.e. ) in order to claim for a discovery of a new phenomenon ( if one usually says only that the data suggest the evidence for something new ) .it is also common to quantify the sensitivity of an experiment by reporting the expected significance under the assumption of different hypotheses .another possible approach , suggested by neyman and pearson , is to compare two alternative hypotheses ( if the null hypothesis is the main focus of the analysis and no other model is of interest , the alternative can be defined as the negation of ) . in this case ,two figures of merit are to be taken into account : * the _ _ size _ _ is also known as `` significance level '' of the test .we do not use this terminology to avoid confusion with the significance defined above . ] of the test , which is the probability of incorrectly rejecting in favour of when is true . is also the false positive ( or `` type i error '' ) rate ; * the _ power _ of the test , which is the probability of correctly rejecting in favour of when is false . is the probability of failing to reject a false hypothesis , i.e. the false negative ( or `` type ii error '' ) rate . in the bayesian approach ,one always compares two ( or more ) different hypotheses . 
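The p-value-to-significance conversion used throughout can be evaluated with the normal quantile function; a minimal sketch, with the standard thresholds as a check:

```python
from scipy.stats import norm

def significance(p_value):
    """Number of Gaussian standard deviations with the same right-tail probability."""
    return norm.isf(p_value)          # inverse survival function, Z = Phi^{-1}(1 - p)

def p_value(z):
    return norm.sf(z)                 # right-tail probability of a standard normal

print(round(significance(0.05), 2))   # ~1.64: the threshold used for 95% CL upper limits
print(p_value(5.0))                   # ~2.9e-7: the "five sigma" discovery convention
```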
in order to take the decision among the alternatives, one can look at the posterior odds or at the ratio of the marginal likelihoods ( or `` bayes factor '' ) .the former are always well defined and take into account the information accumulated with the performed experiment in the light of the existing prior information , whereas the latter is often very difficult to compute and may even be ill - defined in some problem ( for example when comparing two models in which one of the priors is improper ) , although it does not depend on the prior knowledge about the hypothesis under consideration .the decision is taken in favour of the hypothesis which maximizes the chosen ratio , though the particular value of the latter can can suggest a weak , mild or strong preference for that hypothesis . in this note, we address two problems , exclusion and discovery , for which the notation is different and sometimes misleading , as illustrated below .for this reason , in the rest of the paper we will speak about the `` signal plus background '' ( sig+bkg or ) hypothesis and about the `` background only '' ( bkg or ) hypothesis , without specifying which is the null hypothesis . in the discovery problem , the null hypothesis describes the background only , while the alternative describes signal plus background . in the classical approach ,one first requires that the -value of is found below the given threshold ( in hep one requires ) . if this condition is satisfied , one looks for an alternative hypothesis which can explain well the data . in the exclusion problem ,the situation is reversed : describes signal plus background while the alternative hypothesis describes the background only . in the classical approach ,one just makes use of the null hypothesis to set the upper limit , though this will exclude with probability parameters values for which one has little sensitivity , obtaining `` lucky '' results .historically , this problem has been first addressed in the hep community by the cl method , whose approach is to reject the sig+bkg hypothesis if cls .cls is a ratio of -values which is commonly use in hep , and one can find a probabilistic interpretation if certain asymptotic conditions are met .another possibility being discussed by atlas physicists is to construct a power constrained upper limit ( pcl ) by requiring that two conditions hold at the same time : ( 1 ) the -value is lower than the chosen threshold , and ( 2 ) the power of the test is larger than a minimum value chosen in advance .so far , different atlas analyses used different approaches . converging takes time and is not always possible nor necessarily good , the main reason probably being that different uncertainties are addressed in different ways .whenever possible , the background is estimated from data .still , one has to extrapolate to the signal region and this requires the knowledge of the shape , and hence depends on the simulation .in addition , in many cases signal and control regions should be treated at the same time : systematics affect both signal and background and often it is impossible to find a signal free region . 
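For a single-bin counting experiment the CLs construction reduces to a ratio of Poisson tail probabilities; the sketch below is a textbook illustration with made-up numbers, not an ATLAS recommendation.

```python
from scipy.stats import poisson

def cls(n_obs, s, b):
    """CLs for a counting experiment: P(n <= n_obs | s+b) / P(n <= n_obs | b)."""
    cl_sb = poisson.cdf(n_obs, s + b)     # tail probability under the signal-plus-background hypothesis
    cl_b = poisson.cdf(n_obs, b)          # corresponding tail probability under background only
    return cl_sb / cl_b

# Hypothetical numbers: 3 events observed, 4.0 expected from background.
# A signal hypothesis s is excluded at 95% CL when CLs < 0.05.
for s in (1.0, 3.0, 5.0, 7.0):
    print(s, round(cls(n_obs=3, s=s, b=4.0), 3))
```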
finally , in most cases the background is composed of several contributions which are independently simulated but are not really independent : systematic effects act on all of them , making things more and more complicate .accounting simultaneously for systematic effects on different components is now possible thanks to histfactory , a root tool for a coherent treatment of systematics based on roofit / roostats , initially developed by k. cranmer and a. shibata .first used in the top group , histfactory is now being adopted also by other atlas groups . searches for new physics ( for example , higgs searches ) often start by looking for a `` bump '' in a distribution which is dominated by the background .when the location of the bump is not know , the search is typically repeated in different windows , decreasing the sensitivity . in the atlas dijet resonance search ,a tool for systematic scans with different methods has been applied : bumphunter , developed g. choudalakis .the program makes a brute force scan for all possible bump locations and widths , achieving a very good sensitivity , and is appropriate when the bump position and/or width are not known .a hybrid bayesian - frequentist approach has been used by the lep and tevatron higgs working groups and is also used in atlas higgs searches .all or some nuisance parameters ( modeling systematic effects ) are treated in the bayesian way : a prior is defined for each parameter which is integrated over . on the other hand , for the parameters of interest the frequentist approachis followed , computing -values and constructing confidence intervals .histfactory can be used also with this approach , supporting normal , gamma and log - normal priors for nuisance parameters . in the higgs combination chapter in the atlas ``csc book '' , the statistical combination of sm higgs searches in 4 different channels ( using mc data ) was performed with roofit / roostats in the frequentist approach : systematics have been incorporated by profile likelihood .each search was performed with a fixed mass and repeated for different values , and the limits have been interpolated .many lessons have been learned and the statistical treatment has been refined since then , culminating in the recommended frequentist method explained in section [ sec - recomm - freq ] below .if possible , one may consider using more than a single approach in searches for new phenomena : if they agree , one gains confidence in the result ; if they disagree , one must understand why ( possibly finding flaws in the analysis ) .this becomes expecially important when the obtained sensitivity is close to the minimum limit for discovery .section [ sec - recomm - freq ] below summarizes the recently proposed frequentist approach which is being recommended for all atlas analyses .a possibility is to test the result of the frequentist approach with a bayesian method .the current dicussion about the bayesian approach is summarized in section [ sec - recomm - bayes ] , but at present there is no official atlas recommendation about it .the problem is formulated by stating that the expected number of events in bin is the sum of two separate contributions , a background expectation of events and a signal contribution given by the product of an intensity parameter with the expected number of signal events . for discovery , we test the background - only hypothesis .if there is no significant evidence against such hypothesis , we set an upper limit on the magnitude of the intensity parameter . 
the reader will find a full treatment of the recommended method in ref .very shortly , the profile likelihood is used to construct different statistics for testing the alternative bkg and sig+bkg hypotheses . in the asymptotic regime, confidence intervals can be found analytically using such statistics , and the resulting expressions can be used to define approximate intervals for finite samples .asymptotically , the maximum likelihood estimate is gaussian distributed about the true value with standard deviation which can be found numerically by means of the `` asimov dataset '' , defined as the mc sample which , when used to estimate all parameters , gives their true values . in case of exclusion ,the approximate upper limit ( with its uncertainty ) is . in case of discovery ,in which one assumes , the median significance is = \sqrt{2 [ ( s+b ) \ln(1 + s / b ) -s ] } \ ; , \ ] ] which is the recommended formula for a counting experiment by the atlas statistics forum when estimating the sensitivity for discovery . in the bayesian approach ,the full solution to an inference problem about the `` true physical model '' , which is responsible for the outcome of an experiment , is provided by the posterior probability distribution of the parameter of interest .typically , there are several nuisance parameters which model systematic effects or uninteresting degrees of freedom . in order to obtain the marginal posterior probability distribution as a function only of the parameter of interest, one has to integrate over all nuisance parameters .this marginalization procedure contrasts with the frequentist approach based on the profile likelihood , in which the nuisance parameters are fixed at their `` best '' values .prior probabilities need to be specified for all parameters and should model our knowledge about the effects which they refer to . quite often, one does not want to encode a precise model into the prior or does not assume any relevant prior information . in this case, uniform densities are commonly preferred for computational reasons , but they are often misinterpreted as `` non - informative '' priors , which is not the case .for example , a uniform density is no more flat , when considered as a function of the logarithm of the given parameter . when attempting to make an `` objective '' inference , least - informative priors should be used instead .they can be defined , as in the case of the reference priors , as as the ones which maximize the amount of missing information .reference priors are invariant under reparametrization , are known ( and often identical to jeffreys priors ) for most common one - dimensional problems in hep , and can also be used to test the dependence of the result from the choice of the prior .when dealing with discovery or exclusion in the bayesian approach , one has to make a choice between two alternative hypotheses : background only ( ) and signal plus background ( ) .comparing the posterior probabilities is the best way to account for the whole amount of information provided by the experiment in the light of the previous knowledge .although values of for the posterior odds are interpreted as a strong preference , no widespread agreement exists in the hep community about a minimum threshold for claiming a discovery .- value and follow the `` five sigma '' rule mentioned above . 
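The recommended counting-experiment sensitivity formula can be coded in a few lines and compared with the familiar s/sqrt(b) approximation; the (s, b) pairs below are placeholders.

```python
import numpy as np

def median_discovery_significance(s, b):
    """Asymptotic median significance for rejecting the background-only hypothesis."""
    return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

# For s << b the formula approaches the naive s/sqrt(b); for s comparable to b it is smaller.
for s, b in [(1.0, 100.0), (10.0, 100.0), (5.0, 1.0)]:
    print(s, b, round(median_discovery_significance(s, b), 3), round(s / np.sqrt(b), 3))
```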
] in order to check the impact of the assumptions made before performing the experiment on the final decision , it is also useful to compare the posterior odds against the prior odds ( defined as the ratio of prior probabilities for and , whenever this is well defined ) .this note summarizes the statistical approaches used in the past atlas analyses and the current ongoing efforts to provide uniformity of statistical treatment across all analyses .guidelines for estimating the sensitivity with a frequentist method based on profile likelihood ratio have been recently formalized . in this approach , which is recommended for all atlas analyses ,all nuisance parameters are fixed at their best values and a single mc sample ( the asimov dataset ) can be used to find the numerical values of the interesting statistics .the bayesian approach can also be considered in the analysis , although no official atlas recommendation has been made yet about the best method . in general , the prior densities should be chosen in the way which best models our prior knowledge of the model .whenever one wants to minimize the impact of the choice of the prior on the result , one should be aware that flat priors are to be considered informative .on the other hand , least - informative priors can be defined for all common hep problems and have very appealing properties . in the bayesian approach, the treatment of systematics is different from the recommended frequentist method , because the whole range of each nuisance parameter is considered in the marginalization .hence , the comparison between the two approaches may be helpful , expecially near the sensitivity threshold for discovery .
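For two simple hypotheses in a counting experiment, the Bayes factor and posterior odds discussed above take an elementary form; the event counts, expectations and prior odds in this sketch are hypothetical.

```python
from scipy.stats import poisson

def posterior_odds(n_obs, s, b, prior_odds=1.0):
    """Posterior odds of signal-plus-background versus background only, for fixed s and b."""
    bayes_factor = poisson.pmf(n_obs, s + b) / poisson.pmf(n_obs, b)
    return bayes_factor * prior_odds

# Hypothetical counting experiment: 12 events observed, b = 5 expected, s = 5 hypothesized.
print(round(posterior_odds(n_obs=12, s=5.0, b=5.0), 2))
```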
The statistical methods used by the ATLAS collaboration for setting upper limits or establishing a discovery are reviewed, as they are fundamental ingredients in the search for new phenomena. The analyses published so far have adopted different approaches, choosing a frequentist, a Bayesian, or a hybrid frequentist-Bayesian method to perform a search for new physics and set upper limits. In this note, after introducing the necessary basic concepts of statistical hypothesis testing, a few recommendations are made about the preferred approaches to be followed in future analyses.
the flooding and drying of a fluid on ground is a problem which has several applications in fluid mechanics such as coastal engineering , artificial lake filling , river overflow or sea intrusion on ground due to a tsunami .these works involves the modelling of the physical process near the triple contact line between the liquid ( the water ) , the gas ( the atmosphere ) and the solid ( the ground ) .the numerical simulation of this problem is very complex , due to the domain deformation and the triple contact line movement .some numerical approaches to solve this problem have been previously explored .for example , o. bokhove uses a conservative form of the shallow water equations associated to the discontinuous galerkin discretization .d. yuan et al . propose to use a non conservative form of the shallow water equations in total flow rate formulation and uses a finite difference scheme for the numerical computation . in this paper , we use a two dimensional viscous shallow water model in eight velocity formulation .this model , called stokes - like by p.l .lions ( , section 8.3 ) is , in some sense , intermediate between the semi - stationary model and the full model of compressible isentropic navier - stokes model .this model can be obtained by integrating vertically the so - called primitives equations of the ocean with some hypothesis on the viscous terms .we have chosen this model because we can prove the existence of a solution under certain considerations as the smoothness of the initial conditions and an acceptable hypothesis on a boundary operator .after the presentation of the model we describe the numerical method based on the demonstration of the convergence proposed in . finally , we present some numerical examples in two idealized domains and we give some perspectives for other studies .we assume that we know a continuous function from to that represents the topography ( the level of the ground with respect to a reference level ) .we note ( an open simply connected set of ) the horizontal domain occupied by the fluid at time and the elevation of the fluid compared to a horizontal zero level .we set the eight of the water column , .the shallow water equations are based on a depth integration of an incompressible fluid conservation laws in a free surface - three dimensional domain .governing equations for and can be obtained in the usual way ( for example ) and the two - dimensional system can be written as follows : } \{ t\ } \times \omega_t \label{eqqm } \\ & \frac{\partial \eta}{\partial t } + { \rm div } ( uh)= 0,\qquad { \rm in } \bigcup_{t\in [ 0,t ] } \{ t\ } \times \omega_t \label{eqm}\end{aligned}\ ] ] where is the gravity constant , the viscosity coefficient , a drag coefficient . represents external forcing ( for example the wind stress ) .these equations are completed by initial conditions ( ) and boundary conditions . 
in the following ,we assume that the boundary of the real domain is non - vertical .hence the variation of the surface elevation at the boundary imposes a modification of the horizontal domain occupied by the fluid .in many geophysical applications , the governing equations are solved in a domain \times\omega^3_t ] , the velocity of the moving mesh can be defined by solving the following problem : and we consider the following mapping where }(\omega_t\times\{t\})$ ] .the relation between the different domains is given by the mapping where is the characteristic curve from to corresponding to the velocity in the space - time domain .for each time , we denote by and the ale velocity and the ale thickness : with these notations , we can write the ale formulation of the problem in what follows , we will use a first order time discretization to solve this problem ( see for more details about second order schemes ) .let , we note as the time step . for , , we set with boundary and in order to ensure the positivity of , the continuity equation is renormalised as follows with this renormalisation, we do not have conservation of the mass , but when we follow the evolution of the mass during a simulation ( e.g. in the first test presented in the following section ) , only small variations of the mass quantity are observed .we denote by ( respectively and ) the approximation of the exact solution ( respectively and ) . then , we note and with the characteristic curve solution of : in our study , we approximate the foot of the characteristic by we set ( resp . ) the approximation of ( resp . . approximating the lagrangian derivative by a first order euler scheme , and using a linearised drag operator ,we obtain the following : implicitly , taking into account the equation [ eqdm1 ] , we have .this is an approximation at the first order in time of the initial shallow water problem .this problem is solved by a fixed point technic . a first approximation of [ eq12 ] is computed assuming and the approximation of then used in [ eq11 ] to obtain a first approximation of .we repeat this operation in order to obtain convergence of the system .prove of convergence can be fount in .we can not easily increase the order of this scheme because the ale formulation is only of the first order .the previous problem is solved on by using the spatial discretization proposed in the following section .after , we need to solve the problem given by equations [ pbc1 ] and [ pbc2 ] in order to compute the mesh velocity .each point with coordinates of the mesh is then moved with the first order approximation .the spatial discretization of the previous problem is based on the finite element galerkin method . in , an approach based on the galerkin method with a special basisis proposed , but this approach can not be applied easily if .we note the set of finite element functions of .the weak formulation of our approximated problem is where the operator is the trace operator from to .then the equations ( [ tensionbord ] ) and ( [ equa_10 ] ) give where .we then use a first order discretization of the partial time derivative : the first part of the right hand of this equation needs to be included on the left hand of the global problem and the second part on the right hand .the global weak formulation is : on , where is a point of and the sum is computed on all finite element functions .then , on the initial boundary , we have the decomposition with the same .formally , we can use two kinds of boundary conditions : 1 . 
a condition of normal displacement of the boundary .we write where is the normal displacement . with this condition ,the well posed weak formulation associated to the diffusion operator is where and .the boundary condition is then imposed on and since we do not want to impose a condition on or .2 . a condition on the displacement of the boundary where is a vector . with this condition , the well posed weak formulation associated to the diffusion operator is and the boundary conditionis then imposed on ( more precisely , we need to take into account the trace tensor including condition on in the boundary , but we assume on this boundary ) .[ remarqueoscil ] we can note that the friction term is necessary to stabilize the flow . indeed ,if we use the following decomposition where is the intersection of and the space of harmonic functions , we can see that the functions of are not controlled by the laplacian operator .we present in the following section a case with where we do not have damping of the flow .the previous method is tested in order to simulate the behavior of a fluid in a simplified domain .we assume that the domain is axisymmetric using the function . at the initial time , the fluid at rest touches the wall ( ) ( see figure [ domainvert ] ) .-5truecm physically , our initial domain is a disc with a radius of 130 meters .this domain is meshed with triangles ( 420 triangles for the mesh m1 , 470 for the mesh m2 and 1002 for the mesh m3 ) .the height of the column of water at the centre is 1 meter ( since at the centre of our domain ) .fluid viscosity is and gravity coefficient is assumed equal to in order to amplify the elevation .we apply a forcing at the surface of this fluid .this forcing is usually taken into account by applying the continuity of the horizontal stress tensor on the surface .so , for the three dimensional model , where is the wind velocity ( at ten meters over the flow for the ocean for example ) and is a `` drag coefficient '' assumed to be constant . using the vertical integration of the vertical operator , and assuming , we obtain a forcing condition on the mean velocity .we use here hence , we observe an oscillation of the free surface of the water plan . at each oscillation ,a part of the water moves on ground and a part of the ground is uncovered by this water ( see figure [ vuecoupe ] ) .-18truecm in the last part of the remark [ remarqueoscil ] , we indicate that with a part of the flow component is not diffusive .more precisely , our solution is only a gradient .if we do not take into account boundary friction effects or bottom friction effects , a part of the fluid is not diffusive . to observe this effect , we plot ( on figure [ oscillations ] ) the level of a boundary point according to time .we can observe an accumulation of energy on the fluid due to the numerical approximation , and this accumulation is not compensated by diffusive term . in figure [ oscillationsamorties ], we plot the variation of the same point taking into account the drag coefficient . 
with this term , all the modes of the fluid are diffusive and ,if we stop the forcing , global energy of the fluid vanishes .-0.5truecm in figure [ evolutionkinetic ] , we plot the time evolution of the kinetic energy of the flow for the different meshes ( m1 , m2 and m3 ) .we only have a little difference for all these meshes but a very expensive cost for the mesh m3 .-0.5truecm after the forcing phase ( 20 seconds ) , we can observe the transformation of the kinetic energy in potential energy and conversely .when the kinetic energy vanishes , potential energy is maximal . if the drag coefficient is assumed to be equal to zero , the amplitude of the oscillation does not decrease .finally , figure [ massconservation ] represents the evolution of the mass of fluid compared to the initial mass .even if we do not have a rigorous conservation of the mass , due to the renormalisation of the mass equation , the variation of the mass is very little .-0.5truecm in this second numerical experiment , we use a more complex domain to test our numerical method .we assume that the domain is axisymmetric using the function . and are chosen in order to at the distance of the centre of the fluid domain , the fluid touches the wall ( see figure [ dom_ini ] ) .-2truecm we use the same external forcing as for the first experiment ( [ forcing ] ) .but , due to the specific form of the topography , a part of the fluid goes to the external crown .we present some results of the simulation in figure [ dom_evol1 ] .recalling that the shallow water equations are based in the continuum mechanic , it can not cut this domain with a finite energy .hence , even if the thickness of the layer of water is very low , we conserve a thin film of water between all the parts of the domain occupied by the fluid .-4truecm we recall that the computation is only made in the moving two - dimensional domain .mathematical study proves that the main difficulty is to conserve the smoothness of the boundary .this result can be observed in this simulation because , at the final time , we have contact between two parts of the boundary ( see last part of the figure [ dom_evol1 ] ) .since we do not use a `` good '' boundary operator , we do not have sufficient smoothness on the boundary .we observe then a change of connectedness and a hole appears in the fluid domain .the presented work suggests that capillarity effect needs to be incorporated into the evolution equations , and more precisely in the triple contact line .theoretical results presented in give sufficient smoothness for the capillarity term used in equation [ tensionbord ] .the main difficulty is to describe the capillarity effects on the depth integrated model and in a dynamical system .a possible approach is to test some boundary operator and to compare numerical and experimental result .this approach will be proposed in future studies .g. fourestey , _ une mthode des caractristiques dordre deux sur maillages mobiles pour la rsolution des quations de navier - stokes incompressibles par lments finis _ , rapport de recherche , inria , * 4448 * , 2002 .
In this paper we propose a numerical method to solve the Cauchy problem based on the viscous shallow water equations in a horizontally moving domain. More precisely, we are interested in a flooding and drying model, used to model the overflow of a river or the intrusion of a tsunami on land. We use a non-conservative form of the two-dimensional shallow water equations, in height-velocity formulation, and we build a numerical approximation, based on the arbitrary Lagrangian-Eulerian formulation, in order to compute the solution in the moving domain. Keywords: shallow water equations, free boundary problem, ALE discretization, flooding and drying. _AMS subject classification_: 35Q30, 65M60, 76D03.
when modeling dependence for bivariate extremes , only an infinite - dimensional object is flexible enough to capture the ` spectrum ' of all possible types of dependence .one of such infinite - dimensional objects is the spectral measure , describing the limit distribution of the relative size of the two components in a vector , normalized in a certain way , given that at least one of them is large ; see , for instance , and .the normalization of the components induces a moment constraint on the spectral measure , making its estimation a nontrivial task . in the literature ,a wide range of approaches has been proposed . survey many parametric models for the spectral measure , and new models continue to be invented .here we are mostly concerned with semiparametric and nonparametric approaches . propose an enhancement of the empirical spectral measure in by enforcing the moment constraints with empirical likelihood methods . a nonparametric bayesian method based on the censored - likelihood approach in proposed in . in this paperwe introduce a euclidean likelihood - based estimator related with the maximum empirical likelihood estimator of .our estimator replaces the empirical likelihood objective function by the euclidean distance between the barycenter of the unit simplex and the vector of probability masses of the spectral measure at the observed pseudo - angles .this construction allows us to obtain an empirical likelihood - based estimator which is simple and explicitly defined .its expression is free of lagrange multipliers , which not only simplifies computations but also leads to a more manageable asymptotic theory .we show that the limit distribution of the empirical process associated with the maximum euclidean likelihood estimator measure is the same as the one of the maximum empirical likelihood estimator in .note that standard large - sample results for euclidean likelihood methods can not be applied in the context of bivariate extremes .the paper is organized as follows . in the next sectionwe discuss the probabilistic and geometric frameworks supporting models for bivariate extremes . in section [ sec : estim ] we introduce the maximum euclidean likelihood estimator for the spectral measure . large - sample theory is provided in section [ sec : asym ] .numerical experiments are reported in section [ numerical.study ] and an illustration with extreme temperature data is given in section [ sec : temp ] .proofs and some details on a smoothing procedure using beta kernels are given in the appendix [ app : proofs ] and [ app : smooth ] , respectively .let be independent and identically distributed bivariate random vectors with continuous marginal distributions and .for the purposes of studying extremal dependence , it is convenient to standardize the margins to the unit pareto distribution via and . 
observe that exceeds a threshold if and only if exceeds its tail quantile ; similarly for .the transformation to unit pareto distribution serves to measure the magnitudes of the two components according to a common scale which is free from the actual marginal distributions .pickands representation theorem asserts that if the vector of rescaled , componentwise maxima converges in distribution to a non - degenerate limit , then the limiting distribution is a bivariate extreme value distribution with unit - frchet margins given by } \max \left ( \frac{w}{x},\frac{1-w}{y } \right ) \mathrm{d}h(w ) \bigg\ } , \qquad x , y>0.\ ] ] the spectral ( probability ) measure is a probability distribution on ] be a sample of pseudo - angles , for example the observed values of the random variables , , in the previous section , with .the euclidean loglikelihood ratio for a candidate spectral measure supported on and assigning probability mass to is formally defined as the euclidean likelihood ratio can be viewed as a euclidean measure of the distance of to the barycenter of the -dimensional unit simplex . in this sense ,the euclidean likelihood ratio is similar to the empirical loglikelihood ratio which can be understood as another measure of the distance from to .note that results from by truncation of the taylor expension and the fact that , making the linear term in the expansion disappear .we seek to maximize subject to the empirical version of the moment constraint .our estimator for the distribution function of the spectral measure is defined as ,\ ] ] the vector of probability masses solving the optimization problem this quadratic program with linear constraints can be solved explicitly with the method of lagrange multipliers , yielding where and denote the sample mean and sample variance of , that is , the weights could be negative , but our numerical experiments suggest that this is not as problematic as it may seem at first sight , in agreement with and , who claim that the weights are nonnegative with probability tending to one . the second equality constraint in implies that satisfies the moment constraint , as } w \ , \mathrm{d } \hat{h}(w ) = \sum_i w_i \hat{p}_i = 1/2 ] yields the identity ,\ ] ] where the transformation is defined as follows .let be the set of cumulative distribution functions of non - degenerate probability measures on ]is defined by } ( v - \mu_f ) \, \mathrm{d } f(v ) , \qquad w \in [ 0 , 1].\ ] ] here } v \ , \mathrm{d } f(v) ] denote the mean and the ( non - zero ) variance of .we view as a subset of the banach space ) ] equipped with the supremum norm .the map takes values in ) ] is denoted by the arrow ` ' and is to be understood as in .asymptotic properties of the empirical spectral measure together with smoothness properties of lead to asymptotic properties of the maximum euclidean likelihood estimator : * continuity of the map together with consistency of the empirical spectral measure yields consistency of the maximum euclidean likelihood estimator ( continuous mapping theorem ) .* hadamard differentiability of the map together with asymptotic normality of the empirical spectral measure yields asymptotic normality of the maximum euclidean likelihood estimator ( functional delta method ) .the following theorems are formulated in terms of maps taking values in . the case to have in mind is the empirical spectral measure with and as in section [ sec : background ] . 
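Before stating the asymptotic results, the estimator defined above can be illustrated concretely. The sketch below assumes the standard closed form obtained from the Lagrange-multiplier solution of the quadratic program, namely p_i = k^{-1}{1 - (w_bar - 1/2)(w_i - w_bar)/S^2}, with w_bar and S^2 the sample mean and (biased) sample variance of the pseudo-angles; this closed form and the simulated pseudo-angles are illustrative assumptions, not data or code from the paper.

```python
import numpy as np

def euclidean_likelihood_weights(w):
    """Maximum Euclidean likelihood weights for the spectral measure.

    w : 1-D array of pseudo-angles in [0, 1].
    Returns weights p with sum(p) = 1 and sum(w * p) = 1/2 (empirical
    moment constraint).  Closed form assumed from the Lagrange-multiplier
    solution of the quadratic program; weights may be slightly negative.
    """
    w = np.asarray(w, dtype=float)
    k = w.size
    w_bar = w.mean()
    s2 = w.var()                    # biased sample variance of the pseudo-angles
    return (1.0 - (w_bar - 0.5) * (w - w_bar) / s2) / k

def spectral_cdf(w_grid, w, p):
    """Estimated spectral measure H(t) = sum of the weights with w_i <= t."""
    return np.array([p[w <= t].sum() for t in w_grid])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.beta(2.0, 2.0, size=200)          # placeholder pseudo-angles
    p = euclidean_likelihood_weights(w)
    print(p.sum(), (w * p).sum())             # ~1.0 and ~0.5 (moment constraints)
    print(spectral_cdf([0.25, 0.5, 0.75], w, p))
```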
in theorem 3.1 and equation ( 7.1 ) of , asymptotic normality of is established under certain smoothness conditions on and growth conditions on the threshold sequence .[ thm : consistency ] if are maps taking values in and if in outer probability for some nondegenerate spectral measure , then , writing , we also have in outer probability . the proof of this and the next theorem is given in appendix [ app : proofs ] . in the next theorem , the rate sequence is to be thought of as .let ) ] .[ thm : an ] let and be as in theorem [ thm : consistency ] . if is continuous and if in ) ] , then also with .\ ] ] comparing the expression for in with the one for in ( 4.7 ) in , we see that the link between the processes and here is the same as the one between the processes and in .it follows that tuning the empirical spectral measure via either maximum empirical likelihood or maximum euclidean likelihood makes no difference asymptotically .the numerical experiments below confirm this asymptotic equivalence . to facilitate comparisons with , note that our pseudo - angle ] via , and that the function in ( 4.2 ) in reduces to .how does the additional term influence the asymptotic distribution of the maximum euclidean / empirical estimator ? given the complicated nature of the covariance function of the process , see ( 3.7 ) and ( 4.7 ) in , it is virtually impossible to draw any conclusions theoretically .however , monte carlo simulations in confirm that the maximum empirical likelihood estimator is typically more efficient than the ordinary empirical spectral measure .these findings are confirmed in the next section .in this section , the maximum euclidean likelihood estimator is compared with the empirical spectral measure and the maximum empirical likelihood estimator by means of monte carlo simulations .the comparisons are made on the basis of the mean integrated squared error , .\ ] ] the bivariate extreme value distribution with logistic dependence structure is defined by in terms of a parameter ] . for real data applications smooth versions of empirical the estimator may be preferred , but these can be readily constructed by suitably convoluting the weights of our empirical likelihood - based method with a kernel on the simplex .we thank anthony davison , vanda incio , feridun turkman , and jacques ferrez for discussions and we thank the editors and anonymous referees for helpful suggestions and recommendations , that led to a significant improvement of an earlier version of this article .miguel de carvalho s research was partially supported by the swiss national science foundation , cces project extremes , and by the fundao para a cincia e a tecnologia ( portuguese foundation for science and technology ) through pest - oe / mat / ui0297/2011 ( cma ) .johan segers s research was supported by iap research network grant no .p6/03 of the belgian government ( belgian science policy ) and by contract no .07/12/002 of the projet dactionsde recherche concertes of the communaut franaise de belgique , granted by the acadmie universitaire louvain .let . by fubini s theorem , } v \ , \mathrm{d}f(v) & = w \ ,f(w ) - \int_0^w f(v ) \ , \mathrm{d}v , \qquad w \in [ 0 , 1 ] , \\ \int_{[0,1 ] } v^2 \ , \mathrm{d}f(v ) & = 1 - \int_0 ^ 1 2v \ , f(v ) \ , \mathrm{d}v,\end{aligned}\ ] ] and similarly for .it follows that implies that , , and } v \ , \mathrm{d }f_n(v ) \to \int_{[0 , w ] } v \ , \mathrm{d } f(v) ] .hence .therefore , the map ) ] such that belongs to . 
since takes values in , it follows that takes values in .define ) ] by .\ ] ] a straightforward computation shows that if is such that for some ) ] with derivative given by .the result then also follows from the functional delta method .we only consider the case of the empirical euclidean spectral measure using a beta kernel , but the same applies to the empirical likelihood spectral measure by replacing the with in .the smooth euclidean spectral density is thus defined as where is the concentration parameter ( inverse of the squared bandwidth , to be chosen via cross - validation ) and denotes the beta density with parameters .the corresponding smoothed spectral measure is defined as ,\ ] ] where is the regularized incomplete beta function , with . since the moment constraint is satisfied .plug - in estimators for the pickands dependence function and the bivariate extreme value distribution follow directly from , \label{a.smooth } \\\widetilde{g}(x , y ) & = \exp\left\{-\frac{2}{k}\sum_{i=1}^k \widehat{p}_i \int_0 ^ 1 \max\bigg(\frac{u}{x},\frac{1-u}{y}\bigg ) \beta\{u ; w_i \nu , ( 1-w_i ) \nu\}\mathrm{d}u\right\ } , \qquad x , y>0 \label{g.smooth}.\end{aligned}\ ] ] antoine , b. , bonnal , h. , renault , e. ( 2007 ) . on the efficient use of the informational content of estimating equations : implied probabilities and euclidean empirical likelihood . _ j. econometrics _138(2 ) : 461487 .renaud , v. , innes , j. l. , dobbertin , m. , rebetez , m. ( 2011 ) .comparison between open - site and below - canopy climatic conditions in switzerland for different types of forests over 10 years ( 19982007 ) .climatol . _ 105(12 ) : 119127 .
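As a sketch of the beta-kernel smoothing described in this appendix, the function below evaluates the smoothed density as the weighted mixture sum_i p_i Beta(w; w_i*nu, (1-w_i)*nu) on a grid. The kernel parameterization follows the appendix, while the value nu = 50 and the placeholder weights are illustrative assumptions; in practice nu would be selected by cross-validation, as stated above.

```python
import numpy as np
from scipy.stats import beta

def smoothed_spectral_density(w_grid, w, p, nu):
    """Beta-kernel smoothed spectral density
        h(t) = sum_i p_i * Beta(t; w_i * nu, (1 - w_i) * nu),
    with nu the concentration parameter (inverse squared bandwidth).
    Parameterization assumed from the appendix; illustrative sketch only."""
    w_grid = np.asarray(w_grid, dtype=float)
    dens = np.zeros_like(w_grid)
    for wi, pi in zip(w, p):
        dens += pi * beta.pdf(w_grid, wi * nu, (1.0 - wi) * nu)
    return dens

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w = rng.beta(2.0, 2.0, size=200)              # placeholder pseudo-angles
    p = np.full(w.size, 1.0 / w.size)             # e.g. plain empirical weights
    grid = np.linspace(0.01, 0.99, 99)
    h_tilde = smoothed_spectral_density(grid, w, p, nu=50.0)
    print(np.trapz(h_tilde, grid))                # close to 1
```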
The spectral measure plays a key role in the statistical modeling of multivariate extremes. Estimation of the spectral measure is a complex issue, given the need to obey a certain moment condition. We propose a Euclidean likelihood-based estimator for the spectral measure which is simple and explicitly defined, with its expression being free of Lagrange multipliers. Our estimator is shown to have the same limit distribution as the maximum empirical likelihood estimator of J. H. J. Einmahl and J. Segers, Annals of Statistics 37(5B), 2953-2989 (2009). Numerical experiments suggest an overall good performance and identical behavior to the maximum empirical likelihood estimator. We illustrate the method in an extreme temperature data analysis. Keywords: bivariate extremes; empirical likelihood; euclidean likelihood; spectral measure; statistics of extremes.
the increasing demand for energy , the improved sensitivity to environmental issues , and the need for a secure supply are all contributing to a new vision of energy resource management .this new awareness is contributing to the development of a novel approach in energy planning , based on the rational use of local resources . in this contest ,distributed energy management is considered one of the viable solutions to integrate local renewable sources and to promote rational use of energy . moreover ,the recent emphasis on sustainability , also related to climate - change policies , requires a fast development in the use of renewable resources in local energy systems .this determines a fast growth of distributed generation and co - generation , there has not been a corresponding fast upgrade of the electricity infrastructure .this inhomogeneous evolution of the different components of power systems is the consequence of the present structure of the electricity network , which , being characterised by strict dispatching and planning rules , hardly fits with the increasing demand for flexibility connected to the distributed generation .such a scenario induces unavoidable effects on the electricity market which reveals an unexpected sensitivity to the enhancement of distributed generation based on renewable energy sources ( res) . in fact , due to their non - programmable characteristics and widespread geographic distribution , the development of res - based distributed generation is undermining the technical and economic models on which the electricity system are currently based . in particular , they highlight problems in the classical management model of energy - flows .in fact , the classical hierarchical and deterministic methodologies used to ( i ) manage the power system , ( ii ) forecast the energy demands and production , ( iii ) balance the network , all show drawbacks which finally affect the electricity market price .an example of such problems comes from the analysis of the effects of policy of the subsidies granted by governments to promote the exploitation of res and for implementing the climate - change policies : in fact , such policies have played a major role in the amplification of critical market anomalies , like the negative and/or null price of electricity registered in the germany and italy . therefore , to implement the new smart - grid paradigms ,it is necessary to change and renew the classical approaches for modeling and managing the electricity market .however , even if the production of energy from renewable sources introduces perturbations in the power system and in the electricity market , it constitutes a crucial advantage in emissions trading .generators based on renewable sources have an intrinsic high level of forecast uncertainty , highly variable both in time and space .hence , the increasing amount of renewable - like ( re ) generators induce a stochastic variability in the system ; such variability could induce security issues such as difficulties in voltage controlling or unforeseen blackouts and eventually causes a significant error in the power flow forecasting , which can give rise to extreme results in the energy market , such as very high prices or null / negative selling price . 
to understand such effects ,we must describe not only the dynamics of the fluctuations in energy production / demand but also the functioning of electricity markets .current electricity markets aim to reach efficient equilibrium prices at which both producers and distributors could sell electricity profitably . typically , the electricity markets are hierarchically structured according to time - based criteria strictly connected to the power system s constrains . in particular ,the short term market ( st ) is usually structured into one - day - ahead market ( oda ) , intraday market ( i d ) and ancillary service , reserve and balancing market ( asr) .almost all the simulation approaches for the electricity market are either based on stochastic or on game theoretical studies based on past data - series , while few models focus on market equilibrium as obtained from production and transmission constraints . to the best of our knowledge, nobody has yet addressed the effects of distributed generation , not only in power balancing but also in balancing market prices taking into account the network constraints . in this paperwe present a simplified model that , taking into account the power system constraints , allows us both the forecasting of the balancing market and the singling out of the contribution of various actors into the formation of the price .our model is data - driven , since information on the day - ahead market transactions is used to tune the agent - based simulation of the market behaviour . in our modelwe take as static constraints : ( i ) the grid topology ; ( ii ) the type of production per node and ( iii ) the transmission rules .our dynamical constraints are the maximum and minimum generation from power stations and their ramp variation ( i.e. how fast the amount of generated energy can be changed ) ; such constraints influence factors like the short term availability of an energy source .the inputs of our model are typical consumer requests and the forecast of geo - related wind and solar energy generation .we model the balancing market by introducing agents aiming to maximise their profits ; such agents mimic the market operators of conventional generators .the agents behaviour is modelled by a probability distribution for the possible sell / buy actions ; such distribution is obtained by a training process using synthetic data . in each simulation ,agents place bids on the balancing market based on the energy requests fixed by the oda market . in order to ensure system - security , the transmission system operator ( tso ) selects the bids to guarantee energy balancing in real time .we model the tso behaviour by choosing bid combinations according to the tso s technical requirements and the economic merit order .our model allows us not only to forecast the statistics of the fluctuations in power offer / demand related to energy security but also the behaviour of the balancing market on a detailed infrastructure knowledge and to deduce the market share of the various energy sources ( e.g. oil , carbon etc . ) ; hence , it has important practical implications , since it can be used as a tool and a benchmark for agencies and operators in the distribution markets . moreover , our modelling allows us to understand changes in the market equilibria and behaviour due to the increasing penetration of distributed generation and also to address the question of economic sustainability of specific power plants . 
as a case study, we present a detailed one - day analysis of the italian electricity market .ress have a fundamental impact on the functioning of the electricity market due to the technical constraints of power systems , which require an instantaneous balance between power production and demand .in fact , the electricity market is structured to guarantee matching between the offers from generators and the bids from consumers at each node of the power network according to an economic merit order . to perform this task ,the exchanges starts one day ahead on the basis of daily energy demand forecasting and then successive market sections refine the offers with the aim of both satisfying the balancing conditions and of preserving the power quality and the security of energy supply .the most extensively studied market sector is the day - ahead market , which has been modelled both in terms of statistical analysis of historical data , game theory for the market phase and stochastic modelling of the market operators behaviour .on the other hand , few models have been proposed for the last market section , devoted to assuring the _ real - time power reserve_. in fact , asr allows us to compensate for the unpredictable events and/or the forecasting errors that can occur to the whole power system .in particular , the balancing market ( bm ) has a fundamental role in guaranteeing the reliability of the power system in presence of the deregulated electricity market . the most important studies in asr modelling aim to provide methods that allows the forecasting of the amount of power needed for network stabilization purposes , whereas only a few research activities deal with the energy price forecast .with respect to the state of the art , we present in this paper an alternative model for daily bm time evolution . in particular , we reproduce the operator market strategies by means of an agent based approach , where agents represent typical market operators .our model is characterised by three phases : sampling of the perturbations , training of the agents , and forecasting of the balancing market . in the first phase we use the information about the oda and i d market to deduce realistic power flow configurations by taking into account the physical constraints of the electricity grid .we then introduce stochastic variations related to the geographic distribution of power consumption and res generation ; in such a way , we generate a statistical sample of configurations representing realistic and geo - related time patterns of the energy requests / productions to be balanced . in this paperwe will consider only the simples stochastic model of variations , i.e. 
uncorrelated gaussian fluctuations with zero mean and variances from historical data .the difference at each time between the total actual power requests and the volume of the oda+id market is the size of the balancing market .the result of this phase samples the statistics of fluctuations induced by renewable sources ; hence , it has important applications for energy security ( forecasting energy congestions and/or outages ) and for maintaining quality of service .we show in fig .[ fig1 ] the setup of the system for the first phase ; in particular , panel ( a ) shows the topology of the electricity transmission network in italy , panel ( b ) shows the division of the italian market and panel ( c ) shows a typical daily time evolution of oda+id outcomes with the detailed contribution of each primary energy source .the market zones are used for managing possible congestions occurring in the italian electricity market . in the second phase we use such balancing requirements together with the static and the dynamic constraints of conventional power plants to train the agents of the balancing market by optimising their bidding behaviour ( see sec.[sec : methods ] ) . herethe balancing market generators are only conventional ; hence , optimising the usage and implementation of renewable resources to diminish short time market fluctuations is crucial to augment the sustainability of power production . in the third phase we use the balancing market size and the trained agents biddings to evaluate market price evolution by performing a statistically significant number of simulations . in these simulations, each agent can place bids , both for positive ( upward market ) or negative ( downward market ) balancing needs .this data is produced throughout the day at fixed intervals on a geo - referenced grid ; for the italian balancing market , the bids are accepted each 15 minutes .typical simulation outputs in the upward and the downward electricity balancing market are : * the time evolution of the balancing market size . * the time evolution of the electricity prices * the market share for each technology type in fig .[ fig2 ] we compare the results of the model with real data of the actual upward and downward balancing market obtained from the italian market operator web site ; the data reported in are averaged for each hour .we have taken the 2011 - 2012 winter season as a reference period . in the upper panels of fig .[ fig2 ] we show that the predicted sizes of the downward and upward markets expressed in term of energy reductions / increases for balancing requirement match with the time - data series of the reference period . in the lower panels of fig .[ fig2 ] we show that the predicted prices in the downward and upward markets also match with the time - data series of the reference period .we notice that price and size have a similar shape , highlighting the expected correlation among sizes and prices . to the best of our knowledge ,this is the first time that has been possible to forecast the behaviour of the balancing market without using a historical time series analysis but using information coming out from the one - day ahead power system .a significant result of our approach is the forecast of the detailed contribution of each primary energy source to the downward and upward electricity balancing market . 
in fig .[ fig3 ] we show that conventional energy sources contribute in a different manner to the upward and downward market .for example , due to dynamic constraints , carbon power plants contribution is negligible ( due to the limits in the minimum operative power generation , mostly in the upward market ) even if their energy production costs are the lowest .this result highlights the fact that market shares in the balancing market do not depend only on energy costs but stem from an equilibrium between dynamic response , energy costs , geographical position and interactions among the different energy sources .the use of renewable energy sources is creating a new energy market where it is of the utmost importance to be in condition to anticipate trends and needs from users and producers to reduce inefficiencies in energy management and optimize production .the future transformation of the traditional passive distribution network into a pro - active one is requiring the implementation of an energy system where production and power fluctuations can be efficiently managed . in particular ,power fluctuations have the strongest impact on markets and on short - time energy - continuity requirements . previous research on short time energy forecasting concentrates on next - day electricity prices , showing that the analysis of time - series yields accurate and efficient price - forecasting tools when using dynamic regression and transfer function models or arima methodology .systematic methods to calculate transition probabilities and rewards have also been developed to optimize market clearing strategies ; to improve market clearing price prediction , it is possible to employ neural networks . a further step toward and integrated model of ( day ahead ) market and energy flowshas been taken in , where authors propose a market - clearing formulation with stochastic security assessed under various conditions on line flow limits , availability of spinning reserve and generator ramping limits. however , one - day ahead markets and balancing markets are fundamentally different and need separate formulations .since wind power is possibly the most erratic renewable source , it has been the focus of most investigations into short - time fluctuations .the analysis of possible evolutions in optimal short - term wind energy balancing highlights the needs for managing reserves through changes in market scheduling ( higher and more regular ) and in introducing stochastic planning methods as opposed to deterministic ones . in ,together with a probabilistic framework for secure day - ahead dispatch of wind - power , a real - time reserve strategy is proposed as a corrective control action . on the operators side , the question regarding the virtual power plant ( i.e. a set of energy sources aggregated and managed by a single operator as a coherent single source ) participation in energy and spinning reserve markets with bidding strategies that take into account distributed resources and network constraints have been developed in utilizing complex computational solutions like nonlinear mixed - integer programming with inter - temporal constraints solved by genetic algorithms . 
in this paperwe model both the electric energy flows and the very short - term market size taking into account the variability of renewable energy generation and customer demands through a stochastic approach .network and ramping constraints are explicitly taken into account through the ac power - flow model while market price prediction is modelled through an agent - based simulation of energy operators .the inputs of the model are the day - ahead prices and sizes , quantities that are possible to successfully predict .our approach falls into the class of models of inter - dependent critical infrastructures .we validate our model in the case of the italian power grid and balancing market ; we found that even a simplified stochastic model of production and demand based on uncorrelated gaussian fluctuations allows us to predict the statistics of energy unbalances and market prices .our model complements the virtual - plant approaches that concentrate on the marketing strategies of single operators managing several sources . to the best of our knowledge ,the explicit mechanism through which fluctuations enter the price determination had never been considered explicitly before our investigation . in the current phase of transition from a centralised to a distributed generation system, our approach allows us to address the complex task of estimating the additional cost associated with the balancing of renewable energy sources .this evaluation allows us to better understand the real impact of green sources in diminishing the carbon footprint , since balancing in absence of a well - developed technology of energy storage still relies heavily on conventional generators .moreover , by comparing the current situations with novel scenarios where new generators ( nodes in the model ) are introduced , our approach allows for a detailed geo - localised _`` what - if '' _ analysis of the energy planning .an important direction for our model to develop would be a deeper understanding and modelling of fluctuations .in fact , the probability - distribution of fluctuations in energy production has different statistics depending on the renewable type .moreover , both spatial and temporal correlations among the fluctuations should be taken into account : as an example , weather - influenced fluctuations like the ones from wind and solar generators display naturally a cross correlation among nearby located sources ; on the same pacing , the non - instantaneous character of weather variations also induces temporal correlations .though our analysis stems from a theoretical approach to understanding the effect of stochastic components in an interconnected system , it has immediate practical implications since the computational burden of our method is compatible with the scheduling time of the balancing market , permitting the potential use of this software for _ `` on - the - fly '' _ decision support . an important development for our model would be to address `` _ _ what - if _ _ '' analysis aimed at understanding how the introduction of new rules and policies affects the market . in fact, it has been shown that regulatory intervention affects using cash - out arrangements not only spot price dynamics , but also price volatility ( i.e. fluctuations ) . 
moreover , by predicting power umbalances , our approach allows for a better understanding of the energy security risks induced by renewable sources .in fact , the introduction of the stochastic components is crucial for the management of electrical energy systems , for which the deterministic approach has allowed a detailed description of the functioning of the electrical energy system , by virtue of ( i ) an accurate profile for the management of generation and ( ii ) a high degree of accuracy in load - prediction , i.e. conditions that are nowadays significantly changed .the development of models that allow the evaluation of ancillary service costs in an electricity system during a res - based transition phase , has practical implications , particularly important in energy system planning .moreover , the associated tools can be usefully implemented by the tsos and the market operators in order to forecast in real time both the expected amount of energy required for balancing purposes and their price evolution . inprevious studies , market sizes and the electricity price forecasts have been evaluated by statistical analysis of time - data series . despite the accuracy of these methods, their formulations did not allow the forecasting of the possible changes in markets caused by a transformation of the system involving market rules and/or infrastructure evolutions ( different power grids topology , transmission codes , new or different management of power plants ) .we propose a methodology that is able to take into account any upgrade , since it models the behaviour of the market operators subject to a realistic set of perturbations of the current system .the reference configuration of the power system is obtained starting from two datasets .the first dataset is related to the characteristics of the power system ( from the terna website ) and includes the geo - referenced position of every 220 and 380 kv substations with their electrical characteristics , the geo - referenced position of conventional generators with their power rates and power ramp limits and the electrical characteristics of the power network .the second dataset ( from the gme website ) reports the detailed time evolution of production / consumption for each 15 minutes of a reference day in the winter period 2011 - 2012 .since we aimed to describe the entire electricity balancing market session , we performed a complete simulation for each of the market subsections . for each interval of 15 minutes, the simulation is characterised by three phases : sampling of the perturbations , training of the agents and forecasting of the balancing market . in the first phase , the electric state of the power gridis initially perturbed stochastically by adding uncorrelated zero - mean gaussian fluctuations in order to mimic the variability both in the power production and in the demand .in particular , we employ realistic values of the variances of res generators and of electricity demand to obtain a realistic set of perturbed physical states of electricity network . to each of these perturbed states, it corresponds an unbalanced power condition .the first phase allows for the statistical forecasting of the size of the balancing market . in the second phase ,the forecast sizes of the balancing market are used in order to train the agents to tune their offer propensities , i.e. 
their willingness to offer a certain amount of energy at a certain price .these propensities are , in practice , due to the expertise of the operators in understanding market fluctuations and placing bids that enable them to reach the maximum profit . to a further details on the operators behaviour model, please refer to the supplementary informations . in the third phase , trained agents place bids on a set of realistic perturbations representing the possible balancing requirements ; price is formed according to tso s merit order .the implementation of the proposed methodology requires a detailed description and analysis of the power system from a technical and economical point of view . in particular, the evaluation of the perturbed states in a medium - size national power transmission grid is a complex task ; in our italian case - study , it involved around a thousand interconnected nodes dispatching power , around a hundred conventional generators , around a thousand res generators , and thousands of loads .each node was subjected to complex physical constraints which had to be modelled adequately in order to ensure a correct description of the system .in addition , global system constraints had to be considered in order to ensure the correct behaviour of the system in terms of the quality of the supply .moreover , the distributed res generators and loads which were aggregated at the corresponding transmission nodes , assumed values of power that fluctuated in time and space .we modelled their power production or consumption in a statistical way , assuming gaussian - like forecast errors with standard deviations , which represented the expected power variations at each single node of the power grid in a given time .the application of an ac power - flow algorithm allowed for the validation of the dynamic and static physical constraints , giving the possible states of the power system .the considered variables associated with them are : * load power demand and the corresponding ; * wind power production and the corresponding ; * photovoltaic power production and the corresponding .the system variability was tackled though a statistical mechanics approach ; the set of possible states of the system at a defined time was numerically sampled by adding a random value extracted from a gaussian distribution with zero mean and variance to the expected power production and consumption at every node and res generator of the grid .each perturbed state was characterised by a different total power production and demand , and their difference was the required balancing power ; hence , was a random variable that represented the market size .to sample the statistics of the market behaviour , we needed a significant number of possible balancing requirements . with this aim, we generated 6000 statistically independent perturbed states for each time interval since such number of sampling is enough to get a sufficient accuracy in our statistics . in order to model the balancing market ,the specific rules on which it is based should be described . in general , a market session is an auction , in which the bids placed by market operators are accepted by the tso according to a cost - minimization method .the operative rules of the italian balancing market are briefly described in the supplementary information section . 
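As a minimal illustration of this sampling phase, the sketch below draws independent zero-mean Gaussian fluctuations for the nodal loads and RES injections and records, for each sampled state, the resulting imbalance, i.e. the size of the balancing market. The node counts, standard deviations and forecasts are hypothetical, and the AC power-flow feasibility check applied in the actual model is omitted here.

```python
import numpy as np

def sample_market_sizes(load_forecast, res_forecast, sigma_load, sigma_res,
                        n_samples=6000, seed=0):
    """Sample the balancing-market size for n_samples perturbed states.

    Each nodal load and RES injection is perturbed by an independent
    zero-mean Gaussian fluctuation; a positive value means extra generation
    is needed (upward market), a negative value the opposite (downward market).
    Network (AC power-flow) feasibility checks are omitted in this sketch.
    """
    rng = np.random.default_rng(seed)
    n_load, n_res = load_forecast.size, res_forecast.size
    d_load = rng.normal(0.0, sigma_load, size=(n_samples, n_load))   # demand fluctuations
    d_res = rng.normal(0.0, sigma_res, size=(n_samples, n_res))      # RES fluctuations
    # Imbalance relative to the forecast: extra demand minus extra RES production.
    return d_load.sum(axis=1) - d_res.sum(axis=1)

if __name__ == "__main__":
    # Hypothetical system: 1000 load nodes and 800 RES generators (values in MW).
    load = np.full(1000, 30.0)
    res = np.full(800, 5.0)
    delta = sample_market_sizes(load, res, sigma_load=1.0, sigma_res=1.5)
    print(delta.mean(), delta.std())      # distribution of the market size (MW)
    print((delta > 0).mean())             # fraction of upward-market states
```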
for each perturbed state ,an auction is made with a corresponding sampled value of the market size .since can be either positive or negative we can have respectively a so - called upward market or a downward market session . in a market session ,each agent ( market operator ) represents a conventional power plant and can place a bid at the auction , in which it specifies the amount of energy that the corresponding power plant can provide to the system , and its price .once the bids have been placed , the tso accepts all the viable offers until the total energy needed for balancing is reached .since the bid values are obtained related to the agent propensities described by a specific probability distribution , agents must be trained to estimate their propensities . to this end, we started from an initial guess and performed several market sessions in which each agent updated its propensities in order to maximise a profit function as described in detail in the supplementary information section .once the agents have been trained , we can forecast the behaviour of the balancing market by performing market sessions on the sampled perturbed states .in addition to the market size , we can calculate the global price per kilowatt from the set of accepted offers .notice that is a random variable , associating a market price to each perturbed state of the system .the outputs of the simulations are the sampled distributions , , and of sizes and prices of the upward and downward markets .[ fig4 ] shows a flow - chart of the whole simulation procedure . in order to describe the system evolution over time ,these distributions have been obtained for each time interval , obtaining a dynamic distribution of market size and energy price . in order to validate the outcomes of our simulations with the available balancing market outcomes, results have been aggregated for each hour of the day .any opinion , findings and conclusions or reccomendations expressed in this material are those of the author(s ) and do not necessary reflect the views of the funding parties . [ [ competing - financial - information ] ] competing financial information + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the authors declare no competing financial interests . , scaledwidth=100.0% ] quartile range ( i.e. data is inside such range with probability ) of the real data ; the black segment in the full rectangles is the median of the real data .red segments correspond to the median and blue segments define the range from the to the quartile of the data synthetically generated from our models . in the upper panels we show the comparisons among real data and the predicted size of upward and downward markets ,i.e. the difference among the foreseen energy production and the actual request . in the lower panels we show the comparison among real market prices and the ones predicted from our agent based model .[ fig2],scaledwidth=100.0% ] , scaledwidth=100.0% ] , scaledwidth=70.0% ]the italian electricity balancing market is a pay - as - bid like market . in such markets ,the goods are payed for the amount of money agreed in the placed bid .the bm stage follows the one day - ahead market ( oda ) and intra - day market ( i m ) . 
in oda and im the market authority ( gme , electrical market authority ) decides the amount of energy that each generator must provide to the network , given economic and physical constraints and the forecast power production and consumption of loads and res generators .since there is an the uncertainty associated to power production and consumption , after the oda and i m market stages there will be a real time market stage , the balancing market ( bm ) , whose target is to balance unpredicted changes in production ( or load ) that can occur respect to power production and consumption forecasts . in the bm stage ,the transmission system operators ( tso ) interacts with authorized market operators , i.e. electrical operators ( or group of them ) that can offer or buy energy .these operators are often represented by a broker , whose job is to submit a bid on the market , specifying the quantity and the price of the energy that the operator plans to sell ( or buy ) on the bm at each time of the day .after the bidding stage , the tso communicates to the brokers if their offers are accepted and how much power must be supplied to the system . normally , there are various market sessions every day ( in italy there are 5 sessions per day , each related to offers spaced by 15 minutes intervals ) .power balancing is carried out on a zonal basis .there are 6 zones defined on a geographical basis , as shown on fig .[ fig1](b ) . 1 .auction : each broker places an upward and a downward bid for every time interval related to the auction .every bid is a couple of real numbers , in which the broker specifies the amount of energy that the corresponding operator can supply to the system and the associated price .when a bid is accepted in the upward market , the operator agrees to provide the energy to the system by increasing his production by the same amount ; on the downward market , the operator will instead reduce his production by the agreed amount .2 . market : given the balancing power needs for every time interval , the tso accepts offers until the system power balance is reached , seeking economic advantage and with the condition that electrical constraints are met .we base our agent model on the roth - erev algorithm ; such kind of algorithms have already been applied for simulating the italian oda electricity market . in such kind of models ,agents learn how to place optimal bids in competitive auctions with the aim of buying ( or selling ) in the most convenient way .the behaviour of real operators is related to their market knowledge , often obtained by a learning process performed during time .roth - erev algorithms simulate this learning process by adjusting propensities using a self - consistent methodology whose goal is to maximize profits . in this paper we apply a modified version of roth - erev algorithm as introduced by nicolaisen et al . since we do nt have the information on the exact relationships among market operators and brokers , we consider every conventional power plant generator as a single agent .we describe operator propensities using a statistical description of the possible bidding strategies .the bidding strategies of the operator are described by a finite discrete set . here is the strategy index , is the number of possible strategies , is the operator propensity to offer at a given markup value ( for upward bids , for downward bids ) ; in our simulations . 
the mark - up value allows to calculate the bidding price as , where is the production cost ( per mwh ) of each generator , given by his technology type .the behaviour of the operators is modelled by a stochastic process in which the probability of placing a bid at a given price is the normalised propensity .* : every generator has a minimum and a maximum of allowed power supply ; is the actual power production of the generator . * : due to construction and technological limits , each generator has ramping constraints that limits in time their maximum change in power production .to optimise the propensities of the agents , we apply an iterative algorithm . at the beginning of the learning algorithm ,all propensities have the same value . the iterations of the algorithm are divided in three phases : 1 .bid presentation : every agent presents a bid , both for upward and downward market .this bid is given by a feasible quantity of offered energy ( i.e. satisfying the physical constraints ) and by a price that will be drawn from agents propensities .2 . market session : given the knowledge of the balancing needs of the system , the tso accepts all the bids needed to ensure that energy while seeking economic profit , verifying that the physical constraints of the system are met .3 . agent update : market outcomes are communicated to each agent , that updates his propensities in relation to the profit made in the session .agents propensities at iteration are updated as follows : where ] is an experimental parameter that assign a different weight to played and non - played actions . to the best of our knowledge ,roth - erev algorithms have been always applied by training agents over historical data . in this paperwe overcome the need of historical data by training the agents on realistic system states that are synthetically generated .scala a , mureddu m , chessa a , caldarelli g , damiano a. distributed generation and resilience in power grids . in : hammerli b , kalstad svendsen n , lopez j , editors .critical information infrastructures security . vol .7722 of lecture notes in computer science .springer berlin heidelberg ; 2013 .p. 7179 .zimmerman rd , murillo - snchez ce , thomas rj .matpower : steady - state operations , planning , and analysis tools for power systems research and education .power systems , ieee transactions on . 2011;26(1):1219 . .available from : http://www.pserc.cornell.edu / matpower/.
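The following sketch puts the pieces described above together for a single upward-market session: propensity-proportional bidding, pay-as-bid acceptance in merit order by the TSO, and a propensity update. The update uses the standard modified Roth-Erev rule of Nicolaisen et al. (the played action is reinforced by its profit weighted by 1 - epsilon, the other actions keep an epsilon share of their own propensity, with a recency parameter), and the bid price is assumed to be the marginal cost times (1 + markup); both choices are assumptions, since the exact formulas are not reproduced above, and all numerical values are hypothetical.

```python
import numpy as np

def choice_probabilities(q):
    """Bidding probabilities as normalised propensities."""
    q = np.asarray(q, dtype=float)
    return q / q.sum()

def roth_erev_update(q, chosen, profit, recency=0.1, experiment=0.1):
    """Modified Roth-Erev propensity update (Nicolaisen et al. form, assumed):
    the played action is reinforced by its profit, the others keep a small
    'experimentation' share of their own propensity."""
    q = np.asarray(q, dtype=float).copy()
    n = q.size
    reinforcement = np.where(
        np.arange(n) == chosen,
        profit * (1.0 - experiment),
        q * experiment / (n - 1),
    )
    return (1.0 - recency) * q + reinforcement

def clear_pay_as_bid(prices, quantities, demand):
    """Upward-market clearing: accept bids in merit (ascending price) order
    until the balancing demand is covered; pay-as-bid settlement."""
    order = np.argsort(prices)
    accepted, remaining, cost = [], demand, 0.0
    for i in order:
        if remaining <= 0:
            break
        qty = min(quantities[i], remaining)
        accepted.append((int(i), float(qty), float(prices[i])))
        cost += qty * prices[i]
        remaining -= qty
    avg_price = cost / (demand - remaining) if demand > remaining else 0.0
    return accepted, avg_price

if __name__ == "__main__":
    # One agent with 5 markup strategies on a 60 EUR/MWh marginal cost
    # (hypothetical numbers; bid price = cost * (1 + markup) is an assumption).
    markups = np.linspace(0.0, 0.4, 5)
    q = np.ones_like(markups)
    rng = np.random.default_rng(2)
    k = rng.choice(markups.size, p=choice_probabilities(q))
    bid_price = 60.0 * (1.0 + markups[k])
    accepted, avg_price = clear_pay_as_bid(
        prices=np.array([bid_price, 70.0, 95.0]),     # this agent plus two competitors
        quantities=np.array([40.0, 50.0, 50.0]),      # offered energy (MWh)
        demand=80.0,                                  # sampled balancing need
    )
    profit = sum(qty * (p - 60.0) for i, qty, p in accepted if i == 0)
    q = roth_erev_update(q, chosen=k, profit=profit)
    print(accepted, avg_price, q)
```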
the increasing attention to environmental issues is forcing the implementation of novel energy models based on renewable sources . this is fundamentally changing the configuration of energy management and is introducing new problems that are only partly understood . in particular , renewable energies introduce fluctuations which cause an increased demand for _ conventional _ energy sources to balance energy requests at short notice . in order to develop an effective usage of low - carbon sources , such fluctuations must be understood and tamed . in this paper we present a microscopic model for the description and the forecast of the short - time fluctuations related to renewable sources , in order to estimate their effects on the electricity market . to account for the inter - dependencies between the energy market and the physical power dispatch network , we use a statistical mechanics approach to sample stochastic perturbations of the power system and an agent - based approach to predict the behaviour of the market players . our model is data - driven : it builds on one - day - ahead real market transactions to train the agents ' behaviour and allows us to deduce the market share of the different energy sources . we benchmarked our approach on the italian market , finding good agreement with real data .
crowd dynamics has recently attracted the interests of a rapidly increasing number of scientists .analytical and numerical analysis are effective tools to investigate , predict and simulate complex behaviour of pedestrians , and numerous engineering applications welcome the support of mathematical modelling .growing population densities combined with easier transport lead to greater accumulation of people and increase risk of life threatening situations .transport systems , sports events , holy sites or fire escapes are just few examples where uncontrolled behaviour of a crowd may end up in serious injuries and fatalities . in this field ,pedestrian traffic management is aimed at designing walking facilities which follow optimal requirements regarding flow efficiency , pedestrians comfort and , above all , security and safety . from a mathematical point of view , a description of human crowds is strongly non standard due to the intelligence and decision making abilities of pedestrians .their behaviour depends on the physical form of individuals and on the purpose and conditions of their motion .in particular , pedestrians walk with the most comfortable speed , tend to maintain a preferential direction towards their destination and avoid congested areas . on the contrary , in lifethreatening circumstances , nervousness make them move faster , push others and follow the crowd instead of looking for the optimal route . as a consequence ,critical crowd conditions appear such as freezing by heating and faster is slower phenomena , stop - and - go waves , transition to irregular flow and arching and clogging at bottlenecks . in order to describe this complex crowd dynamics ,numerous mathematical models have been introduced , belonging to two fundamentally distinct approaches : microscopic and macroscopic . in the microscopic framework pedestriansare treated as individual entities whose trajectories are determined by physical and social laws .examples of microscopic models are the social force model , cellular automata models , ai - based models .macroscopic description treats the crowd as a continuum medium characterized by averaged quantities such as density and mean velocity .the first modelling attempt is due to hughes who defined the crowd as a thinking fluid and described the time evolution of its density using a scalar conservation law .current macroscopic models use gas dynamics equations , gradient flow methods , non linear conservation laws with non classical shocks and time evolving measures . at an intermediate level ,kinetic models derive evolution equations for the probability distribution functions of macroscopic variables directly from microscopic interaction laws between individuals , see for example and and references therein .also , recently introduced approaches include micro - macro coupling of time - evolving measures and mean - field games .these models are good candidates to capture the effects of individual behavior on the whole system . 
in this paper we shall analyze and compare two macroscopic models describing the time evolution of the density of pedestrians .the first one , introduced by hughes , consists of a mass conservation equation supplemented with a phenomenological relation between the speed and the density of pedestrians .the second one involves mass and momentum balance equations so is of second order type .it was proposed by payne and whitham to describe vehicular traffic and adopted to describe pedestrian motion by jiang et al .it consists of the two - dimensional euler equations with a relaxation source term . in both models ,the pedestrians optimal path is computed using the eikonal equation as was proposed by hughes . in order to simulate realistic behaviourwe consider two dimensional , continuous walking domains with impenetrable walls and exits as pedestrians destination . to our knowledgethe only available results using hughes model concern simulations of flow of pedestrians on a large platform with an obstacle in its interior . in the case of the second order model jiang considered the same setting and showed numerically the formation of stop - and - go waves . however , none of the above works analyzed complex crowd dynamics .behaviour at bottlenecks and evacuation process was not considered in any of the previous works .the first aim of this paper is to provide a more detailed insight into the properties of macroscopic models of pedestrian motion . in particular , we compare hughes model and the second order model analyzing the formation of stop - and - go waves and flows through bottlenecks .our simulations suggest that hughes model is incapable of reproducing neither such waves nor clogging at a narrow exit .it appears to be also insensitive to the presence of obstacles placed in the interior of the walking domain , which can be crucial in the study of evacuation .this is why in the second part of the paper we restrict ourselves only to the second order model we focus on the study of the evacuation of pedestrians through a narrow exit .this problem is an important safety issue because of arching and clogging appearing in front of the exit , which can interrupt the outflow and result in crushing of people under the pressure or the crowd .experimental studies are rare due to the difficulties in reproducing realistic panic behaviour , while numerical simulations are available mainly in the microscopic framework .for example helbing et al . analyzed the evacuation of two hundred people from a room through a narrow door and in the issue of optimal design of walking facilities was addressed with genetic algorithms . at firstwe show the dependence of the solutions on different parameters of the model .more precisely , we consider the effect on the evacuation of the strength of the interpersonal repealing forces and the desired speed of pedestrians. both of these parameters may indicate the nervousness and the level of panic of pedestrians . in order to improve evacuation , hughes suggested that suitably placed obstacles can increase the flow through an exit .this idea is an inversion of the braess paradox , which was formulated for traffic flows and states that adding extra capacity to a network can in some cases reduce the overall performance . 
in the case of crowd dynamics , placing an obstacle may be seen intuitively as a worse condition .nevertheless , it is expected to lower the internal pressure between pedestrians and their density in front of the exit and as a result preventing from clogging .this phenomenon has been studied experimentally in case of granular materials by zuriguel et al . who analyzed the outflow of grains from a silo and found out the optimal height above the outlet of an obstacle which reduces the blocking of the flow by a factor of one hundred . in case of pedestrians , to our knowledge , so far this problem has been studied only numerically .helbing et al . using the social force model observed that a single column placed in front of the exit decreases the pressure between the column and the door and may prevent from clogging . in the same framework , different shapes and placements of obstacles were studied in with an indication of the formation of the so called waiting zone in front of the exit .frank and dorso in studied the effects of a column and a longitudinal panel assuming in the social force model that pedestrians change their direction away from an obstacle until the exit becomes visible .following the idea of hughes , we try to improve the evacuation of pedestrians using properly tuned obstacles placed in front of the exit .motivated by the numerical simulations in which clogging appears when a large group of pedestrians reached the exit simultaneously , we give an example of a system of five circular columns arranged in the shape of a triangle opened towards the exit .we show that this system of obstacles effectively creates an area with lower density in front of the door and reduces the clogging .this paper is organized as follows : in section [ sec : models ] we explain in detail macroscopic models and in section [ sec : numericalscheme ] we describe numerical approximation of the models .section [ sec : results ] is devoted to the numerical results .at first we present error analysis and comparison between the two macroscopic models .then we analyze the evacuation of pedestrians from a room .we consider a two dimensional connected domain corresponding to some walking facility .it is equipped with an exit which models the destination of the crowd motion and can contain obstacles .the boundary of the domain is composed of the outflow boundary and the wall , which , as obstacles , is impenetrable for the pedestrians . in thissetting we consider a macroscopic model introduced by payne - whitham for vehicular traffic flow in and by jiang et al . in to describe crowd dynamics .the model derives from fluid dynamics and consists of mass and momentum balance equations with source term .denoting by the density of pedestrians and by their mean velocity the model reads where describes the average acceleration caused by internal driving forces and motivations of pedestrians .more precisely , it consists of a relaxation term towards a desired velocity and the internal pressure preventing from overcrowding the unit vector describes the preferred direction pointing the objective of the movement of pedestrians and will be defined in the next section .the function characterizes how the speed of pedestrians changes with density .various speed - density relations are available in the literature , see . 
for our simulationswe choose the exponential dependence where is a free flow speed , is a congestion density at which the motion is hardly possible and is a positive constant .the parameter in ( [ eq : vectorf ] ) is a relaxation time describing how fast pedestrians correct their current velocity to the desired one . the second term in ( [ eq : vectorf ] ) models a repulsive force modeling the volume filling effect and is given by the power law for isentropic gases model ( [ eq : mainsystem ] ) is referred to as a second order model as it consists of mass and momentum balance equations completed with a phenomenological law describing the acceleration .a simpler system , which is a first order model , was introduced by hughes .it is composed of a scalar conservation law where , closed by a speed - density relation given by .the models ( [ eq : mainsystem ] ) , have to be completed by defining the vector field . following the works of hughes , we assume that the pedestrians movement is opposite to the gradient of a scalar potential , that is the potential corresponds to an instantaneous travel cost which pedestrians want to minimize and is determined by the eikonal equation where is a density dependent cost function increasing with . in the simplest case we could prescribe , which gives the potential in the case of convex domains .pedestrians want to minimize the path towards their destination but temper the estimated travel time by avoiding high densities .the behaviour can be expressed by the density driven rearrangement of the equipotential curves of using the following cost function instead of coupling the mass and momentum balance laws with an eikonal equation , another possible approach has been recently introduced in : the transport equation is interpreted as a gradient flow in the wasserstein space , which has the advantage of providing existence results despite the non - smooth setting .let us now present the numerical scheme on unstructured triangular mesh that we used to perform numerical simulations .the model of pedestrian flow couples equations of different nature , i.e. a two dimensional non - linear system of conservation laws with sources , coupled with the eikonal equation through the source term . in this section we describe a finite volume scheme built on dual cells for systems ( [ eq : mainsystem ] ) and and a finite element method based on the variational principle for problem ( [ eq : eikonal ] ) .the numerical simulations are carried out using the multidisciplinary platform num3sis developed at inria sophia antipolis .the models of pedestrian motion ( [ eq : mainsystem ] ) and can be put in the form where in the case of the second order model ( [ eq : mainsystem ] ) denotes the unknowns vector , density and momentum , and for the first order model we take , and . according to the framework of finite volume schemes, we decompose the domain into non overlapping , finite volume cells , , given by dual cells centered at vertices of the triangular mesh .for each cell we consider a set of neighbouring cells , . 
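before moving to the details of the discretisation , the sketch below collects the model ingredients introduced so far : a speed - density relation , a power - law pressure and the relaxation acceleration towards the desired direction obtained from the eikonal potential . the functional form of the exponential speed - density law , the pressure exponent and all parameter values are illustrative placeholders , since the exact expressions and constants are not reproduced here .

```python
import numpy as np

# illustrative parameter values; the calibrated constants of the paper are not reproduced here
V_F = 2.0               # free-flow speed (m/s)
RHO_M = 7.0             # congestion density at which motion is hardly possible (ped/m^2)
ALPHA = 7.5             # positive constant of the exponential speed-density law (assumed value)
P0, GAMMA = 0.05, 2.0   # isentropic-type pressure p(rho) = P0 * rho**GAMMA (assumed form)
TAU = 0.6               # relaxation time (s)

def speed(rho):
    """exponential speed-density relation; an illustrative stand-in for the form chosen in the paper."""
    return V_F * (1.0 - np.exp(-ALPHA * (1.0 / np.maximum(rho, 1e-9) - 1.0 / RHO_M)))

def pressure(rho):
    """power-law (isentropic gas) internal pressure preventing overcrowding."""
    return P0 * rho**GAMMA

def acceleration(rho, v, nu, grad_p):
    """source term of the momentum equation: relaxation towards the desired velocity
    speed(rho)*nu plus a repulsive pressure-gradient contribution.

    rho    : density at the cell
    v      : current velocity vector, shape (2,)
    nu     : unit direction -grad(phi)/|grad(phi)| from the eikonal potential
    grad_p : gradient of pressure(rho) at the cell, shape (2,)
    whether the pressure term sits in the source or in the flux differs between
    formulations; placing it here is one common convention.
    """
    return (speed(rho) * nu - v) / TAU - grad_p / np.maximum(rho, 1e-9)

# quick check: an isolated pedestrian (rho -> 0) accelerates towards the free-flow velocity
print(acceleration(0.01, np.zeros(2), np.array([1.0, 0.0]), np.zeros(2)))
```

the direction vector entering this source term is obtained from the eikonal potential discussed in the following paragraphs ; the finite volume discretisation of the transport part continues below .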
by denote the face between and , its length and is a unit vector normal to the pointing from the center of the cell towards the center of the cell .the solution of the system ( [ eq : mainsystemmatrix ] ) on a cell is approximated by the cell average of the solution at time , that is a general semi - discrete finite volume scheme for ( [ eq : mainsystemmatrix ] ) can be defined as where is a numerical flux function .the spatial discretization of the source term is treated by a pointwise approximation in order to obtain a numerical approximation using a finite volume scheme we have to compute numerical fluxes across the face between control cells and along the direction . despite the fact that the model is two dimensional , these fluxes are computed using a one - dimensional approximation .the homogeneous part of the model coincides with the isentropic gas dynamics system for which many solvers are available , ( see ) .however , the occurrence of vacuum may cause instabilities and not all of them preserve non negativity of the density .we use the first order hll approximate riemann solver .it assumes that the solution consists of three constant states separated by two waves with speeds and corresponding respectively to the slowest and fastest signal speeds .it is positivity preserving under certain conditions on the above numerical wave speeds that is where is the sound speed , is the normal component of the velocity and are averaged roe velocity and sound speed respectively . for the numerical function in the case of the first order model we use the lax - friedrichs flux [ eq : laxfriedrichsflux ] ,\ ] ] with the numerical viscosity coefficient given by last equality is justified by the fact that is a unit vector .the difficulty in the time discretization of equation ( [ eq : semidiscretescheme ] ) lies in the non linear coupling of the models with the eikonal equation ( [ eq : eikonal ] ) in the flux for the first order model and in the source term for the second order one .this is why we apply explicit time integration method .denoting the time step by , the density at the time step is obtained by using an explicit euler method with the splitting technique between the transport and the source terms where the numerical flux function depends explicitly on .the stability is achieved under the cfl condition . in case of the second order model is the maximal value of the characteristic wave speed of the homogeneous part of the system ( [ eq : mainsystem ] ) .for the first order model , using the speed - density relation ( [ eq : speed ] ) , the maximal wave speed is given by due to the same argument used for the computation of the coefficient in the lax - friedrichs flux ( [ eq : laxfriedrichsflux ] ) .the value of is set to in the following computations . 
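a one - dimensional sketch of the hll flux used for the homogeneous part is given below . for simplicity the wave - speed bounds are the davis estimates built from the left and right states only , whereas the scheme described above uses roe - averaged ( einfeldt - type ) bounds ; the isentropic exponent and pressure scale are again illustrative .

```python
import numpy as np

GAMMA = 2.0          # isentropic exponent (assumed value)
P0 = 0.05            # pressure scale, so that p(rho) = P0 * rho**GAMMA (assumed form)

def pressure(rho):
    return P0 * rho**GAMMA

def sound_speed(rho):
    # c^2 = dp/drho for the isentropic pressure law above
    return np.sqrt(GAMMA * P0 * rho**(GAMMA - 1.0))

def physical_flux(u):
    """flux of the 1d isentropic system for the state u = (rho, rho*v)."""
    rho, m = u
    v = m / rho
    return np.array([m, m * v + pressure(rho)])

def hll_flux(u_left, u_right):
    """hll approximate riemann flux between two states.

    the wave-speed bounds below are the simple davis estimates; the scheme in the
    text uses roe-averaged (einfeldt-type) bounds, which differ only in how the
    slowest and fastest signal speeds s_l, s_r are chosen.
    """
    rho_l, m_l = u_left
    rho_r, m_r = u_right
    v_l, v_r = m_l / rho_l, m_r / rho_r
    c_l, c_r = sound_speed(rho_l), sound_speed(rho_r)

    s_l = min(v_l - c_l, v_r - c_r)   # slowest signal speed
    s_r = max(v_l + c_l, v_r + c_r)   # fastest signal speed

    f_l, f_r = physical_flux(u_left), physical_flux(u_right)
    if s_l >= 0.0:
        return f_l
    if s_r <= 0.0:
        return f_r
    # intermediate (star) region of the hll fan
    return (s_r * f_l - s_l * f_r + s_l * s_r * (u_right - u_left)) / (s_r - s_l)

# toy riemann problem: denser, slower crowd on the left
print(hll_flux(np.array([4.0, 0.4]), np.array([1.0, 0.8])))
```

these simpler bounds are usually adequate in practice , although the roe - averaged bounds quoted above are the ones that come with the non - negativity guarantee on the density .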
to obtain the solution at time step we need to compute the direction vector defined by ( [ eq : directionvector ] ) .it means that we have to solve the eikonal equation ( [ eq : eikonal ] ) and compute the gradient of its solution .equation ( [ eq : eikonal ] ) is a special case of the static hamilton - jacobi equation , for which many numerical methods have been developed such as level - set methods , fast marching and fast sweeping methods , semi - lagrangian scheme , finite volume or finite element schemes .we implement the bornemann and rasch algorithm belonging to the last of the above approaches thus it is easier to implement on unstructured triangular meshes with respect to other methods .it is a linear , finite element discretization based on the solution to a simplified , localized dirichlet problem solved by the variational principle .having found the potential we calculate its gradient using the nodal galerkin gradient method .it is related to cell and is computed by averaging the gradients of all triangles having node as a vertex . in two dimensionsit has the form where are triangles with the considered node as a vertex , counts for vertices of and is a basis function associated with vertex .we perform simulations on a two - dimensional domain with boundary , see fig [ fig : domain ] .we set the outflow boundary far from the exit of the room through which pedestrians go out so that the outflow rate does not influence the flow through the door .we assume pedestrians can not pass through walls , but can move along them : we impose free - slip boundary conditions in order to implement in the case of the second order model we compute the fluxes through boundary facets using an interior state and a corresponding ghost state .in particular , we choose at wall boundary and , for the outflow .[ rem : wall ] for the numerical flux function we use the hll approximate riemann solver .however , our numerical simulations show that the condition ( [ eq : slipcondition ] ) is not satisfied at the wall boundary .sub iterations would be needed at each time step to converge to the correct solution.to reduce the computational cost , after computing in we set to zero at wall boundary nodes the component of the velocity normal to the boundary . adding the source term preserves the slip - wall boundary condition .( mass conservation ) it is essential that there is no loss of the mass through the wall boundary during numerical simulation . the hll solver with the ghost state defined by ( [ eq : ghostcell ] ) satisfies this condition when ( [ eq : slipcondition ] ) holds .in fact , let us consider four possible combinations of minimum and maximum wave speeds ( [ eq : wavespeeds ] ) .using we get at the wall boundary , where the last equality is due to remark [ rem : wall ] .then we always have , where .therefore the flux is always in the center region of the hll solver , that is and its first component is zero if is defined by ( [ eq : ghostcell ] ) . 
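as a simpler stand - in for the finite element eikonal solver described above , the sketch below solves the eikonal equation with a first - order fast - sweeping iteration on a cartesian grid and then extracts the desired direction field from the gradient of the potential . a constant running cost is assumed ( the density - dependent cost would simply replace it cell by cell ) , and the room , exit and grid spacing are toy choices rather than the configuration used in the simulations .

```python
import numpy as np

def fast_sweep_eikonal(exit_mask, wall_mask, h=1.0, cost=1.0, n_sweeps=8):
    """first-order fast-sweeping solver for |grad(phi)| = cost on a cartesian grid.

    structured-grid stand-in for the finite element solver of the paper: exits get
    phi = 0, walls are excluded, and the running cost is taken constant here.
    """
    big = 1e12
    phi = np.full(exit_mask.shape, big)
    phi[exit_mask] = 0.0
    ny, nx = phi.shape
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    if exit_mask[i, j] or wall_mask[i, j]:
                        continue
                    a = min(phi[i - 1, j] if i > 0 else big, phi[i + 1, j] if i < ny - 1 else big)
                    b = min(phi[i, j - 1] if j > 0 else big, phi[i, j + 1] if j < nx - 1 else big)
                    if abs(a - b) >= cost * h:          # godunov upwind update
                        cand = min(a, b) + cost * h
                    else:
                        cand = 0.5 * (a + b + np.sqrt(2.0 * (cost * h) ** 2 - (a - b) ** 2))
                    phi[i, j] = min(phi[i, j], cand)
    return phi

def desired_direction(phi, h=1.0):
    """unit vector field -grad(phi)/|grad(phi)| from central differences."""
    gy, gx = np.gradient(phi, h)
    norm = np.hypot(gx, gy)
    norm[norm == 0.0] = 1.0
    return -gx / norm, -gy / norm

# toy room: 20 x 30 cells, exit in the middle of the right wall
walls = np.zeros((20, 30), dtype=bool)
exits = np.zeros_like(walls)
exits[9:11, -1] = True
phi = fast_sweep_eikonal(exits, walls)
dir_x, dir_y = desired_direction(phi)
print(round(phi[10, 0], 2), round(dir_x[10, 0], 2), round(dir_y[10, 0], 2))   # far cell points towards the exit
```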
[ cols="^ " , ]in this study , two pedestrian flow models have been analyzed in the context of macroscopic modelling of evacuation of pedestrians from a room through a narrow exit .error analysis has been carried out for numerical validation of a finite - volume scheme on unstructured grid.various test - cases have been considered to quantify the influence of the model parameters on the behaviour of solutions and to measure the ability of the models to reproduce some of the phenomena occurring in the evacuation of high density crowds .more precisely , numerical experiments show that the classical hughes type model can not reproduce stop - and - go waves or clogging at bottlenecks . on the other hand ,it was verified numerically that the second order model captures better the structure of interactions between pedestrians and is able to produce the above behaviours . however , even this model is still far from being validated and should be verified and calibrated with realistic experiments .in fact , we have pointed out that values of some of its parameters have a significant effect on the formation of the above phenomena so their tuning is essential .for some particular choices of the parameters the evacuation through a narrow exit was analyzed and an example of the inverse braess paradox was given .it was shown that using a particular configuration of obstacles it is possible to reduce the clogging at the exit and increase the outflow .analysis of more realistic settings are to be considered at the next step .this research was supported by the european research council under the european union s seventh framework program ( fp/2007 - 2013 ) / erc grant agreement n. 257661 .n. bellomo and a. bellouquid . on the modelling of vehicular traffic and crowds by kinetic theory of active particles . in _ mathematical modeling of collective behavior in socio - economic and life sciences _ , number 2 in modeling and simulation in science , engineering and technology , pages 215221 .birkhuser boston , 2010 .a. seyfried , m. boltes , j. khler , w. klingsch , a. portz , t. rupprecht , a. schadschneider , b. steffen , and a. winkens .enhanced empirical data for the fundamental diagram and the flow through bottlenecks . in _ pedestrian and evacuation dynamics _ ,pages 145 156 .berlin / heidelberg , springer , 2010 .
we analyze numerically two macroscopic models of crowd dynamics : the classical hughes model and the second order model , an extension to pedestrian motion of the payne - whitham vehicular traffic model . the desired direction of motion is determined by solving an eikonal equation with a density - dependent running cost , which results in minimization of the travel time and avoidance of congested areas . we apply a mixed finite volume - finite element method to solve the problems and present an error analysis for the eikonal solver , the gradient computation and the second order model , yielding first order convergence . we show that hughes model is incapable of reproducing complex crowd dynamics such as stop - and - go waves and clogging at bottlenecks . finally , using the second order model , we study numerically the evacuation of pedestrians from a room through a narrow exit . istituto per le applicazioni del calcolo `` mauro picone '' , consiglio nazionale delle ricerche , via dei taurini 19 , i-00185 roma , italy . inria sophia antipolis - méditerranée , opale project - team , 2004 route des lucioles , bp 93 , 06902 sophia antipolis cedex , france .
traditionally , market risk is proxied by the distribution of asset log - returns on different scales .such distribution for most assets in various classes is well - known to have a power law tail \sim x^{-\mu}$ ] with the `` tail exponent '' in the range from 3 to 5 both for daily and intraday returns .though the second and possibly third order moments of the distribution exist , the traditional volatility measures based on variance of returns are not sufficient for quantifying the risk associated with extreme events .much better metrics to capture systematic events are the so - called _ drawdowns _ ( and their complements , the _ drawups _ ) , which are traditionally defined as a persistent decrease ( respectively increase ) in the price over consecutive -time intervals . in other words ,a drawdown is the cumulative loss from the last maximum to the next minimum of the price , and a drawup is the the price change between a local minimum and the following maximum . by definition , drawups and drawdownsalternate : a drawdown follows a drawup and vice versa .in contrast to simple returns , drawdowns are much more flexible measures of risk as they also capture the transient time - dependence of consecutive price changes .drawdowns quantify the worst - case scenario of an investor buying at the local high and selling at the next minimum ( similarly drawups quantifies the upside potential of buying at the lowest price and selling at the highest one ) .the duration of drawdowns is not fixed as well : some drawdowns can end in one drop of duration , when others may last for tens to hundreds s .the distribution of drawdowns contains information that is quite different from the distribution of returns over a fixed time scale .in particular , a drawdown reflects a transient `` memory '' of the market by accounting for possible inter - dependence during series of losses . during crashes , positive feedback mechanisms are activated so that previous losses lead to further selling , strengthening the downward trend , as for instance as a result of the implementation of so - called portfolio insurance strategies .the resulting drawdowns will capture these transient positive feedbacks , much more than the returns or even the two - point correlation functions between returns or between volatilities .in contrast to autocorrelation measures , which quantify the average ( or global ) serial linear dependence between returns over a generally large chosen time period , drawdowns are local measures , i.e. they account for rather instantaneous dependences between returns that are specific to a given event .statistically , drawdowns are related to the notion of `` runs '' that is often used in econometrics .this paper presents an analysis of the statistical properties of intraday drawdowns and drawups .our tests are performed on the most liquid futures contracts of the world .our results are thus of general relevance and are offered as novel `` stylized facts '' of the price dynamics .we discuss and quantify the distribution of intra - day extreme events , and compare distributional characteristics of drawdowns with those of individual returns . 
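as a reference point for the detection method detailed later , the sketch below extracts plain drawdowns and drawups , i.e. alternating runs of same - sign moves in the log - price , from a toy price path . this is the zero - tolerance limit ; the procedure used in the paper adds a tolerance that makes the decomposition robust to small counter - moves , and that refinement is deliberately omitted here .

```python
import numpy as np

def runs_of_losses_and_gains(log_price):
    """split a price path into alternating drawups and drawdowns (zero-tolerance case).

    a drawdown is the cumulative loss from a local maximum to the following local
    minimum, i.e. a maximal run of non-positive returns; a drawup is the symmetric
    run of non-negative returns.  assumes the first return is non-zero.
    """
    returns = np.diff(log_price)
    events = []                      # list of (kind, size, duration) tuples
    start = 0
    sign = np.sign(returns[0])
    for t in range(1, len(returns)):
        if np.sign(returns[t]) != sign and returns[t] != 0.0:
            size = log_price[t] - log_price[start]        # cumulative move of the run
            events.append(("drawup" if sign > 0 else "drawdown", size, t - start))
            start, sign = t, np.sign(returns[t])
    size = log_price[len(returns)] - log_price[start]
    events.append(("drawup" if sign > 0 else "drawdown", size, len(returns) - start))
    return events

# toy path: two up moves, three down moves, one up move
log_p = np.log(np.array([100.0, 101.0, 102.5, 101.8, 100.9, 100.2, 100.9]))
for kind, size, duration in runs_of_losses_and_gains(log_p):
    print(kind, round(size, 4), duration)
```

with this decomposition in hand , the distributional comparison between drawdowns and individual returns announced above can be carried out .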
in so doing, we discover that the generally accepted description of the tail of the distribution of returns by a power law distribution is incorrect : we find highly statistically significant upward deviations from the power law by the most extreme events .these deviations are associated with well - known events , such as the `` flash - crash '' of may 6 , 2010 .statistical tests designed to detect such deviations confirm their high significance , implying that these events belong to a special class of so - called_ `` dragon - kings '' _ : these events are generated with different amplifying mechanisms than the rest of the population .we show that some of these events can be attributed to an internal mutual - excitation between market participants , while others are pure response to external news . as for extreme drawdowns , there are in principle two end - member generating mechanisms for them : ( i ) one return in the run of losses is an extreme loss and , by itself alone , makes the drawdown extreme ; ( ii ) rare transient dependences between negative returns make some runs especially large .we document that most of the extreme drawdowns are generated by the second mechanism , that is , by emerging spontaneous correlation patterns , rather than by the domination of one or a few extreme individual returns .the paper is organized as follows .section [ sec : data ] discusses the high - frequency data and cleaning procedures .section [ sec : dd ] presents the detection method of the so - called _-drawdowns _ that we use as a proxy of transient directional price movements .section [ sec : descript_stat ] provides descriptive statistics of the detected events .section [ sec : distrib_dd ] focuses on the properties of the distributions of drawdowns and quantify their tails as belonging essentially to a power law regime . in section [ sec : dragonkings ] , we present a generalised dragon - king test ( dk - test ) , derived and improved from , which allows us to quantify the statistical significance of the deviations of extreme drawdowns from the power law distribution calibrated on the rest of the distribution . section [ sec : distrib_aggragated ] describes the aggregated distributions over all tickers and validates that our findings hold both at individual and global levels . for this , we use the generalised dk - test as well as the parametric u - test also introduced by and study their respective complementary merits .section [ sec : dependence ] examines the interdependence of the speed and durations of extreme drawdowns with respect to their size . section [ sec : conclusion ] concludes .we use tick data for the most actively traded futures contracts on the world indices ( see table [ tb : contracts ] ) from january 1 , 2005 to december 30 , 2011 . for futures on the omx stockholm 30 index ( omxs ) , our dataset starts from september 1 , 2005 ; and for futures on hong kong indexes ( hsi and hcei ) , our datasets are limited to the period before april 1 , 2011 . for futures on the bovespa index ( bovespa ) , we restrict our analysis to the period after january 1 , 2009 , ignoring the relatively inactive trading in 20052008 .many of the contracts presented in table [ tb : contracts ] are traded almost continuously ( e.g. 
e - mini s&p 500 futures contracts are traded every business day from monday to friday with only two trading halts : from 16:15 to 17:00 cdt and from 15:15 to 15:30 cdt ) .though it is being progressively changing , most of the daily volume is traded within so - called regular trading hours ( rth , in case of e - mini contracts : 8:3015:15 cdt ) . for asian exchanges , the activity within regular trading hours is also non - uniform . for the analysis ,we have limited ourselves only to the part of rth where the trading is the most active ( in terms of volume ) , which we refer to as an active trading hours ( ath ) in table [ tb : contracts ] ..description of the futures contracts used for analysis . [ cols="<,<,<,<",options="header " , ] these results validate our previous findings .first , fits of the distribution of drawdowns / drawups are much more robust than for individual log - returns .second , the reported exponents of the power law tails for the aggregated distributions lie in the same range of values that we reported for individual contracts ( see figure [ fig : boxplot ] ) .third , the estimated exponent for drawdowns is significantly larger than the exponent for drawups ( the difference is larger than 6 standard errors ) .figure [ fig : ccdf_aggregated ] and table [ tb : fit_aggregated ] show that the power law approximation of the normalized speeds of drawdowns and drawups is almost perfect and holds for more than 5 orders of magnitude in the vertical axis .in contrast , the fits of the tails of the distributions of durations are relatively poor .in particular , the hypothesis that the distribution of drawup durations is a power law can be rejected in favor of the stretched exponential distribution family using the nested wilks test . an important observation from figure [ fig : ccdf_aggregated ]is that the extreme events of individual distributions ( figure [ fig : ccdf ] ) also branch off the aggregated distribution .one can clearly see that up to ten of the largest drawdowns and drawups deviate substantially from the power law fit of the tail .interestingly , the original and the modified ( section [ sec : dragonkings ] ) dk - tests give contradictory and confusing results . however , `` the absence of evidence is not the evidence of absence '' , which summarises the fallacy of the argumentum ad ignorantiam .in other words , we argue that the failure to diagnostic the largest drawdowns as outliers reflects the lack of power of these tests .as discussed above , the 1520 extremes events in the tail contribute substantially to the numerator in expression and lead to a spurious identification of up to 400 outliers in the tail ( the hypothesis is rejected for values of ranks up to ) . on the another hand , the modified dk - test sins at the other extreme by being too conservative and fails to reject for any , because it requires the simultaneous rejection of for and the acceptance of for .this typically does not occur when outliers are not a few far - standing events , but are organised with a continuous and smooth deviation of the tail as in figure [ fig : ccdf_aggregated ] . 
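the tail fits referred to above can be reproduced , in simplified form , with the maximum likelihood ( hill - type ) estimator of the exponent above a lower threshold , together with a kolmogorov - smirnov distance as a goodness - of - fit score . the threshold - selection loop , the normalisation of the drawdown sizes and the standard errors quoted above are omitted ; the pareto toy sample below is only a sanity check of the estimator .

```python
import numpy as np

def powerlaw_tail_mle(x, x_min):
    """hill/mle estimate of the tail exponent mu for P[X > x] ~ (x / x_min)**(-mu),
    restricted to the observations above the threshold x_min."""
    tail = np.asarray(x)[np.asarray(x) >= x_min]
    mu = len(tail) / np.sum(np.log(tail / x_min))
    return mu, len(tail)

def ks_distance(x, x_min, mu):
    """kolmogorov-smirnov distance between the empirical tail cdf and the fitted
    power law, used here only as a simple goodness-of-fit score."""
    tail = np.sort(np.asarray(x)[np.asarray(x) >= x_min])
    n = len(tail)
    empirical = np.arange(1, n + 1) / n
    model = 1.0 - (tail / x_min) ** (-mu)
    return np.max(np.abs(empirical - model))

# toy data: exact pareto sample with exponent 3, as a sanity check of the estimator
rng = np.random.default_rng(1)
sample = (1.0 - rng.random(50_000)) ** (-1.0 / 3.0)   # inverse-cdf sampling with x_min = 1
mu_hat, n_tail = powerlaw_tail_mle(sample, x_min=2.0)
print(round(mu_hat, 2), n_tail, round(ks_distance(sample, 2.0, mu_hat), 3))
```

the limitations of the dk - test discussed above remain regardless of how the tail exponent itself is estimated .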
to address this issue and test for the presence of a change of regime in the tail of the distribution, we employ the parametric u - test , which tests deviation of the tail with respect to the fitted power law distribution rather than with respect to the rest of sample as in the dk - test .we present a slightly modified description of the u - test from that found in and provide a closed - form solution of the maximum likelihood estimation of the power law exponent .we select the lower threshold for the calibration of the power law in the distribution tail and apply the same nonlinear transformation as for the dk - test .then , by visual inspection , we determine a candidate for the rank such that observations smaller than ( i.e. , of rank larger than ) are distributed according to the exponential distribution and the total number of outliers is not larger than . the exponent of the exponential distribution can be estimated with the maximum likelihood method applied to the subsample , where the likelihood with right - censored observations is given by ^r\prod_{k = r+1}^nf(y_k),\ ] ] where is the cdf of the exponential distribution and is the corresponding pdf .the exponent can then be estimated by maximizing . in the case of an exponential distribution , this yields the closed form expression ^{-1}\ ] ] the p - values that the smallest ranks deviate from the null hypothesis of the exponential distribution can be then obtained from the following equation ( see derivations in ) : where is the normalized incomplete beta - function , and the exponent for the probability distribution is taken to be equal to the mle ( [ eq : alpha_exp ] ) . the event for which can be then diagnosed as an outlier with respect to the fitted exponential distribution of the tail .this corresponds to the original event being a `` dragon - king '' with respect to the fitted power law distribution . table [ tb : dk_aggregated ] presents results of the application of the u - test to the aggregated distribution of normalized returns of drawdowns , where the threshold is selected by using the kolmogorov - smirnov test ( table [ tb : fit_aggregated ] ) .the first important observation is that all `` dragon - king '' events that were detected for individual contracts ( table [ tb : dk_dd ] ) are also quantified as belonging to a different regime than the power law for the aggregated distribution .this supports the findings of section [ sec : dragonkings ]. however , not all extremes ( events with rank 1 ) of the individual contracts are present n the upper tail of the aggregated distribution . for drawdowns ,the extreme tail of the aggregated distribution contains mostly events from european and us markets .for example , the extreme drawdowns for omxs , hcei , tamsci , nikkei , topix , asx and bovespa are not qualified as an outliers ( neither they were reported as `` dragon - kings '' at the individual level ) . for drawups ,the number of detected outliers is much smaller than for drawdowns . on the contrary ,several events classified as outliers at the aggregate level were not reported as individual `` dragon - kings '' using the ( conservative ) modified dk - test .for example , we were unable to reject the null hypothesis for the two largest events occurring in the smi futures contracts ( figure [ fig : ccdf ] ) .for , we report the following p - values : , and ; for , we obtain p - values : , , and . 
in both cases ,one inequality of the system does not hold .this results from the fact that the second event is not sufficiently larger than , which leads to the absence of rejection of the null ( no dragon - kings ) for ( ) . andthe third event only slightly deviates from the tail ( for ) , which leads to the absence of rejection of the null for larger values .finally , all events detected as outliers of the aggregate distribution ( table [ tb : dk_aggregated ] ) can be detected in their corresponding individual distributions using the u - test .however , being dependent on the calibration of the exponent of the power law , the u - test is subjected to estimation errors that need to be accounted properly .the nonparametric dk - test is free from this drawback , at the cost of having more limited power . in general , as with any statistical testing problem , it is always a good practice to consider several different tests to confirm the conclusions .are extreme drawdowns ( drawups ) associated with the largest speed and/or the longer durations ?clarifying the interdependence between size , speed and duration of extreme drawdown ( drawups ) is important to better understand their generating mechanism . in previous sections ,we have already commented that events that are extreme with respect to one characteristic may not be extreme with respect to another ( see tables [ tb : dk_dd ] and [ tb : dk_dd_dur_speed ] ) .the largest ( with respect to ) drawdowns and drawups are often not the fastest , and by far not the longest events in the population .the occurrence of an extreme normalized speed does not ensure that the event will have an extreme size .moreover , the longest drawdowns and drawups typically have relatively small returns .our goal here is to quantify the mutual interdependence between size , speed and duration .generally , the complete information about the dependence between two random variables and is contained in their copula structure . here , we consider a simpler metric , the tail dependence , which is defined at the probability of observing a very large value of one variable conditional on the occurrence of a very large value of the other variable : ,\ ] ] where and are the marginal cumulative distributional functions of and . in practice , it is difficult to work with the asymptotic tail dependence , which is defined in the empirically unattainable limit .we will thus consider for fixed value of probability and document the behaviour of as approaches from below . for normalized returns and normalized speeds ( solid lines ) and for normalized returns and durations ( dashed lines ) of the aggregated drawdowns ( red ) and drawups ( green ) for different contracts and different probabilities .,scaledwidth=70.0% ] figure [ fig : td_aggregated ] shows the non - asymptotic tail dependence coefficients of ( i ) and and ( ii ) and for the aggregated probability distributions and ( marginal distributions are presented in figure [ fig : ccdf_aggregated ] ) . in other words ,figure [ fig : td_aggregated ] quantifies the probability that the observed drawdown is large , conditional on it being ( i ) fast or ( ii ) long .one can observe that of the normalized returns conditional on the durations decreases monotonously with and converges to zero as .this indicates an absence of dependence of the extreme values of size and durations . in other words , the longest drawdowns and drawupsdo not belong to the highest quantiles in term of sizes . 
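the quantity plotted in figure [ fig : td_aggregated ] can be estimated empirically as sketched below : the non - asymptotic tail dependence is the fraction of observations of one variable exceeding its empirical quantile among those for which the other variable exceeds its own quantile at the same probability level . the toy sample , with a common heavy - tailed factor shared by sizes and speeds but not by durations , merely illustrates the qualitative behaviour described above and is not the paper 's estimator .

```python
import numpy as np

def tail_dependence(x, y, p):
    """empirical non-asymptotic tail dependence
    lambda(p) = P[X > q_X(p) | Y > q_Y(p)], with q the empirical p-quantile."""
    x, y = np.asarray(x), np.asarray(y)
    qx, qy = np.quantile(x, p), np.quantile(y, p)
    above_y = y > qy
    if above_y.sum() == 0:
        return np.nan
    return np.mean(x[above_y] > qx)

# toy example: sizes and speeds share a common heavy-tailed factor, durations do not
rng = np.random.default_rng(2)
common = rng.pareto(3.0, 100_000)
size = common + 0.3 * rng.pareto(3.0, 100_000)
speed = common + 0.3 * rng.pareto(3.0, 100_000)
duration = rng.pareto(3.0, 100_000)
for p in (0.95, 0.99, 0.999):
    print(p, round(tail_dependence(size, speed, p), 3), round(tail_dependence(size, duration, p), 3))
```

this absence of size - duration dependence contrasts with the size - speed pair , as discussed next .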
on the contrary ,the tail dependence between returns and speed is significant and tends to increase for , except very close to ( ) due to the finite size of the data sample .the estimated value of the tail dependence between returns and speed is approximatively in the range for drawdowns and for drawups , i.e. , conditional on a very large speed , there is about a 10% probability that the corresponding drawdown ( drawup ) is extreme in normalised return .figure [ fig : tail_dependence ] , which presents the tail dependence coefficients at three probability levels and for individual contracts , supports our previous findings at the aggregate level . with the exception of the omxs , hcei and asx contracts that are characterised by monotonously decaying ,all other analyzed future contracts exhibit clear signatures of non - zero tail dependence with varying in the range ( for nikkei , topix ) and ( for cac , dax , aex , stoxx , dj , nifty ) . between normalized returns and normalized speeds of drawdowns ( red bars ) and drawups ( green bars , inverted x - axis ) for different contracts and different probability levels .the last row corresponds to the aggregated distributions . ]we have investigated the distributions of -drawdowns and -drawups of the most liquid futures financial contracts of the world at time scales of seconds .the -drawdowns and -drawups defined by expressions ( [ eq : delta ] ) with ( [ eq : cond ] ) are proposed as robust measures of the risks to which investors are arguably the most concerned with .the time scale of seconds for the time steps used to defined the drawdown and drawups is chosen as a compromise between robustness with respect to microstructure effects and reactivity to regime changes in the time dynamics . similarly to the distribution of returns , we find that the distributions of -drawdowns and -drawups exhibit power law tails , albeit with exponents significantly larger than those for the return distribution . 
this paradoxical result can be attributed to ( i ) the existence of significant transient dependence between returns and ( ii ) the presence of large outliers ( termed dragon - kings ) characterizing the extreme tail of the drawdown / drawup distributions deviating from the power law .we present the generalised non - parametric dk - test together with a novel implementation of the parametric u - test for the diagnostic of the dragon - kings .studying both the distributions of -drawdowns and -drawups of individual future contracts and of their aggregation confirm the robustness and generality of our results .the study of the tail dependence between drawdown / drawup sizes , speeds and durations indicates a clear relationship between size and speed but none between size and duration .this implies that the most extreme drawdown / drawup tend to occur fast and are dominated by a few very large returns .these insights generalise and extend previous studies on outliers of drawdown / drawup performed at the daily scale .we would like to thank professor frdric abergel and the chair of quantitative finance of lcole centrale de paris ( http://fiquant.mas.ecp.fr/liquidity-watch/ ) for the access to the high - frequency data used in the present analysis .we are very grateful to professor yannick malevergne for many fruitful discussions while preparing this manuscript .the statistical analysis of the data was performed using open source software : python 2.7 ( http://www.python.org ) and libraries : pandas , numpy ( http://www.numpy.org/ ) , scipy ( http://www.scipy.org/ ) , ipython and matplotlib .filimonov , v. , sornette , d. , aug .apparent criticality and calibration issues in the hawkes self - excited point process model : application to high - frequency financial data .swiss finance institute research paper no .13 - 60 .gopikrishnan , p. , meyer , m. , amaral , l. a. n. , stanley , h. e. , jul .inverse cubic law for the distribution of stock price variations .the european physical journal b - condensed matter and complex systems 3 ( 2 ) , 139140 . masteika , s. , rutkauskas , a. v. , alexander , j. a. , 2012 .continuous futures data series for back testing and technical analysis . in : 2012 international conference on economics , business and marketing management .. 265269 .
we investigate the distributions of -drawdowns and -drawups of the most liquid futures financial contracts of the world at time scales of seconds . the -drawdowns ( resp . -drawups ) generalise the notion of runs of negative ( resp . positive ) returns so as to capture the risks that investors are arguably the most concerned with . similarly to the distribution of returns , we find that the distributions of -drawdowns and -drawups exhibit power law tails , albeit with exponents significantly larger than those for the return distributions . this paradoxical result can be attributed to ( i ) the existence of significant transient dependence between returns and ( ii ) the presence of large outliers ( dragon - kings ) characterizing the extreme tail of the drawdown / drawup distributions and deviating from the power law . the study of the tail dependence between the sizes , speeds and durations of drawdowns / drawups indicates a clear relationship between size and speed but none between size and duration . this implies that the most extreme drawdowns / drawups tend to occur fast and are dominated by a few very large returns . we discuss both the endogenous and exogenous origins of these extreme events . extreme events , drawdowns , power law distribution , tail dependence , `` dragon - king '' events , financial markets , high - frequency data
machine learning and neuroscience speak different languages today . brain science has discovered a dazzling array of brain areas , cell types , molecules , cellular states , and mechanisms for computation and information storage .machine learning , in contrast , has largely focused on instantiations of a single principle : function optimization .it has found that simple optimization objectives , like minimizing classification error , can lead to the formation of rich internal representations and powerful algorithmic capabilities in multilayer and recurrent networks .here we seek to connect these perspectives .the artificial neural networks now prominent in machine learning were , of course , originally inspired by neuroscience .while neuroscience has continued to play a role , many of the major developments were guided by insights into the mathematics of efficient optimization , rather than neuroscientific findings .the field has advanced from simple linear systems , to nonlinear networks , to deep and recurrent networks .backpropagation of error enabled neural networks to be trained efficiently , by providing an efficient means to compute the gradient with respect to the weights of a multi - layer network .methods of training have improved to include momentum terms , better weight initializations , conjugate gradients and so forth , evolving to the current breed of networks optimized using batch - wise stochastic gradient descent .these developments have little obvious connection to neuroscience .we will argue here , however , that neuroscience and machine learning are , once again , ripe for convergence .three aspects of machine learning are particularly important in the context of this paper .first , machine learning has focused on the optimization of cost functions ( * figure [ fig1]a * ) .second , recent work in machine learning has started to introduce complex cost functions , those that are not uniform across layers and time , and those that arise from interactions between different parts of a network .for example , introducing the objective of temporal coherence for lower layers ( non - uniform cost function over space ) improves feature learning , cost function schedules ( non - uniform cost function over time ) improve generalization and adversarial networks an example of a cost function arising from internal interactions allow gradient - based training of generative models .networks that are easier to train are being used to provide `` hints '' to help bootstrap the training of more powerful networks .third , machine learning has also begun to diversify the architectures that are subject to optimization .it has introduced simple memory cells with multiple persistent states , more complex elementary units such as `` capsules '' and other structures , content addressable and location addressable memories , as well as pointers and hard - coded arithmetic operations .these three ideas have , so far , not received much attention in neuroscience .we thus formulate these ideas as three hypotheses about the brain , examine evidence for them , and sketch how experiments could test them . butfirst , let us state the hypotheses more precisely .the central hypothesis for linking the two fields is that biological systems , like many machine - learning systems , are able to optimize cost functions .the idea of cost functions means that neurons in a brain area can somehow change their properties , e.g. 
, the properties of their synapses , so that they get better at doing whatever the cost function defines as their role .human behavior sometimes approaches optimality in a domain , e.g. , during movement , which suggests that the brain may have learned optimal strategies .subjects minimize energy consumption of their movement system , and minimize risk and damage to their body , while maximizing financial and movement gains .computationally , we now know that optimization of trajectories gives rise to elegant solutions for very complex motor tasks .we suggest that cost function optimization occurs much more generally in shaping the internal representations and processes used by the brain .we also suggest that this requires the brain to have mechanisms for efficient credit assignment in multilayer and recurrent networks .a second realization is that cost functions need not be global. neurons in different brain areas may optimize different things , e.g. , the mean squared error of movements , surprise in a visual stimulus , or the allocation of attention .importantly , such a cost function could be locally generated .for example , neurons could locally evaluate the quality of their statistical model of their inputs ( * figure [ fig1]b * ) .alternatively , cost functions for one area could be generated by another area .moreover , cost functions may change over time , e.g. , guiding young humans to understanding simple visual contrasts early on , and faces a bit later .this could allow the developing brain to bootstrap more complex knowledge based on simpler knowledge .cost functions in the brain are likely to be complex and to be arranged to vary across areas and over development .a third realization is that structure matters .the patterns of information flow seem fundamentally different across brain areas , suggesting that they solve distinct computational problems .some brain areas are highly recurrent , perhaps making them predestined for short - term memory storage .some areas contain cell types that can switch between qualitatively different states of activation , such as a persistent firing mode versus a transient firing mode , in response to particular neurotransmitters .other areas , like the thalamus appear to have the information from other areas flowing through them , perhaps allowing them to determine information routing .areas like the basal ganglia are involved in reinforcement learning and gating of discrete decisions . as every programmer knows ,specialized algorithms matter for efficient solutions to computational problems , and the brain is likely to make good use of such specialization ( * figure [ fig1]c * ) .these ideas are inspired by recent advances in machine learning , but we also propose that the brain has major differences from any of today s machine learning techniques .in particular , the world gives us a relatively limited amount of information that we could use for supervised learning .there is a huge amount of information available for unsupervised learning , but there is no reason to assume that a _ generic _ unsupervised algorithm , no matter how powerful , would learn the precise things that humans need to know , in the order that they need to know it . 
the evolutionary challenge of making unsupervised learning solve the `` right '' problems is , therefore , to find a sequence of cost functions that will deterministically build circuits and behaviors according to prescribed developmental stages , so that in the end a relatively small amount of information suffices to produce the right behavior .for example , a developing duck imprints a template of its parent , and then uses that template to generate goal - targets that help it develop other skills like foraging .generalizing from this and from other studies , we propose that many of the brain s cost functions arise from such an internal bootstrapping process .indeed , we propose that biological development and reinforcement learning can , in effect , program the emergence of a sequence of cost functions that precisely anticipates the future needs faced by the brain s internal subsystems , as well as by the organism as a whole .this type of developmentally programmed bootstrapping generates an internal infrastructure of cost functions which is diverse and complex , while simplifying the learning problems faced by the brain s internal processes . beyond simple tasks like familial imprinting, this type of bootstrapping could extend to higher cognition , e.g. , internally generated cost functions could train a developing brain to properly access its memory or to organize its actions in ways that will prove to be useful later on .the potential bootstrapping mechanisms that we will consider operate in the context of unsupervised and reinforcement learning , and go well beyond the types of curriculum learning ideas used in today s machine learning . in the rest of this paper, we will elaborate on these hypotheses .first , we will argue that both local and multi - layer optimization is , perhaps surprisingly , compatible with what we know about the brain .second , we will argue that cost functions differ across brain areas and change over time and describe how cost functions interacting in an orchestrated way could allow bootstrapping of complex function .third , we will list a broad set of specialized problems that need to be solved by neural computation , and the brain areas that have structure that seems to be matched to a particular computational problem .we then discuss some implications of the above hypotheses for research approaches in neuroscience and machine learning , and sketch a set of experiments to test these hypotheses .finally , we discuss this architecture from the perspective of evolution . 2much of machine learning is based on efficiently optimizing functions , and , as we will detail below , the ability to use backpropagation of error to calculate gradients of arbitrary parametrized functions has been a key breakthrough . in * hypothesis 1 *, we claim that the brain is also , at least in part , an optimization machine .but what exactly does it mean to say that the brain can optimize cost functions ?after all , many processes can be viewed as optimizations .for example , the laws of physics are often viewed as minimizing an action functional , while evolution optimizes the fitness of replicators over a long timescale . 
to be clear ,our main claims are : that * a ) * the brain has powerful mechanisms for credit assignment during learning that allow it to optimize global functions in multi - layer networks by adjusting the properties of each neuron to contribute to the global outcome , and that * b ) * the brain has mechanisms to specify exactly which cost functions it subjects its networks to , i.e. , that the cost functions are highly tunable , shaped by evolution and matched to the animal s ethological needs .thus , the brain uses cost functions as a key driving force of its development , much as modern machine learning systems do . to understand the basis of these claims, we must now delve into the details of how the brain might efficiently perform credit assignment throughout large , multi - layered networks , in order to optimize complex functions .we argue that the brain uses several different types of optimization to solve distinct problems . in some structures, it may use genetic pre - specification of circuits for problems that require only limited learning based on data , or it may exploit local optimization to avoid the need to assign credit through many layers of neurons . it may also use a host of proposed circuit structures that would allow it to actually perform , in effect , backpropagation of errors through a multi - layer network , using biologically realistic mechanisms a feat that had once been widely believed to be biologically implausible .potential such mechanisms include circuits that literally backpropagate error derivatives in the manner of conventional backpropagation , as well as circuits that provide other efficient means of approximating the effects of backpropagation , i.e. , of rapidly computing the approximate gradient of a cost function relative to any given connection weight in the network .lastly , the brain may use algorithms that exploit specific aspects of neurophysiology such as spike timing dependent plasticity , dendritic computation , local excitatory - inhibitory networks , or other properties as well as the integrated nature of higher - level brain systems .such mechanisms promise to allow learning capabilities that go even beyond those of current backpropagation networks .not all learning requires a general - purpose optimization mechanism like gradient descent .many theories of cortex emphasize potential self - organizing and unsupervised learning properties that may obviate the need for multi - layer backpropagation as such .hebbian plasticity , which adjusts weights according to correlations in pre - synaptic and post - synaptic activity , is well established .various versions of hebbian plasticity can give rise to different forms of correlation and competition between neurons , leading to the self - organized formation of ocular dominance columns , self - organizing maps and orientation columns .often these types of local self - organization can also be viewed as optimizing a cost function : for example , certain forms of hebbian plasticity can be viewed as extracting the principal components of the input , which minimizes a reconstruction error . to generate complex temporal patterns , the brain may also implement other forms of learning that do not require any equivalent of full backpropagation through a multilayer network . 
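the statement above that certain forms of hebbian plasticity extract the principal components of their input can be made concrete with oja 's rule , a hebbian update with a built - in normalisation whose fixed point is the leading eigenvector of the input covariance . the toy input stream and learning rate below are arbitrary ; the point is only that a purely local correlational rule ends up minimising a reconstruction error without any backpropagated signal .

```python
import numpy as np

rng = np.random.default_rng(3)

# toy zero-mean input stream whose leading principal component lies along `axis`
axis = np.array([2.0, 1.0]) / np.sqrt(5.0)
data = rng.normal(scale=3.0, size=(20_000, 1)) * axis + rng.normal(scale=0.5, size=(20_000, 2))

# oja's rule: the hebbian term y*x strengthens weights in proportion to correlated
# pre/post activity, and the -y**2 * w term keeps the weight norm bounded; the
# fixed point is the leading eigenvector of the input covariance.
w = rng.normal(size=2)
eta = 1e-3                      # learning rate (toy value)
for x in data:
    y = w @ x                   # post-synaptic activity
    w += eta * y * (x - y * w)  # hebbian update with normalizing decay

w_unit = w / np.linalg.norm(w)
pc1 = np.linalg.eigh(np.cov(data.T))[1][:, -1]        # reference: covariance eigenvector
print(round(abs(w_unit @ axis), 3), round(abs(w_unit @ pc1), 3))   # both close to 1
```

returning to the temporal - pattern point above , other forms of learning likewise avoid full multilayer credit assignment .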
for example , `` liquid- '' or `` echo - state machines '' are randomly connected recurrent networks that form a basis set of random filters , which can be harnessed for learning with tunable readout weights . variants exhibiting chaotic , spontaneous dynamics can even be trained by feeding back readouts into the network and suppressing the chaotic activity . learning only the readout layer makes the optimization problem much simpler ( indeed , equivalent to regression for supervised learning ) . additionally , echo state networks can be trained by reinforcement learning as well as supervised learning . we argue that the above mechanisms of local self - organization are insufficient to account for the brain s powerful learning performance . to elaborate on the need for an efficient means of gradient computation in the brain , we will first place backpropagation into its computational context . then we will explain how the brain could plausibly implement approximations of gradient descent . the simplest mechanism to perform cost function optimization is sometimes known as the `` twiddle '' algorithm or , more technically , as `` serial perturbation '' . this mechanism works by perturbing ( i.e. , `` twiddling '' ) , with a small increment , a single weight in the network , and verifying improvement by measuring whether the cost function has decreased compared to the network s performance with the weight unperturbed . if improvement is noticeable , the perturbation is used as a direction of change to the weight ; otherwise , the weight is changed in the opposite direction ( or not changed at all ) . serial perturbation is therefore a method of `` coordinate descent '' on the cost , but it is slow and requires global coordination : each synapse in turn is perturbed while others remain fixed . weight perturbation ( or parallel perturbation ) perturbs all of the weights in the network at once . it is able to optimize small networks to perform tasks but generally suffers from high variance . that is , the measurement of the gradient direction is noisy and changes drastically from perturbation to perturbation because a weight s influence on the cost is masked by the changes of all other weights , and there is only one scalar feedback signal indicating the change in the cost . weight perturbation is dramatically inefficient for large networks . in fact , parallel and serial perturbation learn at approximately the same rate if the time measure counts the number of times the network propagates information from input to output . some efficiency gain can be achieved by perturbing neural activities instead of synaptic weights , acknowledging the fact that any long - range effect of a synapse is mediated through a neuron . like weight perturbation and unlike serial perturbation , minimal global coordination is needed : each neuron only needs to receive a feedback signal indicating the global cost . the variance of node perturbation s gradient estimate is far smaller than that of weight perturbation under the assumptions that either all neurons or all weights , respectively , are perturbed and that they are perturbed at the same frequency . in this case , node perturbation s variance is proportional to the number of cells in the network , not the number of synapses . all of these approaches are slow either due to the time needed for serial iteration over all weights or the time needed for averaging over low signal - to - noise ratio gradient estimates .
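The variance problem just described is easy to see numerically. The sketch below is an illustrative construction of ours (the quadratic cost, perturbation size and probe counts are arbitrary assumptions); it estimates a gradient from single scalar cost changes under parallel weight perturbation and compares it with the true gradient, showing that the alignment improves only slowly as more perturbations are averaged.

    import numpy as np

    rng = np.random.default_rng(1)

    # tiny least-squares problem: cost(w) = ||X w - y||^2
    n, d = 50, 30
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true

    def cost(w):
        r = X @ w - y
        return float(r @ r)

    w = rng.normal(size=d)
    true_grad = 2 * X.T @ (X @ w - y)
    eps = 1e-3

    def perturbation_estimate(n_probes):
        g = np.zeros(d)
        for _ in range(n_probes):
            delta = rng.choice([-1.0, 1.0], size=d)   # perturb every weight at once
            dc = cost(w + eps * delta) - cost(w)      # single scalar feedback signal
            g += (dc / eps) * delta                   # credit all weights with the same scalar
        return g / n_probes

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for k in [1, 10, 100, 1000]:
        print(k, "probes -> alignment with true gradient:", round(cosine(perturbation_estimate(k), true_grad), 3))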
to their credit however , none of these approaches requires more than knowledge of local activities and the single global cost signal .real neural circuits in the brain have mechanisms ( e.g. , diffusible neuromodulators ) that appear to code the signals relevant to implementing those algorithms . in many cases , for example in reinforcement learning , the cost function , which is computed based on interaction with an unknown environment , can not be differentiated directly , and an agent has no choice but to deploy clever twiddling to explore at some level of the system .backpropagation , in contrast , works by computing the sensitivity of the cost function to each weight based on the layered structure of the system . the derivatives of the cost function with respect to the last layer can be used to compute the derivatives of the cost function with respect to the penultimate layer , and so on , all the way down to the earliest layers .backpropagation can be computed rapidly , and for a single input - output pattern , it exhibits no variance in its gradient estimate .the backpropagated gradient has no more noise for a large system than for a small system , so deep and wide architectures with great computational power can be trained efficiently . to permit biological learning with efficiency approaching that of machine learning methods , some provision for more sophisticated gradient propagation may be suspected .contrary to what was once a common assumption , there are now many proposed `` biologically plausible '' mechanisms by which a neural circuit could implement optimization algorithms that , like backpropagation , can efficiently make use of the gradient .these include generalized recirculation , contrastive hebbian learning , random feedback weights together with synaptic homeostasis , spike timing dependent plasticity ( stdp ) with iterative inference and target propagation , complex neurons with backpropagating action - potentials , and others . 
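For contrast with the perturbation-based estimates above, the exact gradient computation that these proposed biological mechanisms aim to approximate takes only a few lines for a small network. The sketch below is a generic illustration (not tied to any specific proposal in the text; the architecture and random data are arbitrary assumptions); it backpropagates the error of a two-layer network and verifies one weight derivative numerically.

    import numpy as np

    rng = np.random.default_rng(2)

    # two-layer network: x -> h = tanh(W1 x) -> yhat = W2 h, squared-error cost
    x = rng.normal(size=5)
    t = rng.normal(size=2)                      # target
    W1 = rng.normal(size=(4, 5)) * 0.5
    W2 = rng.normal(size=(2, 4)) * 0.5

    # forward pass
    h = np.tanh(W1 @ x)
    yhat = W2 @ h
    cost = 0.5 * np.sum((yhat - t) ** 2)

    # backward pass: error derivatives flow from the output layer back to the first layer
    d_yhat = yhat - t                           # dC/d(yhat)
    dW2 = np.outer(d_yhat, h)                   # dC/dW2
    d_h = W2.T @ d_yhat                         # feedback through the transposed forward weights
    d_pre = d_h * (1 - h ** 2)                  # through the tanh nonlinearity
    dW1 = np.outer(d_pre, x)                    # dC/dW1

    # check one entry against a numerical derivative
    eps = 1e-6
    W1p = W1.copy()
    W1p[0, 0] += eps
    cost_p = 0.5 * np.sum((W2 @ np.tanh(W1p @ x) - t) ** 2)
    print(dW1[0, 0], (cost_p - cost) / eps)     # the two numbers should agree closely

Note that the backward pass reuses the forward weights (W2.T), which is exactly the requirement that several of the proposals above try to relax or implement with biologically available machinery.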
while these mechanisms differ in detail , they all invoke feedback connections that carry error phasically . learning occurs by comparing a prediction with a target , and the prediction error is used to drive top - down changes in bottom - up activity . as an example , consider oreilly s temporally extended contrastive attractor learning ( xcal ) algorithm . suppose we have a multilayer neural network with an input layer , an output layer , and a set of hidden layers in between . oreilly showed that the same functionality as backpropagation can be implemented by a bidirectional network with the same weights but symmetric connections . after computing the outputs using the forward connections only , we set the output neurons to the values they should have . the dynamics of the network then cause the hidden layers activities to evolve toward a stable attractor state linking input to output . the xcal algorithm performs a type of local modified hebbian learning at each synapse in the network during this process . the xcal hebbian learning rule compares the local synaptic activity ( pre x post ) during the early phase of this settling ( before the attractor state is reached ) to the final phase ( once the attractor state has been reached ) , and adjusts the weights in a way that should make the early phase reflect the later phase more closely . these contrastive hebbian learning methods even work when the connection weights are not precisely symmetric . xcal has been implemented in biologically plausible conductance - based neurons and basically implements the backpropagation of error approach . approximations to backpropagation could also be enabled by the millisecond - scale timing of neural activities . spike timing dependent plasticity ( stdp ) , for example , is a feature of some neurons in which the sign of the synaptic weight change depends on the precise millisecond - scale relative timing of pre - synaptic and post - synaptic spikes . this is conventionally interpreted as hebbian plasticity that measures the potential for a causal relationship between the pre - synaptic and post - synaptic spikes : a pre - synaptic spike could have contributed to causing a post - synaptic spike only if it occurs shortly beforehand . to enable a backpropagation mechanism , hinton has suggested an alternative interpretation : that neurons could encode the types of error derivatives needed for backpropagation in the temporal derivatives of their firing rates . stdp then corresponds to a learning rule that is sensitive to these error derivatives . in other words , in an appropriate network context , stdp learning could give rise to a biological implementation of backpropagation . another possible mechanism , by which biological neural networks could approximate backpropagation , is `` feedback alignment '' . there , the feedback pathway in backpropagation , by which error derivatives at a layer are computed from error derivatives at the subsequent layer , is replaced by a set of random feedback connections , with no dependence on the forward weights . subject to the existence of a synaptic normalization mechanism and approximate sign - concordance between the feedforward and feedback connections , this mechanism of computing error derivatives works nearly as well as backpropagation on a variety of tasks .
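A minimal version of the feedback alignment idea can be written by taking the backpropagation sketch above and replacing the transposed forward weights in the backward pass with a fixed random matrix. The toy task, network sizes and learning rate below are arbitrary assumptions of ours; the only point is that the loss typically still decreases even though the error is routed through random, never-updated feedback weights.

    import numpy as np

    rng = np.random.default_rng(3)

    # toy regression task
    X = rng.normal(size=(200, 10))
    W_target = rng.normal(size=(3, 10))
    Y = np.tanh(X @ W_target.T)                 # targets from an arbitrary nonlinear map

    # two-layer student network plus fixed random feedback weights
    W1 = rng.normal(size=(20, 10)) * 0.1
    W2 = rng.normal(size=(3, 20)) * 0.1
    B = rng.normal(size=(20, 3))                # random feedback matrix, never learned
    lr = 0.02

    for step in range(3000):
        h = np.tanh(X @ W1.T)                   # hidden layer
        yhat = h @ W2.T                         # linear output layer
        e = yhat - Y                            # output error
        # feedback alignment: errors are sent back through B instead of W2.T
        d_h = (e @ B.T) * (1 - h ** 2)
        W2 -= lr * e.T @ h / len(X)
        W1 -= lr * d_h.T @ X / len(X)
        if step % 1000 == 0:
            print(step, "mse:", round(float(np.mean(e ** 2)), 4))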
in effect , the forward weights are able to adapt to bring the network into a regime in which the random backwards weights actually carry the information that is useful for approximating the gradient . this is a remarkable and surprising finding , and is indicative of the fact that our understanding of gradient descent optimization , and specifically of the mechanisms by which backpropagation itself functions , is still incomplete . in neuroscience , meanwhile , we find feedback connections almost wherever we find feed - forward connections , and their role is the subject of diverse theories . it should be noted that feedback alignment as such does not specify exactly how neurons represent and make use of the error signals ; it only relaxes a constraint on the transport of the error signals . thus , feedback alignment is more a primitive that can be used in fully biological ( approximate ) implementations of backpropagation , than a fully biological implementation in its own right . as such , it may be possible to incorporate it into several of the other schemes discussed here . the above `` biological '' implementations of backpropagation still lack some key aspects of biological realism . for example , in the brain , neurons tend to be either excitatory or inhibitory but not both , whereas in artificial neural networks a single neuron may send both excitatory and inhibitory signals to its downstream neurons . fortunately , this constraint is unlikely to limit the functions that can be learned . other biological considerations , however , need to be looked at in more detail : the highly recurrent nature of biological neural networks , which show rich dynamics in time , and the fact that most neurons in mammalian brains communicate via spikes . we now consider these two issues in turn .

temporal credit assignment : the biological implementations of backpropagation proposed above , while applicable to feedforward networks , do not give a natural implementation of `` backpropagation through time '' ( bptt ) for recurrent networks , which is widely used in machine learning for training recurrent networks on sequential processing tasks . bptt `` unfolds '' a recurrent network across multiple discrete time steps and then runs backpropagation on the unfolded network to assign credit to particular units at particular time steps . while the network unfolding procedure of bptt itself does not seem biologically plausible to our intuition , it is unclear to what extent temporal credit assignment is truly needed for learning particular temporally extended tasks . if the system is given access to appropriate memory stores and representations of temporal context , this could potentially mitigate the need for temporal credit assignment as such ; in effect , memory systems could `` spatialize '' the problem of temporal credit assignment . for example , memory networks store everything by default up to a certain buffer size , eliminating the need to perform credit assignment over the write - to - memory events , such that the network only needs to perform credit assignment over the read - from - memory events . in another example , certain network architectures that are superficially very deep , but which possess particular types of `` skip connections '' , can actually be seen as ensembles of comparatively shallow networks ; applied in the time domain , this could limit the need to propagate errors far backwards in time .
other , similar specializations or higher levels of structure could , potentially , further ease the burden on credit assignment . can generic recurrent networks perform temporal credit assignment in a way that is more biologically plausible than bptt ? indeed , new discoveries are being made about the capacity for supervised learning in continuous - time recurrent networks with more realistic synapses and neural integration properties . in internal force learning , internally generated random fluctuations inside a chaotic recurrent network are adjusted to provide feedback signals that drive weight changes internal to the network while the outputs are clamped to desired patterns . this is made possible by a learning procedure that rapidly adjusts the network output to a state where it is close to the clamped values , and exerts continuous control to keep this difference small throughout the learning process . this procedure is able to control and exploit the chaotic dynamical patterns that are spontaneously generated by the network . werbos has proposed in his `` error critic '' that an online approximation to bptt can be achieved by learning to predict the backward - through - time gradient signal ( costate ) in a manner analogous to the prediction of value functions in reinforcement learning . broadly , we are only beginning to understand how neural activity can itself represent the time variable , and how recurrent networks can learn to generate trajectories of population activity over time . moreover , as we discuss below , a number of cortical models also propose means , other than bptt , by which networks could be trained on sequential prediction tasks , even in an online fashion . a broad range of ideas can be used to approximate bptt in more realistic ways .

spiking networks : it has been difficult to apply gradient descent learning directly to spiking neural networks . a number of optimization procedures have been used to generate , indirectly , spiking networks which can perform complex tasks , by performing optimization on a continuous representation of the network dynamics and embedding variables into high - dimensional spaces with many spiking neurons representing each variable . the use of recurrent connections with multiple timescales can remove the need for backpropagation in the direct training of spiking recurrent networks . fast connections maintain the network in a state where slow connections have local access to a global error signal . while the biological realism of these methods is still unknown , they all allow connection weights to be learned in spiking networks . these and other novel learning procedures illustrate the fact that we are only beginning to understand the connections between the temporal dynamics of biologically realistic networks , and mechanisms of temporal and spatial credit assignment . nevertheless , we argue here that existing evidence suggests that biologically plausible neural networks can solve these problems ; in other words , it is possible to efficiently optimize complex functions of temporal history in the context of spiking networks of biologically realistic neurons .
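As a concrete illustration of the reservoir-style strategies touched on here and in the earlier discussion of echo-state machines, the sketch below is a toy construction of ours (the task, reservoir size and spectral radius are arbitrary assumptions). It drives a fixed random recurrent network with an input stream and fits only a linear readout by ridge regression, so no credit assignment through the recurrent weights is needed at all.

    import numpy as np

    rng = np.random.default_rng(4)

    # target task: map a scalar input stream to a delayed, nonlinear function of it
    T = 2000
    u = rng.uniform(-1, 1, size=T)
    target = np.sin(np.pi * np.roll(u, 5))      # output depends on the input 5 steps ago

    # fixed random reservoir
    N = 200
    W = rng.normal(size=(N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius below 1 ("echo state" property)
    W_in = rng.uniform(-1, 1, size=N)

    x = np.zeros(N)
    states = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])        # reservoir update; no learning here
        states[t] = x

    # learn only the linear readout, by ridge regression
    washout = 100
    S, y = states[washout:], target[washout:]
    w_out = np.linalg.solve(S.T @ S + 1e-4 * np.eye(N), S.T @ y)
    pred = S @ w_out
    print("readout mse:", round(float(np.mean((pred - y) ** 2)), 4))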
in any case , there is little doubt that spiking recurrent networks using realistic population coding schemes can , with an appropriate choice of connection weights , compute complicated , cognitively relevant functions .the question is how the developing brain efficiently learns such complex functions .the brain has mechanisms and structures that could support learning mechanisms different from typical gradient - based optimization algorithms .the complex physiology of individual biological neurons may not only help explain how some form of efficient gradient descent could be implemented within the brain , but also could provide mechanisms for learning that go beyond backpropagation .this suggests that the brain may have discovered mechanisms of credit assignment quite different from those dreamt up by machine learning .one such biological primitive is dendritic computation , which could impact prospects for learning algorithms in several ways .first , real neurons are highly nonlinear , with the dendrites of each _ single _ neuron implementing something computationally similar to a three - layer neural network .second , when a neuron spikes , its action potential propagates back from the soma into the dendritic tree .however , it propagates more strongly into the branches of the dendritic tree that have been active , potentially simplifying the problem of credit assignment .third , neurons can have multiple somewhat independent dendritic compartments , as well as a somewhat independent somatic compartment , which means that the neuron should be thought of as storing more than one variable .thus , there is the possibility for a neuron to store both its activation itself , and the error derivative of a cost function with respect to its activation , as required in backpropagation , and biological implementations of backpropagation based on this principle have been proposed .overall , the implications of dendritic computation for credit assignment in deep networks are only beginning to be considered . beyond dendritic computation , diverse mechanisms like retrograde ( post - synaptic to pre - synaptic ) signals using cannabinoids , or rapidly - diffusing gases such as nitric oxide , are among many that could enable learning rules that go beyond backpropagation .harris has suggested how slow , retroaxonal ( i.e. , from the outgoing synapses back to the parent cell body ) transport of molecules like neurotrophins could allow neural networks to implement an analog of an exchangeable currency in economics , allowing networks to self - organize to efficiently provide information to downstream `` consumer '' neurons that are trained via faster and more direct error signals .the existence of these diverse mechanisms may call into question traditional , intuitive notions of `` biological plausibility '' for learning algorithms .another biological primitive is neuromodulation . 
the same neuron or circuit can exhibit different input - output responses and plasticity depending on a global circuit state , as reflected by the concentrations of various _ neuromodulators _ like dopamine , serotonin , norepinephrine , acetylcholine , and hundreds of different neuropeptides such as opioids . these modulators interact in complex and cell - type - specific ways to influence circuit function . interactions with glial cells also play a role in neural signaling and neuromodulation , leading to the concept of `` tripartite '' synapses that include a glial contribution . modulation could have many implications for learning . first , modulators can be used to gate synaptic plasticity on and off selectively in different areas and at different times , allowing precise , rapidly updated orchestration of where and when cost functions are applied . furthermore , it has been argued that a single neural circuit can be thought of as multiple overlapping circuits with modulation switching between them . in a learning context , this could potentially allow sharing of synaptic weight information between overlapping circuits . discusses further computational aspects of neuromodulation . overall , neuromodulation seems to expand the range of possible algorithms that could be used for optimization . a number of models attempt to explain cortical learning on the basis of specific architectural features of the 6-layered cortical sheet . these models generally agree that a primary function of the cortex is some form of unsupervised learning via prediction . some cortical learning models are explicit attempts to map cortical structure onto the framework of message - passing algorithms for bayesian inference , while others start with particular aspects of cortical neurophysiology and seek to explain those in terms of a learning function . for example , the nonlinear and dynamical properties of cortical pyramidal neurons ( the principal excitatory neuron type in cortex ) are of particular interest here , especially because these neurons have multiple dendritic zones that are targeted by different kinds of projections , which may allow the pyramidal neuron to make comparisons of top - down and bottom - up inputs .
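Returning briefly to the gating idea above, a modulatory signal that simply multiplies an otherwise fixed Hebbian term already yields selective learning. The sketch below is a deliberately stripped-down illustration of such a three-factor rule (the patterns, rates and the binary modulator are arbitrary assumptions of ours, not a model of any specific neuromodulatory system).

    import numpy as np

    rng = np.random.default_rng(5)

    # two random input patterns; plasticity is gated on only while pattern a is present
    a = rng.choice([-1.0, 1.0], size=50)
    b = rng.choice([-1.0, 1.0], size=50)
    w = np.zeros(50)
    post, lr = 1.0, 0.1                          # fixed post-synaptic activity, for simplicity

    for t in range(200):
        pre, modulator = (a, 1.0) if t % 2 == 0 else (b, 0.0)
        w += lr * modulator * post * pre         # three-factor rule: pre x post x modulator

    print("response to a:", round(float(w @ a)))  # large: association learned while gated on
    print("response to b:", round(float(w @ b)))  # much smaller: same rule, but gated off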
other aspects of the laminar cortical architecture could be crucial to how the brain implements learning . local inhibitory neurons targeting particular dendritic compartments of the l5 pyramidal neuron could be used to exert precise control over when and how the relevant feedback signals and associative mechanisms are utilized . notably , local inhibitory networks could also give rise to competition between different representations in the cortex , perhaps allowing one cortical column to suppress others nearby , or perhaps even to send more sophisticated messages to gate the state transitions of its neighbors . moreover , recurrent connectivity with the thalamus , structured bursts of spiking , and cortical oscillations ( not to mention other mechanisms like neuromodulation ) could control the storage of information over time , to facilitate learning based on temporal prediction . these concepts begin to suggest preliminary , exploratory models for how the detailed anatomy and physiology of the cortex could be interpreted within a machine - learning framework that goes beyond backpropagation . but these are early days : we still lack detailed structural / molecular and functional maps of even a single local cortical microcircuit . human learning is often one - shot : it can take just a single exposure to a stimulus to never forget it , as well as to generalize from it to new examples . one way of allowing networks to have such properties is what is described by i - theory , in the context of learning invariant representations for object recognition . instead of training via gradient descent , image templates are stored in the weights of simple - complex cell networks while objects undergo transformations , similar to the use of stored templates in hmax . the theories then aim to show that you can invariantly and discriminatively represent objects using a single sample , even of a new class . additionally , the nervous system may have a way of replaying reality over and over , allowing an item to be moved from episodic memory into long - term memory in the neural network . this solution effectively uses many iterations of weight updating to fully learn a single item , even if one has only been exposed to it once . finally , higher - level systems in the brain may be able to implement bayesian learning of sequential programs , which is a powerful means of one - shot learning . this type of cognition likely relies on an interaction between multiple brain areas such as the prefrontal cortex and basal ganglia . computer models , and neural network based models in particular , have not yet reached fully human - like performance in this area , despite significant recent advances . these potential substrates of one - shot learning rely on mechanisms other than simple gradient descent . it should be noted , though , that recent architectural advances , including specialized spatial attention and feedback mechanisms , as well as specialized memory mechanisms , do allow some types of one - shot generalization to be driven by backpropagation - based learning . human learning is often active and deliberate . it seems likely that , in human learning , actions are chosen so as to generate interesting training examples , and sometimes also to test specific hypotheses . such ideas of active learning and `` child as scientist '' go back to piaget and have been elaborated more recently . we want our learning to be based on maximally informative samples , and active querying of the environment ( or of internal subsystems ) provides a
route to this . at some level of organization , of course , it would seem useful for a learning system to develop explicit representations of its uncertainty , since this can be used to guide the system to actively seek the information that would reduce its uncertainty most quickly . moreover , there are population coding mechanisms that could support explicit probabilistic computations . yet it is unclear to what extent and at what levels the brain uses an explicitly probabilistic framework , or to what extent probabilistic computations are emergent from other learning processes . standard gradient descent does not incorporate any such adaptive sampling mechanism , e.g. , it does not deliberately sample data so as to maximally reduce its uncertainty . interestingly , however , stochastic gradient descent can be used to generate a system that samples adaptively . in other words , a system can learn , by gradient descent , how to choose its own input data samples in order to learn most quickly from them by gradient descent . ideally , the learner learns to choose actions that will lead to the largest improvements in its prediction or data compression performance . in , this is done in the framework of reinforcement learning , and incorporates a mechanism for the system to measure its own rate of learning . in other words , it is possible to reinforcement - learn a policy for selecting the most interesting inputs to drive learning . adaptive sampling methods are also known in reinforcement learning that can achieve optimal bayesian exploration of markov decision process environments . these approaches achieve optimality in an arbitrary , abstract environment . but of course , evolution may also encode its implicit knowledge of the organism s natural environment , the behavioral goals of the organism , and the developmental stages and processes which occur inside the organism , as priors or heuristics which would further constrain the types of adaptive sampling that are optimal in practice . for example , simple heuristics like seeking certain perceptual signatures of novelty , or more complex heuristics like monitoring situations that other people seem to find interesting , might be good ways to bias sampling of the environment so as to learn more quickly . other such heuristics might be used to give internal brain systems the types of training data that will be most useful to those particular systems at any given developmental stage . we are only beginning to understand how active learning might be implemented in the brain . we speculate that multiple mechanisms , specialized to different brain systems and spatio - temporal scales , could be involved . the above examples suggest that at least some such mechanisms could be understood from the perspective of optimizing cost functions . we have described how the brain could implement learning mechanisms of comparable power to backpropagation . but in many cases , the system may be more limited by the available training signals than by the optimization process itself . in machine learning , one distinguishes supervised learning , reinforcement learning and unsupervised learning , and the training data limitation manifests differently in each case .
both supervised and reinforcement learning require some form of teaching signal , but the nature of the teaching signal in supervised learning is different from that in reinforcement learning . in supervised learning , the trainer provides the entire vector of errors for the output layer and these are back - propagated to compute the gradient : a locally optimal direction in which to update all of the weights of a potentially multi - layer and/or recurrent network . in reinforcement learning , however , the trainer provides a scalar evaluation signal , but this is not sufficient to derive a low - variance gradient . hence , some form of trial and error twiddling must be used to discover how to increase the evaluation signal . consequently , reinforcement learning is generally much less efficient than supervised learning . reinforcement learning in shallow networks is simple to implement biologically . for reinforcement learning of a deep network to be biologically plausible , however , we need a more powerful learning mechanism , since we are learning based on a more limited evaluation signal than in the supervised case : we do not have the full target pattern to train towards . nevertheless , approximations of gradient descent can be achieved in this case , and there are cases in which the scalar evaluation signal of reinforcement learning can be used to efficiently update a multi - layer network by gradient descent . the `` attention - gated reinforcement learning '' ( agrel ) networks of , and variants like kickback , give a way to compute an approximation to the full gradient in a reinforcement learning context using a feedback - based attention mechanism for credit assignment within the multi - layer network . the feedback pathway and a diffusible reward signal together gate plasticity . for networks with more than three layers , this gives rise to a model based on columns containing parallel feedforward and feedback pathways . the process is still not as efficient as backpropagation , but it seems that this form of feedback can make reinforcement learning in multi - layer networks more efficient than a naive node perturbation or weight perturbation approach . the machine - learning field has recently been tackling the question of credit assignment in deep reinforcement learning . deep q - learning demonstrates reinforcement learning in a deep network , wherein most of the network is trained via backpropagation . in regular q learning , we define a function q , which estimates the best possible sum of future rewards ( the return ) if we are in a given state and take a given action . in deep q learning , this function is approximated by a neural network that , in effect , estimates action - dependent returns in a given state . the network is trained using backpropagation of local errors in q estimation , using the fact that the return decomposes into the current reward plus the discounted estimate of future return at the next moment . during training , as the agent acts in the environment , a series of loss functions is generated at each step , defining target patterns that can be used as the supervision signal for backpropagation .
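The return decomposition used as the training target above is easiest to see in the tabular case, before any function approximation is involved. The sketch below is a toy example of ours (the chain environment, discount factor and exploration rate are arbitrary assumptions); it applies the standard update toward the target r + gamma * max_a Q(s', a).

    import numpy as np

    rng = np.random.default_rng(6)

    # tiny chain environment: states 0..4, reward 1.0 only for reaching state 4
    n_states, n_actions, gamma = 5, 2, 0.9     # action 0 = left, action 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, eps = 0.1, 0.2

    for episode in range(500):
        s = 0
        for t in range(20):
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # td target: current reward plus discounted estimate of future return
            target = r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
            if s == n_states - 1:
                break

    print(np.round(Q, 2))                      # "right" should dominate in every state

Deep Q-learning replaces the table with a network and uses the same temporally local target as the supervision signal for backpropagation, which is the bridge described in the text.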
as q is a highly nonlinear function of the state ,tricks are sometimes needed to make deep q learning efficient and stable , including experience replay and a particular type of mini - batch training .it is also necessary to store the outputs from the previous iteration ( or clone the entire network ) in evaluating the loss function for the subsequent iteration .this process for generating learning targets provides a kind of bridge between reinforcement learning and efficient backpropagation - based gradient descent learning .importantly , only temporally local information is needed making the approach relatively compatible with what we know about the nervous system .even given these advances , a key remaining issue in reinforcement learning is the problem of long timescales , e.g. , learning the many small steps needed to navigate from london to chicago .many of the formal guarantees of reinforcement learning , for example , suggest that the difference between an optimal policy and the learned policy becomes increasingly loose as the discount factor shifts to take into account reward at longer timescales .although the degree of optimality of human behavior is unknown , people routinely engage in adaptive behaviors that can take hours or longer to carry out , by using specialized processes like _ prospective memory _ to `` remember to remember '' relevant variables at the right times , permitting extremely long timescales of coherent action .machine learning has not yet developed methods to deal with such a wide range of timescales and scopes of hierarchical action .below we discuss ideas of hierarchical reinforcement learning that may make use of callable procedures and sub - routines , rather than operating explicitly in a time domain .as we will discuss below , some form of deep reinforcement learning may be used by the brain for purposes beyond optimizing global rewards , including the training of local networks based on diverse internally generated cost functions .scalar reinforcement - like signals are easy to compute , and easy to deliver to other areas , making them attractive mechanistically .if the brain does employ internally computed scalar reward - like signals as a basis for cost functions , it seems likely that it will have found an efficient means of reinforcement - based training of deep networks , but it is an open question whether an analog of deep q networks , agrel , or some other mechanism entirely , is used in the brain for this purpose .moreover , as we will discuss further below , it is possible that reinforcement - type learning is made more efficient in the context of specialized brain systems like short term memories , replay mechanisms , and hierarchically organized control systems .these specialized systems could reduce reliance on a need for powerful credit assignment mechanisms for reinforcement learning .finally , if the brain uses a diversity of scalar reward - like signals to implement different cost functions , then it may need to mediate delivery of those signals via a comparable diversity of molecular substrates .the great diversity of neuromodulatory signals , e.g. , neuropeptides , in the brain makes such diversity quite plausible , and moreover , the brain may have found other , as yet unknown , mechanisms of diversifying reward - like signaling pathways and enabling them to act independently of one another .in the last section , we argued that the brain can optimize functions .this raises the question of what functions it optimizes . 
of course, in the brain , a cost function will itself be created ( explicitly or implicitly ) by a neural network shaped by the genome .thus , the cost function used to train a given sub - network in the brain is a key innate property that can be built into the system by evolution. it may be much cheaper in biological terms to specify a cost function that allows the rapid learning of the solution to a problem than to specify the solution itself . in * hypothesis 2 *, we proposed that the brain optimizes not a single `` end - to - end '' cost function , but rather a diversity of internally generated cost functions specific to particular functions . to understand how and why the brain may use a diversity of cost functions , it is important to distinguish the differing types of cost functions that would be needed for supervised , unsupervised and reinforcement learning .we can also seek to identify types of cost functions that the brain may need to generate from a functional perspective , and how each may be implemented as supervised , unsupervised , reinforcement - based or hybrid systems .what additional circuitry is required to actually impose a cost function on an optimizing network ? in the most familiar case , supervised learning may rely on computing a vector of errors at the output of a network , which will rely on some comparator circuitry to compute the difference between the network outputs and the target values .this difference could then be backpropagated to earlier layers .an alternative way to impose a cost function is to `` clamp '' the output of the network , forcing it to occupy a desired target state .such clamping is actually assumed in some of the putative biological implementations of backpropagation described above , such as xcal and target propagation .alternatively , as described above , scalar reinforcement signals are attractive as internally - computed cost functions , but using them in deep networks requires special mechanisms for credit assignment . in unsupervised learning , cost functions may not take the form of externally supplied training or error signals , but rather can be built into the dynamics inherent to the network itself , i.e. , there may be no need for a _ separate _ circuit to compute and impose a cost function on the network . indeed , beginning with hopfield s definition of an energy function for learning in certain classes of symmetric network , researchers have discovered networks with inherent learning dynamics that implicitly optimizes certain objectives , such as statistical reconstruction of the input ( e.g. , via stochastic relaxation in boltzmann machines ) , or the achievement of certain properties like temporally stable or sparse representations .alternatively , explicit cost functions could be computed , delivered to a network , and used for unsupervised learning , following a variety of principles being discovered in machine learning ( e.g. , ) , which typically find a way to encode the cost function into the error derivatives which are backpropagated .for example , prediction errors naturally give rise to error signals for unsupervised learning , as do reconstruction errors in autoencoders , and these error signals can also be augmented with additional penalty or regularization terms that enforce objectives like sparsity or continuity , as described below . 
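The idea, mentioned above, of a cost function built into the network dynamics themselves can be made concrete with Hopfield's classic construction. The sketch below uses the standard textbook rule (not code from the source; sizes and corruption level are arbitrary assumptions): a few patterns are stored with a Hebbian outer-product rule, and asynchronous updates then descend the energy while retrieving a stored pattern from a corrupted cue.

    import numpy as np

    rng = np.random.default_rng(7)

    # store a few random binary patterns with a hebbian (outer-product) rule
    N, P = 100, 5
    patterns = rng.choice([-1, 1], size=(P, N))
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)

    def energy(s):
        return -0.5 * s @ W @ s

    # start from a corrupted version of the first pattern and run asynchronous updates
    s = patterns[0].copy()
    flip = rng.choice(N, size=25, replace=False)
    s[flip] *= -1

    print("initial overlap:", float(s @ patterns[0]) / N, " energy:", round(energy(s), 2))
    for sweep in range(5):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1   # each flip can only lower (or keep) the energy
    print("final   overlap:", float(s @ patterns[0]) / N, " energy:", round(energy(s), 2))

No separate circuit computes or delivers the cost here; the objective is implicit in the symmetric weights and the update rule, which is the sense in which the text says the cost can be built into the dynamics.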
in the next sections , we elaborate on these and other means of specifying and delivering cost functions in different learning contexts .there are many objectives that can be optimized in an unsupervised context , to accomplish different kinds of functions or guide a network to form particular kinds of representations .in one common form of unsupervised learning , higher brain areas attempt to produce samples that are statistically similar to those actually seen in lower layers .for example , the wake - sleep algorithm requires the sleep mode to sample potential data points whose distribution should then match the observed distribution .unsupervised pre - training of deep networks is an instance of this , typically making use of a stacked auto - encoder framework .similarly , in target propagation , a top - down circuit , together with lateral information , has to produce data that directs the local learning of a bottom - up circuit and vice - versa .ladder autoencoders make use of lateral connections and local noise injection to introduce an unsupervised cost function , based on internal reconstructions , that can be readily combined with supervised cost functions defined on the network s top layer outputs .compositional generative models generate a scene from discrete combinations of template parts and their transformations , in effect performing a rendering of a scene based on its structural description .hinton and colleagues have also proposed cortical `` capsules '' for compositional inverse rendering .the network can thus implement a statistical goal that embodies some understanding of the way that the world produces samples .learning rules for generative models have historically involved local message passing of a form quite different from backpropagation , e.g. , in a multi - stage process that first learns one layer at a time and then fine - tunes via the wake - sleep algorithm .message - passing implementations of probabilistic inference have also been proposed as an explanation and generalization of deep convolutional networks .various mappings of such processes onto neural circuitry have been attempted .feedback connections tend to terminate in distinct layers of cortex relative to the feedforward ones making the idea of separate but interacting networks for recognition and generation potentially attractive .interestingly , such sub - networks might even be part of the same neuron and map onto `` apical '' versus `` basal '' parts of the dendritic tree .generative models can also be trained via backpropagation .recent advances have shown how to perform variational approximations to bayesian inference inside backpropagation - based neural networks , and how to exploit this to create generative models .through either explicitly statistical or gradient descent based learning , the brain can thus obtain a probabilistic model that simulates features of the world .a perceiving system should exploit statistical regularities in the world that are not present in an arbitrary dataset or input distribution .for example , objects are sparse : there are far fewer objects than there are potential places in the world , and of all possible objects there is only a small subset visible at any given time .as such , we know that the output of an object recognition system must have sparse activations . 
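As a small worked example of an explicit unsupervised cost of the kind just described, the sketch below trains a toy autoencoder by gradient descent on a reconstruction error plus an l1 sparsity penalty on the hidden code. The data generator, architecture, penalty weight and learning rate are arbitrary assumptions of ours, not settings from any cited system.

    import numpy as np

    rng = np.random.default_rng(8)

    # toy data: dense inputs generated from a few sparse latent causes
    D, K, N = 20, 30, 500
    causes = (rng.uniform(size=(N, K)) < 0.1) * rng.normal(size=(N, K))
    mix = rng.normal(size=(K, D))
    X = causes @ mix

    W_enc = rng.normal(size=(D, K)) * 0.1
    W_dec = rng.normal(size=(K, D)) * 0.1
    lam, lr = 0.1, 0.01

    for step in range(3000):
        H = np.maximum(X @ W_enc, 0.0)                  # relu hidden code
        Xhat = H @ W_dec
        err = Xhat - X                                  # reconstruction error
        # cost = reconstruction error + lam * |H| ; both terms enter the error derivatives
        dH = err @ W_dec.T + lam * np.sign(H)
        dH *= (H > 0)                                   # relu derivative
        W_dec -= lr * H.T @ err / N
        W_enc -= lr * X.T @ dH / N
        if step % 1000 == 0:
            print(step, "mse:", round(float(np.mean(err ** 2)), 3),
                  "fraction active:", round(float(np.mean(H > 0)), 3))

The sparsity term simply adds to the backpropagated error derivatives, illustrating how penalty or regularization terms of the kind mentioned above can ride on top of an ordinary reconstruction cost.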
building the assumption of sparseness into simulated systems replicates a number of representational properties of the early visual system , and indeed the original paper on sparse coding obtained sparsity by gradient descent optimization of a cost function .a range of unsupervised machine learning techniques , such as the sparse autoencoders used to discover cats in youtube videos , build sparseness into neural networks . building in such spatio - temporal sparseness priors should serve as an `` inductive bias '' that can accelerate learning .but we know much more about the regularities of objects . as young babies , we already know that objects tend to persist over time . the emergence or disappearance of an object from a region of space is a rare event .moreover , object locations and configurations tend to be coherent in time .we can formulate this prior knowledge as a cost function , for example by penalizing representations which are not temporally continuous .this idea of continuity is used in a great number of artificial neural networks and related models .imposing continuity within certain models gives rise to aspects of the visual system including complex cells , specific properties of visual invariance , and even other representational properties such as the existence of place cells .unsupervised learning mechanisms that maximize temporal coherence or slowness are increasingly used in machine learning .we also know that objects tend to undergo predictable sequences of transformations , and it is possible to build this assumption into unsupervised neural learning systems .the minimization of prediction error explains a number of properties of the nervous system , and biologically plausible theories are available for how cortex could learn using prediction errors by exploiting temporal differences or top - down feedback . in one implementation ,a system can simply predict the next input delivered to the system and can then use the difference between the actual next input and the predicted next input as a full vectorial error signal for supervised gradient descent .thus , rather than optimization of prediction error being implicitly implemented by the network dynamics , the prediction error is used as an explicit cost function in the manner of supervised learning , leading to error derivatives which can be back - propagated .then , no special learning rules beyond simple backpropagation are needed .this approach has recently been advanced within machine learning .recently , combining such prediction - based learning with a specific gating mechanism has been shown to lead to unsupervised learning of disentangled representations .neural networks can also be designed to learn to invert spatial transformations . statistically describing transformations or sequencesis thus an unsupervised way of learning representations . furthermore , there are multiple modalities of input to the brain .each sensory modality is primarily connected to one part of the brain .but higher levels of cortex in each modality are heavily connected to the other modalities .this can enable forms of self - supervised learning : with a developing visual understanding of the world we can predict its sounds , and then test those predictions with the auditory input , and vice versa .the same is true about multiple parts of the same modality : if we understand the left half of the visual field , it tells us an awful lot about the right . 
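The "one part predicts another" idea can be reduced to a very small self-supervised example: below, the "right half" of a toy scene vector is predicted from the "left half", and the discrepancy itself serves as the training signal. All data and parameters are illustrative assumptions of ours.

    import numpy as np

    rng = np.random.default_rng(9)

    # toy "scene" vectors whose left and right halves share latent structure
    N, K, D = 1000, 5, 8
    z = rng.normal(size=(N, K))                        # shared latent causes
    left = z @ rng.normal(size=(K, D)) + 0.1 * rng.normal(size=(N, D))
    right = z @ rng.normal(size=(K, D)) + 0.1 * rng.normal(size=(N, D))

    # self-supervised objective: predict the right half from the left half
    W = np.zeros((D, D))
    lr = 0.01
    for step in range(500):
        pred = left @ W
        err = pred - right                             # the discrepancy is the training signal
        W -= lr * left.T @ err / N
        if step % 100 == 0:
            print(step, "prediction mse:", round(float(np.mean(err ** 2)), 3))

No external labels are needed; one stream of data supplies the full vectorial error signal for the other, which is the sense in which cross-modal or cross-region prediction can stand in for supervision.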
maximizing mutual information is a natural way of improving learning , and there are many other ways in which multiple modalities or processing streams could mutually train one another . relatedly , we can use observations of one part of a visual scene to predict the contents of other parts , and optimize a cost function that reflects the discrepancy . this way , each modality effectively produces training signals for the others . in what cases might the brain use supervised learning , given that it requires the system to `` already know '' the exact target pattern to train towards ? one possibility is that the brain can store records of states that led to good outcomes . for example , if a baby reaches for a target and misses , and then tries again and successfully hits the target , then the difference in the neural representations of these two tries reflects the direction in which the system should change . the brain could potentially use a comparator circuit ( a non - trivial task , since neural activations are always positive , although different neuron types can be excitatory vs. inhibitory ) to directly compute this vectorial difference in the neural population codes and then apply this difference vector as an error signal . another possibility is that the brain uses supervised learning to implement a form of `` chunking '' , i.e. , a consolidation of something the brain already knows how to do : routines that are initially learned as multi - step , deliberative procedures could be compiled down to more rapid and automatic functions by using supervised learning to train a network to mimic the overall input - output behavior of the original multi - step process . such a process is assumed to occur in cognitive models like act - r , and methods for compressing the knowledge in neural networks into smaller networks are also being developed . thus supervised learning can be used to train a network to do in `` one step '' what would otherwise require long - range routing and sequential recruitment of multiple systems . certain generalized forms of reinforcement learning may be ubiquitous throughout the brain . such reinforcement signals may be repurposed to optimize diverse internal cost functions . these internal cost functions could be specified at least in part by genetics . some brain systems , such as the striatum , appear to learn via some form of temporal difference reinforcement learning . this is reinforcement learning based on a global value function that predicts total future reward or utility for the agent . reward - driven signaling is not restricted to the striatum , and is present even in primary visual cortex . remarkably , the reward signaling in primary visual cortex is mediated in part by glial cells , rather than neurons , and involves the neurotransmitter acetylcholine . on the other hand , some studies have suggested that visual cortex learns the basics of invariant object recognition in the absence of reward , perhaps using reinforcement only for more refined perceptual learning .
but beyond these well - known global reward signals, we argue that the basic mechanisms of reinforcement learning may be widely re - purposed to train local networks using a variety of internally generated error signals .these internally generated signals may allow a learning system to go beyond what can be learned via standard unsupervised methods , effectively guiding or steering the system to learn specific features or computations .special , internally - generated signals are needed specifically for learning problems where standard unsupervised methods based purely on matching the statistics of the world , or on optimizing simple mathematical objectives like temporal continuity or sparsity will fail to discover properties of the world which are statistically weak in an objective sense but nevertheless have special significance to the organism .indigo bunting birds , for example , learn a template for the constellations of the night sky long before ever leaving the nest to engage in navigation - dependent tasks . this memory template is directly used to determine the direction of flight during migratory periods , a process that is modulated hormonally so that winter and summer flights are reversed .learning is therefore a multi - phase process in which navigational cues are memorized prior to the acquisition of motor control . in humans , we suspect that similar multi - stage bootstrapping processes are arranged to occur .humans have innate specializations for social learning .we need to be able to read their expressions as indicated with hands and faces .hands are important because they allow us to learn about the set of actions that can be produced by agents .faces are important because they give us insight into what others are thinking .people have intentions and personalities that differ from one another , and their feelings are important .how could we hack together cost functions , built on simple genetically specifiable mechanisms , to make it easier for a learning system to discover such behaviorally relevant variables ? some preliminary studies are beginning to suggest specific mechanisms and heuristics that humans may be using to bootstrap more sophisticated knowledge . in a groundbreaking study , asked how could we explain hands , to a system that does not already know about them , in a cheap way , without the need for labeled training examples ?hands are common in our visual space and have special roles in the scene : they move objects , collect objects , and caress babies .building these biases into an area specialized to detect hands could guide the right kind of learning , by providing a downstream learning system with many likely positive examples of hands on the basis of innately - stored , heuristic signatures about how hands tend to look or behave .indeed , an internally supervised learning algorithm containing specialized , hard - coded biases to detect hands , on the basis of their typical motion properties , can be used to bootstrap the training of an image recognition module that learns to recognize hands based on their appearance .thus , a simple , hard - coded module bootstraps the training of a much more complex algorithm for visual recognition of hands . 
then further exploits a combination of hand and face detection to bootstrap a predictor for gaze direction , based on the heuristic that faces tend to be looking towards hands .of course , given a hand detector , it also becomes much easier to train a system for reaching , crawling , and so forth .efforts are underway in psychology to determine whether the heuristics discovered to be useful computationally are , in fact , being used by human children during learning .ullman refers to such primitive , inbuilt detectors as innate `` proto - concepts '' .their broader claim is that such pre - specification of mutual supervision signals can make learning the relevant features of the world far easier , by giving an otherwise unsupervised learner the right kinds of hints or heuristic biases at the right times . herewe call these approximate , heuristic cost functions `` bootstrap cost functions '' .the purpose of the bootstrap cost functions is to reduce the amount of data required to learn a specific feature or task , but at the same time to avoid a need for fully unsupervised learning .could the neural circuitry for such a bootstrap hand - detector be pre - specified genetically ?the precedent from other organisms is strong : for example , it is famously known that the frog retina contains circuitry sufficient to implement a kind of `` bug detector '' .ullman s hand detector , in fact , operates via a simple local optical flow calculation to detect `` mover '' events .this type of simple , local calculation could potentially be implemented in genetically - specified and/or spontaneously self - organized neural circuitry in the retina or early dorsal visual areas , perhaps similarly to the frog s `` bug detector '' .how could we explain faces without any training data ?faces tend to have two dark dots in their upper half , a line in the lower half and tend to be symmetric about a vertical axis .indeed , we know that babies are very much attracted to things with these generic features of upright faces starting from birth , and that they will acquire face - specific cortical areas in their first few years of life if not earlier .it is easy to define a local rule that produces a kind of crude face detector ( e.g. 
, detecting two dots on top of a horizontal line ) , and indeed some evidence suggests that the brain can rapidly detect faces without even a single feed - forward pass through the ventral visual stream . the crude detection of human faces used together with statistical learning should be analogous to semi - supervised learning and could allow identifying faces with high certainty . humans have areas devoted to emotional processing , and the brain seems to embody prior knowledge about the structure of emotions : emotions should have specific types of strong couplings to various other higher - level variables , should be expressed through the face , and so on . this prior knowledge , encoded into brain structure via evolution , could allow learning signals to come from the right places and to appear developmentally at the right times . what about agency ? it makes sense to describe , when dealing with high - level thinking , other beings as optimizers of their own goal functions . it appears that heuristically specified notions of goals and agency are infused into human psychological development from early infancy and that notions of agency are used to bootstrap heuristics for ethical evaluation . algorithms for establishing more complex , innately - important social relationships such as joint attention are under study , building upon more primitive proto - concepts like face detectors and ullman s hand detectors . the brain can thus use innate detectors to create cost functions and training procedures to train the next stages of learning . it is intuitive to ask whether this type of bootstrapping poses a kind of `` chicken and egg '' problem : if the brain already has an inbuilt heuristic hand detector , how can it be used to train a detector that performs any better than those heuristics ? after all , is not a trained system only as good as its training data ? the work of illustrates why this is not the case . first , the `` innate detector '' can be used to train a downstream detector that operates based on different cues : for example , based on the spatial and body context of the hand , rather than its motion . second , once multiple such pathways of detection come into existence , they can be used to improve each other . in , appearance , body context , and mover motion are all used to bootstrap off of one another , creating a detector that is better than any of its training heuristics . in effect , the innate detectors are used not as supervision signals per se , but rather to guide or steer the learning process , enabling it to discover features that would otherwise be difficult .
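To see how little machinery such a heuristic needs, the sketch below scores image patches with a hard-coded "two dark dots above a dark bar" rule of the kind just described; the patch layout and scoring details are our own toy choices, not a model from the cited work. The resulting scores could, in principle, serve as noisy labels for training a learned detector, in the spirit of the bootstrapping argument above.

    import numpy as np

    rng = np.random.default_rng(10)

    def crude_face_score(patch):
        """heuristic score: two dark spots in the upper half, a dark horizontal bar lower down."""
        h, w = patch.shape
        eyes = patch[h // 4, w // 4] + patch[h // 4, 3 * w // 4]   # darker = lower pixel values
        mouth = patch[3 * h // 4, w // 4: 3 * w // 4].mean()
        background = patch.mean()
        return (background - eyes / 2) + (background - mouth)      # high when those regions are dark

    # synthetic 16x16 "face": bright background, two dark dots and a dark bar
    face = np.ones((16, 16))
    face[4, 4] = face[4, 12] = 0.0
    face[12, 4:12] = 0.0

    noise = rng.uniform(size=(16, 16))
    print("face score :", round(crude_face_score(face), 2))
    print("noise score:", round(crude_face_score(noise), 2))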
if such affordances can be found in other domains , it seems likely that the brain would make extensive use of them to ensure that developing animals learn the precise patterns of perception and behavior needed to ensure their later survival and reproduction . thus , generalizing previous ideas , we suggest that the brain uses optimization with respect to internally generated heuristic detection signals to bootstrap learning of biologically relevant features which would otherwise be missed by an unsupervised learner . in one possible implementation , such bootstrapping may occur via reinforcement learning , using the outputs of the innate detectors as local reinforcement signals , and perhaps using mechanisms similar to those described above to perform reinforcement learning through a multi - layer network . it is also possible that the brain could use such internally generated heuristic detectors in other ways , for example to bias the inputs delivered to an unsupervised learning network towards entities of interest to humans ( joscha bach , personal communication ) , or to directly train simple classifiers . it has been widely noticed in cognitive science and ai that the generation and understanding of stories are crucial to human cognition . researchers such as winston have framed story understanding as the key to human - like intelligence . stories consist of a linear sequence of episodes , in which one episode refers to another through cause and effect relationships , with these relationships often involving the implicit goals of agents . many other cognitive faculties , such as conceptual grounding of language , could conceivably emerge from an underlying internal representation in terms of stories . perhaps the ultimate series of bootstrap cost functions would be those which would direct the brain to utilize its learning networks and specialized systems so as to construct representations that are specifically useful as components of stories , to spontaneously chain these representations together , and to update them through experience and communication . how could such cost functions arise ? one possibility is that they are bootstrapped through imitation and communication , where a child learns to mimic the story - telling behavior of others . another possibility is that useful representations and primitives for stories emerge spontaneously from mechanisms for learning state and action chunking in hierarchical reinforcement learning and planning . yet another is that stories emerge from learned patterns of saliency - directed memory storage and recall ( e.g. , ) . in addition , priors that direct the developing child s brain to learn about and attend to social agency seem to be important for stories . these systems will be discussed in more detail below . optimization of initially unstructured `` blank slate '' networks is not sufficient to generate complex cognition in the brain , we argue , even given a diversity of powerful genetically - specified cost functions and local learning rules , as we have posited above .
instead , in * hypothesis 3 *, we suggest that specialized , pre - structured architectures are needed for at least two purposes .first , pre - structured architectures are needed to allow the brain to find efficient solutions to certain types of problems .when we write computer code , there is a broad range of algorithms and data structures employed for different purposes : we may use dynamic programming to solve planning problems , trees to efficiently implement nearest neighbor search , or stacks to implement recursion .having the right kind of algorithm and data structure in place to solve a problem allows it to be solved efficiently , robustly and with a minimum amount of learning or optimization needed .this observation is concordant with the increasing use of pre - specialized architectures and specialized computational components in machine learning . in particular , to enable the learning of efficient computational solutions , the brain may need pre - specialized systems for planning and executing sequential multi - step processes , for accessing memories , and for forming and manipulating compositional and recursive structures .second , the training of optimization modules may need to be coordinated in a complex and dynamic fashion , including delivering the right training signals and activating the right learning rules in the right places and at the right times . to allow this , the brain may need specialized systems for storing and routing data , and for flexibly routing training signals such as target patterns , training data , reinforcement signals , attention signals , and modulatory signals .these mechanisms may need to be at least partially in place in advance of learning .looking at the brain , we indeed seem to find highly conserved structures , e.g. , cortex , where it is theorized that a similar type of learning and/or computation is happening in multiple places .but we also see a large number of specialized structures , including thalamus , hippocampus , basal ganglia and cerebellum .some of these structures evolutionarily pre - date the cortex , and hence the cortex may have evolved to work in the context of such specialized mechanisms .for example , the cortex may have evolved as a trainable module for which the training is orchestrated by these older structures . even within the cortex itself , microcircuitry within different areas may be specialized : tinkered variations on a common ancestral microcircuit scaffold could potentially allow different cortical areas , such as sensory areas vs. prefrontal areas , to be configured to adopt a number of qualitatively distinct computational and learning configurations , even while sharing a common gross physical layout and communication interface .within cortex , over forty distinct cell types differing in such aspects as dendritic organization , distribution throughout the six cortical layers , connectivity pattern , gene expression , and electrophysiological properties have already been found .central pattern generator circuits provide an example of the kinds of architectures that can be pre - wired into neural microcircuitry , and may have evolutionary relationships with cortical circuits .thus , while the precise degree of architectural specificity of particular cortical regions is still under debate , various mechanisms could offer pre - specified heterogeneity .
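to make the earlier point about algorithmic affordances concrete , here is a short , generic dynamic programming sketch in python ( value iteration on a tiny invented grid world , not a model of any brain circuit ) : given the right structure , a small planning problem is solved in a handful of sweeps .

```python
# value iteration on a tiny deterministic grid world: dynamic programming
# solves the planning problem in a few sweeps, illustrating how the right
# algorithmic structure makes a problem efficiently solvable.
import numpy as np

H, W = 4, 4
goal = (3, 3)
gamma = 0.9
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

V = np.zeros((H, W))
for _ in range(50):                             # sweeps until convergence
    V_new = np.zeros_like(V)
    for r in range(H):
        for c in range(W):
            if (r, c) == goal:
                continue                        # terminal state keeps value 0
            best = -np.inf
            for dr, dc in actions:
                nr = min(max(r + dr, 0), H - 1)
                nc = min(max(c + dc, 0), W - 1)
                best = max(best, -1.0 + gamma * V[nr, nc])   # step cost of 1
            V_new[r, c] = best
    V = V_new

print(np.round(V, 2))   # values increase smoothly toward the goal corner
```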
in this section, we explore the kinds of computational problems for which specialized structures may be useful , and attempt to map these to putative elements within the brain .our preliminary sketch of a functional decomposition can be viewed as a summary of suggestions for specialized functions that have been made throughout the computational neuroscience literature , and is influenced strongly by the models of o'reilly , eliasmith , grossberg , marcus , hayworth and others .the correspondence between these models and actual neural circuitry is , of course , still the subject of extensive debate .many of the computational and neural concepts sketched here are preliminary and will need to be made more rigorous through future study .our knowledge of the functions of particular brain areas , and thus our proposed mappings of certain computations onto neuroanatomy , also remains tentative .finally , it is still far from established which processes in the brain emerge from optimization of cost functions , which emerge from other forms of self - organization , which are pre - structured through genetics and development , and which rely on an interplay of all these mechanisms .our discussion here should therefore be viewed as a sketch of potential directions for further study .one of the central elements of computation is memory .importantly , multiple different kinds of memory are needed . for example, we need memory that is stored for a long period of time and that can be retrieved in a number of ways , such as in situations similar to the time when the memory was first stored ( content addressable memory ) .we also need memory that we can keep for a short period of time and that we can rapidly rewrite ( working memory ) .lastly , we need the kind of implicit memory that we can not explicitly recall , similar to the kind of memory that is classically learned using gradient descent on errors , i.e.
, sculpted into the weight matrix of a neural network .content addressable memories are classic models in neuroscience .most simply , they allow us to recognize a situation similar to one that we have seen before , and to `` fill in '' stored patterns based on partial or noisy information , but they may also be put to use as sub - components of many other functions .recent research has shown that including such memories allows deep networks to learn to solve problems that previously were out of reach , even of lstm networks that already have a simpler form of local memory and are already capable of learning long - term dependencies .hippocampal area ca3 may act as an auto - associative memory capable of content - addressable pattern completion , with pattern separation occurring in the dentate gyrus .such systems could permit the retrieval of complete memories from partial cues , enabling networks to perform operations similar to database retrieval or to instantiate lookup tables of historical stimulus - response mappings , among numerous other possibilities .cognitive science has long characterized properties of the working memory .it is somewhat limited , with the old idea being that it can represent `` seven plus or minus two '' elements .there are many models of working memory , some of which attribute it to persistent , self - reinforcing patterns of neural activation in the recurrent networks of the prefrontal cortex .prefrontal working memory appears to be made up of multiple functionally distinct subsystems .neural models of working memory can store not only scalar variables , but also high - dimensional vectors or sequences of vectors . working memory buffers seem crucial for human - like cognition , e.g. , reasoning , as they allow short - term storage while also , in conjunction with other mechanisms , enabling generalization of operations across anything that can fill the buffer .saliency , or interestingness , measures can be used to tag the importance of a memory .this can allow removal of the boring data from the training set , allowing a mechanism that is more like optimal experimentation . moreover, saliency can guide memory replay or sampling from generative models , to generate more training data drawn from a distribution useful for learning .conceivably , hippocampal replay could allow a batch - like training process , similar to how most machine learning systems are trained , rather than requiring all training to occur in an online fashion .plasticity mechanisms in memory systems which are gated by saliency are starting to be uncovered in neuroscience .importantly , the notions of `` saliency '' computed by the brain could be quite intricate and multi - faceted , potentially leading to complex schemes by which specific kinds of memories would be tagged for later context - dependent retrieval .
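a minimal sketch of content - addressable pattern completion in a classic hopfield - style auto - associative network ( random binary patterns , hebbian outer - product storage ) ; this is a textbook toy model intended only to illustrate the operation described above , not a claim about ca3 circuitry .

```python
# classic hopfield-style auto-associative memory: store binary patterns with a
# hebbian outer-product rule, then complete a corrupted cue by iterating the
# network dynamics until it settles near the stored pattern.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# hebbian storage: sum of outer products, no self-connections
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

# corrupt one stored pattern: flip 20% of its units (a "partial cue")
cue = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
cue[flip] *= -1

# asynchronous updates until the state settles
state = cue.copy()
for _ in range(10):
    for i in rng.permutation(n_units):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern before:", np.mean(cue == patterns[0]))
print("overlap with stored pattern after :", np.mean(state == patterns[0]))
```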
as a hypothetical example of such saliency - based tagging , representations of both timing and importance associated with memories could perhaps allow retrieval only of important memories that happened within a certain window of time .storing and retrieving information selectively based on specific properties of the information itself , or of `` tags '' appended to that information , is a powerful computational primitive that could enable learning of more complex tasks .to use its information flexibly , the brain needs structured systems for routing data .such systems need to address multiple temporal and spatial scales , and multiple modalities of control .thus , there are several different kinds of information routing systems in the brain which operate by different mechanisms and under different constraints .if we can focus on one thing at a time , we may be able to allocate more computational resources to processing it , make better use of scarce data to learn about it , and more easily store and retrieve it from memory .notably in this context , attention allows improvements in learning : if we can focus on just a single object , instead of an entire scene , we can learn about it more easily using limited data .formal accounts in a bayesian framework talk about attention reducing the sample complexity of learning .likewise , in models , the processes of applying attention , and of effectively making use of incoming attentional signals to appropriately modulate local circuit activity , can themselves be learned by optimizing cost functions .the right kinds of attention make processing and learning more efficient , and also allow for a kind of programmatic control over multi - step perceptual tasks .how does the brain determine where to allocate attention , and how is the attentional signal physically mediated ? answering this question is still an active area of neuroscience .higher - level cortical areas may be specialized in allocating attention .the problem is made complex by the fact that there seem to be many different types of attention , such as object - based , feature - based and spatial attention in vision , that may be mediated by interactions between different brain areas .the frontal eye fields ( area fef ) , for example , are important in visual attention , specifically for controlling saccades of the eyes to attended locations .area fef contains `` retinotopic '' spatial maps whose activation determines the saccade targets in the visual field .other prefrontal areas such as the dorsolateral prefrontal cortex and inferior frontal junction are also involved in maintaining representations that specify the targets of certain types of attention .certain forms of attention may require a complex interaction between brain areas , e.g.
, to determine targets of attention based on higher - level properties that are represented across multiple areas , like the identity and spatial location of a specific face .there are many proposed neural mechanisms of attention , including the idea that synchrony plays a role , perhaps by creating resonances that facilitate the transfer of information between synchronously oscillating neural populations in different areas .other proposed mechanisms include specific circuits for attention - dependent signal routing .various forms of attention also have specific neurophysiological signatures , such as enhancements in synchrony among neural spikes and with the ambient local field potential , changes in the sharpness of neural tuning curves , and other properties .these diverse effects and signatures of attention may be consequences of underlying pathways that wire up to particular elements of cortical microcircuits to mediate different attentional effects .one possibility is that the brain uses distinct groups of neurons , which we can call `` buffers '' , to store distinct variables , such as the subject or object in a sentence .having memory buffers allows the abstraction of a variable .as is ubiquitous in computer science , this comes with the ability to generalize operations across any variable that could meaningfully fill the buffer and makes computation flexible .once we establish that the brain has a number of memory buffers , we need ways for those buffers to interact .we need to be able to take a buffer , do a computation on its contents and store the output into another buffer .but if the representations in each of two groups of neurons are learned , and hence are coded differently , how can the brain `` copy and paste '' information between these groups of neurons ?malsburg argued that such a system of separate buffers is impossible because the neural pattern for `` chair '' in buffer 1 has nothing in common with the neural pattern for `` chair '' in buffer 2 ; any learning that occurs for the contents of buffer 1 would not automatically be transferable to buffer 2 .various mechanisms have been proposed to allow such transferability , which focus on ways in which all buffers could be trained jointly and then later separated so that they can work independently when they need to .dense connectivity is only achieved locally , but it would be desirable to have a way for any two cortical units to talk to one another , if needed , regardless of their distance from one another , and without introducing crosstalk .it is therefore critical to be able to dynamically turn on and off the transfer of information between different source and destination regions , in much the manner of a switchboard . together with attention , such dedicated routing systems can make sure that a brain area receives exactly the information it needs . such a discrete routing system is , of course , central to cognitive architectures like act - r .the key feature of act - r is the ability to evaluate the if clauses of tens of thousands of symbolic rules ( called `` productions '' ) , in parallel , approximately every 50 milliseconds .each rule requires equality comparisons between the contents of many constant and variable memory buffers , and the execution of a rule leads to the conditional routing of information from one buffer to another . what controls which long - range routing operations occur when , i.e.
, where is the switchboard and what controls it ?several models , including act - r , have attributed such parallel rule - based control of routing to the action selection circuitry of the basal ganglia ( bg ) , and its interaction with working memory buffers in the prefrontal cortex . in conventional models of thalamo - cortico - striatal loops , competing actions of the direct and indirect pathways through the basal ganglia can inhibit or disinhibit an area of motor cortex , thereby gating a motor action .models like propose further that the basal ganglia can gate not just the transfer of information from motor cortex to downstream actuators , but also the transfer of information between cortical areas .to do so , the basal ganglia would disinhibit a thalamic relay linking two cortical areas .dopamine - related activity is thought to lead to temporal difference reinforcement learning of such gating policies in the basal ganglia . beyond the basal ganglia , there are also other , separate pathways involved in action selection , e.g. , in the prefrontal cortex . thus , multiple systems including basal ganglia and cortex could control the gating of long - range information transfer between cortical areas , with the thalamus perhaps largely constituting the switchboard itself . how is such routing put to use in a learning context ?one possibility is that the basal ganglia acts to orchestrate the training of the cortex .the basal ganglia may exert tight control over the cortex , helping to determine when and how it is trained .indeed , because the basal ganglia pre - dates the cortex evolutionarily , it is possible that the cortex evolved as a flexible , trainable resource that could be harnessed by existing basal ganglia circuitry .all of the main regions and circuits of the basal ganglia are conserved from our common ancestor with the lamprey more than five hundred million years ago .the major part of the basal ganglia even seems to be conserved from our common ancestor with insects .thus , in addition to its real - time action selection and routing functions , the basal ganglia may sculpt how the cortex learns .certain algorithmic problems benefit greatly from particular types of representation and transformation , such as a grid - like representation of space . in some cases , rather than just waiting for them to emerge via gradient descent optimization of appropriate cost functions , the brain may be pre - structured to facilitate their creation .we often have to plan and execute complicated sequences of actions on the fly , in response to a new situation . at the lowest level , that of motor control , our body and our immediate environment change all the time .as such , it is important for us to maintain knowledge about this environment in a continuous way .
the deviations between our planned movements and those movements that we actually execute continuously provide information about the properties of the environment .therefore it seems important to have a specialized system that takes all our motor errors and uses them to update a dynamical model of our body and our immediate environment that can predict the delayed sensory results of our motor actions .it appears that the cerebellum is such a structure , and lesions to it abolish our way of dealing successfully with a changing body .incidentally , the cerebellum has more connections than the rest of the brain taken together , apparently in a largely feedforward architecture , and the tiny cerebellar granule cells , which may form a randomized high - dimensional input representation , outnumber all other neurons .the brain clearly needs a way of continuously correcting movements to minimize errors .newer research shows that the cerebellum is involved in a broad range of cognitive problems as well , potentially because they share computational problems with motor control .for example , when subjects estimate time intervals , which are naturally important for movement , it appears that the brain uses the cerebellum even if no movements are involved . even individual cerebellar purkinje cells may learn to generate precise timings of their outputs .the brain also appears to use inverse models to rapidly predict motor activity that would give rise to a given sensory target .such mechanisms could be put to use far beyond motor control , in bootstrapping the training of a larger architecture by exploiting continuously changing error signals to update a real - time model of the system state .importantly , many of the control problems we appear to be solving are hierarchical .we have a spinal cord , which deals with the fast signals coming from our muscles and proprioception . within neuroscience , it is generally assumed that this system deals with fast feedback loops and that this behavior is learned to optimize its own cost function .the nature of cost functions in motor control is still under debate . in particular , the timescale over which cost functions operate remains unclear : motor optimization may occur via real - time responses to a cost function that is computed and optimized online , or via policy choices that change over time more slowly in response to the cost function .nevertheless , the effect is that central processing in the brain has an effectively simplified physical system to control , e.g. , one that is far more linear .so the spinal cord itself already suggests the existence of two levels of a hierarchy , each trained using different cost functions .however , within the computational motor control literature ( see e.g. , ) , this idea can be pushed far further , e.g. , with a hierarchy including spinal cord , m1 , pmd , frontal , prefrontal areas .a low level may deal with muscles , the next level may deal with getting our limbs to places or moving objects , a next layer may deal with solving simple local problems ( e.g. , navigating across a room ) while the highest levels may deal with us planning our path through life .this factorization of the problem comes with multiple aspects : first , each level can be solved with its own cost functions , and second , every layer has a characteristic timescale .some levels , e.g. , the spinal cord , must run at a high speed .other levels , e.g. , high - level planning , only need to be touched much more rarely .
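a toy sketch of such a two - level factorization in python ( the plant , gains and timescales are invented for illustration ) : a slow high - level loop emits waypoints toward a goal , while a fast low - level proportional controller , which is effectively a gradient step on its own squared tracking error , drives a simple point mass toward the current waypoint .

```python
# a toy two-level control hierarchy: a slow "planner" updates a waypoint toward
# the goal only occasionally (its implicit cost: distance of the waypoint path
# to the goal), while a fast proportional controller tracks the current
# waypoint at every step (its implicit cost: squared tracking error).
import numpy as np

goal = np.array([10.0, 5.0])
pos = np.array([0.0, 0.0])
waypoint = pos.copy()
k_p, dt = 0.5, 0.1

for step in range(400):
    if step % 50 == 0:                          # slow level: update rarely
        direction = goal - pos
        dist = np.linalg.norm(direction)
        waypoint = pos + direction / max(dist, 1e-9) * min(2.0, dist)
    error = waypoint - pos                      # fast level: every step
    pos = pos + k_p * error * dt                # proportional control of a point mass

print("final position:", np.round(pos, 2), "goal:", goal)
```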
converting the computationally hard optimal control problem into a hierarchical approximation promises to make it dramatically easier .does the brain solve control problems hierarchically ?there is evidence that the brain uses such a strategy , besides neural network demonstrations .the brain may use specialized structures at each hierarchical level to ensure that each operates efficiently given the nature of its problem space and available training signals . at higher levels , these systems may use an abstract syntax for combining sequences of actions in pursuit of goals .subroutines in such processes could be derived by a process of chunking sequences of actions into single actions .some brain areas like broca's area , known for its involvement in language , also appear to be specifically involved in processing the hierarchical structure of behavior , as such , as opposed to its detailed temporal structure . at the highest level of the decision making and control hierarchy , human reward systems reflect changing goals and subgoals , and we are only beginning to understand how goals are actually coded in the brain , how we switch between goals , and how the cost functions used in learning depend on goal state .goal hierarchies are beginning to be incorporated into deep learning . given this hierarchical structure , the optimization algorithms can be fine - tuned . for the low levels , there is virtually unlimited training data . for the high levels , a simulation of the world may be simple , with a tractable number of high - level actions to choose from .finally , each area needs to give reinforcement to other areas , e.g. , high levels need to punish lower levels for making planning complicated .thus this type of architecture can simplify the learning of control problems .progress is being made in both neuroscience and machine learning on finding potential mechanisms for this type of hierarchical planning and goal - seeking .this is beginning to reveal mechanisms for chunking goals and actions and for searching and pruning decision trees .the study of model - based hierarchical reinforcement learning and prospective optimization , which concerns the planning and evaluation of nested sequences of actions , implicates a network coupling the dorsolateral prefrontal and orbitofrontal cortex , and the ventral and dorsolateral striatum .hierarchical rl relies on a hierarchical representation of state and action spaces , and it has been suggested that error - driven learning of an optimal such representation in the hippocampus gives rise to place and grid cell properties , with goal representations themselves emerging in the amygdala , prefrontal cortex and other areas .
the question of how control problems can be successfully divided into component problems remains one of the central questions in neuroscience and machine learning , and the cost functions involved in learning to create such decompositions are still unknown .these considerations may begin to make plausible , however , how the brain could not only achieve its remarkable feats of motor learning ( such as generating complex `` innate '' motor programs , like walking in the newborn gazelle almost immediately after birth ) but also the kind of planning that allows a human to prepare a meal or travel from london to chicago .spatial planning requires solving shortest - path problems subject to constraints .if we want to get from one location to another , there are an arbitrarily large number of simple paths that could be taken .most naive implementations of such shortest - path problems are grossly inefficient .it appears that , in animals , the hippocampus aids , at least in part through place cell and grid cell systems , in efficient learning about new environments and in targeted navigation in such environments . in some simple models , targeted navigation in the hippocampus is achieved via the dynamics of `` bump attractors '' or propagating waves in a place cell network with hebbian plasticity and adaptation , which allows the network to effectively chart out a path in the space of place cell representations .higher - level cognitive tasks such as prospective planning appear to share computational sub - problems with path - finding .interaction between hippocampus and prefrontal cortex could perhaps support a more abstract notion of `` navigation '' in a space of goals and sub - goals . having specialized structures for path - finding simplifies these problems .language and reasoning appear to present a problem for neural networks : we seem to be able to apply common grammatical rules to sentences regardless of the content of those sentences , and regardless of whether we have ever seen even remotely similar sentences in the training data . while this is achieved automatically in a computer with fixed registers , location addressable memories , and hard - coded operations , how it could be achieved in a biological brain , or emerge from an optimization algorithm , has been under debate for decades . as the putative key capability underlying such operations , variable binding has been defined as `` the transitory or permanent tying together of two bits of information : a variable ( such as an x or y in algebra , or a placeholder like subject or verb in a sentence ) and an arbitrary instantiation of that variable ( say , a single number , symbol , vector , or word ) '' .a number of potential biologically plausible binding mechanisms are reviewed in .some , such as vector symbolic architectures , which were proposed in cognitive science , are also being considered in the context of efficiently - trainable artificial neural networks ; in effect , these systems learn how to use variable binding .variable binding could potentially emerge from simpler memory systems .for example , the scrub - jay can remember the place and time of last visit for hundreds of different locations , e.g.
, to determine whether high - quality food is currently buried at any given location .it is conceivable that such spatially - grounded memory systems enabled a more general binding mechanism to emerge during evolution , perhaps through integration with routing systems or other content - addressable or working memory systems .fixed , static hierarchies ( e.g. , the hierarchical organization of cortical areas ) only take us so far : to deal with long chains of arbitrary nested references , we need _ dynamic _ hierarchies that can implement recursion on the fly .human language syntax has a hierarchical structure , which berwick et al. described as `` composition of smaller forms like words and phrases into larger ones '' .specific fronto - temporal networks may be involved in representing and generating such hierarchies .little is known about the underlying circuit mechanisms for such dynamic hierarchies , but it is clear that specific affordances for representing such hierarchies in an efficient way would be beneficial .this may be closely connected with the issue of variable binding , and it is possible that operations similar to pointers could be useful in this context , in both the brain and artificial neural networks . augmenting neural networks with a differentiable analog of a push - down stack is another such affordance being pursued in machine learning .humans excel at stitching together sub - actions to form larger actions .structured , serial , hierarchical probabilistic programs have recently been shown to model aspects of human conceptual representation and compositional learning .in particular , sequential programs were found to enable one - shot learning of new geometric / visual concepts , a key capability that deep learning networks for object recognition seem to fundamentally lack .generative programs have also been proposed in the context of scene understanding .the ability to deal with problems in terms of sub - problems is central both in human thought and in many successful algorithms .one possibility is that the hippocampus supports the construction and learning of sequential programs .the hippocampus appears to explore , in simulation , possible future trajectories to a goal , even those involving previously unvisited locations .hippocampal - prefrontal interaction has been suggested to allow rapid , subconscious evaluation of potential action sequences during decision - making , with the hippocampus in effect simulating the expected outcomes of potential actions that are generated and evaluated in the prefrontal cortex .the role of the hippocampus in imagination , concept generation , scene construction , mental exploration and goal - directed path planning suggests that it could help to create generative models to underpin more complex inference such as program induction or common - sense world simulation .another related possibility is that the cortex itself intrinsically supports the construction and learning of sequential programs .recurrent neural networks have been used for image generation through a sequential , attention - based process , although their correspondence with the brain is unclear .importantly , there are many other specialized structures known in neuroscience , which arguably receive less attention than they deserve , even for those interested in higher cognition .
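returning to the vector symbolic architectures mentioned above , here is a toy sketch of variable binding via holographic reduced representations ( random vectors ; the role and filler names are invented for illustration ) : roles and fillers are bound by circular convolution , superposed into a single trace , and unbound by circular correlation followed by cleanup against a small codebook .

```python
# holographic reduced representation: bind role and filler vectors by circular
# convolution, superpose several bindings into one trace, and unbind by
# circular correlation; cleanup against a codebook recovers the filler.
import numpy as np

rng = np.random.default_rng(2)
d = 1024
def vec():                      # random vector with roughly unit norm
    return rng.normal(0, 1.0 / np.sqrt(d), d)

def bind(a, b):                 # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):           # circular correlation (approximate inverse)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(trace)))

roles   = {"subject": vec(), "verb": vec(), "object": vec()}
fillers = {"dog": vec(), "chases": vec(), "cat": vec()}

# "the dog chases the cat" as a single superposed trace
trace = (bind(roles["subject"], fillers["dog"]) +
         bind(roles["verb"],    fillers["chases"]) +
         bind(roles["object"],  fillers["cat"]))

# query: who is the subject? unbind and clean up against the filler codebook
query = unbind(trace, roles["subject"])
best = max(fillers, key=lambda k: fillers[k] @ query)
print("subject decoded as:", best)    # expected: "dog"
```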
in the above , in addition to the hippocampus , basal ganglia and cortex , we emphasized the key roles of the thalamus in routing , of the cerebellum as a rapidly trainable control and modeling system , of the amygdala and other areas as a potential source of utility functions , of the retina or early visual areas as a means to generate detectors for motion and other features to bootstrap more complex visual learning , and of the frontal eye fields and other areas as a possible source of attention control .we ignored other structures entirely , whose functions are only beginning to be uncovered , such as the claustrum , which has been speculated to be important for rapidly binding together information from many modalities .our overall understanding of the functional decomposition of brain circuitry still seems very preliminary .a recent analysis suggested directions by which to modify and enhance existing neural - net - based machine learning towards more powerful and human - like cognitive capabilities , particularly by introducing new structures and systems which go beyond data - driven optimization .this analysis emphasized that systems should construct generative models of the world that incorporate compositionality ( discrete construction from re - usable parts ) , inductive biases reflecting causality , intuitive physics and intuitive psychology , and the capacity for probabilistic inference over discrete structured models ( e.g. , structured as graphs , trees , or programs ) to harness abstractions and enable transfer learning .we view these ideas as consistent with and complementary to the framework of cost functions , optimization and specialized systems discussed here .one might seek to understand how optimization and specialized systems could be used to implement some of the mechanisms proposed in that analysis inside neural networks , and how incorporating additional structure into trainable neural networks can potentially give rise to systems that use compositional , causal and intuitive inductive biases and that `` learn to learn '' using structured models and shared data structures .for example , sub - dividing networks into units that can be modularly and dynamically combined , where representations can be copied and routed , may present a path towards improved compositionality and transfer learning .the control flow for recombining pre - existing modules and representations could be learned via reinforcement learning . how to implement the broad set of mechanisms discussed in that work is a key computational problem , and it remains open at which levels ( e.g. , cost functions and training procedures vs. specialized computational structures vs.
underlying neural primitives ) architectural innovations will need to be introduced to capture these phenomena .primitives that are more complex than those used in conventional neural networks ( for instance , primitives that act as state machines with complex message passing , or networks that intrinsically implement bayesian inference ) could potentially be useful , and it is plausible that some of these may be found in the brain .recent findings on the power of generic optimization also do not rule out the idea that the brain may explicitly generate and use particular types of structured representations to constrain its inferences ; indeed , the specialized brain systems discussed here might provide a means to enforce such constraints .it might be possible to further map the concepts of that analysis onto neuroscience via an infrastructure of interacting cost functions and specialized brain systems under rich genetic control , coupled to a powerful and generic neurally implemented capacity for optimization .for example , it was recently shown that complex probabilistic population coding and inference can arise automatically from backpropagation - based training of simple neural networks , without needing to be built in by hand .the nature of the underlying primitives in the brain , on top of which learning can operate , is a key question for neuroscience .hypotheses are primarily useful if they lead to concrete , experimentally testable predictions . as such , we now want to go through the hypotheses and see to what extent they can be directly tested , as well as refined , through neuroscience .there are multiple general strategies for addressing whether and how the brain optimizes cost functions .a first strategy is based on observing the endpoint of learning .if the brain uses a cost function , and we can guess its identity , then the final state of the brain should be close to optimal for the cost function .if we know the statistics of natural environments , and know the cost function , we can compare receptive fields that are optimized in a simulation with the measured ones .this strategy is only beginning to be used at the moment because it has been difficult to measure the receptive fields or other representational properties across a large population of neurons , but this situation is beginning to improve technologically with the emergence of large - scale recording methods . a second strategy could directly quantify how well a cost function describes learning . if the dynamics of learning minimize a cost function , then the underlying vector field should have a strong gradient descent type component and a weak rotational component .if we could somehow continuously monitor the synaptic strengths , while externally manipulating them , then we could , in principle , measure the vector field in the space of synaptic weights , and calculate its divergence as well as its rotation . for at least the subset of synapses that are being trained via some approximation to gradient descent , the divergence component should be strong relative to the rotational component .this strategy has not been developed yet due to experimental difficulties with monitoring large numbers of synaptic weights .a third strategy is based on perturbations : cost function based learning should undo the effects of perturbations which disrupt optimality , i.e.
, the system should return to local minima after a perturbation , and indeed perhaps to the same local minimum after a sufficiently small perturbation .if we change synaptic connections , e.g. , in the context of a brain machine interface , we should be able to produce a reorganization that can be predicted based on a guess of the relevant cost function .this strategy is starting to be feasible in motor areas .lastly , if we knew structurally which cell types and connections mediated the delivery of error signals vs. input data or other types of connections , then we could stimulate specific connections so as to impose a user - defined cost function . in effect , we would use the brain's own networks as a trainable deep learning substrate , and then study how the network responds to training .brain machine interfaces can be used to set up specific local learning problems , in which the brain is asked to create certain user - specified representations , and the dynamics of this process can be monitored . in order to do this properly , we must first understand more about how the system is wired to deliver cost signals .much of the structure that would be found in connectomic circuit maps , for example , would not just be relevant for short - timescale computing , but also for creating the infrastructure that supports cost functions and their optimization .many of the learning mechanisms that we have discussed in this paper make specific predictions about connectivity or dynamics . for example , the `` feedback alignment '' approach to biological backpropagation suggests that cortical feedback connections should , at some level of neuronal grouping , be largely sign - concordant with the corresponding feedforward connections , although not necessarily of concordant weight , and feedback alignment also makes predictions for synaptic normalization mechanisms .the kickback model for biologically plausible backpropagation has a specific role for nmda receptors .some models that incorporate dendritic coincidence detection for learning temporal sequences predict that a given axon should make only a small number of synapses on a given dendritic segment .models that involve stdp learning will make predictions about the dynamics of changing firing rates , as well as about the particular network structures , such as those based on autoencoders or recirculation , in which stdp can give rise to a form of backpropagation .it is critical to establish the unit of optimization .we want to know the scale of the modules that are trainable by some approximation of gradient descent optimization .how large are the networks which share a given error signal or cost function ?on what scales can appropriate training signals be delivered ? it could be that the whole brain is optimized end - to - end , in principle . in this case we would expect to find connections that carry training signals from each layer to the preceding ones . on successively smaller scales, optimization could be within a brain area , a microcircuit , or an individual neuron .importantly , optimization may co - exist across these scales . there may be some slow optimization end - to - end , with stronger optimization within a local area and very efficient algorithms within each cell .careful experiments should be able to identify the scale of optimization , e.g.
, by quantifying the extent of learning induced by a local perturbation .the tightness of the structure - function relationship is the hallmark of molecular and to some extent cellular biology , but in large connectionist learning systems , this relationship can become difficult to extract : the same initial network can be driven to compute many different functions by subjecting it to different training. it can be hard to understand the way a neural network solves its problems .how could one tell the difference , then , between a gradient - descent trained network vs. untrained or random networks vs. a network that has been trained against a different kind of task ?one possibility would be to train artificial neural networks against various candidate cost functions , study the resulting neural tuning properties , and compare them with those found in the circuit of interest .this has already been done to aid the interpretation of the neural dynamics underlying decision making in the pfc , working memory in the posterior parietal cortex and object representation in the visual system .some have gone on to suggest a direct correspondence between cortical circuits and optimized , appropriately regularized , recurrent neural networks . in any case , effective analytical methods to reverse engineer complex machine learning systems , and methods to reverse engineer biological brains , may have some commonalities .does this emphasis on function optimization and trainable substrates mean that we should give up on reverse engineering the brain based on detailed measurements and models of its specific connectivity and dynamics? on the contrary : we should use large - scale brain maps to try to better understand a ) how the brain implements optimization , b ) where the training signals come from and what cost functions they embody , and c ) what structures exist , at different levels of organization , to constrain this optimization to efficiently find solutions to specific kinds of problems .the answers may be influenced by diverse local properties of neurons and networks , such as homeostatic rules of neural structure , gene expression and function , the diversity of synapse types , cell - type - specific connectivity , patterns of inter - laminar projection , distributions of inhibitory neuron types , dendritic targeting and local dendritic physiology and plasticity or local glial networks .they may also be influenced by the integrated nature of higher - level brain systems , including mechanisms for developmental bootstrapping , information routing , attention and hierarchical decision making .mapping these systems in detail is of paramount importance to understanding how the brain works , down to the nanoscale dendritic organization of ion channels and up to the real - time global coordination of cortex , striatum and hippocampus , all of which are computationally relevant in the framework we have explicated here .we thus expect that large - scale , multi - resolution brain maps would be useful in testing these framework - level ideas , in inspiring their refinements , and in using them to guide more detailed analysis . clearly , we can map differences in structure , dynamics and representation across brain areas . 
when we find such differences , the question remains as to whether we can interpret these as resulting from differences in the internally - generated cost functions , as opposed to differences in the input data , or from differences that reflect other constraints unrelated to cost functions .if we can directly measure aspects of the cost function in different areas , then we can also compare them across areas .for example , methods from inverse reinforcement learning might allow backing out the cost function from observed plasticity .moreover , as we begin to understand the `` neural correlates '' of particular cost functions ( perhaps encoded in particular synaptic or neuromodulatory learning rules , genetically - guided local wiring patterns , or patterns of interaction between brain areas ) , we can also begin to understand when differences in observed neural circuit architecture reflect differences in cost functions .we expect that , for each distinct learning rule or cost function , there may be specific molecularly identifiable types of cells and/or synapses .moreover , for each specialized system there may be specific molecularly identifiable developmental programs that tune it or otherwise set its parameters .this would make sense if evolution has needed to tune the parameters of one cost function without impacting others .how many different types of internal training signals does the brain generate ? when thinking about error signals , we are not just talking about dopamine and serotonin , or other classical reward - related pathways .the error signals that may be used to train specific sub - networks in the brain , via some approximation of gradient descent or otherwise , are not necessarily equivalent to reward signals .it is important to distinguish between cost functions that may be used to drive optimization of specific sub - circuits in the brain , and what are referred to as `` value functions '' or `` utility functions '' , i.e. , functions that predict the agent's aggregate future reward . in both cases , similar reinforcement learning mechanisms may be used , but the interpretation of the cost functions is different . we have not emphasized global utility functions for the animal here , since they are extensively studied elsewhere ( e.g. , ) , and since we argue that , though important , they are only a part of the picture , i.e.
, that the brain is not solely an end - to - end reinforcement trained system .progress in brain mapping could soon allow us to classify the types of reward signals in the brain , follow the detailed anatomy and connectivity of reward pathways throughout the brain , and map in detail how reward pathways are integrated into striatal , cortical , hippocampal and cerebellar microcircuits .this program is beginning to be carried out in the fly brain , in which twenty specific types of dopamine neuron project to distinct anatomical compartments of the mushroom body to train distinct odor classifiers operating on a set of high - dimensional odor representations .it is known that , even within the same system , such as the fly olfactory pathway , some neuronal wiring is highly specific and molecularly programmed , while other wiring is effectively random , and yet other wiring is learned .the interplay between such design principles could give rise to many forms of `` division of labor '' between genetics and learning .likewise , it is believed that birdsong learning is driven by reinforcement learning using a specialized cost function that relies on comparison with a memorized version of a tutor's song , and also that it involves specialized structures for controlling song variability during learning .these detailed pathways underlying the construction of cost functions for vocal learning are beginning to be mapped . starting with simple systems , it should become possible to map the reward pathways and how they evolved and diversified , which would be a step on the way to understanding how the system learns .if different brain structures are performing distinct types of computations with a shared goal , then optimization of a joint cost function will take place with different dynamics in each area .if we focus on a higher level task , e.g. , maximizing the probability of correctly detecting something , then we should find that basic feature detection circuits should learn when the features were insufficient for detection , that attentional routing structures should learn when a different allocation of attention would have improved detection and that memory structures should learn when items that matter for detection were not remembered .if we assume that multiple structures are participating in a joint computation , which optimizes an overall cost function ( but see * hypothesis 2 * ) , then an understanding of the computational function of each area leads to a prediction of the measurable plasticity rules .machine learning may be equally transformed by neuroscience . within the brain , a myriad of subsystems and layers work together to produce an agent that exhibits general intelligence .the brain is able to show intelligent behavior across a broad range of problems using only relatively small amounts of data . as such , progress at understanding the brain promises to improve machine learning .
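a cartoon of the compartmentalized arrangement described above for the fly mushroom body , in python ( synthetic sparse `` odor '' codes , random `` reward '' pairings , and a generic delta rule standing in for the actual dopamine - gated plasticity ) : several independent linear readouts share one high - dimensional representation , and each is trained only by its own reinforcement signal .

```python
# cartoon of compartmentalized learning: a shared high-dimensional "odor"
# representation feeds several independent linear readouts, each trained by
# its own reward signal (e.g., one compartment for approach, one for avoidance).
import numpy as np

rng = np.random.default_rng(3)
n_odors, dim = 200, 500
odors = (rng.random((n_odors, dim)) < 0.05).astype(float)   # sparse codes

# two independent "teaching" signals over the same odors
reward_a = (rng.random(n_odors) < 0.5).astype(float)        # e.g., sugar pairing
reward_b = (rng.random(n_odors) < 0.5).astype(float)        # e.g., shock pairing

w_a = np.zeros(dim)
w_b = np.zeros(dim)
lr = 0.05
for _ in range(100):
    for i in rng.permutation(n_odors):
        x = odors[i]
        w_a += lr * (reward_a[i] - float(w_a @ x > 0.5)) * x  # delta rule, signal a
        w_b += lr * (reward_b[i] - float(w_b @ x > 0.5)) * x  # delta rule, signal b

acc_a = np.mean((odors @ w_a > 0.5) == reward_a)
acc_b = np.mean((odors @ w_b > 0.5) == reward_b)
print("compartment a accuracy:", acc_a, " compartment b accuracy:", acc_b)
```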
in this section , we review our three hypotheses about the brain and discuss how their elaboration might contribute to more powerful machine learning systems .a good practitioner of machine learning should have a broad range of optimization methods at their disposal , as different problems ask for different approaches .the brain , we have argued , is an implicit machine learning mechanism which has evolved over millions of years .consequently , we should expect the brain to be able to optimize cost functions efficiently , across many domains and kinds of data .indeed , across different animal phyla , we even see _ convergent _ evolution of certain brain structures , e.g. , the bird brain has no cortex yet has developed homologous structures which , as the linguistic feats of the african grey parrot demonstrate , can give rise to quite complex intelligence .it seems reasonable to hope to learn how to do truly general - purpose optimization by looking at the brain .indeed , there are multiple kinds of optimization that we may expect to discover by looking at the brain . at the hardware level , the brain clearly manages to optimize functions efficiently despite having slow hardware subject to molecular fluctuations , suggesting directions for improving the hardware of machine learning to be more energy efficient . at the level of learning rules , the brain solves an optimization problem in a highly nonlinear , non - differentiable , temporally stochastic , spiking system with massive numbers of feedback connections , a problem that we arguably still do not know how to efficiently solve for neural networks . at the architectural level , the brain can optimize certain kinds of functions based on very few stimulus presentations , operates over diverse timescales , and clearly uses advanced forms of active learning to infer causal structure in the world .while we have discussed a range of theories for how the brain can carry out optimization , these theories are still preliminary .thus , the first step is to understand whether the brain indeed performs multi - layer credit assignment in a manner that approximates full gradient descent , and if so , how it does this .
either way , we can expect that answer to impact machine learning .if the brain does not do some form of backpropagation , this suggests that machine learning may benefit from understanding the tricks that the brain uses to avoid having to do so .if , on the other hand , the brain does do backpropagation , then the underlying mechanisms clearly can support a very wide range of efficient optimization processes across many domains , including learning from rich temporal data - streams and via unsupervised mechanisms , and the architectures behind this will likely be of long - term value to machine learning .moreover , the search for biologically plausible forms of backpropagation has already led to interesting insights , such as the possibility of using random feedback weights ( feedback alignment ) in backpropagation , or the unexpected power of internal force learning in chaotic , spontaneously active recurrent networks .this and other findings discussed here suggest that there are still fundamental things we do not understand about backpropagation , which could potentially lead not only to more biologically plausible ways to train recurrent neural networks , but also to fundamentally simpler and more powerful ones .a good practitioner of machine learning has access to a broad range of learning techniques and thus implicitly is able to use many different cost functions .some problems ask for clustering , others for extracting sparse variables , and yet others for prediction quality to be maximized .the brain also needs to be able to deal with many different kinds of datasets . as such , it makes sense for the brain to use a broad range of cost functions appropriate for the diverse set of tasks it has to solve to thrive in this world .many of the most notable successes of deep learning , from language modeling , to vision , to motor control , have been driven by end - to - end optimization of single task objectives .we have highlighted cases where machine learning has opened the door to multiplicities of cost functions that shape network modules into specialized roles .we expect that machine learning will increasingly adopt these practices in the future . in computer vision , we have begun to see researchers re - appropriate neural networks trained for one task ( e.g. , imagenet classification ) and then deploy them on new tasks other than the ones they were trained for or for which more limited training data is available .we imagine this procedure will be generalized , whereby , in series and in parallel , diverse training problems , each with an associated cost function , are used to shape visual representations . for example , visual data streams can be segmented into elements like foreground vs. background , objects that can move of their own accord vs. those that can not , all using diverse unsupervised criteria .networks so trained can then be shared , augmented , and retrained on new tasks .they can be introduced as front - ends for systems that perform more complex objectives or even serve to produce cost functions for training other circuits . as a simple example , a network that can discriminate between images of different kinds of architectural structures ( pyramid , staircase , etc .
) could act as a critic for a building - construction network .scientifically , determining the order in which cost functions are engaged in the biological brain will inform machine learning about how to construct systems with intricate and hierarchical behaviors via divide - and - conquer approaches to learning problems , active learning , and more .a good practitioner of machine learning should have a broad range of algorithms at their disposal .some problems are efficiently solved through dynamic programming , others through hashing , and yet others through multi - layer backpropagation .the brain needs to be able to solve a broad range of learning problems without the luxury of being reprogrammed . as such , it makes sense for the brain to have specialized structures that allow it to rapidly learn to approximate a broad range of algorithms .the first neural networks were simple single - layer systems , either linear or with limited non - linearities .the explosion of neural network research in the 1980s saw the advent of multilayer networks , followed by networks with layer - wise specializations as in convolutional networks . in the last two decades , architectures with specializations for holding variables stable in memory like the lstm , the control of content - addressable memory , and game playing by reinforcement learning have been developed .these networks , though formerly exotic , are now becoming mainstream algorithms in the toolbox of any deep learning practitioner .there is no sign that progress in developing new varieties of structured architectures is halting , and the heterogeneity and modularity of the brain's circuitry suggests that diverse , specialized architectures are needed to solve the diverse challenges that confront a behaving animal .the brain combines a jumble of specialized structures in a way that works . solving this problem _ de novo _ in machine learning promises to be very difficult , making it attractive to be inspired by observations about how the brain does it .an understanding of the breadth of specialized structures , as well as the architecture that combines them , should be quite useful .deep learning methods have taken the field of machine learning by storm . driving the success is the separation of the problem of learning into two pieces : * ( 1 ) * an algorithm , backpropagation , that allows efficient distributed optimization , and * ( 2 ) * approaches to turn any given problem into an optimization problem , by designing a cost function and training procedure which will result in the desired computation .if we want to apply deep learning to a new domain , e.g. , playing jeopardy , we do not need to change the optimization algorithm ; we just need to cleverly set up the right cost function .a lot of work in deep learning , perhaps the majority , is now focused on setting up the right cost functions .we hypothesize that the brain also acquired such a separation between optimization mechanisms and cost functions .if neural circuits , such as in cortex , implement a general - purpose optimization algorithm , then any improvement to that algorithm will improve function across the cortex . at the same time , different cortical areas solve different problems , so tinkering with each area's cost function is likely to improve its performance .
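a toy sketch of this separation in python ( synthetic data ; a plain gradient - descent routine standing in for the `` optimization algorithm '' ) : the same generic optimizer is reused unchanged , while only the cost function differs between two modules , one unsupervised ( reconstruction through a low - rank bottleneck ) and one supervised ( squared prediction error ) .

```python
# one generic gradient-descent "optimizer" is shared by two modules that differ
# only in their cost functions: an unsupervised reconstruction objective
# through a rank-5 bottleneck, and a supervised squared prediction error.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=500)

def optimize(grad_fn, shape, lr=0.01, steps=500):
    """the shared, generic optimization routine: plain gradient descent."""
    W = 0.01 * rng.normal(size=shape)       # small random init
    for _ in range(steps):
        W -= lr * grad_fn(W)
    return W

def grad_recon(A):                          # cost 1: ||X - X A A^T||^2 / n
    R = X - X @ A @ A.T
    return -2.0 * (X.T @ R @ A + R.T @ X @ A) / len(X)

def grad_pred(w):                           # cost 2: ||y - X w||^2 / n
    return -2.0 * X.T @ (y - X @ w) / len(X)

A = optimize(grad_recon, (20, 5))
w = optimize(grad_pred, (20,))
print("reconstruction error:", np.mean((X - X @ A @ A.T) ** 2))
print("prediction error    :", np.mean((y - X @ w) ** 2))
```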
as such , functionally and evolutionarily separating the problems of optimization and cost function generation could allow evolution to produce better computations , faster .for example , common unsupervised mechanisms could be combined with area - specific reinforcement - based or supervised mechanisms and error signals , much as recent advances in machine learning have found natural ways to combine supervised and unsupervised objectives in a single system .this suggests interesting questions ( one related proposal is that neurons are locally optimized to perform disentangling of the manifolds corresponding to their local views of the transformations of an object , allowing these manifolds to be linearly separated by readout areas ; yet that work also emphasizes the possibility that certain computations such as normalization are pre - initialized in the circuitry prior to learning - based optimization ) : when did the division between cost functions and optimization algorithms occur ? how is this separation implemented ?how did innovations in cost functions and optimization algorithms evolve ? and how do our own cost functions and learning algorithms differ from those of other animals ?there are many possibilities for how such a separation might be achieved in the brain .perhaps the six - layered cortex represents a common optimization algorithm , which in different cortical areas is supplied with different cost functions .this claim is different from the claim that all cortical areas use a single unsupervised learning algorithm and achieve functional specificity by tuning the inputs to that algorithm . in that case , both the optimization mechanism and the implicit unsupervised cost function would be the same across areas ( e.g. , minimization of prediction error ) , with only the training data differing between areas , whereas in our suggestion , the optimization mechanism would be the same across areas but the cost function , _ as well as _ the training data , would differ .thus the cost function itself would be like an ancillary input to a cortical area , in addition to its input and output data . some cortical microcircuits could then , perhaps , compute the cost functions that are to be delivered to other cortical microcircuits .another possibility is that , within the same circuitry , certain aspects of the wiring and learning rules specify an optimization mechanism and are relatively fixed across areas , while others specify the cost function and are more variable . this latter possibility would be similar to the notion of cortical microcircuits as molecularly and structurally configurable elements , akin to the cells in a field - programmable gate array ( fpga ) , rather than a homogeneous substrate .the biological nature of such a separation , if any exists , remains an open question .for example , individual parts of a neuron may separately deal with optimization and with the specification of the cost function , or different parts of a microcircuit may specialize in this way , or there may be specialized types of cells , some of which deal with signal processing and others with cost functions .due to the complexity and variability of the brain , pure `` bottom up '' analysis of neural data faces potential challenges of interpretation .theoretical frameworks can potentially be used to constrain the space of hypotheses being evaluated , allowing researchers to first address higher - level principles and structures in the system , and then `` zoom in '' to address the details .
proposed `` top down '' frameworks for understanding neural computation include entropy maximization , efficient encoding , faithful approximation of bayesian inference , minimization of prediction error , attractor dynamics , modularity , the ability to subserve symbolic operations , and many others .interestingly , many of the `` top down '' frameworks boil down to assuming that the brain simply optimizes a single , given cost function for a single computational architecture .we generalize these proposals assuming both a heterogeneous combination of cost functions unfolding over development , and a diversity of specialized sub - systems .much of neuroscience has focused on the search for `` the neural code '' , i.e. , it has asked which stimuli are good at driving activity in individual neurons , regions , or brain areas .but , if the brain is capable of generic optimization of cost functions , then we need to be aware that rather simple cost functions can give rise to complicated stimulus responses .this potentially leads to a different set of questions .are differing cost functions indeed a useful way to think about the differing functions of brain areas ? how does the optimization of cost functions in the brain actually occur , and how is this different from the implementations of gradient descent in artificial neural networks ?what additional constraints are present in the circuitry that remain fixed while optimization occurs ?how does optimization interact with a structured architecture , and is this architecture similar to what we have sketched ?which computations are wired into the architecture , which emerge through optimization , and which arise from a mixture of those two extremes ? to what extent are cost functions explicitly computed in the brain , versus implicit in its local learning rules ? did the brain evolve to separate the mechanisms involved in cost function generation from those involved in the optimization of cost functions , and if so how ? what kinds of meta - level learning might the brain apply , to learn when and how to invoke different cost functions or specialized systems , among the diverse options available , to solve a given task ?what crucial mechanisms are left out of this framework ?a more in - depth dialog between neuroscience and machine learning could help elucidate some of these questions .much of machine learning has focused on finding ever faster ways of doing end - to - end gradient descent in neural networks .neuroscience may inform machine learning at multiple levels .the optimization algorithms in the brain have undergone a couple of hundred million years of evolution .moreover , the brain may have found ways of using heterogeneous cost functions that interact over development so as to simplify learning problems by guiding and shaping the outcomes of unsupervised learning .lastly , the specialized structures evolved in the brain may inform us about ways of making learning efficient in a world that requires a broad range of computational problems to be solved over multiple timescales . looking at the insights from neuroscience may help machine learning move towards general intelligence in a structured heterogeneous world with access to only small amounts of supervised data . 
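as a concrete machine - learning illustration of the separation argued for above , here is a minimal sketch ( not taken from any system cited in this section ) in which one generic optimization mechanism , backpropagation with stochastic gradient descent , serves two heterogeneous cost functions attached to a single shared representation : a supervised classification loss and an unsupervised reconstruction loss . the layer sizes , the synthetic data and the mixing weight `lam` are illustrative assumptions .

```python
# Minimal sketch: one optimizer, two cost functions sharing an encoder.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 16), nn.ReLU())   # shared substrate
classifier = nn.Linear(16, 3)                           # "area" with a supervised cost
decoder = nn.Linear(16, 20)                             # "area" with an unsupervised cost

params = list(encoder.parameters()) + list(classifier.parameters()) + list(decoder.parameters())
opt = torch.optim.SGD(params, lr=1e-2)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam = 0.5                                               # assumed mixing weight

x = torch.randn(64, 20)                                 # toy inputs
y = torch.randint(0, 3, (64,))                          # toy labels

for _ in range(100):
    h = encoder(x)
    loss = ce(classifier(h), y) + lam * mse(decoder(h), x)  # heterogeneous cost functions
    opt.zero_grad()
    loss.backward()                                         # one generic optimization mechanism
    opt.step()
```

swapping the mixing weight , or replacing the reconstruction term by a prediction - error term , changes the cost functions without touching the optimizer , which is exactly the kind of separation discussed above .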
in some ways our proposal is opposite to many popular theories of neural computation . there is not one mechanism of optimization but ( potentially ) many , not one cost function but a host of them , not one kind of representation but a representation of whatever is useful , and not one homogeneous structure but a large number of them . all these elements are held together by the optimization of internally generated cost functions , which allows these systems to make good use of one another . rejecting simple unifying theories is in line with a broad range of previous approaches in ai . for example , minsky and papert s work on the society of mind and more broadly on ideas of genetically staged and internally bootstrapped development in connectionist systems emphasizes the need for a system of internal monitors and critics , specialized communication and storage mechanisms , and a hierarchical organization of simple control systems . at the time these early works were written , it was not yet clear that gradient - based optimization could give rise to powerful feature representations and behavioral policies . one can view our proposal as a renewed argument against simple end - to - end training and in favor of a heterogeneous approach . in other words , this framework could be viewed as proposing a kind of `` society '' of cost functions and trainable networks , permitting internal bootstrapping processes reminiscent of the society of mind . in this view , intelligence is enabled by many computationally specialized structures , each trained with its own developmentally regulated cost function , where both the structures and the cost functions are themselves optimized by evolution , like the hyperparameters in neural networks .
neuroscience has focused on the detailed implementation of computation , studying neural codes , dynamics and circuits . in machine learning , however , artificial neural networks tend to eschew precisely designed codes , dynamics or circuits in favor of brute force optimization of a cost function , often using simple and relatively uniform initial architectures . two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives . first , structured architectures are used , including dedicated systems for attention , recursion and various forms of short- and long - term memory storage . second , cost functions and training procedures have become more complex and are varied across layers and over time . here we think about the brain in terms of these ideas . we hypothesize that ( 1 ) the brain optimizes cost functions , ( 2 ) these cost functions are diverse and differ across brain locations and over development , and ( 3 ) optimization operates within a pre - structured architecture matched to the computational problems posed by behavior . such a heterogeneously optimized system , enabled by a series of interacting cost functions , serves to make learning data - efficient and precisely targeted to the needs of the organism . we suggest directions by which neuroscience could seek to refine and test these hypotheses . cost functions , neural networks , neuroscience , cognitive architecture 2
the modern mathematical approach of shannon describes information in terms of occurrence of events .the notion of informational content of an event is defined as a quantity that is inversely proportional to the probability with which the event occurs , so that smaller the probability of occurrence of an event the more informational content in the event , and vice versa .the fundamental quantifier of information is therefore a probability distribution , which in the case of a collection of finitely many events is a sequence of real numbers adding up to one , with each number in the sequence representing the probability with which a particular event in the collection occurs .quantum information studies the behavior of information under a quantum mechanical model . as such ,the notion of probability distribution is replaced with the notion of quantum superposition followed by measurement .quantum information and `` classical '' information are fundamentally different , with perhaps the most dramatic difference exhibited by the fact that certain quantum superpositions of two independent quantum objects produce , upon measurement , probability distributions that are impossible to produce by independent classical objects .for example , no flip of two independent classical coins will ever produce the probability distribution over the four possible states of the two coins ; however , certain `` flips '' of two independent quantum coins ( or qubits ) can give rise to quantum superpositions of the quantum coins called entangled states which , upon measurement , produce the probability distribution above . with its different than usual behavior together with the possibility that such behavior could make contributions to scientific and technological advancement, quantum information continues to be an active area of research .relatively recent papers such as are good resources for initiating a study of quantum information and its potential benefits to the fields of algorithms , cryptography , computation , and artificial intelligence .quantum games form a relatively new and exciting area of research within quantum information theory . in the theory of quantum games ,one typically identifies features of quantum information with those of multiplayer , non - cooperative games and looks for different than usual game - theoretic behavior such as enhanced nash equilibria .indeed , it has been established that when features of quantum information are introduced in multiplayer non - cooperative games such as prisoners dilemma and other simple two player , two strategy games , nash equilibrium outcomes can sometimes be observed that are better paying than those available originally .recent surveys of quantum games can be found in . in the case of single player games that are modeled by markovian dynamics , quantum analogueshave been constructed that offer insights into the quantization of certain markov processes and quantum algorithms .recent developments in representing quantum games using octonions and geometric algebras have also provided insights into the behavior of quantum games . 
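as a small numerical illustration of the point above about entangled states , the following sketch ( assuming the elided distribution in the text is the perfectly correlated one , ( 1/2 , 0 , 0 , 1/2 ) , produced by a bell state ) computes the measurement distribution of a two - qubit superposition and checks that it is not the distribution of two independent coin flips .

```python
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
probs = np.abs(bell) ** 2                                    # Born rule: measurement distribution
print(probs)                                                 # [0.5, 0. , 0. , 0.5]

# Two *independent* coins with biases p and q give the product distribution
# (pq, p(1-q), (1-p)q, (1-p)(1-q)), for which P(00)*P(11) == P(01)*P(10) always holds.
p00, p01, p10, p11 = probs
print(np.isclose(p00 * p11, p01 * p10))                      # False: not a product distribution
```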
as noted above ,the prevailing research trend in the theory of quantum games involves _ quantizing _ games , that is , introducing features of quantum information theory to games and seeking insightful game - theoretic results such as nash equilibria that are better paying than those available originally .an opposite approach where one would introduce features of game theory to quantum information does not appear as a prominent research trend in the current quantum game theory literature . picking up on this deficiency in the literature ,we propose here that this latter approach , which we refer to as _ gaming the quantum _ , can potentially produce insightful quantum information theoretic results . to this end ,sections [ sec : game ] and [ sec : mixing ] together provide a mathematically formal review of non - cooperative games , the fundamental solution concept of nash equilibrium in such games , and the notion of randomization via probability distributions in these games that gives rise to the so - called mixed game . skipping section [ sec : gaming the mixture ] for the moment, readers will find in section [ sec : quantizing a game ] a formal treatment of the notion of a quantized non - cooperative game that builds up on the formalism developed in sections [ sec : game ] and [ sec : mixing ] , and shows how quantized games are the result of replacing randomization via probability distributions with the higher order randomization via quantum superposition followed by measurement . to delineate the notions of quantizing a game and gaming the quantum , we first refer back to section [ sec : gaming the mixture ] where the notion of a stochastic game is introduced by suppressing reference to the game that underlies the mixed game of section [ sec : mixing ] .this delineation is completed in section [ sec : gamedquantum ] where reference to the game underlying a quantized game is suppressed and game - theoretic ideas are applied directly to the state space of quantum superpositions .this gives rise to the notion of a quantum game that is more general than a quantized game .further , section [ sec : gamedquantum ] introduces , for the first time as far as we can tell , a notion of players preferences over quantum superpositions which is used to construct a novel geometric characterization of nash equilibrium in quantum games .these ideas are brought together in theorem 1 and corollary 1 which connect the study of nash equilibria in quantum games to a simultaneous minimization problem in the hilbert space of quantum superpositions .finally , section [ sec : quantum mechanism ] proposes an novel synthesis of quantum games and quantum logic circuits by way of an application of theorem 1 and corollary 1 to the problem of designing mechanisms for quantum games at nash equilibrium .multiplayer , non - cooperative game theory can be described as the science of making optimal choices under given constraints . to this end , introduce a set of _ outcomes _ , a finite number , say , of individuals called _ players _ with non - identical preferences over these outcomes , and assume that the players interact with each other within the context of these outcomes . call this interaction between players a _ game _ and define it to be a function with range equal to the set and domain equal to the cartesian product with defined to the set of players _pure strategies_. call an element of the set a _ play of the game _ or a _ strategy profile_. 
in symbols , a game is a function note that a game is distinguished from an ordinary function by the notion of non - identical preferences , one per player , defined over the elements of its range or the outcomes . also note that it is assumed that players make strategic choices independently , a fact stressed by the fact that a game s domain is the cartesian product of the pure strategy sets of the players . givena game as above , assume further that the each player has full knowledge of his and his opponents preferences over the outcomes , of pure strategies available to him and to his opponents , and that each player is aware that he and his opponents are all _ rational _ in that they all engage in a play of the game that is consistent with their respective preferences over the outcomes .in an ideal scenario , players will seek out a play that produces an _( also known as pareto - optimal ) outcome , that is , an outcome that can not be improved upon without hurting the prospects of at least one player .however , players typically succeed in seeking out a play that satisfies the constraint of their non - identical preferences over the outcomes .such an outcome has the property that in the corresponding strategy profile ( or the play of the game ) each player s strategy is a _best reply _ to all others , that is , any unilateral change of strategy in such a play of the game by any one player will produce an outcome of equal or less preference than before for _ that _ player . such a play of a game is called a _nash equilibrium _ . a nash equilibrium that happens to correspond to an optimal outcomeis called pareto - optimal or simply an optimal nash equilibrium .consider as an example the two player , two strategy non - cooperative game with the set of outcomes and preferences of the players , labeled player i and player ii , given by with the symbol standing as short - hand for `` more preferred than '' .the set of pure strategies of each player is the two element set and the game is defined as note from the players preferences that the outcome , corresponding to the play , is optimal .however , the outcome is the unique nash equilibrium in this game , corresponding to the strategy profile , because it satisfies the constraint of the players non - identical preferences over the outcomes in expressions ( [ eqn : i preferences ] ) and ( [ eqn : ii preferences ] ) in that unilateral deviation from this play of the game by either player results in an outcome that is less preferable .this can be seen more clearly from the tabular format of the game in figure [ dove - hawk ] which shows that the player who unilaterally deviates from the strategy profile is left with with an outcome that he prefers less than the outcome . 
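the best - reply test just described can be made concrete with a short sketch that enumerates the pure - strategy nash equilibria of a two player , two strategy game given in bimatrix form . the payoff numbers are illustrative stand - ins , not the elided values of the example game above .

```python
import numpy as np

# Rows = player I's pure strategies, columns = player II's; A = payoffs to I, B = payoffs to II.
A = np.array([[3, 0],
              [5, 1]])
B = np.array([[3, 5],
              [0, 1]])

def pure_nash(A, B):
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            best_for_I = A[i, j] >= A[:, j].max()    # no profitable unilateral row deviation
            best_for_II = B[i, j] >= B[i, :].max()   # no profitable unilateral column deviation
            if best_for_I and best_for_II:
                eqs.append((i, j))
    return eqs

print(pure_nash(A, B))   # [(1, 1)]
```

with these numbers the unique equilibrium is the profile ( 1 , 1 ) even though the profile ( 0 , 0 ) gives both players a strictly preferred outcome , mirroring the situation described above .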
in the game , note that while an optimal outcome exists , it is not the one that manifests as the nash equilibrium .this situation is considered to be an undesirable solution to the game .worse game - theoretic situations can occur .for instance , the matching pennies game of figure [ matchingpennies ] , where each player prefers over in any play of this game , entertains no optimal outcomes nor any nash equilibrium !a fundamental problem in game theory is that of finding ways around such undesirable situations .one way this can be achieved is by simple changing the game .for instance , one can talk of changing the players preferences in the game or even changing the function itself .however , changing a game can naturally be viewed as a form of cheating .indeed , a humorous lament betrays this assessment of changing a game as follows : `` when i finally figured out answers to all of life s questions , they changed all the questions ! '' on a more practical note , one could argue that changing a game can be costly .for example , the cost of changing a game may be social , such as the one that ghandi and the inhabitants of the indian sub - continent incurred in their changing of the british raj game .is there a way then to overcome undesirable game - theoretic situations where nash equilibria are non - existent or sub - optimal , without changing the game ?an affirmative answer can be found in the form of randomization via probability distributions .randomization via probability distribution is a time - honored method of avoiding undesirable game - theoretic situations that has been practiced by players since time immemorial in the form of tossing coins , rolling die , or drawing straws , but was formally introduced to game - theory by von neumann .randomization via probability distributions is introduced in a game formally as an exercise in the extension of the game .bleiler provides an excellent treatment of this perspective in . taking the game as an example, we extend its range to include probability distributions over the outcomes by identifying these outcomes with the corners of the simplex of probability distributions over four things , or the 3-simplex .now probability distributions over the outcomes of can be formed while at the same time any one of these original outcomes can be recovered by putting full probabilistic weight on that outcome .formally , the set is embedded in the set via the identification of each outcome with some corner of so that is the probabilistic weight on .next , a notion of non - identical preferences of the players over probability distributions is defined .a typical way to define such preferences in game theory is via the notion of _ expectation _ , constructed by assigning a numeric value , one per player , to each of the outcomes such that the assignment respects the preferences of the players over the outcomes . for a given probability distribution, expectation is defined to be with being the numeric value of the outcome .a player will now prefer one probability distribution over another if .the _ mixed _game is defined next with domain equal to the cartesian product of the sets of probability distributions over the pure strategies of the players . 
in symbols , where is the set of probability distributions over the pure strategies of each player andis referred to as the set of _ mixed strategies _ of the players .the mixed game is defined as figure [ mixedgame ] gives a pictorial representation of the construction of a mixed game for the game .note that the original game can be recovered from the mixed game by restricting the domain of to .therefore , even though the mixed game is an entirely different game , the fact that the original game `` sits inside '' it and can be recovered if so desired , game theorists argue that the mixed game is a way around undesirable game - theoretic situations such as sub - optimal nash equilibria , _ without changing the game_. the existence of nash equilibria in the mixed game was addressed by john nash who showed that a mixed game with a finite number of players is guaranteed to entertain at least one nash equilibrium .this powerful result offers a way around the most undesirable situation possible in multi - player non - cooperative game theory , namely , a game without any any nash equilibria .moreover , it is sometimes the case that nash equilibrium in the mixed game are near or fully optimal relative to the original game .what can be said about the remains of the mathematical construction employed to produce the mixed game when the underlying game and its domain are removed from consideration ? in this case , the function that remains , call it , maps directly from into and is no longer an extension of .the corners of each no longer correspond to pure strategies of the players in the game and is no longer recoverable from via appropriate restrictions .however , because the range of the game , together with players preferences defined over its outcomes , is still intact within , a notion of players preferences over elements of as constructed in section [ sec : mixing ] still holds .the new function can therefore be considered to be a multi - player non - cooperative game in which the set of outcomes is with players preferences over these outcomes ( probability distributions ) defined in terms of players preferences over the corners of via expectation , and the `` pure strategy '' sets of each player equal : in other words , is the result of `` gaming the mixture '' or an application of ideas from multi - player non - cooperative game theory discussed in section [ sec : game ] to the stochastic function .contrast this with the construction of the mixed game which can be described as an application of stochastic functions to multi - player non - cooperative game theory .the function can appropriately be called a _stochastic game_. note that players preferences over probability distributions need not be left over artifacts of players preferences over the outcomes of the now non - existent underlying game .players preferences over probability distributions can always be defined from scratch .this idea of casting a given function in a game - theoretic setting by starting with a game extension and then removing the underlying game from consideration is extended to functions used in quantum mechanics in section [ sec : gamedquantum ] . 
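as an aside , the mixed extension described above can be illustrated numerically : mixed strategies are probability vectors , preferences over probability distributions are computed via expectation , and a candidate profile is checked against unilateral deviations . the payoff numbers for matching pennies are the usual plus / minus one convention and are assumed here , since the article leaves them symbolic .

```python
import numpy as np

# Matching pennies: player I wins (+1) on a match, player II wins (+1) on a mismatch.
A = np.array([[ 1, -1],
              [-1,  1]])   # payoffs to player I
B = -A                     # payoffs to player II (zero-sum)

def expected_payoffs(p, q):
    """p, q: mixed strategies (probability vectors) of players I and II."""
    return p @ A @ q, p @ B @ q

p = q = np.array([0.5, 0.5])
print(expected_payoffs(p, q))                 # (0.0, 0.0)

# Nash check: no unilateral deviation to a pure strategy improves either player's expectation.
print(max(A @ q) <= p @ A @ q + 1e-12)        # True: player I cannot gain by deviating
print(max(p @ B) <= p @ B @ q + 1e-12)        # True: player II cannot gain by deviating
```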
to this end ,a relevant discussion on extensions of games to include quantum mechanics appears in section [ sec : quantizing a game ] below .note that because the function does not map onto , the image of the mixed game may not contain probability distributions that are optimal or near - optimal with respect to players preferences over probability distributions .as such , the mixed game might not entertain nash equilibria that are better those available originally .this is indeed the case with the game of section [ sec : game ] . in such persistent unsatisfactory game - theoretic situations ,game - theorists seek other extensions of the original game .of special relevance here is the extension of a game , formally suggested by meyer , to included higher order randomization via quantum superpositions followed by measurement . to this end ,the outcomes of a game are identified with an orthogonal basis of the space of quantum superpositions .mathematically , the space of quantum superpositions is a projective complex hilbert space . for the game , a four dimensional projective hilbert space is required with the four outcomes identified with an orthogonal basis of .now quantum superpositions of the outcomes of can be formed .formally , the set is embedded in the set via the identification of of each with an element of the orthogonal basis so that is the projective complex - valued weight on and is the square of the norm of the complex number .measurement , denoted here as , of a quantum superposition produces a probability distribution in from which the expectation of the outcomes of to the players can be computed and the optimality of a quantum superposition can be defined .note that anyone of the outcomes of the game can be recovered by putting full quantum superpositional weight on the basis element corresponding to that outcome and making a measurement .quantized game _ is defined next , typically as a unitary function from the cartesian product of sets of quantum superpositions of the players pure strategies into , with the added property that it reduces to the original game under appropriate restrictions . in symbols , where denotes the set of quantum superpositions of the strategies of the players , referred to in the literature as quantum strategies of the players . however , because a quantized game is meant to be an extension of the original game in a fashion analogous to the mixed game of section [ sec : mixing ] , we propose that it is more appropriate to refer to as the set of a player s _ quantized strategies_. 
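the passage above can be illustrated with a short sketch : a quantum superposition over the four identified outcomes yields , upon measurement , a probability distribution and hence an expectation for each player , and putting full weight on one basis element recovers the corresponding classical outcome . the amplitudes and the numeric payoff values below are illustrative assumptions .

```python
import numpy as np

basis_payoffs_I  = np.array([3.0, 0.0, 5.0, 1.0])   # assumed values respecting player I's preferences
basis_payoffs_II = np.array([3.0, 5.0, 0.0, 1.0])   # assumed values respecting player II's preferences

psi = np.array([1.0, 0.5j, 0.0, 1.0 + 0.5j])        # an arbitrary superposition of the four outcomes
psi = psi / np.linalg.norm(psi)                      # normalize (projective state)

probs = np.abs(psi) ** 2                             # measurement: Born-rule probability distribution
print(probs.sum())                                   # ~1.0
print(probs @ basis_payoffs_I, probs @ basis_payoffs_II)   # post-measurement expectations

# Full weight on one basis element recovers the corresponding original outcome.
e2 = np.zeros(4); e2[2] = 1.0
print((np.abs(e2) ** 2) @ basis_payoffs_I)           # 5.0
```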
we also point out that the term quantized game in fact refers to an entire family of games , in stark contrast to the term mixed game which refers to a specific function .depending on the exact nature of , when followed by measurement , the image of this composite map can be larger than the image of mixed game and may contain optimal or near - optimal probability distributions .quantization of games with two players , each having two pure strategies , have been studied extensively with a recent survey of this subject appearing in .one quantization that underpins most studies of two player , two strategy games is the one proposed by eisert , wilkens , and lewenstein ( ewl ) .it can be shown that the ewl quantization is the specific family of functions explicitly defined as where and focuses on a particular variation of the game with specific numeric values replacing the outcomes .this is the popular game known as prisoner s dilemma .the authors of ewl show that a nash equilibrium with an expectation equal to the optimal outcome in the original game of prisoners dilemma manifests in the quantized game when the play of the quantized game is restricted to a certain sub - class of quantized strategies .however , when plays of the quantized game consisting of the most general class of quantized strategies are considered , no nash equilibria manifest ! this situation is remarkably different from that of the mixed game where nash s theorem guarantees the existence of at least one nash equilibrium .this property of ewl quantization is presented in a more general setting of two player , two strategy games by landsburg in where quaternionic coordinates are utilized to produce the result .just as removing the underlying game and its domain from consideration in the construction of the mixed game in section [ sec : mixing ] leaves the function that can be viewed as an example of applying multi - player non - cooperative game theory to stochastic functions , removing the game from consideration in the construction of a quantized game leaves behind a function that can be viewed as an example of an application of multi - player , non - cooperative game theory to quantum mechanics or `` gaming the quantum '' .viewing the function as quantum physical function because it maps into the joint state space of quantum mechanical objects , it truly deserves to be called a quantum game . in other words ,_ a quantum game is any function mapping into a projective complex hilbert space provided a notion of preferences , one per player , is defined over quantum superpositions_. the factors in the domain of a quantum game can now be correctly referred to as the set of _ quantum strategies _ of the players . in a more generalsetting , the set of quantum strategies can conceivably be any set and need not be restricted to . 
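for concreteness , here is a hedged sketch of an ewl - style quantization of prisoner s dilemma in the maximally entangled case . the entangling gate j , the strategy parameterization u( theta , phi ) and the payoff numbers follow a commonly published ewl convention and are assumptions here , since the article keeps its formulas symbolic .

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
J = (np.kron(I2, I2) + 1j * np.kron(X, X)) / np.sqrt(2)     # maximally entangling gate
Jdag = J.conj().T

def U(theta, phi):
    """One- (well, two-) parameter family of local quantized strategies."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2),  np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

payoff_I  = np.array([3, 0, 5, 1])   # outcomes ordered |CC>, |CD>, |DC>, |DD> (assumed numbers)
payoff_II = np.array([3, 5, 0, 1])

def play(UA, UB):
    ket00 = np.array([1, 0, 0, 0], dtype=complex)
    psi = Jdag @ np.kron(UA, UB) @ J @ ket00     # entangle, apply local strategies, disentangle
    probs = np.abs(psi) ** 2                     # measurement
    return probs @ payoff_I, probs @ payoff_II

C, D, Q = U(0, 0), U(np.pi, 0), U(0, np.pi / 2)
print(play(D, D))   # ~(1, 1): the classical equilibrium is recovered
print(play(Q, Q))   # ~(3, 3): the "quantum" strategy pair reaches the optimal classical outcome
```

the first print reproduces the classical ( defect , defect ) outcome , while the quantum strategy q reaches the optimal outcome , which is the phenomenon discussed above for the restricted class of quantized strategies .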
while it is certainly valid to utilize post - measurement expectation from quantum superpositions to define players preferences over quantum superpositions , doing so neglects the mathematical structure of the projective complex hilbert space of quantum superpositions which is richer than that of the simplex of probability distributions . more precisely , the space of quantum superpositions entertains a natural notion of distance in terms of its inner - product which can be used to define players preferences via an orthogonal basis of the space of quantum superpositions . consider with the orthogonal basis from section [ sec : quantizing a game ] above as an example and define the players preferences over the elements of as where the symbol `` '' represents a player s indifference between the two basis elements surrounding the symbol . the choice of preferences in equations ( [ eqn : quantumpref ] ) and ( [ eqn : quantumpref1 ] ) is motivated by the setting of grover s quantum search algorithm where exactly one element of some database , after identification with an orthogonal basis of some quantum system , is sought out or most preferred and all other elements are less preferable . preferences like these correspond to the proverbial `` one man s meat is another man s poison '' situation and give rise to strictly competitive games where what is best for one player is the worst for the other(s ) . from this point of view , quantum search algorithms like grover s algorithm are strictly competitive quantum games . for strictly competitive games , nash equilibrium takes on a more restricted nature in the form of a min - max solution where one player attempts to minimize his maximum possible loss in response to the other player s attempts to maximize his minimum possible gain .
returning to the discussion on the mathematical structure of the hilbert spaces ,let , .the distance between these two elements is given by the angle where is the inner - product of and , represents its length or norm , and $ ] .player i will now prefer one quantum superposition over another if is closer to than is , that is similarly , player ii will prefer one quantum superposition over another if is closer to than is : the notion of nash equilibrium can now be characterized as a play of a quantum game that satisfies the constraints of the players preferences via this distance notion .let be a quantum superposition corresponding to a play of the quantum game , that is the quantum superposition will be a nash equilibrium outcome if unilateral deviation on part of any one player from the corresponding play will produce a quantum superposition of lesser preference for _ that _ player than .therefore , if player i deviates from his quantum strategy and instead employs the quantum strategy , then also , if player ii deviates from his quantum strategy and instead employs the quantum strategy , then the characterization of nash equilibrium captured by equations ( [ eqn : qi ] ) and ( [ eqn : qii ] ) corresponds to a simultaneous distance minimization problem in the hilbert space , giving the following result : + * theorem 1 * : a _ necessary _ condition for a play of a two player , strictly competitive quantum game to be a nash equilibrium is that it minimize the distance between its image in under and the most preferred basis element of each player in .+ the theory of hilbert space shows that for a given sub - hilbert space of a hilbert space , there always exists a unique element that minimizes the distance between elements of and any fixed .this gives : + * corollary 1 * : a _ sufficient _ condition for the existence of nash equilibrium in a strictly competitive quantum game is for the image of to form a sub - hilbert space of the . physical implementation of a strictly competitive quantum game at nash equilibrium is a problem of mechanism design .a mechanism design approach to quantum games has been proposed in . here , we propose a mechanism design approach for studying strictly competitive quantum games at nash equilibrium based on techniques from quantum logic synthesis .only the outline of this proposal is discussed below , with a more detailed analysis of this approach deferred to a subsequent publication .continuing with the example developed in the preceding section , start with the orthogonal basis of and preferences of player i and player ii over the elements of defined as in ( [ eqn : quantumpref ] ) and ( [ eqn : quantumpref1 ] ) .design next a quantum mechanism and sets of quantum strategies , for player i and player ii respectively , so that as per corollary 1 and theorem 1 , the image of is a sub - hilbert space of and there exist , such that simultaneously minimizes and . restricting to the case where the quantum game is a unitary function mapping into and ,the task of designing a quantum game at nash equilibrium equals that of identifying a unitary matrix and quantum superpositions in each such that the conditions in both theorem 1 and corollary 1 are satisfied .note the assumption here that the players will make independent quantum strategic choices , sometimes referred to as players `` local '' actions , although a quantum mechanism where this condition is relaxed is conceivable . 
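a toy numerical check of this characterization ( not from the article ) : the distance between joint states is the angle induced by the inner product , and a candidate profile is tested against randomly sampled unilateral deviations . the game map g , which simply tensors the players local states , and the players preferred basis vectors are illustrative assumptions ; random sampling gives evidence for , not a proof of , the equilibrium condition .

```python
import numpy as np

e = np.eye(4, dtype=complex)                  # orthogonal basis of the joint Hilbert space
pref_I, pref_II = e[0], e[3]                  # strictly competitive: opposite favourites

def angle(psi, target):
    """Distance as the angle induced by the inner product on the projective space."""
    return np.arccos(min(1.0, abs(np.vdot(target, psi)) / np.linalg.norm(psi)))

def G(a, b):
    """Toy quantum game: players contribute independent qubit states, joined by a tensor product."""
    psi = np.kron(a, b)
    return psi / np.linalg.norm(psi)

def is_nash(a, b, trials=2000, tol=1e-9):
    rng = np.random.default_rng(0)
    dI, dII = angle(G(a, b), pref_I), angle(G(a, b), pref_II)
    for _ in range(trials):
        dev = rng.normal(size=2) + 1j * rng.normal(size=2)   # random unilateral deviation
        if angle(G(dev, b), pref_I) < dI - tol:               # player I could do strictly better
            return False
        if angle(G(a, dev), pref_II) < dII - tol:             # player II could do strictly better
            return False
    return True

ket0, ket1 = np.array([1, 0], complex), np.array([0, 1], complex)
plus = np.array([1, 1], complex) / np.sqrt(2)
print(is_nash(ket0, ket1))   # True  (a degenerate but valid equilibrium of this toy game)
print(is_nash(plus, plus))   # False (player I gains by moving toward |0>)
```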
because the image of any linear function mapping into a finite dimensional hilbert space is a sub - hilbert space , and because every unitary function is linear ( by definition ) , the sufficiency condition for a nash equilibrium of corollary 1 is satisfied .the problem lies in identifying conditions under which will satisfy the necessary condition of theorem 1 .a solution to this problem based on quantum logic synthesis is both elegant and appeals to the ultimately physical nature of the problem . viewing as a quantum logic gate , the mechanism design problem for a quantum game at nash equilibrium resolves to synthesizing or constructing a circuit for in terms of _ universal _ quantum logic gates , that is , quantum logic gates which form a circuit that can approximate up to arbitrary accuracy .it is known that sets of quantum logic gates that map quantum superpositions in ( single qubit ) and ( two qubits ) are universal . moreover ,both single and two qubit gates can be implemented practically using most technologies currently available for performing quantum mechanical operations .one technique for quantum logic synthesis , known as the cosine - sine decomposition ( csd ) , is inspired by the corresponding unitary matrix decomposition technique .the csd expresses a quantum logic gate as a circuit composed of multiply controlled single qubit gates , making the circuit implementable in a practical sense .the csd quantum logic circuit of an arbitrary unitary matrix , call it , appears in figure [ csdofu ] where the wires carry qubits and the one qubit gates , , , , , and are all controlled either by qubit value , represented by the symbol , or qubit value , represented by the symbol .the quantum circuit can be used to study the existence of specific quantum games or possibly families of quantum games at nash equilibrium as per theorem 1 .we point out that is a more general construction than the quantum circuit used in the ewl quantization of two player , two strategy games , appearing here in figure [ ewlcircuit ] , because it can approximate any quantum logic gate ( and circuit ) to arbitrary accuracy . as such ,it is possible that at a functional level , the ewl quantum circuit can be implemented via some particular instantiation of the quantum circuit .but to the best of our knowledge , this possibility has not been explored and remains an open question requiring further study .also note that the ewl quantum circuit is specifically designed so as to recover the underlying classical game such as prisoners dilemma .no such conditions are assumed here for the quantum circuit .we envision several potential future directions in the area of quantum games . for one , players preferences given in section [ gamedquantum ] are strictly competitive in nature and are motivated by a quantum computational and algorithmic context where exactly one outcome , corresponding to a particular calculation or searched item , is the `` correct '' and therefore the most desired outcome of a player .all others are less preferable . 
on the other hand, another player ( or even players ) most prefers at least one of the latter .the equilibrium behavior of quantum circuits can potentially be studied from this game - theoretic perspective .indeed , other preferences that are not strictly competitive in nature may be possible over quantum superpositions , and an entire separate project can be devoted to the study preferences that produce insightful results for quantum games that are not strictly competitive games .further generalization is another possible future direction .one would start with the study of the class of functions into , or indeed into for any , that would satisfy the necessary and sufficient conditions for the existence of nash equilibria .generalizing further would allow a game - theoretic study of a broader class of functions , culminating with the positive operator valued measurement mapping into infinite dimensional vector spaces with a continuum of basis elements .note that these generalizations beyond the hilbert space to more interesting mathematical spaces and objects are still grounded in the physics of the quantum , and hence one can still accurately refer to these generalizations as attempts at gaming the quantum .such studies can potentially produce insightful results in the engineering of control of quantum systems those corresponding to quantum circuits and algorithms .the authors gratefully acknowledge useful discussions with steven bleiler and joel lucero - bryan .s. landsburg , _ nash equilibria in quantum games _ , proceedings of the american mathematical society , volume 139 , number 12 , pages 4423 - 4434 , 2011 .s 0002 - 9939(2011)10838 - 4 .article electronically published on april 19 , 2011 .p. sharif , h. heydari , _ an introduction to multi - player , multi - choice quantum games _ , preprint available at http://arxiv.org/abs/1204.0661 . to appear in proceedings for econophys - kolkata vi ( econophysics of systemic risk and network dynamics ) .a. o. ahmed , s. a. bleiler , f. s. khan , _ octonionization of three player , two strategy maximally entangled quantum games _ , international journal of quantum information , volume 3 , issue 8 , pages 411 - 434 , 2010 .
in the time since the merger of quantum mechanics and game theory was proposed formally in 1999 , the two distinct perspectives apparent in this merger of applying quantum mechanics to game theory , referred to henceforth as the theory of `` quantized games '' , and of applying game theory to quantum mechanics , referred to henceforth as `` gaming the quantum '' , have become synonymous under the single ill - defined term `` quantum game '' . here , these two perspectives are delineated and a game - theoretically proper description of what makes a multiplayer , non - cooperative game quantum mechanical is given . within the context of this description , finding a nash equilibrium in a strictly competitive quantum game is exhibited to be equivalent to finding a solution to a simultaneous distance minimization problem in the state space of quantum objects , thus setting up a framework for a game - theory - inspired study of `` equilibrium '' behavior of quantum physical systems such as those utilized in quantum information processing and computation .
this work deals with the discretization of darcy flows in fractured porous media for which the fractures are modelized as interfaces of codimension one . in this framework ,the dimensional flow in the fractures is coupled with the dimensional flow in the matrix leading to the so called , hybrid dimensional darcy flow model .we consider the case for which the pressure can be discontinuous at the matrix fracture interfaces in order to account for fractures acting either as drains or as barriers as described in , and . in this paper , we will study the family of models described in and .it is also assumed in the following that the pressure is continuous at the fracture intersections .this corresponds to a ratio between the permeability at the fracture intersection and the width of the fracture assumed to be high compared with the ratio between the tangential permeability of each fracture and its length .we refer to for a more general reduced model taking into account discontinuous pressures at fracture intersections in dimension .+ the discretization of such hybrid dimensional darcy flow model has been the object of several works . in , , a cell - centered finite volume scheme using a two point flux approximation ( tpfa ) is proposed assuming the orthogonality of the mesh and isotropic permeability fields .cell - centered finite volume schemes have been extended to general meshes and anisotropic permeability fields using multipoint flux approximations ( mpfa ) in , , and . in , a mixed finite element ( mfe )method is proposed and a mfe discretization adapted to non - matching fracture and matrix meshes is studied in .more recently the hybrid finite volume ( hfv ) scheme , introduced in , has been extended in for the non matching discretization of two reduced fault models .also a mimetic finite difference ( mfd ) scheme is used in in the matrix domain coupled with a tpfa scheme in the fracture network .discretizations of the related reduced model assuming a continuous pressure at the matrix fracture interfaces have been proposed in using a mfe method , in using a control volume finite element method ( cvfe ) , in using the hfv scheme , and in using an extension of the vertex approximate gradient ( vag ) scheme introduced in . in terms of convergence analysis ,the case of continuous pressure models at the matrix fracture interfaces is studied in for a general fracture network but the current state of the art for the discontinuous pressure models at the matrix fracture interfaces is still limited to rather simple geometries .let us recall that the family of models introduced in and depends on a quadrature parameter denoted by ] . in ,the case of one fully immersed fracture in dimension using a tpfa discretization is analysed for the full range of parameters ] excluding the value in order to allow for a primal variational formulation .two examples of gradient discretizations will be provided , namely the extension of the vag and hfv schemes defined in and to the family of hybrid dimensional darcy flow models . 
in both cases , it is assumed that the fracture network is conforming to the mesh in the sense that it is defined as a collection of faces of the mesh .the mesh is assumed to be polyhedral with possibly non planar faces for the vag scheme and planar faces for the hfv scheme .two versions of the vag scheme will be studied , the first corresponding to the conforming finite element on a tetrahedral submesh , and the second to a finite volume scheme using lumping for the source terms as well as for the matrix fracture fluxes .the vag scheme has the advantage to lead to a sparse discretization on tetrahedral or mainly tetrahedral meshes .it will be compared to the hfv discretization using face and fracture edge unknowns in addition to the cell unknowns .note that the hfv scheme of has been generalized in as the family of hybrid mimetic mixed methods which which encompasses the family of mfd schemes .in this article , we will focus without restriction on the particular case presented in for the sake of simplicity .+ in section [ sec_model ] we introduce the geometry of the matrix and fracture domains and present the strong and weak formulation of the model .section [ sec_gs ] is devoted to the introduction of the general framework of gradient discretizations and the derivation of the error estimate [ properror ] . in section [ sec_vaghfv ]we define and investigate the families of vag and hfv discretizations . having in mind applications to multi - phase flow, we also present a finite volume formulation involving conservative fluxes , which applies for both schemes . in section [ sec_num ] , the vag and hfv schemesare compared in terms of accuracy and cpu efficiency for both cartesian and tetrahedral meshes on hererogeneous isotropic and anisotropic media using a family of analytical solutions .let denote a bounded domain of , assumed to be polyhedral for and polygonal for . to fix ideas the dimensionwill be fixed to when it needs to be specified , for instance in the naming of the geometrical objects or for the space discretization in the next section .the adaptations to the case are straightforward .+ let and its interior denote the network of fractures , , such that each is a planar polygonal simply connected open domain included in a plane of .it is assumed that the angles of are strictly smaller than , and that for all .for all , let us set , with as unit vector in , normal to and outward to .further , , , , and .it is assumed that . and 3 intersecting fractures .we might define the fracture plane orientations by for , for , and for .,title="fig:",scaledwidth=25.0% ] and 3 intersecting fractures .we might define the fracture plane orientations by for , for , and for .,title="fig:",scaledwidth=25.0% ] we will denote by the dimensional lebesgue measure on . on the fracture network , we define the function space endowed with the norm and its subspace consisting of functions such that , with continuous traces at the fracture intersections , . the space is endowed with the norm .we also define it s subspace with vanishing traces on , which we denote by . on , the gradient operator from to denoted by . 
on the fracture network , the tangential gradient , acting from to , is denoted by , and such that where , for each , the tangential gradient is defined from to by fixing a reference cartesian coordinate system of the plane containing .we also denote by the divergence operator from to .+ we assume that there exists a finite family such that for all holds : and there exists a lipschitz domain , such that . for and an apropriate choice of assume that .furthermore should hold .we also assume that each is contained in for exactly two and that we can define a unique mapping from to , such that and ( cf .figure [ fig_network ] ) . for all , defines the two sides of the fracture in and we can introduce the corresponding unit normal vectors at outward to , such that .we therefore obtain for and a.e . a unique unit normal vector outward to . a simple choice of is given by both sides of each fracture but more general choices are also possible such as for example the one exhibited in figure [ fig_network ] .+ then , for , we can define the trace operator on : and the normal trace operator on outward to the side : we now define the hybrid dimensional function spaces that will be used as variational spaces for the darcy flow model in the next subsection : and its subspace where ( with denoting the trace operator on ) as well as where on , we define the positive semidefinite , symmetric bilinear form for , which induces the seminorm .note that is a scalar product and is a norm on , denoted by in the following .+ we define for all the scalar product which induces the norm , and where we have used the notation on for all and .+ using similar arguments as in the proof of , example ii.3.4 , one can prove the following poincar type inequality .[ proppoincarecont ] the norm satisfies the following inequality for all .we apply the ideas of the proof of , example ii.3.4 and assume that the statement of the proposition is not true. then we can define a sequence in , such that where , for this proof , .the imbedding is compact , provided that has the cone property ( see , theorem 6.2 ) .thus , there is a subsequence of and , such that on the other hand it follows from ( [ proofproppoincarecont1 ] ) that since is complete , we have with since is a norm on , we have , but , which is a contradiction . with the precedent proof it is readily seen that inequality ( [ poincarecont ] ) holds for all functions whose trace vanishes on a subset of with positive surface measure .the requirement is that has to be in a closed subspace of for which is a well defined norm .the convergence analysis presented in section [ sec_vaghfv ] requires some results on the density of smooth subspaces of and , which we state below . 1 . is defined as the subspace of functions in vanishing on a neighbourhood of the boundary , where is the set of functions , such that for all there exists , such that for all connected components of one has . is defined as the image of of the trace operator .3 . .4 . .let us first state the following lemma that will be used to prove the density of in .[ lemmaweakderivatives ] let and such that for all . then holds , and .firstly , for all , we have and therefore and . for a.e . , there exists an open planar domain containing such that for all there exists with where denotes the normal trace operator on the boundary of . from ,taking , we obtain where denotes the trace operator on the boundary of .we deduce a.e . on .hence .further , for a.e . 
there exists an open planar domain containing such that for all there exists with from we obtain we deduce a.e . on .next , for all , we have from and therefore for and .let , . for a.e . there exists an open interval containing such that for all there exists with from we obtain denoting the dimensional lebesgue measure on .we deduce a.e . on .the proof of a.e .on goes analogously .hence . is dense in .firstly , note that we have i.e. is equivalent to the standard norm on .the density of in being a classical result , we are concerned to prove the density of in in the following . since , we can define . in proposition 2 of is shown that is dense in .hence is dense in . is dense in . since is a closed subspace of the hilbert space , any linear form is the restriction to of a linear form still denoted by in .then , for some and holds for all .let us assume now that for all .corresponding to lemma [ lemmaweakderivatives ] holds . from the definition of conclude that for all .let now .then there exist and , such that for all .furthermore , let us assume that for all . from lemma [ lemmaweakderivatives ]we deduce that , that and that . using this , we conclude , again by the rule of partial integration , that for all . in the matrix domain , let us denote by the permeability tensor such that there exist with analogously , in the fracture network , we denote by the tangential permeability tensor , and assume that there exist , such that holds at the fracture network , we introduce the orthonormal system , defined a.e .on . inside the fractures ,the normal direction is assumed to be a permeability principal direction .the normal permeability is such that for a.e . with .we also denote by the width of the fractures assumed to be such that there exist with for a.e .let us define the weighted lebesgue dimensional measure on by .we consider the source terms ( resp . ) in the matrix domain ( resp . in the fracture network ) .the half normal transmissibility in the fracture network is denoted by .+ given ] , the variational problem has a unique solution which satisfies the a priori estimate with depending only on , , , , , , and .in addition belongs to . using that for all ] and be a gradient discretization , then has a unique solution satisfying the a priori estimate with depending only on , , , , , , and . the lax - milgram theorem applies , which ensures this result . the main theoretical result for gradient schemes is stated by the following proposition : _ ( error estimate ) _ [ properror ] let , be the solution of .let ] with the solution obtained with a 3d representation of the fractures . table [ table1 ] exhibits for the cartesian and tetrahedral meshes , as well as for both the vag and hfv schemes , the number of degrees of freedom ( nb dof ) , the number of d.o.f .after elimination of the cell and dirichlet unknowns ( nb dof el . ) , and the number of nonzero element in the linear system after elimination without any fill - in of the cell unknowns ( nb jac ) . in all test cases ,the linear system obtained after elimination of the cell unknowns is solved using the gmres iterative solver with the stopping criteria .the gmres solver is preconditioned by ilut , using the thresholding parameter chosen small enough in such a way that all the linear systems can be solved for both schemes and for all meshes . 
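for illustration , the following sketch ( not the authors code ) mirrors the linear - solve strategy just described : the reduced sparse system obtained after elimination of the cell unknowns is solved by gmres preconditioned with an incomplete lu factorization with thresholding ( ilut ) , here via scipy . the matrix is a stand - in , and the drop tolerance and fill factor are illustrative choices .

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # stand-in reduced system
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)           # ILUT-type preconditioner
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50)                  # stopping tolerance left at the default;
                                                             # it can be tightened via the rtol/tol argument
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # info == 0 signals convergence
```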
in tables [ table2 ] and [ table3 ] , we report the number of gmres iterations and the cpu time taking into account the elimination of the cell unknowns , the ilut factorization , the gmres iterations , and the computation of the cell values .we ran the program on a 2,6 ghz intel core i5 processor with 8 gb 1600 mhz ddr3 memory .we consider a 3-dimensional open , bounded , simply connected domain with four intersecting fractures , , and .we also introduce the piecewise disjoint , connex subspaces of , , , and .[ [ derivation ] ] derivation : + + + + + + + + + + + for , we denote and where we have introduced .we assume that a solution of the discontinuous pressure model writes in the fracture network and in the matrix domain on we assume such that the continuity of is well established at the fracture - fracture intersection , as well as to ease the following calculations . for let and for let . from the conditions we then get , after some effort in computation , obviously , we have taken and as degrees of freedom , here .however , these functions must be chosen in such a way that for .we would like to explicitly calculate the jump at the matrix - fracture interfaces for this class of solutions .at we have from , we observe , that the pressure becomes continuous at the matrix - fracture interfaces , as the tend to uniformly . in order to obtain solutions with discontinuities at the matrix - fracture interfaces , we had to omit the constraint of flux conservation at fracture - fracture intersections .we define a solution by setting , , , , , .the parameters we used for the different test cases are * isotropic heterogeneous permeability : * anisotropic heterogeneous permeability : in the following figures we plot the normalized norms of the errors , which are calculated as follows : * normalized error of the solution : * normalized error of the gradient : in the following tables is additionally found the normalized error of the jump : .+ 0.5 cm 0.5 cm 0.5 cm the test case shows that , on cartesian grids , we obtain , as classically expected , convergence of order 2 for both , the solution and it s gradient . for tetrahedral grids ,we obtain convergence of order 2 for the solution and convergence of order 1 for it s gradient .we observe that the vag scheme is more efficient then the hfv scheme and this observation gets more obvious with increasing anisotropy .comparing the precision of the discrete solution ( and it s gradient ) for vag and hfv on a given mesh , we see that on hexahedral meshes , the advantage is on the side of vag , whereas on tetrahedral meshes hfv is more precise ( but much more expensive ) . on a given mesh , hfv is usually ( see ) more accurate than vag both for tetrahedral and hexahedral meshes .this is not the case for our test cases on cartesian meshes maybe due to the higher number for vag than for hfv of d.o.f . 
at the interfaces on the matrix side .it is also important to notice that there is literally no difference between vag with finite element respectively lumped _mf_-fluxes concerning accuracy and convergence rate .in this work , we extended the framework of gradient schemes ( see ) to the model problem of stationary darcy flow through fractured porous media and gave numerical analysis results for this general framework .the model problem ( an extension to a network of fractures of a pde model presented in , and ) takes heterogeneities and anisotropy of the porous medium into account and involves a complex network of planar fractures , which might act either as barriers or as drains .we also extended the vag and hfv schemes to our model , where fractures acting as barriers force us to allow for pressure jumps across the fracture network .we developed two versions of vag schemes , the conforming finite element version and the non - conforming control volume version , the latter particularly adapted for the treatment of material interfaces ( cf .we showed , furthermore , that both versions of vag schemes , as well as the proposed non - conforming hfv schemes , are incorporated by the gradient scheme s framework .then , we applied the results for gradient schemes on vag and hfv to obtain convergence , and , in particular , convergence of order 1 for `` piecewise regular '' solutions . for implementation purposes and in view of the application to multi - phase flow , we also proposed a uniform finite volume formulation for vag and hfv schemes .the numerical experiments on a family of analytical solutions show that the vag scheme offers a better compromise between accuracy and cpu time than the hfv scheme especially for anisotropic problems . ahmed , r. , edwards , m.g . ,lamine , s. , huisman , b.a.h ., control - volume distributed multi - point flux approximation coupled with a lower - dimensional fracture model , j. comp .physics , 462 - 489 , vol .284 , 2015 .brenner , k. , groza , m. , guichard , c. , masson , r. vertex approximate gradient scheme for hybrid dimensional two - phase darcy flows in fractured porous media .esaim mathematical modelling and numerical analysis , 49 , 303 - 330 ( 2015 ) .eymard , r. , gallout , t. , herbin , r. : discretization of heterogeneous and anisotropic diffusion problems on general nonconforming meshes sushi : a scheme using stabilisation and hybrid interfaces .i m a j numer anal ( 2010 ) 30 ( 4 ) : 1009 - 1043 .droniou , j. , eymard , r. , gallout , t. , herbin , r. : gradient schemes : a generic framework for the discretisation of linear , nonlinear and nonlocal elliptic and parabolic equations .models methods appl .23 , 13 , 2395 - 2432 ( 2013 ) .droniou , j. , eymard , r. , gallout , t. , herbin , r. : a unified approach to mimetic finite difference , hybrid finite volume and mixed finite volume methods .math . models and methods in appl .20,2 , 265 - 295 ( 2010 ) .brezzi f. , lipnikov k. , simoncini v. , a family of mimetic finite difference methods on polygonal and polyhedral meshes , mathematical models and methods in applied sciences , vol .15 , 10 , 2005 , 1533 - 1552 .i. faille , a. fumagalli , j. jaffr , j. robert , reduced models for flow in porous media containing faults with discretization using hybrid finite volume schemes .
we investigate the discretization of darcy flow through fractured porous media on general meshes . we consider a hybrid dimensional model involving a complex network of planar fractures . the model accounts for matrix - fracture interactions and for fractures acting either as drains or as barriers , i.e. we have to deal with pressure discontinuities at the matrix - fracture interfaces . the numerical analysis is performed in the general framework of gradient discretizations , which is extended to the model under consideration . two families of schemes , namely the vertex approximate gradient scheme ( vag ) and the hybrid finite volume scheme ( hfv ) , are detailed and shown to satisfy the gradient scheme framework , which yields , in particular , convergence . numerical tests confirm the theoretical results . gradient discretization ; darcy flow ; discrete fracture networks ; finite volume
deep learning has achieved remarkable successes in object and voice recognition , machine translation , reinforcement learning and other tasks . from a practical standpointthe problem of supervised learning is well - understood and has largely been solved at least in the regime where both labeled data and computational power are abundant .the workhorse underlying most deep learning algorithms is error backpropagation , which is simply gradient descent distributed across a neural network via the chain rule .gradient descent and its variants are well - understood when applied to convex or nearly convex objectives .in particular , they have strong performance guarantees in the stochastic and adversarial settings .the reasons for the success of gradient descent in non - convex settings are less clear , although recent work has provided evidence that most local minima are good enough ; that modern convolutional networks are close enough to convex for many results on rates of convergence apply ; and that the rate of convergence of gradient - descent can control generalization performance , even in nonconvex settings .taking a step back , gradient - based optimization provides a well - established set of computational primitives , with theoretical backing in simple cases and empirical backing in others .first - order optimization thus falls in broadly the same category as computing an eigenvector or inverting a matrix : given sufficient data and computational resources , we have algorithms that reliably find good enough solutions for a wide range of problems .this essay proposes to abstract out the optimization algorithms used for weight updates and focus on how the components of deep learning algorithms interact . treating optimization as a computational primitive encourages a shift from low - level algorithm design to higher - level mechanism design : we can shift attention to designing architectures that are guaranteed to learn distributed representations suited to specific objectives .the goal is to introduce a language at a level of abstraction where designers can focus on formal specifications ( grammars ) that specify how plug - and - play optimization modules combine into larger learning systems .let us recall how representation learning is commonly understood .et al _ describe representation learning as `` learning transformations of the data that make it easier to extract useful information when building classifiers or other predictors '' .more specifically , `` a deep learning algorithm is a particular kind of representation learning procedure that discovers multiple levels of representation , with higher - level features representing more abstract aspects of the data '' . finally , lecun _ et al _ state that multiple levels of representations are obtained `` by composing simple but non - linear modules that each transform the representation at one level ( starting with the raw input ) into a representation at a higher , slightly more abstract level . with the composition of enough such transformations , very complex functions can be learned . for classification tasks , higher layers of representationamplify aspects of the input that are important for discrimination and suppress irrelevant variations '' .the quotes describe the operation of a successful deep learning algorithm .what is lacking is a characterization of what makes a deep learning algorithm work in the first place. 
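As an aside, the "optimization as a computational primitive" viewpoint sketched above can be made concrete in a few lines: a single call that hides the optimizer's internals behind a gradient oracle. The toy objective and step size below are placeholders, not a recommendation.

```python
import numpy as np

def first_order_solve(grad, theta0, lr=0.1, steps=1000):
    """A generic first-order primitive: repeatedly query a gradient oracle
    and take a small step; returns an approximate local optimum."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Example usage on the simple nonconvex objective f(x) = x**4 - 3*x**2 + x.
theta_star = first_order_solve(lambda x: 4 * x**3 - 6 * x + 1, theta0=[2.0])
```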
what properties must an algorithm have to learn layered representations ?what does it mean for the representation learned by one layer to be useful to another ?what , exactly , is a representation ? in practice ,almost all deep learning algorithms rely on error backpropagation to `` align '' the representations learned by different layers of a network .this suggests that the answers to the above questions are tightly bound up in first - order ( that is , gradient - based ) optimization methods .it is therefore unsurprisingly that the bulk of the paper is concerned with tracking the flow of first - order information .the framework is intended to facilitate the design of more general first - order algorithms than backpropagation . * * to get started , we need a theory of the meaning or semantics encoded in neural networks . since there is nothing special about neural networks , the approach taken is inclusive and minimalistic . definition [ d : meaning ] states that the meaning of _ any _ function is how it implicitly categorizes inputs by assigning them to outputs .the next step is to characterize those functions whose semantics encode knowledge , and for this we turn to optimization . * * nemirovski and yudin developed the black - box computational model to analyze the computational complexity of first - order optimization methods .the black - box model is a more abstract view on optimization than the turing machine model : it specifies a _ communication protocol _ that tracks how often an algorithm makes _queries _ about the objective .it is useful to refine nemirovski and yudin s terminology by distinguishing between black - boxes , which _ respond _ with zeroth - order information ( the value of a function at the query - point ) , and gray - boxes , which respond with zeroth- and first - order information ( the gradient or subgradient ) . with these preliminaries in hand, definition [ d : foo ] proposes that a _ representation _ is a function that is a _ local _ solution to an optimization problem . since we do not restrict to convex problems , finding global solutions is not feasible .indeed , recent experience shows that global solutions are often not necessary practice .the local solution has similar semantics to that is , it represents the ideal solution .the ideal solution usually can not be found : due to computational limitations , since the problem is nonconvex , because we only have access to a finite sample from an unknown distribution , etc . to see how definition [ d : foo ] connects with representation learning as commonly understood , it is necessary to take a detour through distributed optimization and game theory .game theory provides tools for analyzing distributed optimization problems where a set of players aim to minimizes losses that depend not only on their actions , but also the actions of all other players in the game .game theory has traditionally focused on convex losses since they are more theoretically amenable . here, the only restriction imposed on losses is that they are differentiable almost everywhere . 
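Returning for a moment to the notion of meaning previewed above (Definition [d:meaning]), it is simple enough to state in code: the meaning of an output is the preimage of that output under the function. The sketch below only assumes a finite set of inputs so the preimage can be enumerated.

```python
def meaning(f, inputs, y):
    """Possible-world-style semantics: the meaning of output y under f is
    the set of inputs ('worlds') that f maps to y."""
    return {x for x in inputs if f(x) == y}

# Example usage: the meaning of True under the predicate "is even",
# over the worlds 0..9, is the set of even numbers in that range.
even_worlds = meaning(lambda n: n % 2 == 0, range(10), True)
```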
allowing nonconvex lossesmeans that error - backpropagation can be reformulated as a game .interestingly , there is enormous freedom in choosing the players .they can correspond to individual units , layers , entire neural networks , and a variety of other , intermediate choices .an advantage of the game - theoretic formulation is thus that it applies at many different scales .nonconvex losses and local optima are essential to developing a _ scale - free _ formalism . even when it turns out that particular units or a particular layer of a neural network are solving a convex problem , convexity is destroyed as soon as those units or layers are combined to form larger learning systems .convexity is not a property that is preserved in general when units are combined into layers or layers into networks .it is therefore convenient to introduce the computational primitive to denote the output of a first - order optimization procedure , see definition [ d : foo ] . ** a potential criticism is that the formulation is too broad .very little can be said about nonconvex optimization in general ; introducing games where many players jointly optimize a set of arbitary nonconvex functions only compounds the problem .additional structure is required .a successful case study can be found in , which presents a detailed game - theoretic analysis of rectifier neural networks .the key to the analysis is that rectifier units are almost convex .the main result is that the rate of convergence of a neural network to a local optimum is controlled by the ( waking-)regret of the algorithms applied to compute weight updates in the network . whereas relied heavily on specific properties of rectifer nonlinearities, this paper considers a wide - range of deep learning architectures . nevertheless , it is possible to carve out an interesting subclass of nonconvex games by identifying the composition of simple functions as an essential feature common to deep learning architectures .compositionality is formalized via distributed communication protocols and grammars . * * neural networks are constructed by composing a series of elementary operations . the resulting feedforward computation is captured via as a computation graph .backpropagation traverses the graph in reverse and recursively computes the gradient with respect to the parameters at each node .section [ sec : comp ] maps the feedforward and feedback computations onto the queries and responses that arise in nemirovski and yudin s model of optimization .however , queries and responses are now highly structured . in the query phase , players feed parameters into a computation graph ( the query graph ) that performs the feedforward sweep . in the response phase , oracles reveal first - order information that is fed into a second computation graph ( the response graph ) . in most casesthe response graph simply implements backpropagation. 
however , there are examples where it does not .three are highlighted here , see section [ sec : advers ] , and especially sections [ sec : pg ] and [ sec : kb ] .other algorithms where the response graphs do not simply implement backprop include difference target propagation and feedback alignment ( both discussed briefly in section [ sec : kb ] ) and truncated backpropagation through time , where a choice is made about where to cut backprop short .examples where the query and response graph differ are of particular interest , since they point towards more general classes of deep learning algorithms .a _ distributed communication protocol _ is a game with additional structure : the query and response graphs , see definition [ d : dcp ] .the graphs capture the compositional structure of the functions learned by a neural network and the compositional structure of the learning procedure respectively .it is important for our purposes that ( i ) the feedforward and feedback sweeps correspond to two distinct graphs and ( ii ) the communication protocol is kept distinct from the optimization procedure .that is , the communication protocol specifies how information flows through the networks without specifying how players make use of it .players can be treated as plug - and - play rational agents that are provided with carefully constructed and coordinated first - order information to optimize as they see fit . finally , a _ grammar _ is a distributed communication protocol equipped with a guarantee that the response graph encodes sufficient information for the players to jointly find a local optimum of an objective function .the paradigmatic example of a grammar is backpropagation . a grammar is a thus a game designed to perform a task . a representation learned by one ( p)layer is useful to another if the game is guaranteed to converge on a local solution to an objective that is , if the players interact though a grammar .it follows that the players build representations that jointly encode knowledge about the task .* * what follows is provisional .the definitions are a first attempt to capture an interesting , and perhaps useful , perspective on deep learning .the essay contains no new theorems , algorithms or experiments , see for `` real work '' based on the ideas presented here .the essay is not intended to be comprehensive .many details are left out and many important aspects are not covered : most notably , probabilistic and bayesian formulations , and various methods for unsupervised pre - training . 
* * in line with its provisional nature , much of the essay is spent applying the framework to worked examples : error backpropagation as a supervised model ; variational autoencoders and generative adversarial networks for unsupervised learning ; the deviator - actor - critic ( dac ) model for deep reinforcement learning ; and kickback , a biologically plausible variant of backpropagation .the examples were chosen , in part , to maximize variety and , in part , based on familiarity .the discussions are short ; the interested reader is encouraged to consult the original papers to fill in the gaps .the last two examples are particularly interesting since their response graphs differ substantially from backpropagation .the dac model constructs a zeroth - order black - box to estimate gradients rather than querying a first - order gray - box .kickback prunes backprop s response graph by replacing most of its gray - boxes with black - boxes and approximating the chain rule with ( primarily ) local computations .bottou and gallinari proposed to decompose neural networks into cooperating modules .decomposing more general algorithms or models into collections of interacting agents dates back to the shrieking demons that comprised selfridge s pandemonium and a long line of related work .the focus on components of neural networks as players , or rational agents , in their own right developed here derives from work aimed at modeling biological neurons game - theoretically , see .a related approach to semantics based on general value functions can be found in sutton _ et al _ , see remark [ rem : sutton ] .computation graphs as applied to backprop are the basis of the python library theano and provide the backbone for automatic / algorithmic differentiation .grammars are a technical term in the theory of formal languages relating to the chomsky hierarchy .there is no apparent relation between that notion of grammar and the one presented here , aside from both relating to structural rules governing composition .formal languages and deep learning are sufficiently disparate fields that there is little risk of terminological confusion . similarly , the notion of semantics introduced here is distinct from semantics in the theory of programming languages .although game theory was originally developed to model human interactions , it has been pointed out that it may be more directly applicable to interacting populations of algorithms , so - called _ machina economicus _ .this paper goes one step further to propose that games played over first - order communication protocols are a key component of the foundations of deep learning .a source of inspiration for the essay is bayesian networks and markov random fields .probabilistic graphical models and factor graphs provide simple , powerful ways to encode a multivariate distribution s independencies into a diagram .they have greatly facilitated the design and analysis of probabilistic algorithms .however , there is no comparable framework for distributed optimization and deep learning .the essay is intended as a first step in this direction .this section defines semantics and representations . 
in short , the semantics of a function is how it categorizes its inputs ; a function is a representation if it is selected to optimize an objective .the connection between the definition of representation below and `` representation learning '' is clarified in section [ sec : ebp ] .possible world semantics was introduced by lewis to formalize the meaning of sentences in terms of counterfactuals .let be a proposition about the world .its truth depends on its content and the state of the world . rather than allowing the state of the world to vary, it is convenient to introduce the set of all possible worlds. let us denote proposition applied in world by .the meaning of is then the mapping which assigns 1 or 0 to each according to whether or not proposition is true .equivalently , the meaning of the proposition is the ordered pair consisting of : all worlds , and the subset of worlds where it is true : for example , the meaning of ``__that _ _ is blue '' is the subset of possible worlds where i am pointing at a blue object .the concept of blue is rendered explicit in an exhaustive list of possible examples .a simple extension of possible world semantics from propositions to arbitrary functions is as follows : [ d : meaning] + given function , the * semantics * or * meaning * of output is the ordered pair of sets functions implicitly categorize inputs by assigning outputs to them ; the meaning of an output is the category .whereas propositions are true or false , the output of a function is neither . however, if two functions both optimize a criterion , then one can refer to how _accurately _ one function _ represents _ the other . before we can define representationswe therefore need to take a quick detour through optimization : [ d : opt] + an * optimization problem * is a pair consisting in parameter - space and objective that is differentiable almost everywhere .the * solution * to the global optimization problem is which is either a maximum or minimum according to the nature of the objective .the solution may not be unique ; it also may not exist unless further restrictions are imposed .such details are ignored here .next recall the black - box optimization framework introduced by nemirovski and yudin .[ d : protocol] + a * communication protocol * for optimizing an unknown objective consists in a user ( or player ) and an oracle . on each round, user presents a * query * .oracle can * respond * in one of two ways , depending on the nature of the protocol : * _ black - box ( zeroth - order ) protocol . _ + oracle responds with the value .+ + at ( 0,0 ) ( player ) * player * ; at ( 3,0 ) ( block ) ; ( player ) edge node[above ] ( block ) ; at ( 8,0 ) ( player_a ) * player * ; at ( 11,0 ) ( oracle ) ; ( oracle ) edge node[above ] ( player_a ) ; + + * _ gray - box ( first - order ) protocol . _+ oracle responds with either the gradient or with the gradient together with the value .+ + at ( 0,0 ) ( player ) * player * ; at ( 3,0 ) ( block ) ; ( player ) edge node[above ] ( block ) ; at ( 8,0 ) ( player_a ) * player * ; at ( 11,0 ) ( oracle ) * oracle * ; ( oracle ) edge node[above ] ( player_a ) ; the protocol specifies how player and oracle interact without specifying the algorithm used by player to decide which points to query .the next section introduces _ distributed communication protocols _ as a general framework that includes a variety of deep learning architectures as special cases again without specifying the precise algorithms used to perform weight updates . 
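To make the communication protocol in Definition [d:protocol] concrete, here is a minimal sketch of zeroth- and first-order oracles as plain objects, with the player/oracle interaction reduced to a single `query` call; the toy function is of course only illustrative.

```python
class BlackBox:
    """Zeroth-order oracle: responds to a query point x with the value f(x)."""
    def __init__(self, f):
        self.f = f

    def query(self, x):
        return self.f(x)


class GrayBox(BlackBox):
    """First-order oracle: responds with the value and the (sub)gradient."""
    def __init__(self, f, grad_f):
        super().__init__(f)
        self.grad_f = grad_f

    def query(self, x):
        return self.f(x), self.grad_f(x)


# Example usage: a player querying a gray-box for f(x) = x**2.
oracle = GrayBox(lambda x: x * x, lambda x: 2 * x)
value, gradient = oracle.query(3.0)
```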
unlike we do not restrict to convex problems . finding a global optimumis not always feasible , and in practice often unnecessary .[ d : foo] + let be a function space and be a map from parameter - space to functions .further suppose that objective function is given .a * representation * is a local solution to the optimization problem corresponding to a _local _ maximum or minimum according to whether the objective is minimized or maximized .intuitively , the objective quantifies the extent to which functions in categorize their inputs similarly .the operation applies a first - order method to find a function whose semantics resembles the optimal solution where . in short , representations are functions with useful semantics , where usefulness is quantifed using a specific objective : the lower the loss or higher the reward associated with a function , the more useful it is .the relation between definition [ d : foo ] and representations as commonly understood in the deep learning literature is discussed in section [ sec : ebp ] below .[ rem : sutton] + in related work , sutton _et al _ proposed that semantics i.e. knowledge about the world can be encoded in general value functions that provide answers to specific questions about expected rewards .definition [ d : meaning ] is more general than their approach since it associates a semantics to _ any _ function .however , the function must arise from optimizing an objective for its semantics to accurately represent a phenomenon of interest .the main example of a representation arises under supervised learning .[ rep : sup] + let and be an input space and a set of labels and be a loss function .suppose that is a parametrized family of functions .* _ nature _ which samples labeled pairs i.i.d . from distribution , singly or in batches . *_ predictor _ chooses parameters . *_ objective _ is .\ ] ] the query and responses phases can be depicted graphically as + at ( 0,0 ) ( player ) * predictor * ; at ( 5,0 ) ( block ) } ] . to keep the discussion and notation simple, we do not consider any of these important details .it is instructive to unpack the protocol , by observing that the objective is a composite function involving , and ] ; ( player ) edge node[above ] ( block ) ; at ( 10,0 ) ( player_a ) * estimator * ; at ( 15,0 ) ( oracle ) * oracle * ; ( oracle ) edge node[above ] ( player_a ) ; + the estimate , where , is a representation of the optimal solution , and can also be considered a representation of .the setup extends easily to maximum _ a posteriori _ estimation .as for supervised learning , the protocol can be unpacked by observing that the objective has a compositional structure : + at ( 0,0 ) ( glabel ) * query * ; at ( 3,0 ) ( unknown_in ) * nature * ; at ( 0,-2 ) ( player ) * estimator * ; at ( 3,-2 ) ( block1 ) ; at ( 6,-2 ) ( block2 ) ; ( unknown_in ) edge node[left ] ( block1 ) ; ( player ) edge node[above ] ( block1 ) ; ( block1 ) edge node[left ] ( block2 ) ; at ( 9,0 ) ( glabel ) * response * ; at ( 9,-2 ) ( player_out ) * estimator * ; at ( 15,-2 ) ( oracle_l ) * oracle * ; at ( 12,0 ) ( oracle_f ) * oracle * ; at ( 12,-2 ) ( block ) ; ( oracle_l ) edge node[above ] ( block ) ; ( oracle_l ) edge node[below ] ( block ) ; ( oracle_f ) edge node[left ] ( block ) ; ( block ) edge node[below ] ( player_out ) ; ( block ) edge node[above ] ( player_out ) ; the third example is taken from reinforcement learning .we will return to reinforcement learning in section [ sec : pg ] , so the example is presented in some detail . 
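Before moving on to the reinforcement-learning example, the supervised protocol above can be sketched in a few lines. The linear model, squared loss and synthetic "nature" are illustrative stand-ins for the general setting; the point is only the division into query phase, response phase and player update.

```python
import numpy as np

rng = np.random.default_rng(0)

def nature(batch_size=32):
    """Nature samples labelled pairs i.i.d.; here, a noisy linear rule."""
    x = rng.normal(size=(batch_size, 3))
    y = x @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=batch_size)
    return x, y

def supervised_protocol(steps=500, lr=0.05):
    theta = np.zeros(3)                  # predictor's move
    for _ in range(steps):
        x, y = nature()                  # query phase: nature and predictor feed the graph
        residual = x @ theta - y         # squared-loss residual
        grad = x.T @ residual / len(y)   # response phase: oracle reports the gradient
        theta -= lr * grad               # the update rule itself is not fixed by the protocol
    return theta                         # a representation: an approximate local optimum
```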
in reinforcement learning , an agent interacts with its environment , which is often modeled as a markov decision process consisting of state space , action space , initial distribution on states , stationary transition distribution and reward function .the agent chooses actions based on a _ policy _ : a function from states to actions .the goal is to find the optimal policy .actor - critic methods break up the problem into two pieces .the critic estimates the expected value of state - action pairs given the current policy , and the actor attempts to find the optimal policy using the estimates provided by the critic .the critic is typically trained via temporal difference methods .let denote the distribution on states at time given policy and initial state at and let .let be the discounted future reward .define the value of a state - action pair as .\ ] ] unfortunately , the value - function can not be queried . instead, temporal difference methods take a bootstrapped approach by minimizing the bellman error : \ ] ] where is the state subsequent to . + critic interacts with black - boxes actor and nature . *_ critic _ plays parameters .* _ operator _ and estimates the value function and compute the bellman error . in practice , it turns out to _ clone _ the value - estimate periodically and compute a slightly modified bellman error : \ ] ] where is the cloned estimate .cloning improves the stability of td - learning . a nice conceptual side - effect of cloning is that td - learning reduces to gradient descent .+ + at ( 0,0 ) ( glabel ) * query * ; at ( 6,0 ) ( unknown_in ) * nature * ; at ( 3,0 ) ( actor ) * actor * ; at ( 0,-2 ) ( player ) * critic * ; at ( 3,-2 ) ( block1 ) ; at ( 6,-2 ) ( block2 ) ; ( unknown_in ) edge node[left ] ( block1 ) ; ( player ) edge node[above ] ( block1 ) ; ( block1 ) edge node[left ] ( block2 ) ; ( unknown_in ) edge node[right ] ( block2 ) ; ( actor ) edge node[left ] ( block1 ) ; + at ( 9,0 ) ( glabel ) * response * ; at ( 9,-2 ) ( player_out ) * critic * ; at ( 15,-2 ) ( oracle_l ) * oracle * ; at ( 12,0 ) ( oracle_f ) * oracle * ; at ( 12,-2 ) ( block ) ; + ( oracle_l ) edge node[above ] ( block ) ; ( oracle_f ) edge node[right ] ( block ) ; ( block ) edge node[above ] ( player_out ) ; + + the estimate is a representation of the true value function . + temporal difference learning is not strictly speaking a gradient - based method .the residual gradient method performs gradient descent on the bellman error , but suffers from double sampling .projected fixpoint methods minimize the _ projected _ bellman error via gradient descent and have nice convergence properties .an interesting recent proposal is implicit td learning , which is based on implicit gradient descent .section [ sec : pg ] presents the deviator - actor - critic model which simultaneously learns a value - function estimate and a locally optimal policy .it is often useful to decompose complex problems into simpler subtasks that can handled by specialized modules .examples include variational autoencoders , generative adversarial networks and actor - critic models .neural networks are particularly well - adapted to modular designs , since units , layers and even entire networks can easily be combined analogously to bricks of lego . 
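The cloning trick described above for temporal difference learning is easy to state concretely. The sketch below uses a linear value estimate and a stream of precomputed transition features; the feature map, learning rate and cloning period are illustrative choices.

```python
import numpy as np

def td_with_cloning(transitions, n_features, gamma=0.9, lr=0.01, clone_every=100):
    """TD learning for Q(s, a) = phi(s, a) . theta, with a periodically cloned
    copy of theta used inside the bootstrap target."""
    theta = np.zeros(n_features)
    theta_clone = theta.copy()
    for t, (phi, reward, phi_next) in enumerate(transitions):
        target = reward + gamma * phi_next @ theta_clone  # cloned estimate in the target
        delta = phi @ theta - target                      # Bellman error with cloning
        theta -= lr * delta * phi                         # gradient step on 0.5 * delta**2
        if (t + 1) % clone_every == 0:
            theta_clone = theta.copy()                    # refresh the clone
    return theta
```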
however , not all configurations are viable models .a methodology is required to distinguish good designs from bad .this section provides a basic language to describe how bricks are glued together that may be a useful design tool .the idea is to extend the definitions of optimization problems , protocols and representations from section [ sec : reps ] from single to multi - player optimization problems .[ d : game] + a * distributed optimization problem * or * game * ,\theta , { \ensuremath{\boldsymbol\ell}}) ] of players , a parameter space , and loss vector .player picks moves from and incurs loss determined by .the goal of each player is to minimize its loss , which depends on the moves of the other players .the classic example is a _ finite game _ , where player has a menu of -actions and chooses a distribution over actions , on each round .losses are specified for individual actions , and extended linearly to distributions over actions .a natural generalization of finite games is _ convex games _ where the parameter spaces are compact convex sets and each loss is a convex function in its -argument . it has been shown that players implementing no - regret algorithms are guaranteed to converge to a correlated equilibrium in convex games . computation graphs are a useful tool for calculating derivatives . for simplicity , we restrict to deterministic computation graphs .more general stochastic computation graphs are studied in . * _ query phase ._ players provide inputs to the query graph _ ( ) _ that operators transform into outputs .* _ response phase . _ operators in act as oracles in the response graph _ ( ) _ : they input subgradients that are transformed and communicated to the players .the protocol specifies how players and oracles communicate without specifying the optimization algorithms used by the players .the addition of a response graph allows more general computations than simply backpropagating the gradients of the query phase .the additional flexibility allows the design of new algorithms , see sections [ sec : pg ] and [ sec : kb ] below .it is also sometimes necessary for computational reasons .for example , backpropagation through time on recurrent networks typically runs over a truncated response graph .suppose that we wish to optimize an objective function that depends on all the moves of all the players .finding a global optimum is clearly not feasible .however , we may be able to construct a protocol such that the players are jointly able to find local optima of the objective . in such cases ,we refer to the protocol as a grammar : [ d : grammar] + a * grammar * for objective is a distributed communication protocol where the response graph provides _ sufficient _ first - order information to find a local optimum of .the guarantee ensures that the representations constructed by players in a grammar can be combined into a coherent distributed representation .that is , it ensures that the representations constructed by the players transform data in a way that is useful for optimizing the shared objective .the players losses need not be explicitly computed .all that is necessary is that the response phase communicate the gradient information needed for players to locally minimize their losses and that doing so yields a local optimum of the objective . 
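A toy instance of a distributed optimization problem helps fix ideas: two players, each with its own differentiable loss depending on both moves, each following only the gradient of its own loss. The two quadratic losses below are arbitrary smooth examples with a unique equilibrium at the origin.

```python
def simultaneous_gradient_play(grad1, grad2, x0, y0, lr=0.05, steps=2000):
    """Each player updates its own move using the gradient of its own loss;
    the protocol fixes what information is exchanged, not the update rule."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        gx, gy = grad1(x, y), grad2(x, y)   # response phase: each player receives its gradient
        x, y = x - lr * gx, y - lr * gy     # players move simultaneously
    return x, y

# Example with losses l1 = (x - y)**2 + x**2 and l2 = (x + y)**2 + y**2,
# whose unique equilibrium is (0, 0).
x_star, y_star = simultaneous_gradient_play(
    lambda x, y: 2 * (x - y) + 2 * x,
    lambda x, y: 2 * (x + y) + 2 * y,
    x0=1.0, y0=-1.0)
```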
** functions can be inserted into grammars as lego - like building blocks via function composition during queries and the chain rule during responses .let be a function that takes inputs and , provided by a player and by upstream computations respectively .the output of is communicated downstream in the query phase : + at ( 0,0 ) ( glabel ) * query * ; at ( 0,-2 ) ( unknown_in ) ; at ( 6,-2 ) ( unknown_out ) ; at ( 3,0 ) ( player ) * player * ; at ( 0,-4 ) ( unknown_x ) ; at ( 3,-2 ) ( block ) ; ( unknown_in ) edge node[above ] ( block ) ; ( player ) edge node[left ] ( block ) ; ( block ) edge node[above ] ( unknown_out ) ; at ( 0,0 ) ( glabel ) * response * ; at ( 0,-2 ) ( unknown_in ) ; at ( 6,-2 ) ( unknown_out ) ; at ( 3,0 ) ( player ) * player * ; at ( 3,-2 ) ( block ) ; at ( 3,-4 ) ( oracle ) * oracle * ; ( block ) edge node[left ] ( player ) ; ( block ) edge node[right ] ( player ) ; ( block ) edge node[above ] ( unknown_in ) ; ( block ) edge node[below ] ( unknown_in ) ; ( unknown_out ) edge node[above ] ( block ) ; ( oracle ) edge node[right ] ( block ) ; + the chain rule is implemented in the response phase as follows .oracle reports the gradient in the response phase .operator `` '' computes the products via matrix multiplication .the projection of the product onto the first and second components '' produce two outputs , the entire vector can be reported in both direction with the irrelevant components ignored . ]are reported to player and upstream respectively . 1 ._ exact gradients . _+ under error backpropagation the response graph implements the chain rule , which guarantees that players receive the gradients of their loss functions ; see section [ sec : ebp ] .surrogate objectives . _ + the variational autoencoder uses a surrogate objective : the variational lower bound . maximizing the surrogateis guaranteed to also maximize the true objective , which is computational intractable ; see section [ sec : vae ] .learned objectives . _+ in the case of generative adversarial network and the dac - model , some of the players learn a loss that is guaranteed to align with the true objective , which is unknown ; see sections [ sec : advers ] and [ sec : pg ] .4 . _ estimated gradient . _+ in the dac - model and kickback , gradient estimates are substituted for the true gradient ; see sections [ sec : pg ] and [ sec : kb ] .guarantees are provided on the estimates . + there is considerable freedom regarding the choice of players . in the examples below ,players are typically chosen to be layers or entire neural networks to keep the diagrams simple .it is worth noting that zooming in , such that players correspond to individual units , has proven to be a useful tool when analyzing neural networks .the game - theoretic formulation is thus scale - free and can be coarse- or fine - grained as required .a mathematical language for tracking the structure of hierarchical systems at different scales is provided by operads , see and the references therein , which are the natural setting to study the composition of operators that receive multiple inputs .the main example of a grammar is a neural network using error backpropagation to perform supervised learning .layers in the network can be modeled as players in a game . 
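A minimal sketch of such a lego-like building block: `forward` computes the query-phase output and caches its input, while `backward` implements the chain rule for the response phase, reporting one gradient to the player and passing another upstream. The affine map is simply the most basic non-trivial example.

```python
import numpy as np

class Affine:
    """One brick of a query/response graph: y = W x."""
    def __init__(self, W):
        self.W = np.asarray(W, dtype=float)

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)    # cache the query-phase input
        return self.W @ self.x

    def backward(self, delta):
        """delta is the gradient arriving from downstream."""
        self.grad_W = np.outer(delta, self.x)  # reported to the player owning W
        return self.W.T @ delta                # passed to the upstream block
```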
setting each ( p)layer s objective as the network s loss , which it minimizes using gradient ascent , yields backpropagation .[ c : chain_rule] + an -layer neural network can be reformulated as a game played between players , corresponding to _ nature _ and the _ layers _ of the network .the query graph for a 3-layer network is : + * _ nature _ plays samples datapoints i.i.d . from and acts as the zeroth player . *_ layer _ plays weight matrices . * _ operators _ compute for each layer , along with loss .the protocol can be extended to convolutional networks by replacing the matrix multiplications performed by each operator , , with convolutions and adding parameterless max - pooling operators . ** we are now in a position to relate the notion of representation in definition [ d : foo ] with the standard notion of representation learning in neural networks . in the terminology of section [ sec :reps ] , each player learns a representation .the representations learned by the different players form a coherent distributed representation because they jointly optimize a single objective function .if we set then the function fits the definition of representation above .moreover , the compositional structure of the network implies that is composed of subrepresentations corresponding to the optimizations performed by the different players in the grammar : each function is a local optimum where is optimized to transform its inputs into a form that is useful to network as a whole . ** little can be said in general about the rate of converge of the layers in a neural network since the loss is not convex .however , neural networks can be decomposed further by treating the individual units as players . when the units are linear or rectilinear , it turns out that the network is a _circadian game_. the circadian structure provides a way to convert results about the convergence of convex optimization methods into results about the global convergence a rectifier network to a local optimum , see .the next example extends the unsupervised setting described in section [ sec : unsupervised ] .suppose that observations are sampled i.i.d .from a two - step stochastic process : a latent value is sampled from , after which is sampled from .the goal is to ( i ) find the maximum likelihood estimator for the observed data and ( ii ) estimate the posterior distribution on conditioned on an observation .a straightforward approach is to maximize the marginal likelihood and then compute the posterior however , the integral in eq .is typically untractable , so a more roundabout tactic is required .the approach proposed in is to construct two neural networks , a decoder that learns a generative model approximating , and an encoder that learns a recognition model or posterior approximating .it turns out to be useful to replace the encoder with a deterministic function , , and a noise source , that are compatible .here , compatible means that sampling is equivalent to sampling and computing .* _ environment _ plays i.i.d .samples from * _ noise _ plays i.i.d .samples from .it also communicates its density function , which is analogous to a gradient and the reason that _ noise _ is gray rather than black - box . *_ encoder _ and _ decoder _ play parameters and respectively . * _ is a neural network that encodes samples into latent variables . *_ operator _ is a neural network that estimates the probability of conditioned on . 
*the remaining operators compute the ( negative ) variational lower bound }_{{{\mathcal l}}_2}.\ ] ] at ( 0,0 ) ( glabel ) * response * ; at ( 2,-2 ) ( oracle1 ) * oracle * ; at ( 13,-2 ) ( oracle2)*oracle * ; at ( 2,-4 ) ( oracle4 ) * oracle * ; at ( 13,-4 ) ( oracle3 ) * oracle * ; at ( 7,-4 ) ( oracle5 ) * oracle * ; at ( 5,-2 ) ( block1 ) ; at ( 9,-2 ) ( block2 ) ; at ( 5,-4 ) ( block3 ) ; at ( 9,-4 ) ( block4 ) ; at ( 2,0 ) ( block5 ) ; at ( 5,0 ) ( player1 ) * encoder * ; at ( 9,0 ) ( player2 ) * decoder * ; ( oracle1 ) edge node[left ] ( block5 ) ; ( oracle2 ) edge node[above ] ( block2 ) ; ( block4 ) edge node[right ] ( block2 ) ; ( oracle3 ) edge node[above ] ( block4 ) ; ( oracle4 ) edge node[above ] ( block3 ) ; ( oracle5 ) edge node[above ] ( block3 ) ; ( oracle5 ) edge node[above ] ( block4 ) ; ( block1 ) edge node[below ] ( block5 ) ; ( block2 ) edge node[above ] ( block1 ) ; ( block3 ) edge node[left ] ( block1 ) ; ( block5 ) edge node[above ] ( player1 ) ; ( block2 ) edge node[left ] ( player2 ) ; ( block2 ) edge node[right ] ( player2 ) ; 1 . maximizing the variational lower bound yields ( i ) a maximum likelihood estimator and ( ii ) an estimate of the posterior on the latent variable .the chain rule ensures that the correct gradients are communicated to encoder and decoder .a recent approach to designing generative models is to construct an adversarial game between forger and curator .forger generates samples ; curator aims to discriminate the samples produced by forger from those produced by nature .forger aims to create samples realistic enough to fool curator .if forger plays parameters and curator plays then the game is described succinctly via + \operatorname*{\mathbb e}_{{\ensuremath{\boldsymbol\epsilon}}\sim{{\mathbb p}}_{noise}({\ensuremath{\boldsymbol\epsilon}})}\big[\log ( 1 - d_{{{\ensuremath{\boldsymbol{\phi}}}}}(g_{{{\ensuremath{\boldsymbol{\theta}}}}}({\ensuremath{\boldsymbol\epsilon}})))\big ] \right],\ ] ] where is a neural network that converts noise in samples and classifies samples as fake or not .* _ environment _ samples images i.i.d . from . *_ noise _ samples i.i.d . from .* _ forger _ and _ curator _ play parameters and respectively . 
* _ operator_ is a neural network that produces fake image .* _ operator _ is a neural network that estimates the probability that an image is fake .* the remaining operators compute a loss that _ curator _ minimizes and _ forger _maximizes }_{{\ensuremath{\boldsymbol\ell}}_{disc } } + \underbrace{\operatorname*{\mathbb e}_{{\ensuremath{\boldsymbol\epsilon}}\sim { { \mathbb p}}({\ensuremath{\boldsymbol\epsilon}})}\big[\log\big(1-d_{{\ensuremath{\boldsymbol{\phi}}}}(g_{{\ensuremath{\boldsymbol{\theta}}}}({\ensuremath{\boldsymbol\epsilon}}))\big)\big]}_{{\ensuremath{\boldsymbol\ell}}_{gen}}\ ] ] note there are two copies of operator in the query graph .the response graph implements the chain rule , with a tweak that multiplies the gradient communicated to _ forger _ by to ensure that _ forger _ maximizes the loss that _ curator _ is minimizing .+ at ( 0,0 ) ( glabel ) * response * ; at ( 0,-3 ) ( player1 ) * forger * ; at ( 4,0 ) ( player2 ) * curator * ; at ( 4,-3 ) ( block1 ) ; at ( 8,-3 ) ( block2 ) ; at ( 12,-3 ) ( block3 ) ; at ( 12,-1 ) ( block4 ) ; at ( 8,-1 ) ( block4a ) ; at ( 4,-1 ) ( block5 ) ; at ( 12,-4 ) ( oracle3 ) * oracle * ; at ( 12,0 ) ( oracle4 ) * oracle * ; at ( 12,-2 ) ( oracle5 ) * oracle * ; at ( 4,-4 ) ( oracle1 ) * oracle * ; at ( 8,-4 ) ( oracle2 ) * oracle * ; at ( 8,0 ) ( oracle2a ) * oracle * ; ( block1 ) edge node[below ] ( player1 ) ; ( block1 ) edge node[above ] ( player1 ) ; ( block2 ) edge node[right ] ( block5 ) ; ( oracle4 ) edge node[right ] ( block4 ) ; ( block2 ) edge node[above ] ( block1 ) ; ( block2 ) edge node[below ] ( block1 ) ; ( oracle2 ) edge node[right ] ( block2 ) ; ( oracle2a ) edge node[right ] ( block4a ) ; ( oracle1 ) edge node[left ] ( block1 ) ; ( oracle3 ) edge node[right ] ( block3 ) ; ( oracle5 ) edge node[right ] ( block3 ) ; ( oracle5 ) edge node[right ] ( block4 ) ; ( block4 ) edge node[above ] ( block4a ) ; ( block4a ) edge node[above ] ( block5 ) ; ( block4 ) edge node[below ] ( block4a ) ; ( block3 ) edge node[above ] ( block2 ) ; ( block3 ) edge node[below ] ( block2 ) ; ( block5 ) edge node[left ] ( player2 ) ; the generative - adversarial network is the first example where the response graph does not simply backpropagate gradients : the arrow labeled is computed as , whereas backpropagation would use .the minus sign arises due to the adversarial relationship between forger and curator they do not optimize the same objective . as discussed in section [ sec : valprox ] , actor - critic algorithms decompose the reinforcement learning problem into two components : the critic , which learns an approximate value function that predicts the total discounted future reward associated with state - action pairs , and the actor , which searches for a policy that maximizes the value appoximation provided by the critic .when the action - space is continuous , a natural approach is to follow the gradient . in , it was shown how to compute the policy gradient given the true value function .furthermore , sufficient conditions were provided for an approximate value function learned by the critic to yield an unbiased estimator of the policy gradient .more recently provided analogous results for deterministic policies .the next example of a grammar is taken from , which builds on the above work by introducing a third algorithm , deviator , that directly estimates the gradient of the value function estimated by critic . 
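Before turning to the deviator-actor-critic example, the adversarial objective discussed above can be written down directly. The sketch uses the standard sign convention from the generative-adversarial-network literature, in which one player ascends and the other descends the same scalar criterion; in the response graph described above the same adversarial relationship is obtained by multiplying the gradient sent to the forger by minus one, so the bookkeeping of signs may differ from the essay's.

```python
import numpy as np

def gan_value(d_real, d_fake, eps=1e-8):
    """Scalar criterion for a batch: d_real = D(x) on real samples and
    d_fake = D(G(noise)) on generated samples. The curator (discriminator)
    takes gradient steps to increase it, the forger (generator) to decrease it."""
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
```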
at ( 0,-1.5 ) ( glabel ) * query * ; at ( 0,0 ) ( actor ) * actor * ; at ( 0,-3 ) ( deviator ) * deviator * ; at ( 14,0 ) ( critic ) * critic * ; at ( 14,-3 ) ( noise ) * noise * ; at ( 6.5,-1.5 ) ( env ) * nature * ; at ( 3,0 ) ( block1 ) ; at ( 3,-3 ) ( block2 ) ; at ( 10,0 ) ( block3 ) ; at ( 10,-3 ) ( block4 ) ; ( actor ) edge node[above ] ( block1 ) ; ( deviator ) edge node[above ] ( block2 ) ; ( critic ) edge node[above ] ( block3 ) ; ( noise ) edge node[above ] ( block4 ) ; ( block1 ) ( block2 ) ; ( block1 ) ( block3 ) ; ( block2 ) ( block4 ) ; ( block3 ) ( block4 ) ; ( env ) edge node[above ] ( block1 ) ; ( env ) edge node[above ] ( block2 ) ; ( env ) edge node[above ] ( block3 ) ; ( env ) edge node[above ] ( block4 ) ; * _ nature _ samples states from and announces rewards that are a function of the prior state and action ; _ noise _ samples . * _ actor _ , _ critic _ and _ deviator _ play parameters , and respectively . *_ operator _ is a neural network that computes actions . *_ operator _ is a neural network that estimates the value of state - action pairs . *_ operator _ is a neural network that estimates the gradient of the value function . *the remaining _ operator _ computes the bellman gradient error ( bge ) which _ critic _ and _ deviator _ minimize at ( 0,-1.5 ) ( glabel ) * response * ; at ( 0,0 ) ( actor ) * actor * ; at ( 0,-3 ) ( deviator ) * deviator * ; at ( 14,0 ) ( critic ) * critic * ; at ( 5.5,0 ) ( oracle1 ) * oracle * ; at ( 5.5,-1.5 ) ( oracle2)*oracle * ; at ( 3,-1.5 ) ( op2 ) ; at ( 7.5,0 ) ( oracle3 ) * oracle * ; at ( 10,-3 ) ( oracle4 ) * oracle * ; at ( 3,0 ) ( block1 ) ; at ( 5.5,-3 ) ( block2 ) ; at ( 10,0 ) ( block3 ) ; ( block1 ) edge node[below ] ( actor ) ; ( block1 ) edge node[above ] ( actor ) ; ( block2 ) edge node[below ] ( deviator ) ; ( block2 ) edge node[above ] ( deviator ) ; ( block3 ) edge node[below ] ( critic ) ; ( block3 ) edge node[above ] ( critic ) ; ( oracle1 ) edge node[below ] ( block1 ) ; ( oracle4 ) edge node[below ] ( block2 ) ; ( oracle4 ) edge node[above ] ( block2 ) ; ( oracle4 ) edge node[right ] ( block3 ) ; ( oracle4 ) edge node[left ] ( block3 ) ; ( oracle2 ) edge node[left ] ( block2 ) ; ( op2 ) edge node[right ] ( block1 ) ; ( op2 ) edge node[left ] ( block1 ) ; ( oracle3 ) edge node[below ] ( block3 ) ; note that instead of backpropagating first - order information in the form of gradient , the response graph instead backpropagates zeroth - order information in the form of _ gradient - estimate _ , which is computed by the query graph during the feedforward sweep .we therefore write and ( instead of and to emphasize that the gradients communicated to actor are estimates . 
1 ._ critic _ estimates the value function via td - learning with cloning for improved stability .deviator _ estimates the value gradient via td - learning and the gradient perturbation trick ._ actor _ follows the correct gradient by the policy gradient theorem .the internal workings of each neural network are guaranteed correct by the chain rule .two appealing features of the algorithm are that ( i ) actor is insulated from critic , and only interacts with deviator and ( ii ) critic and deviator learn different features adapted to representing the value function and its gradient respectively .previous work used the derivative of the value - function estimate , which is not guaranteed to have compatible function approximation , and can lead to problems when the value - function is estimated using functions such as rectifiers that are not smooth .finally we consider kickback , a biologically - motivated variant of backprop with reduced communication requirements .the problem that kickback solves is that backprop requires two distinct kinds of signals to be communicated between units feedforward and feedback whereas only one signal type spikes are produced by cortical neurons .kickback computes an estimate of the backpropagated gradient using the signals generated during the feedforward sweep .kickback also requires the gradient of the loss with respect to the ( one - dimensional ) output to be broadcast to all units , which is analogous to the role played by diffuse chemical neuromodulators . at ( 0,0 ) ( glabel ) * query * ; at ( 0,-2 ) ( player_env1 ) * nature * ; at ( 4,0 ) ( player1 ) * layer * ; at ( 8,0 ) ( player2 ) * layer * ; at ( 12,0 ) ( player3 ) * layer * ; at ( 16,0 ) ( player_env2 ) * nature * ; at ( 2.5,-3 ) ( out1 ) ; at ( 6.5,-3 ) ( out2 ) ; at ( 10.5,-3 ) ( out3 ) ; at ( 4,-2 ) ( block1 ) ; at ( 8,-2 ) ( block2 ) ; at ( 12,-2 ) ( block3 ) ; at ( 16,-2 ) ( block4 ) ; ( player_env1 ) ( block1 ) ; ( player1 ) edge node[left ] ( block1 ) ; ( player2 ) edge node[left ] ( block2 ) ; ( player3 ) edge node[left ] ( block3 ) ; ( player_env2 ) edge node[left ] ( block4 ) ; ( block1 ) edge node[above ] ( block2 ) ; ( player_env1 ) edge node[above ] ( block1 ) ; ( block2 ) edge node[above ] ( block3 ) ; ( block3 ) edge node[above ] ( block4 ) ; ( block1 ) edge node[right ] ( out1 ) ; ( block2 ) edge node[right ] ( out2 ) ; ( block3 ) edge node[right ] ( out3 ) ; ( block1 ) edge node[left ] ( out1 ) ; ( block2 ) edge node[left ] ( out2 ) ; ( block3 ) edge node[left ] ( out3 ) ; * _ nature _ samples labeled data from . *_ layers _ by weight matrices .the output of the neural network is required to be one - dimensional * _ operators _ for each layer compute two outputs : _ and _ where and 0 otherwise .* the task is regression or binary classification with loss given by the mean - squared or logistic error .it follows that the derivative of the loss with respect to the network s output is a _scalar_. 
the response graph contains a single oracle that broadcasts the gradient of the loss with respect to the network s output ( which is a scalar ) .estimates _ for each _ layer _ are computed using a mixture of oracle and local zeroth - order information referred to as _ kicks _ : + at ( 2,0 ) ( glabel ) * response * ; at ( 4,-2 ) ( oracle1 ) * kick * ; at ( 8,-2 ) ( oracle2 ) * kick * ; at ( 12,-2 ) ( oracle3 ) * kick * ; at ( 10,-4 ) ( oracle_env2 ) * oracle * ; at ( 6,-2 ) ( block1 ) ; at ( 10,-2 ) ( block2) ; at ( 14,-2 ) ( block3 ) ; ( oracle_env2 ) edge node[left ] ( block2 ) ; ( oracle_env2 ) edge node[left ] ( block1 ) ; ( block1 ) edge node[right] ( player1 ) ; ( block2 ) edge node[right] ( player2 ) ; ( block3 ) edge node[right] ( player3 ) ; ( oracle1 ) ( block1 ) ; ( oracle2 ) ( block2 ) ; ( oracle3 ) ( block3 ) ; ( s1 ) edge node[above] ( block1 ) ; ( s2 ) edge node[above] ( block2 ) ; ( s3 ) edge node[above] ( block2 ) ; ( s3 ) edge node[below] ( block2 ) ; ( block1 ) ( zero ) ; ( block2 ) ( block1 ) ; + where is coordinatewise multiplication and is the outer product . if then _ nature _ is substituted for . if then is replaced with the scalar value .the loss functions for the layers are not computed in the query graph .nevertheless , the gradients communicated to the layers by the response graph are exact with respect to the layers losses , see . for our purposesit is more convenient to focus on the global objective of the neural network and treat the gradients communicated to the layers as _ estimates _ of the gradient of the global objective with respect to the layers weights .the guarantee for kickback is that , if the network is coherent , then the gradient estimate computed using the zeroth - order kicks has the same sign as the backpropagated error computed using gradients , see for details . as a result ,smalls steps in the direction of the gradient estimates are guaranteed to decrease the network s loss . + kickback uses a single oracle , analogous to a neuromodulatory signal , in contrast to backprop which requires an oracle per layer .the rest of the oracles are replaced by kicks zeroth - order information from which gradient - estimates are constructed .importantly , the kick computation for layer only requires locally available information produced by its neighboring layers and during the feedforward sweep .the feedback signals are analogous to the signals transmitted by nmda synapses .two recent alternatives to backprop that also do not rely on backpropagating exact gradients are target propagation and feedback alignment .target propagation makes do without gradients by implementing autoencoders at each layer .unfortunately , optimization problems force the authors to introduce a correction term involving _ differences _ of targets . as a consequence , and in contrast to kickback , the information required by layers in difference target propagation can not be computed locally but instead requires recursively backpropagating differences from the output layer .feedback alignment solves a different problem : that feedback and forward weights are required to be equal in backprop ( and also in kickback ) .the authors observe that using random feedback weights can suffice . 
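The random-feedback idea just mentioned is easy to sketch. Below, fixed random matrices `Bs` (one per hidden layer, shaped like the transpose of the forward weight above it) replace the transposed weights in the feedback sweep of a small tanh regression network; this is an illustrative toy in the spirit of feedback alignment, not the exact algorithm analyzed in the cited work.

```python
import numpy as np

def feedback_alignment_step(Ws, Bs, x, y, lr=1e-2):
    """One update of a tanh network with squared error, propagating the error
    through fixed random feedback matrices Bs instead of Ws[l+1].T."""
    acts = [np.asarray(x, dtype=float)]
    for W in Ws[:-1]:                                   # feedforward sweep
        acts.append(np.tanh(W @ acts[-1]))
    out = Ws[-1] @ acts[-1]

    delta = out - np.asarray(y, dtype=float)            # gradient of 0.5 * ||out - y||^2
    grads = [np.outer(delta, acts[-1])]
    for l in range(len(Ws) - 2, -1, -1):                # feedback sweep with random B
        delta = (Bs[l] @ delta) * (1.0 - acts[l + 1] ** 2)
        grads.insert(0, np.outer(delta, acts[l]))

    for W, g in zip(Ws, grads):                         # Ws assumed to be float arrays
        W -= lr * g
    return Ws
```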
unfortunately ,as for difference target propagation , feedback alignment still requires separate feedforward and recursively backpropagated training signals , so weight updates are not local .unfortunately , at a conceptual level kickback , target propagation and feedback alignment all tackle the wrong problem .the cortex performs reinforcement learning : mammals are not provided with labels , and there is no clearly defined output layer from which signals could backpropagate .a biologically - plausible deep learning algorithm should take advantage of the particularities of the reinforcement learning setting .hinton g , deng l , yu d , dahl ge , mohamed a , jaitly n , senior a , vanhoucke v , nguyen p , sainath tn , kingsbury b : * deep neural networks for acoustic modeling in speech recognition*. _ ieee signal processing magazine _ 2012 , * 29*:8297 .mnih v , kavukcuoglu k , silver d , rusu aa , veness j , bellemare mg , graves a , riedmiller m , fidjeland ak , ostrovski g , petersen s , beattie c , sadik a , antonoglou i , king h , kumaran d , wierstra d , legg s , hassabis d : * human - level control through deep reinforcement learning*. _ nature _ 2015 , * 518*(7540):529533 .bergstra j , breuleux o , bastien f , lamblin p , pascanu r , desjardins g , turian j , warde - farley d , bengio y : * theano : a cpu and gpu math expression compiler*. in _ proc .python for scientific comp .( scipy ) _ 2010 .bastien f , lamblin p , pascanu r , bergstra j , goodfellow i , bergeron a , bouchard n , bengio y : * theano : new features and speed improvements*. in _ nips workshop : deep learning and unsupervised feature learning _ 2012 .williams rj , zipser d : * gradient - based learning algorithms for recurrent networks and their computational complexity*. in _ backpropagation : theory , architectures , and applications_. edited by chauvin y , rumelhart d , lawrence erlbaum associates 1995 .von bartheld cs , wang x , butowt r : * anterograde axonal transport , transcytosis , and recycling of neurotrophic factors : the concept of trophic currencies in neural networks*. _ molecular neurobiology _ 2001 , * 24*. sutton r , modayil j , delp m , degris t , pilarski pm , white a , precup d : * horde : a scalable real - time architecture for learning knowledge from unsupervised motor interaction*. in _ proc .10th int . conf . on aut agents and multiagent systems ( aamas ) _ 2011 .balduzzi d : * falsification and future performance*. in _ algorithmic probability and friends : bayesian prediction and artificial intelligence _ , _ volume 7070 of _lnai__. edited by dowe d , springer 2013:6578 .sutton r , szepesvri c , maei hr : * a convergent algorithm for off - policy temporal - difference learning with linear function approximation*. in _ adv in neural information processing systems ( nips ) _ 2009 .
deep learning is currently the subject of intensive study . however , fundamental concepts such as representations are not formally defined ; researchers `` know them when they see them '' , and there is no common language for describing and analyzing algorithms . this essay proposes an abstract framework that identifies the essential features of current practice and may provide a foundation for future developments . the backbone of almost all deep learning algorithms is backpropagation , which is simply a gradient computation distributed over a neural network . the main ingredients of the framework are thus , unsurprisingly : ( i ) game theory , to formalize distributed optimization ; and ( ii ) communication protocols , to track the flow of zeroth- and first - order information . the framework allows natural definitions of semantics ( as the meaning encoded in functions ) , representations ( as functions whose semantics is chosen to optimize a criterion ) and grammars ( as communication protocols equipped with first - order convergence guarantees ) . much of the essay is spent discussing examples taken from the literature . the ultimate aim is to develop a graphical language for describing the structure of deep learning algorithms that backgrounds the details of the optimization procedure and foregrounds how the components interact . inspiration is taken from probabilistic graphical models and factor graphs , which capture the essential structural features of multivariate distributions .
fracture processes in heterogeneous materials are an important technological problem that has attracted the interest of the scientific community since a long time . due to the complex interaction between failures and the subsequent redistribution of local stresses ,the development of adequate statistical models for fracture propagation is an extremely hard and challenging undertaking .the probably most important class of approaches to the study of fracture processes is that of fiber - bundle models ( fbm s ) . despite their simplicity ,fbm s are able to describe the main processes that can lead to a propagation of fractures and eventually to a complete breakdown of real heterogeneous materials .fiber - bundle models refer to a bundle of parallel fibers that are clamped at both ends and stretched by a common force .the fibers have a stochastic distribution of individual strength thresholds , and the different versions of fbm s that have been considered can be distinguished by their assumptions with respect to the stress redistribution after the failure of one of the fibers .the usual experimental setup considered and analyzed in the fbm literature can be described as follows : the force is gradually increased from zero until the weakest fiber breaks , and the transfer of its stress to the surviving fibers may then induce an avalanche of subsequent failures .if the fiber bundle reaches an equilibrium with no further failures , the force is increased again until the next fiber breaks , and this procedure is repeated up to the complete breakdown of the entire bundle .the main quantities of interest in connection with this procedure are the distribution of avalanche sizes and the ultimate strength of the fiber bundle , defined as the maximum stress the system can support before it breaks down completely .an alternative but equivalent procedure is to apply a finite force to the system , so that immediately all fibers with a strength threshold smaller than fail . the ultimate strength of the fiber bundleis then determined by the maximum value that does not lead to a failure of the entire system .the oldest and most well - known fbm is that where the stress of a failing fiber is distributed equally between the surviving fibers ( global load sharing , gls ) . for this model, the strength of the fiber bundle as well as the form of the avalanche - size distribution can be determined analytically .local load sharing ( lls ) fiber - bundle models , on the other hand , are much more difficult to analyze . in these models ,the stress of a failing fiber is only transferred locally , typically to the surviving nearest neighbors .lls models have been studied mainly via monte - carlo simulations and analytical results have only been obtained for one - dimensional models with essentially nearest - neighbor load transfer . in most studies ,the strength thresholds of the individual fibers are assumed to be distributed according to a weibull distribution ( typically with a weibull index ) . for simplicity, however , uniform strength - threshold distributions ( sometimes with a finite lower cutoff ) have also been considered .the two idealized extremes of global load sharing and of load transfer to nearest neighbors only are not adequate assumptions for most real systems .attempts have therefore been made to interpolate between gls and lls behavior .hidalgo et al . , e.g. 
, assume that the stress transfer after a failure decays as , where is the distance from the broken fiber , and they study the failure - propagation behavior of such a system as a function of .a similar model ( with ) has been studied by curtin . in ref . , we have introduced a class of failure - propagation models that can represent , in a stochastic sense , the main characteristics of realistic load - redistribution mechanisms , but are still amenable to an analytical treatment .we have applied our approach to an illustrative prototype example for cascading failure propagation in large infrastructure networks , e.g. , power grids . in particular , we have analyzed the probability of a complete system breakdown after an initial failure , and we have found that the model exhibits interesting critical dependencies on parameters that characterize the failure tolerance of the individual elements and the range of load redistribution after a failure . in this paper , we apply our stochastic approach to the problem of fracture propagation in fiber bundles and analyze two models that interpolate between global and local stress redistribution .the first is a stochastic version of the -model of hidalgo et al . , and the second a fiber - bundle equivalent to our prototype example of ref .we consider an experimental setup where initially all fibers carry the same stress and where the individual strength thresholds are randomly distributed according to a probability density that is zero for .we then examine the consequences of the breaking of a single fiber and concentrate on the calculation of the following quantities : a. the _ no - cascade probability _ , i.e. , the probability that an initial failure does not induce any further failures ; b. the _ breakdown probability _ , i.e. , the probability that an initial failure leads to a breakdown of the entire fiber bundle ; and c. the _ critical stress _ , defined as the largest such that for all .we note that , in contrast to most of the other fiber - bundle models , our stochastic models neglect any spatial correlations in the load transfer after a failure . the only other model , as far as we are aware of , that also uses a stochastic ( rather than a spatially correlated ) stress redistribution is that of dalton et al . , where it is assumed that the load of a failing fiber is transferred to a fixed number ( ) of randomly chosen surviving fibers .in addition , we use strength - threshold distributions that are truncated below the initially applied stress contrast to most models studied in the fiber - bundle literature . there exist , however , a number of investigations that also consider truncated threshold distributions , so that a direct comparison with our results can be made . in sect .[ sec : model ] , we introduce our stochastic load - redistribution model and describe its application to fracture processes .subsequently , in sect .[ sec : branching ] , we describe a markov approximation of the model which leads to a description in terms of generalized branching processes . in sects .[ sec : rdld ] and [ sec : pld ] , we analyze the two different model variants mentioned above , and final conclusions are given in sect .[ sec : conclusions ] .we shall consider a bundle consisting of fibers subjected to an external force .we assume that the initial stress of all fibers is equal and that the strength thresholds of the individual fibers are randomly distributed according to a probability density . 
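As a rough illustration of this setup, the following Monte Carlo sketch breaks a single fiber at the initial stress and follows the resulting cascade under a stochastic load-redistribution rule, recording whether no further failures occur (the no-cascade event) and whether the whole bundle breaks down. The exponential distribution of the load-redistribution factors, the uniform threshold law truncated at the initial stress, and all numerical parameters (names `run_cascade`, `sigma0`, etc.) are our own illustrative assumptions, not the specific models analyzed below.

```python
# Hedged sketch: Monte Carlo estimates of the no-cascade probability and the
# breakdown probability for a stochastic load-redistribution fiber bundle.
import numpy as np

def run_cascade(n, sigma0, rng):
    # strength thresholds truncated below the initial stress (uniform on [sigma0, 1])
    thresholds = rng.uniform(sigma0, 1.0, size=n)
    stress = np.full(n, sigma0)
    alive = np.ones(n, dtype=bool)
    failing = np.array([rng.integers(n)])       # break a single fiber externally
    first_round, no_cascade = True, False
    while failing.size:
        alive[failing] = False
        n_alive = int(alive.sum())
        if n_alive == 0:
            break
        for sf in stress[failing]:              # redistribute each failed stress
            # load-redistribution factors: i.i.d., mean 1/n_alive (exponential assumed)
            stress[alive] += rng.exponential(1.0 / n_alive, size=n_alive) * sf
        newly = alive & (stress >= thresholds)  # overloaded fibers fail together
        if first_round:
            no_cascade, first_round = not newly.any(), False
        failing = np.flatnonzero(newly)
    return no_cascade, not alive.any()

def estimate_probabilities(n=2000, sigma0=0.3, reps=200, seed=1):
    rng = np.random.default_rng(seed)
    runs = [run_cascade(n, sigma0, rng) for _ in range(reps)]
    p_no_cascade = np.mean([r[0] for r in runs])
    p_breakdown = np.mean([r[1] for r in runs])
    return p_no_cascade, p_breakdown

print(estimate_probabilities())
```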
for our setup of fracture - propagation experiments, we assume that we start from a finite stress and that all fibers with thresholds smaller than have been removed .thus , the threshold distribution has to fulfill for all .on the other hand , we are interested in a situation where already an infinitesimal increase of the external force leads with probability one in the limit the breaking of exactly one fiber , and we thus require . when a fiber breaks , its stress has to be taken over by the remaining intact fibers of the bundle . in our stochastic load - redistribution model , we assume that this process can be described by a rule of the form here , ( ) is the stress of an intact fiber before ( after ) the failure of a fiber with stress , and the load - redistribution factor is a random number drawn independently from the same distribution for each of the intact fibers . note that for the initial failure , both and are given by the initial stress . in the special case of a uniform load - redistribution , the form reduces exactly to a gls rule . for a general non - uniform load redistribution, we require that the failed stress will be shared _ on average _ by the remaining intact elements , i.e. , the mean of has to fulfill the condition due to the stress increment a fiber has obtained after a failure event , its stress itself might be above its critical threshold . in general , the initially failing fiber might thus induce the failure of other fibers , thereby starting a failure cascade .we then assume that all overloaded fibers fail simultaneously and that their stress is again redistributed according to the rule , where now both the pre - failure stresses of each of the intact fibers and the stresses of each of the failing fibers will , in general , be different random variables .if this process leads to the further overloading of fibers , it continues to a next cascade stage , and so on . eventually , either the bundle stabilizes again , i.e. , all fibers are stressed below their respective strength thresholds , or it breaks down completely , i.e. , all fibers fail .note that , in general , during the failure cascade , the load redistribution and hence the -distribution will be modified .whereas the details of such a modification , which can depend on topological changes , are very difficult to model , one at least has to take into account one dominant effect : as the number of fibers that are still intact at cascade stage decreases , the mean has to increase in accordance with eq . with replaced by .below , we will discuss how to fulfill this requirement for the chosen forms of load redistribution . for our setup of fracture - propagation experiments, we have to truncate the distribution of strength thresholds below . in the literature on fiber - bundle models ,the strength thresholds of the individual fibers are usually assumed to be distributed according to a weibull distribution with density , where mostly the weibull index is used .truncation then leads to a distribution of the form in addition , we shall also consider uniform distributions that are truncated below the initial stress : in sects .[ sec : rdld ] and [ sec : pld ] , we summarize and discuss the corresponding results for , and for two different load - redistribution models and for the two threshold distributions and .the dynamics of the stochastic cascade model described in the previous section and the quantities , and can only be obtained exactly by means of monte - carlo simulations . 
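For completeness, a short sketch of how the two truncated strength-threshold distributions can be sampled by inverse-CDF transformation. The unit Weibull scale, the index value `rho=2`, and the upper bound of the uniform law are assumptions made for illustration only.

```python
# Hedged sketch: sampling strength thresholds truncated below the initial
# stress sigma0, for a Weibull law (index rho, unit scale assumed) and a
# uniform law.
import numpy as np

def truncated_weibull(sigma0, rho, size, rng):
    # inverse-CDF sampling: P(X > x | X > sigma0) = exp(sigma0**rho - x**rho)
    u = rng.uniform(size=size)
    return (sigma0**rho - np.log1p(-u)) ** (1.0 / rho)

def truncated_uniform(sigma0, upper, size, rng):
    return rng.uniform(sigma0, upper, size=size)

rng = np.random.default_rng(0)
w = truncated_weibull(sigma0=0.3, rho=2.0, size=100_000, rng=rng)
u = truncated_uniform(sigma0=0.3, upper=1.0, size=100_000, rng=rng)
print(w.min(), u.min())   # both samples stay above sigma0, as required
```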
in the limit of large system sizes , however , we can achieve an approximate description of the cascade dynamics by noting the following points : a. the failure of a fiber leaves the stress in the majority of the intact fibers nearly unaffected , i.e. , maximally leads to changes of the order of .thus , along the failure cascade , the stress of the intact fibers is approximately given by the initial stress .b. the remaining number of fibers always stays infinitely large and thus the number of further failures induced by a failing fiber is distributed according to a poisson distribution . c. the interaction between different failures can be neglected , i.e. , in the case of several induced failures , the failure cascades resulting from each of these failures can be treated as being independent . under these assumptions ,the cascade dynamics becomes markovian if we choose the point process of the stresses of the failed fibers as underlying state space .this point process on the semi - infinite interval is independent , and the failure dynamics can thus be described by a generalized branching process with characteristic functional = \exp\bigg\{\mu(\sigma{_\mathrm{f}})\bigg [ \int\ ! & { { \mathrm{d}}}\sigma{_\mathrm{f } } ' \ , \ , { p}(\sigma{_\mathrm{f}}'|\sigma{_\mathrm{f}}'\,{>}\ , \sigma{_\mathrm{th}};\sigma{_\mathrm{f } } ) \,{{\mathrm{e}}}^{-u(\sigma{_\mathrm{f } } ' ) } - 1\bigg]\bigg\ } \end{split } \label{eq : chfunc}\ ] ] for the point process induced by a single failure with stress . here, denotes the conditional probability density that the induced failure resulting from the breaking of a fiber with stress occurs with a stress . for given distributions of the load - redistribution factors and the critical thresholds , this quantity can be readily calculated from eq . .this also holds true for the mean number of failures , induced by the breaking of a fiber with stress .we remark that in order for this quantity to be finite in the limit , the probability for the induced failure of a given intact fiber has to vanish as .this is in accordance with the requirement for the mean of the load - redistribution factors . finally , in eq ., denotes an arbitrary non - negative test function on the interval . for later use, we note that from the mean number of induced failures , one directly obtains the no - cascade probability , which is given by ^{n-1}\,.\ ] ] in the limit , this relation becomes \,.\ ] ] in principle , the properties of the later cascade stages and thus the full cascade dynamics can be obtained in a recursive way from the functional ( see ref . ) . here , we are only interested in the question whether an initial failure leads to the breakdown of the entire fiber bundle. it can be shown that this question can be answered by solving the integral equation for the probability that an initial failure with stress leads to the breakdown of the entire bundle .using eq . and the load - redistribution rule , together with the fact that , within the approximation considered , the pre - failure stress is just given by the initial stress , we can rewrite the integral equation in the form where . here , we have also used the relation and introduced the cumulative distribution function corresponding to the threshold distribution . 
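The full model keeps track of the failed stress through the integral equation above. As a much cruder illustration of the mechanics, if the offspring law is collapsed to a single Poisson mean `mu` (our simplification, not the paper's calculation), the no-cascade probability is exp(-mu) and the breakdown probability is the survival probability of a Poisson(mu) branching process, obtained from the standard extinction fixed point q = exp(-mu*(1-q)).

```python
# Hedged toy computation: single-type Poisson branching approximation of the
# no-cascade and breakdown probabilities.
import math

def no_cascade_probability(mu):
    return math.exp(-mu)

def breakdown_probability(mu, iters=1000):
    q = 0.0                        # probability that the cascade dies out
    for _ in range(iters):         # fixed-point iteration from below converges
        q = math.exp(-mu * (1.0 - q))   # to the smallest root in [0, 1]
    return 1.0 - q

for mu in (0.5, 1.5, 2.0, 3.0):
    print(mu, no_cascade_probability(mu), breakdown_probability(mu))
```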
under quite general assumptions , the unique solution of the integral equations or can be found by means of an iterative procedure starting from an arbitrary initial guess for .the so obtained probability function can then be evaluated at the initial stress to obtain the probability for the breakdown of a fiber bundle in the setup described in sect .[ sec : intro ] .we finally remark that the interpretation of eq. becomes clear if one writes the exponential on the right - hand side in the form ^{n}\,,\ ] ] where we have again used eqs . and .hence , the probability that no breakdown occurs after the failure of a fiber with stress is equal in the limit the probability that none of the induced failures with stress leads to a breakdown .hidalgo et al . proposed a fbm where the stress transfer after a failure decays with the distance between the failing fiber and the one affected by the failure as a power law . here, is a normalization factor which ensures that the total load is conserved .they furthermore assumed that all fibers are arranged on a two - dimensional square lattice . by varying the exponent , they were then able to study the transition between a gls rule ( for ) and a lls rule ( for ) . note that due to the infinite range of the power - law transfer function , the latter situation of a strictly local load - transfer can only be achieved in an approximate sense .for the case of weibull - distributed strength thresholds , a monte - carlo analysis of this range - dependent load - transfer model showed that for an exponent , the model behaves essentially as a fbm with gls rule and , in particular , a finite critical stress value was observed . for larger , there is a transition to the lls case with a critical stress that vanishes in the large system - size limit . in ref . , the same model has been analyzed for uniform threshold distributions with a lower cutoff , and in this case , the critical stress remains finite for all values of ( if ) . in the following , we study a stochastic version of this model , which we shall call `` -model '' .it is based on the assumption that the position of the fibers is uniformly distributed within the two - dimensional cross - section of the bundle . upon failure of a fiberwe then randomly pick the affected fibers from this uniform distribution and calculate the load - transfer factor according to the ( random ) distance .we will now first derive the corresponding -distribution , then analyze the properties of the resulting model , and finally compare its results to the ones obtained in ref . . for reasons of simplicity , we assume that the broken fiber is in the center of a hollow cylinder with inner ( outer ) radius ( ) containing intact fibers uniformly distributed with area density in the cross - sectional area of size .note that in contrast to refs . , we consider a uniform distribution of the fiber positions and thus have to introduce a lower cut - off for the distance to prevent a divergence at small distances . for this uniform spatial distribution and the given distance dependence of the load transfer, we can then readily derive the probability distribution for the load - redistribution factors appearing in eq .: here , the lower and upper cutoffs for are given by and , respectively .in the stochastic model , we fix the constant by imposing eq . 
, which states that , on average , the load is redistributed to the remaining elements .this yields the normalization constant for ; the special case can be readily treated by considering the limit .we can write eqs . and in a more convenient form by introducing a dimensionless length .this first allows us to express the number of intact fibers as where is the average number of fibers in the vicinity of the failing fiber . from eq . , we obtain , e.g. , if we assume that the model describes the continuous approximation of fibers located on a quadratic lattice ( with lattice constant ) consisting of sites inside a circle of the given radius . furthermore , we can write the -cutoffs in the form where again the case has to be treated as limit . the probability distribution is then given by the power - law form for .for later use , we note that in the limit of large system sizes , the lower -cutoff always scales to zero : for .the behavior of the upper cutoff , however , strongly depends on the exponent : for , also vanishes in the limit , whereas for , it converges to a finite value : for .we now calculate the mean number of fiber failures resulting directly from the breaking of a fiber with stress . from rule and eq ., we obtain with eqs . , we can write this expression in the form for the evaluation of this integral in the limit , where the integrand becomes singular , it is useful to consider the cases and separately. \(i ) : as mentioned above , we then have for and thus consider the taylor expansion where we have used that . inserting this expansion into eq ., we find with eq . that only the first order term leads to a non - vanishing contribution in the limit .this yields note that this result is independent of the exponent .\(ii ) : in this case , we can insert the finite asymptotic value for , for , into eq . and write the mean number of failures as the improper integral here , the convergence of the integral at the lower boundary is guaranteed since , because the strength - threshold distribution is truncated below , we have . for the evaluation of the breakdown probability, we have to solve the integral equation , which in the present case assumes the form alternatively , we can write this equation as the integral on the right - hand side of this equation has to be evaluated in the limit of infinitely large systems ( ) .again , we treat the cases and separately .\(i ) : as above , in eq ., we expand the distribution function around and now furthermore assume that such an expansion is also valid for the breakdown probability , inserting these expansions into the integral in eq . , we find with eq . that the various terms behave as for large .thus , only the lowest order terms and survive in this limit .the integral equation hence simplifies to the transcendental equation \ , , \label{eq : pb_iter_gamma3a}\ ] ] which , again , is -independent . in this regime ,the critical stress is given by the condition that the mean number of directly induced failures equals unity : \(ii ) : as for the evaluation of the mean number of induced failures , cf .eq . , we use the asymptotic value of and replace the integral in the limit by an improper one [ cf . also remark after eq . ] .this leads to the integral equation which , in general , can only be solved numerically , e.g. , by means of an iterative procedure . in particular ,the critical stress is not determined by a simple relation like eq .but has to be determined from the full solution of eq . 
.in the case of a uniform distribution of the strength thresholds , the mean number of induced failures can be evaluated explicitly . with then obtain from eq .the no - cascade probability [ eq : mu_power_law_uniform ] if or ^{-1}$ ] , and \ ] ] otherwise . for the calculation of the breakdown probability ,the transcendental equation ( for ) or the integral equation ( for ) have to be solved numerically .-model with and uniform distribution of strength thresholds .( a ) no - cascade probability and ( b ) breakdown probability as a function of the initial stress for different values of the exponents .lines : results from eq .[ panel ( a ) ] and eqs . and [ panel ( b ) ] , respectively .symbols : results from monte - carlo simulations of the failure process for averaged over realizations .the statistical error is of the order of the size of the symbols . ] in fig .[ fig : pnc_pb_power_law_uniform_sigma0 ] we show the no - cascade probability and the breakdown probability as a function of the initial stress for different values of the exponent in eq . .the approximate results within the markov approximation ( lines ) are compared with monte - carlo simulations ( symbols ) of the failure dynamics generated by the load redistribution .we note that in the monte - carlo simulations , we have to ensure that condition stays fulfilled during the entire cascade process .this is done by replacing in eqs . by , where denotes the number of remaining intact fibers .-model with .critical stress as a function of the exponent .solid line : uniformly distributed strength thresholds . dashed line : weibull - distributed ( ) strength thresholds . ] with increasing initial stress , we observe a gradual decrease of the no - cascade probability from one to zero [ cf.fig .[ fig : pnc_pb_power_law_uniform_sigma0](a ) ] .in contrast , the breakdown probability [ fig . [ fig : pnc_pb_power_law_uniform_sigma0](b ) ] exhibits a critical behavior : there is a -dependent critical stress such that for , the probability of a breakdown of the fiber bundle vanishes exactly . the dependence of the critical stress on the exponent is illustrated in fig . [fig : sigma0crit_power_law ] .the value for can be readily determined from eqs . and and exactly reproduces the value of the fbm with gls rule .for , and hence smaller effective `` range '' of the stress redistribution , we first observe a transition to a regime , where the critical stress decreases with increasing down to a minimal value .for even larger , , the critical stress increases again .we remark , however , that for large , it becomes numerically rather difficult to find the precise location of the critical transition because the onset of the regime with a finite breakdown probability becomes more and more flat [ cf .[ fig : pnc_pb_power_law_uniform_sigma0](b ) ] .it is interesting to compare our results with the ones obtained for the variable - range load - redistribution model of ref . , in particular , the behavior for the case of the failure stress being equal to the cutoff strength [ cf .1(a ) of ref . ] . in both models ,we observe a critical value of above which a transition from a gls regime to one with short - ranged stress transfer and smaller -value takes place . within our model, we can trace back this transition to a change in the load - redistribution distribution , in particular , the asymptotic value of .furthermore , the critical stress values of both models agree rather well up to . 
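A hedged sketch of how the critical stress can be bracketed numerically: scan the initial stress and estimate the breakdown probability by Monte Carlo, taking the largest stress on the grid with (essentially) no observed breakdowns as the estimate. This snippet assumes the `run_cascade` helper sketched earlier is in scope; the stress grid, the sample sizes, and the 1% detection threshold are arbitrary illustrative choices and not the procedure used for the figures.

```python
# Hedged sketch: locating the critical initial stress by a Monte Carlo scan.
# run_cascade is the Monte Carlo helper sketched earlier (an assumption here).
import numpy as np

def breakdown_fraction(sigma0, n=500, reps=100, seed=0):
    rng = np.random.default_rng(seed)
    return np.mean([run_cascade(n, sigma0, rng)[1] for _ in range(reps)])

def critical_stress(sigmas=np.linspace(0.1, 0.9, 9)):
    crit = float(sigmas[0])
    for s in sigmas:
        if breakdown_fraction(s) < 0.01:   # no breakdown observed at this stress
            crit = float(s)
        else:
            break                          # first stress with observed breakdowns
    return crit

print("estimated critical stress:", critical_stress())
```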
for even larger exponents , we find an increase of the critical stress , which can not be observed in the more microscopic model of ref .this discrepancy probably results from a breakdown of the continuum approximation upon which the distribution is based .the deficiency of this approximation for large is also reflected in the fact that for , the asymptotic value for becomes larger than unity , which means that a single fiber may receive a stress increment that is higher than the stress of the failing fiber . in order to prevent such a pathological behavior , a more sophisticated load - redistribution model has to be used .we finally note that the results from the markov approximation , i.e. , the generalized branching process description , agree very well with the ones obtained by monte - carlo simulations of the failure process . around the critical transition , some deviations for the breakdown probabilitycan be observed , which , however , decrease with increasing system size and , thus , represent finite - size effects . for the case of the truncated weibull distribution of strength thresholds, we have to evaluate for both the no - cascade probability and the breakdown probability numerically from eqs . and , respectively .the results as a function of the initial stress are depicted in fig .[ fig : pnc_pb_power_law_weibull_sigma0 ] , where we have chosen here and in the following a weibull index of . comparing with the case of a uniform distribution of the strength thresholds ,we find qualitatively the same behavior .in particular , we identify a critical transition at a -dependent stress ( cf.fig . [fig : sigma0crit_power_law ] ) , and again , for , the result for the gls case is recovered exactly from eq . together with the distribution .-model with and weibull - distributed strength thresholds ( ) .( a ) no - cascade probability and ( b ) breakdown probability as a function of the initial stress for different values of the exponents .lines : results from eqs . and [ panel ( a ) ] and eqs . and [ panel ( b ) ] , respectively .symbols : results from monte - carlo simulations of the failure process for averaged over realizations .the statistical error is of the order of the size of the symbols . ]-model with and weibull - distributed strength thresholds ( ) .breakdown probability as a function of the exponent for different values of the initial stress .lines : results from eqs . and .symbols : results from monte - carlo simulations of the failure process for averaged over realizations .the statistical error is of the order of the size of the symbols . ]figure [ fig : pb_power_law_weibull_gamma ] shows the dependence of the breakdown probability on the exponent for fixed initial stress . in accordance with eq . , the breakdown probability is -independent for and assumes the gls value . in the case , we find for a regime with a monotonic decrease of towards zero as a function of . for smaller ,the breakdown probability assumes a maximum at a certain -value and then decreases again towards zero . finally , for smaller than the critical stress of the gls model but larger than the minimal stress observed in fig .[ fig : sigma0crit_power_law ] , an increase of eventually leads to a destabilization of the system , i.e. , a non - vanishing breakdown probability , above some critical -value . 
, we have introduced a simple prototype model that interpolates between the limiting cases of global load redistribution and the transfer of the failing load to a single other element .the model , which we shall call `` -model '' , is characterized by a bimodal distribution of the load - redistribution factors , i.e. , after the failure of an element with stress , the stress of a still intact element is increased to with probability and remains unchanged with probability .we further require that the sum of the induced stress increments is , on average , equal to the stress of the failing element and that .it follows that then corresponds to the limiting case of global stress redistribution and to the case where the failing load is transferred , on average , to a single other element . the probability that after the failure of a fiber with stress , a still intact fiber also fails can be written as and the mean number of induced failures becomes in these expressions, we have neglected that a fraction of the still intact fibers at later cascade stages may carry a stress larger than .it can be shown , however , that this is a finite - size effect , i.e. , this fraction vanishes as . the no - cascade probability then follows directly from eq . by using eq .with : \,.\ ] ] to calculate the breakdown probability , we use eq . , which reduces in the present case to the recursion relation \,,\ ] ] where this recursion can be solved numerically to an arbitrary degree of accuracy by starting at a high enough value of , say , and setting . finally , it can be shown that the critical stress is determined by i.e. , by for a uniform threshold distribution we obtain and it follows that and the breakdown probability is determined by numerically solving the recursion defined in eqs . and with for the case of a truncated weibull distribution of strength thresholds , eq ., with weibull index , we have \ ] ] and obtain \bigg\}\ ] ] and ^{1/2}\,.\ ] ] the corresponding results for arbitrary weibull indices can be readily obtained . is again determined numerically by solving the recursion of eqs . and , with \bigg\}\,.\ ] ] the behavior of and as a function of and is illustrated in figs .[ fig : pnc_pb_bimodal_weibull_sigma0 ] and [ fig : pb_bimodal_weibull_delta0 ] for a truncated weibull distribution of strength thresholds , and we note that the results for uniformly distributed strength thresholds ( see sect . [ sec : bimodal_uniform ] ) show a qualitatively similar behavior .figure [ fig : sigma0crit_bimodal ] shows the dependence of on for both uniformly and weibull distributed strength thresholds .-model with weibull - distributed strength thresholds ( ) .( a ) no - cascade probability and ( b ) breakdown probability as a function of the initial stress for different values of the load - redistribution parameter .lines : results from eqs .[ panel ( a ) ] and eqs . , , and [ panel ( b ) ] , respectively .symbols : results from monte - carlo simulations of the failure process for fibers averaged over realizations .the statistical error is of the order of the size of the symbols . ]-model with weibull - distributed strength thresholds ( ) .breakdown probability as a function of the load - redistribution parameter for different values of the initial stress .lines : results from eqs . , , and .symbols : results from monte - carlo simulations of the failure process for fibers averaged over realizations .the statistical error is of the order of the size of the symbols . 
]critical stress as a function of the load - redistribution parameter .solid line : uniformly distributed strength thresholds [ eq . ] . dashed line : weibull - distributed ( ) strength thresholds [ eq . ] . ] in fig .[ fig : pnc_pb_bimodal_weibull_sigma0 ] and [ fig : pb_bimodal_weibull_delta0 ] , our analytical results are compared with those obtained from monte - carlo simulations . to ensure the validity of condition ( [ eq : redist_req ] ) , we have chosen to keep fixed as the number of intact fibers decreases , and to use the scaling . to compare the results of this model ( -model ) with those of the model analyzed in section 4 ( -model ), we first make the following observations .the -model reproduces the gls - limit if and ( in a particular sense ) approaches an lls - limit as , while in the -model , the gls - limit is reproduced if and the lls - limit ( transfer of the failing load to a single surviving fiber ) for .because of the different nature of the two -distributions , an exact relation between and can not be derived . a comparison of figs .[ fig : sigma0crit_power_law ] and [ fig : sigma0crit_bimodal ] , however , suggests that for , a rough correspondence between the - and the -model is obtained if we set using this relation , it can be seen that the vs. and vs. behavior of the two models is qualitatively very similar if ( ) , except that in the -model , ) always increases towards one , while in the -model , saturates at a value smaller than one if . for ( ) , however , the behavior of the two models is significantly different .the critical strength , e.g. , continues to decrease towards zero in the -model , but starts to increase again in the -model .also the behavior of the no - cascade probability vs. is completely different in the two models . in the -model, continues to increase towards one as , even for large values of , while in the -model , is bounded by for large . as already discussed in sect .[ sec : results_power_law_uniform ] , the peculiar behavior of the -model for large can be attributed to a breakdown of the continuum approximation on which the corresponding -distribution is based .in this paper , we have introduced and analyzed a new fiber - bundle model with stochastic load redistribution .the fraction of a failing fiber that is transferred to the surviving fibers is assumed to be a random variable , and we have considered two different distributions for the -values . the first ( -model ) refers to a stochastic version of the range - dependent load redistribution model of hidalgo et al . , and the second ( -model ) to a model with a simple bimodal -distribution that can also interpolate between the two limiting cases of global and local load sharing . for the distribution of strength thresholds ,we have also considered two different cases , a uniform and a weibull distribution , both truncated below some finite stress .while our models neglect any spatial correlations in the load redistribution after a failure , they have the advantage that they can be treated analytically , in contrast to most of the existing fiber bundle models that can only be analyzed via monte - carlo simulations . in the limit of global load sharing ( in the -model or in the -model ) , our modelsnot only recover the known exact results for the critical stress , but also give the exact behavior of the breakdown probability for . 
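The bimodal load-redistribution rule of the delta-model discussed above can be sketched as follows. We assume one natural parameterization consistent with the description: each intact fiber independently receives the fraction `delta` of the failing stress with probability 1/(delta * n_intact) and nothing otherwise, so the transferred load is conserved on average; `delta` close to 1/n_intact mimics global sharing, while `delta = 1` transfers the load, on average, to a single surviving fiber. The paper's exact convention may differ.

```python
# Hedged sketch of the bimodal ("delta") load-redistribution rule.
import numpy as np

def bimodal_increments(sigma_f, n_intact, delta, rng):
    p_hit = min(1.0, 1.0 / (delta * n_intact))   # probability a fiber is hit
    hit = rng.uniform(size=n_intact) < p_hit
    return hit * (delta * sigma_f)               # increment received by each fiber

rng = np.random.default_rng(0)
for delta in (1.0 / 1000, 0.1, 1.0):
    inc = bimodal_increments(sigma_f=0.3, n_intact=1000, delta=delta, rng=rng)
    print(delta, inc.sum())   # each sum is 0.3 on average (exactly so for the first case)
```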
in this gls limit, the recursion relations for the determination of are reduced to simple transcendental equations \ ] ] for a truncated uniform strength - threshold distribution , and \ ] ] for a truncated weibull distribution with index .eqs . andcan easily be derived from the recursion of eq . by taking the limit , or from the transcendental equation ( [ eq : pb_iter_gamma3a ] ) for . with our stochastic models, we can also determine the critical stress and the behavior of ) for in the case of a more localized stress redistribution ( in the -model or in the -model ) . as already discussed in sect .[ sec : results_power_law_uniform ] , our -model results for ( for truncated uniform strength - threshold distributions ) agree very well with the corresponding results of raischel et al . up to or .it is quite remarkable that a stochastic model that neglects any spatial correlations can so accurately reproduce the behavior of a microscopically more adequate model . in addition, our analytical solution allows us to trace back the onset of the transition between the gls and lls behavior at to a change in the scaling of the upper cutoff of the -distribution in the limit of infinite system sizes . in the case of strength - threshold distributions that are not truncated ,the usual procedure is to gradually increase the external force from zero up to the complete breakdown of the entire bundle .the critical strength of the fiber bundle is then defined as the maximum stress the system can support before it breaks down .in global load sharing models , the surviving fibers always carry the same stress , so that the critical fiber - bundle strength can be written as \,,\ ] ] where is the stress a surviving fiber carries at breakdown and is the fraction of surviving fibers .this result is also recovered within our approach , with . for nearest - neighbor lls models, however , vanishes in the limit of large system sizes , .as our stochastic models neglect spatial correlations , they can not describe such situations .we can , however , compare the results of our models with that of ref . , where also a stochastic load redistribution model is used . here, remains finite , even if the failing load is transferred only to a small , fixed number ( ) of randomly chosen surviving fibers . for and for a uniform strength - threshold distribution ,e.g. , it is found that for large systems .this can be compared with the corresponding result of our -model .if we choose , so that the failing load is , on average , transferred to two surviving fibers , we obtain for a uniform distribution of strength thresholds , i.e. , the discrepancy between the two models can be attributed to our assumption that at breakdown all surviving fibers carry the same stress , whereas it is shown in ref . that at breakdown , the surviving fibers have a broad distribution of stresses , with a pronounced exponential tail .it remains to be investigated whether our approach can be adapted to correctly analyze the behavior of fiber - bundle models with a strength - threshold distribution that is not truncated .10 url # 1`#1`urlprefixhref # 1#2#2 # 1#1 h. e. daniels , http://rspa.royalsocietypublishing.org/content/183/995/405.abstract[proc .r. soc . lond .a ] 183 ( 1945 ) 405 . m. kloster , a. hansen , p. c. hemmer , http://link.aps.org/abstract/pre/v56/p2615[phys .e ] 56 ( 1997 ) 2615 .p. hnggi , h. 
thomas, phys. rep. 88 (1982) 207.
we study fracture processes within a stochastic fiber-bundle model where it is assumed that after the failure of a fiber, each intact fiber obtains a random fraction of the failing load. within a markov approximation, the breakdown properties of this model can be reduced to the solution of an integral equation. as examples, we consider two different versions of this model that both can interpolate between global and local load redistribution. for the strength thresholds of the individual fibers, we consider a weibull distribution and a uniform distribution, both truncated below a given initial stress. the breakdown behavior of our models is compared with corresponding results of other fiber-bundle models. keywords: fracture mechanics, fiber-bundle model, statistical physics, branching process
bootstrap percolation on a graph is defined as the spread of _ activation _ or _ infection _ according to the following rule , with a given threshold : we start with a set of _ active _ vertices .each inactive vertex that has at least active neighbors becomes active .this is repeated until no more vertices become active , that is , when no inactive vertex has or more active neighbors .active vertices never become inactive , so the set of active vertices grows monotonously . to avoid confusion , we will use the terminology that each active vertex _ infects _ all its neighbors , so that a vertex that is infected ( at least ) times becomes active .we are mainly interested in the final size of the active set , and in particular whether eventually all vertices will be active or not .if they are , we say that the initial set _ percolates _ ( completely ) .we will study a sequence of graphs of order ; we then also say that ( a sequence of ) _ almost percolates _ if the number of vertices that remain inactive is , that is , if .bootstrap percolation on a lattice ( which is a special example of a cellular automata ) was introduced in 1979 by chalupa , leath and reich as a simplified model of some magnetic systems .since then bootstrap percolation has been studied on various graphs , both deterministic and random .one can study either a random initial set or the deterministic problem of choosing an initial set that is optimal in some sense .a simple example of the latter is the classical folklore problem to find the minimal percolating set in a two - dimensional grid ( i.e. , a finite square ^ 2 ] . )another extremal problem is studied by morris .the problem with a random initial set was introduced by chalupa , leath and reich ( lattices and regular infinite tree ) , and further studied on lattices by schonmann ; it has , in particular , been studied on finite grids ( in two dimensions or more ) , see aizenman and lebowitz , balogh and pete , cerf and cirillo , cerf and manzo , holroyd , balogh , bollobs and morris , gravner , holroyd and morris . in a recent paper , balogh et al . derived a sharp asymptotic for the critical density ( i.e. , the critical size of a random initial set ) for bootstrap percolation on grids of any dimension , generalizing results of balogh , bollobs , and morris .grids with a different edge set where studied by holroyd , liggett and romik .the study of bootstrap percolation on lattices is partly explained by its origin in statistical physics , and the bootstrap process is being successfully used in studies of the ising model ; see . latelybootstrap percolation has also been studied on varieties of graphs different from lattices and grids ; see , for example , balogh and bollobs ( hypercube ) ; balogh , peres and pete ( infinite trees ) ; balogh and pittel , janson ( random regular graphs ) ; an extension where the threshold may vary between the vertices is studied by amini . an anisotropic bootstrap percolation was studied by duminil - copin and van enter .further , a graph bootstrap percolation model introduced by bollobs already in 1968 , where edges are infected instead of vertices , was analyzed recently by balogh , bollobs and morris and balogh et al . . in the present paper , we study bootstrap percolation on the erds rnyi random graph with an initial set consisting of a given number of vertices chosen at random .( by symmetry , we obtain the same results for any deterministic set of vertices . 
) recall that is the random graph on the set of vertices where all possible edges between pairs of different vertices are present independently and with the same probability . as usual, we let depend on .a problem equivalent to bootstrap percolation on in the case was studied by scalia - tomba , although he used a different formulation as an epidemic .( ball and britton study a more general model with different degrees of severity of infection . ) otherwise , bootstrap percolation on was first studied by vallier ; we here use a simple method ( the same as ) that allows us to both simplify the proofs and improve the results .we will state the results for a general fixed ( the case is much different ; see remark [ r1 ] ) ; the reader may for simplicity consider the case only , since there are no essential differences for higher .we will see that there is a threshold phenomenon : typically , either the final size is small ( at most twice the initial size ) , or it is large [ sometimes exactly , but if is so small that there are vertices of degree less than , these can never become active except initially so eventually at most will become infected ] .we can study the threshold in two ways : in the first version , we keep and fixed and vary . in the second version, we fix and and vary .we will state some results for both versions and some for the former version only ; these too can easily be translated to the second version .we will also study dynamical versions , where we add new external infections or activations or new edges until we reach the threshold ; see section [ sdyn ] .apart from the final size , we will also study the time the bootstrap process takes until completion .we count the time in _ generations _ : generation 0 is , generation 1 is the set of other vertices that have at least neighbors in generation 0 , and so on .the process stops as soon as there is an empty generation , and we let be the number of ( nonempty ) generations . thus ,if we let be the set of vertices activated in generation , then bootstrap percolation does not seem to be a good model for usual infectious diseases ; see , however , ball and britton .it might be a better model for the spread of rumors or other ideas or beliefs ; cf .the well - known rule , `` what i tell you three times is true '' in carroll .bootstrap percolation can be also viewed as a simplified model for propagation of activity in a neural network .although related neuronal models are too involved for a rigorous analysis ( see , e.g. , ) they inspired study of bootstrap percolation on by vallier .there is a further discussion on the application of bootstrap percolation on to neuromodelling in . instead of , one might consider the random graph , with a given number of edges .it is easy to obtain a result for from our results for , using monotonicity , but we usually leave this to the reader . [ in the dynamical model in section [ ssdynm ] , we consider , however . ]an alternative to starting with an initial active set of fixed size is to let each vertex be initially activated with probability , with different vertices activated independently .note that this is the same as taking the initial size random with . for most results the resulting random variation in in negligible , and we obtain the same results as for , but for the gaussian limit in theorems [ tac][tac0 ] and [ tdg2 ] , the asymptotic variances are changed by constant factors .we leave the details to the reader .some open problems arise from our study . 
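For readers who want to experiment, here is a small simulation of bootstrap percolation with threshold r on G(n, p), run generation by generation exactly as defined above; it returns the final active size and the number of nonempty generations after generation 0. The use of networkx and the particular values of n, p, r and the initial set size are illustrative choices, not quantities taken from the analysis.

```python
# Hedged sketch: bootstrap percolation with threshold r on G(n, p).
import networkx as nx
import numpy as np

def bootstrap(n, p, r, a, seed=0):
    rng = np.random.default_rng(seed)
    G = nx.fast_gnp_random_graph(n, p, seed=seed)
    active = set(rng.choice(n, size=a, replace=False).tolist())  # generation 0
    marks = dict.fromkeys(G, 0)          # active neighbours seen so far
    frontier, generations = set(active), 0
    while frontier:
        newly = set()
        for u in frontier:               # each vertex of the last generation infects its neighbours
            for v in G[u]:
                if v not in active:
                    marks[v] += 1
                    if marks[v] >= r:    # r infections activate a vertex
                        newly.add(v)
        active |= newly
        frontier = newly
        if newly:
            generations += 1             # nonempty generations after generation 0
    return len(active), generations

print(bootstrap(n=5000, p=0.01, r=2, a=50))
```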
in ,balogh , bollobs and morris determine the critical probability for bootstrap percolation on grids when the dimension .a similar idea translated to the graph would be to study what happens when .this problem is not treated here although our methods might be useful also for such problems .the problem of majority percolation where a vertex becomes activated if at least half of its neighbors are active [ has been studied on the hypercube by balogh , bollobs and morris . on the -dimensional grid but on the graph, this problem is completely different and still open .( we thank the referee for these suggestions . )the method is described in section [ ssetup ] .the main results are stated in section [ smain ] , with further results in sections [ sdyn ] and [ sbound ] .proofs are given in sections [ soverview][slast ] .all unspecified limits are as .we use for convergence in distribution and for convergence in probability of random variables ; we further use and in the standard sense ( see , e.g. , and ) , and we use w.h.p .( with high probability ) for events with probability tending to 1 as . note that , for example , `` w.h.p . ''is equivalent to `` '' and to `` , '' and that `` w.h.p . ''is equivalent to `` ; '' see .a statement of the type `` when , then w.h.p . '' ( or similar wording ) , where and are two events , means that , that is , that w.h.p . ``( not ) or '' holds .( see , e.g. , theorem [ t2 ] and proposition [ pg3 ] . ) if is bounded away from 0 , this is equivalent to `` conditioned on , holds w.h.p . ''if is a sequence of random variables , and and are sequences of real numbers , with , we say that if .occasionally we use the subsubsequence principle ( , page 12 ) , which says that to prove a limit result ( e.g. , for real numbers , or for random variables in probability or in distribution ) , it is sufficient to show that every subsequence has a subsubsequence where the result holds .we may thus , without loss of generality , select convenient subsequences in a proof , for example , such that another given sequence either converges or tends to .[ ssetup ] in order to analyze the bootstrap percolation process on , we change the time scale ; we forget the generations and consider at each time step the infections from one vertex only .choose and give each of its neighbors a _mark _ ; we then say that is _ used _ , and let be the set of used vertices at time 1 .we continue recursively : at time , choose a vertex .we give each neighbor of a new mark .let be the set of inactive vertices with marks ; these now become active , and we let be the set of active vertices at time .we finally set , the set of used vertices .[ we start with , and note that necessarily for . ]the process stops when , that is , when all active vertices are used .we denote this time by , clearly , ; in particular , is finite .the final active set is ; it is clear that this is the same set as the one produced by the bootstrap percolation process defined in the ; only the time development differs .hence we may as well study the version just described .[ this is true for any choice of the vertices . for definiteness, we may assume that we keep the unused , active vertices in a queue and choose as the first vertex in the queue , and that the vertices in are added at the end of the queue in order of their labels .thus will always be one of the oldest unused , active vertices , which will enable us to recover the generations ; see further section [ sgen ] . in section [ sdyn ] , we consider other ways of choosing . 
]this reformulation was used already by scalia - tomba ( for a more general model ) .it is related to the ( continuous - time ) construction by sellke for an epidemic process .let , the number of active vertices at time . since and for , we also have moreover , since the final active set is , its size is hence , the set percolates if and only if , and almost percolates if and only if .we analyze this process by the standard method of revealing the edges of the graph only on a need - to - know basis .we thus begin by choosing as above and then reveal its neighbors ; we then find and reveal its neighbors , and so on .let , for , be the indicator that there is an edge between the vertices and .this is also the indicator that gets a mark at time , so if is the number of marks has at time , then at least until is activated ( and what happens later does not matter ) .note that if , then , for every , if and only if .the crucial feature of this description of the process , which makes the analysis simple , is that the random variables are i.i.d . .we have defined only for and , but it is convenient to add further ( redundant ) variables so that are defined , and i.i.d ., for all and all .one way to do this formally is to reverse the procedure above .we start with i.i.d . , for and , and a set .we let and start with an empty graph on .we then , as above , for select if this set is nonempty ; otherwise we select ( taking , e.g. , the smallest such vertex ) .we define by ( [ mi ] ) for all and , and update and .furthermore , add an edge to the graph for each vertex such that .finally , define by ( [ t1 ] ) or ( [ t2 ] ) .it is easy to see that this constructs a random graph and that , , is as above for this graph , so the final active set of the bootstrap percolation on the graph is .define also , for , if , then is the time vertex becomes active , but if , then never becomes active .thus , for , by ( [ mi ] ) , each has a binomial distribution .further , by ( [ mi ] ) and ( [ yi ] ) , each has a negative binomial distribution , moreover , these random variables are i.i.d .we let , for so , by ( [ at2 ] ) , and our notation , by ( [ as ] ) , ( [ t2 ] ) and ( [ at ] ) , it suffices to study the stochastic process . note that is a sum of i.i.d .processes , each of which is -valued and jumps from 0 to 1 at time , where has the distribution in ( [ yik ] ) .we write when we want to emphasize the number of summands in ; more generally we define for any [ assuming for consistency that .the fact that , and thus , is a sum of i.i.d .processes makes the analysis easy ; in particular , for any given , where in particular , we have to avoid rounding to integers sometimes below , we define and for all real .we also sometimes ( when it is obviously harmless ) ignore rounding to simplify notation .[ smain ] [ ssprob ] for given , , and we define , for reasons that will be seen later , in particular , for , and . for future use , note also that ( [ tc ] ) can be written our standard assumptions and imply that \\[-8pt ] { a{_\mathsf{c}}}&\to&\infty,\qquad { a{_\mathsf{c}}}/n\to0,\qquad { b{_\mathsf{c}}}/n\to0,\qquad p{b{_\mathsf{c}}}\to0,\nonumber\end{aligned}\ ] ] and further if , then and ( [ bc2a ] ) yields if is larger [ , i.e. 
, , this is not quite true , but in this case both and decrease to 0 very fast ; in all cases recall that our main interest is in rather than ; see ( [ sbin ] ) ; for we obviously have similar results , with additional error terms depending on ; see ( [ es ] ) and , for example , ( [ esapp ] ) .note further that by ( [ bc ] ) , for any , which by simple calculations yields , provided , \\[-8pt ] & & \quad\iff\quad { b{_\mathsf{c}}}\to \cases { \infty , \cr ( r-1)!{^{-1}}e^{-\beta } , \cr 0.}\nonumber\end{aligned}\ ] ] our first result , to be refined later , shows that the threshold for almost percolation is .the proof of the theorems in this section are given later ( sections [ sthreshold][sgen ] ) .let us recall that is the final size of the active set , and that .[ t1 ] suppose that and .[ t1sub ] if , then , where is the unique root in } ] .in particular , in the subcritical case [ t1sub ] , we thus have w.h.p . , so the activation will not spread to many more than the originally active nodes . in the supercritical case [ t1super ], we have the following more detailed result . [t2 ] suppose that , and , and that as , for example , in theorem [ t1][t1super ] .then : [ t2oo ] if , so by ( [ bclim ] ) , then . in particular , w.h.p .we do not have complete percolation .[ t20 ] if , so by ( [ bclim ] ) , then w.h.p . , so we have complete percolation .[ t2b ] if , so by ( [ bclim ] ) , then ; in particular , . more generally , even if we do not have almost percolation w.h.p, the result holds w.h.p .provided . by the last statementwe mean that . in particular ,it holds w.h.p . conditioned on , provided we have .[ rdegrees ] let be the set of vertices in with degrees less than .these are never activated unless they happen to be in the initially active set , and for each of the vertices , this has probability if ; hence trivially .we have [ cf .( [ bc2a ] ) and ( [ bc2c ] ) ] with concentration of around its mean if and a limiting poisson distribution if ; see , sections 6.2 and 6.3 , and . comparing this with theorem [ t2 ] we see that in the supercritical case , and with , the final inactive set differs from by vertices only , and in the case [ combining cases [ t20 ] and [ t2b ] in theorem [ t2 ] ] , w.h.p . . in other words , when we get a large active set , the vertices that remain inactive are mainly the ones with degrees less than , and if further , they are w.h.p .exactly the vertices with degrees less than .we can , as discussed earlier , also consider thresholds for for a given .[ tpc ] suppose that and that with .then the threshold for for almost percolation is in the sense that if , for some , , then w.h.p . , while if , then w.h.p . in the latter case, further w.h.p . if and only if for some .note that .equation ( [ pc ] ) is the inverse to ( [ ac ] ) in the sense that the functions and that they define are the inverses of each other . for , ( [ pc ] ) simplifies to .note that the thresholds for complete and almost percolation are different only for large .indeed , for such a case the threshold for almost percolation can be so small that the graph may not be even connected .then , besides , we have the second threshold for the complete percolation ; for example , if and , there are two thresholds : for almost percolation , and for complete percolation . if is small enough so that is dense enough ( e.g. 
, if when ) , these two thresholds coincide .[ ssgauss ] to study the threshold at more precisely , we approximate in ( [ pi ] ) by the corresponding poisson probability , note that is a differentiable , increasing function on , and that by a standard estimate for poisson approximation of a binomial distribution ( see , e.g. , , theorem 2.m ) , where denotes the total variation distance . a sharper estimate for small will be given in lemma [ ltpi ] .we define , for given and , and let ] .the following theorem shows that the precise threshold for is , with a width of the threshold of the order . denotes the standard normal distribution function .note that theorem [ t2 ] applies , provided , and provides more detailed information on when is large [ i.e. , in ( ii ) and in ( iii ) conditioned on , say , ] .[ tac ] suppose that and .[ tac- ] if , then for every , w.h.p . .if further , then .[ tac+ ] if , then .[ tac0 ] if , then for every and every with , \bigr ) & \to & 1-\phi \bigl((r-1)^{1/2 } y\bigr).\end{aligned}\ ] ] for the corresponding result when we keep fixed and change , we define , for given and , since is an increasing function of , is increasing , with and , provided , for example , , [ attained at .given with , there is thus ( for large ) a unique such that we will see in lemma [ lpcx ] that .it is easily verified that , for large at least , , and thus and are the inverses of each other . [ tpcxx ] suppose and with . if , then . if further , then . if , then ; if further , then w.h.p . if , then for every , \biggr ) & \to & 1- \phi\bigl(r(r-1){^{-1/2}}{\lambda}\bigr).\end{aligned}\ ] ] if further , then ( [ sam ] ) can be replaced by . in the subcritical cases in theorems [ t1][t1sub ] and [ tac][tac- ], we also obtain a gaussian limit for the size of the final active set .[ tgsub ] suppose and .let be the smallest positive root of [ tgsub1 ] if , then with given by ( [ ika2 ] ) , and , where . [ tgsub2 ]if and also , then , more precisely and [ rtgsub ] it follows from the proof that in both cases , for large at least , is the unique root of ( [ tx ] ) in ] . in [ tgsub2 ] , this is not always true . by lemma [ ld ] , still for large , and . if , for example , and , then , while yields . [ ssgen ] in the supercritical case , when w.h.p .almost percolates by theorem [ tac ] , we have the following asymptotic formula for the number of generations until the bootstrap percolation process stops . [ tgensuper ]suppose that , and .assume than [ so that w.h.p .almost percolates ] .then , w.h.p . , \\[-8pt ] & & { } + \frac{\log n}{np } + { o_{\mathrm p}}(1 ) .\nonumber\end{aligned}\ ] ] this theorem is an immediate consequence of propositions [ pgensuper0 ] , [ pgensuper+ ] , [ pgen2 ] and [ pg3 ] in section [ sgen ] .moreover , these propositions show that the three terms [ excepting the error term in the formula ( [ tg ] ) are the numbers of generations required for three distinct phases of the evolution : the beginning including ( possibly ) a bottleneck when the size is about ; a period of doubly exponential growth ; and a final phase where the last vertices are activated .note that each of the three terms may be the largest one .[ egen ] let , with , and suppose .then the third term in ( [ tg ] ) is and can be ignored while the second term is .if we are safely supercritical , say , then the first term too is and the result is w.h.p . 
, dominated by the second term .if instead the process is only barely supercritical , with say , with , then the first term in ( [ tg ] ) is with and the exponent , which dominates the other terms .note that the exponent here can be any positive number in ( with if and , so the graph is very sparse and the initial set is minimal ) . finally , if , say , so the graph is very sparse , and , then again the first term in ( [ tg ] ) is , the second is , while the third is , which thus dominates the sum .note that the second term is , and the third is [ and in many cases so it can be ignored ] , while the first term may be as large as [ although it too in many cases is ] . in the subcritical case, one could presumably obtain similar results for the number of generations until the process stops , but we have not pursued this topic here .[ sdyn ] we usually assume , as above , that and are given , but we can also consider dynamical models where one of them grows with time .[ ssdyna ] in the first dynamical model , we let and be given , and consider a realization of .we start with all vertices inactive ( and completely uninfected ) .we then activate the vertices ( from the outside ) one by one , in random order .after each external activation , the bootstrap percolation mechanism works as before , activating all vertices that have at least active neighbors until no such vertices remain ; this is done instantaneously ( or very rapidly ) so that this is completed before the next external activation .let be the number of externally activated vertices the first time that the active set is `` big '' in some sense .for example , for definiteness , we may define `` big '' as .[ it follows from theorem [ t1 ] that any threshold for a constant will give the same asymptotic results , as well as thresholds tending to 0 or sufficiently slowly .if , we may also choose the condition , that is , complete percolation .] then is a random variable ( depending both on the realization of and on the order of external activations ) . in this formulation ,the threshold result in theorem [ t1 ] may be stated as follows .[ td1 ] suppose that and . then .the active set after external activations is the same as the final active set in the static model considered in the rest of this paper with these vertices chosen to be active initially .hence , for any given , if and only if bootstrap percolation with initially active yields a big final active set . in particular ,if , then theorem [ t1][t1sub ] implies that , while theorem [ t1][t1super ] and [ t1complete ] imply that .more precisely , theorem [ tac ] yields a gaussian limit .[ tdg1 ] suppose that and . then .let . 
then , arguing as in the proof of theorem [ td1 ] but now using theorem [ tac][tac0 ] ( with ) , we find we have here for simplicity assumed that the external activations are done by sampling without replacement , but otherwise independently of whether the vertices already are ( internally ) activated .a natural variation is to only activate vertices that are inactive .let be the number of externally activated vertices when the active set becomes big in this version .since a new activation of an already active vertex does not matter at all , equals in the version above the number of externally active vertices among the first that are not already internally activated .thus is the number of external activations that hit an already active vertex .it is easily verified that this is , and thus theorem [ td1 ] holds for as well ; we omit the details .it seems likely that it is possible to derive a version of the gaussian limit in theorem [ tdg1 ] for too , but that would require a more careful estimate of ( and in particular its variance ) , which we have not done , so we leave this possibility as an open problem .[ rdyna ] one way to think about this dynamical model , where we add new active vertices successively and may think of these as being initially active , is to see it as a sequence of bootstrap percolation processes , one for each ; the processes live on the same graph but have different numbers of initially active vertices , and they are coupled in a natural way . in order to really have the same realization of for different , we have to be careful in the choice of the order in which we explore the vertex neighborhoods , that is , the choice of . [recall that is constructed from the indicators and the sequence ; see section [ ssetup ] .] we can achieve this by first making a list of all vertices in the ( random ) order in which they are externally activated .we then at each time choose as an unused internally activated vertex ( e.g. , the most recent one ) if there is any such vertex , and otherwise as the next unused vertex in the list .this model makes it possible to pose now questions about the bootstrap percolation .for example , we may consider the critical process starting with exactly initially active vertices ( i.e. , the first process that grows beyond the bottleneck and becomes big ) and ask for the number of generations until the process dies out .alternatively , we may consider the process starting with exactly initially active vertices ( i.e. , the last process that does not become big ) and ask for its final size . 
such questions will not be treated in the present paper , but we mention that it is easily seen that the final size with initially active vertices is so that the final size jumps from about to about with the addition of a single additional initial vertex .furthermore , we conjecture that , under suitable conditions , the number of generations for the process with initially active vertices is of order ( which is much larger than the number of generations for any fixed ; see section [ ssgen ] ) .[ ssdyni ] an alternative to external activations is external infections , where we again start with all vertices inactive and uninfected , and infect vertices one by one from the outside , choosing the infected vertices at random ( independently and with replacement ) ; as before , infections ( external or internal ) are needed for activation , and active vertices infect their neighbors .let be the number of external infections when the active set first becomes `` big '' ( as in section [ ssdyna ] ) .( thus , is a random variable . ) in the original model , each initially active vertex infects about other vertices so the total number of initial infections is about ; it is thus easy to guess that .indeed , this is the case as is shown by the next theorem .we can not ( as far as we know ) directly derive this from our previous results , since the dependencies between infections in the two versions are slightly different , but it follows by a minor variation of our method ; see section [ sthreshold ] .we believe that the result could be sharpened to a gaussian limit as in theorem [ tdg1 ] , but we leave this to the reader . [tj ] suppose that and . then . in particular , for , we thus have .[ ssdynm ] in the second dynamical model , and are given ; we start with vertices of which are active , but no edges .we then add the edges of the complete graph one by one , in random order .as in the previous dynamical model , bootstrap percolation takes place instantaneously after each new edge is added .it is convenient to use the standard method of adding the edges at random times ( as in , e.g. , ) .thus , each edge in is added at a time , where are independent and uniformly distributed on } ] , the resulting graph is with .( we use to denote this time variable , in order not to confuse it with the time used to describe the bootstrap percolation process . 
)let the random variable be the number of edges required to obtain a big active set , where `` big '' is defined as in section [ ssdyna ] .[ tdg2 ] suppose and with .then more precisely , the proof is given in section [ sgauss ] .[ rdynpp ] the proof of theorem [ tdg2 ] is based on using our earlier results for a single .we might also want to study the bootstrap percolation process for all at once [ or equivalently , in for all at once ] , that is , with a coupling of the models for different , for given and .as in remark [ rdyna ] , this requires a careful choice of the order in which the vertices are inspected .we can achieve this by modifying the formulation in section [ ssetup ] as follows : when we have chosen a vertex , we reveal the times that the edges from it appear ; this tells us the neighborhood of at any time .we begin by choosing as the initially active vertices .we then , after each choice of , , calculate for each of the remaining vertices the time when it acquires the edge to , and let be the vertex such that this time is minimal .then , fixing any time , the chosen vertices will all be active until the first time that no unused active vertices remain , and the process stops . in this manner , we have found a choice of that satisfies the description in section [ ssetup ] for all } ] , \\[-8pt ] & = & ( 1 - { \theta } ) \sum_{j = r}^\infty\frac { ( cx)^j}{j!}e^{-cx } - x + { \theta}\nonumber \\ \label{f1b1 } & = & 1-x-(1-{\theta}){\mathbb p}\bigl({\operatorname{po}}(cx)\le r-1\bigr ) \\ \label{f1b2 } & = & 1-x-(1-{\theta})\sum_{j=0}^{r-1 } \frac{(cx)^j}{j!}e^{-cx},\end{aligned}\ ] ] and let be the smallest root of similarly , let be the largest root in } ] , and ; further when while and .we also define thus , , .[ lf ] if , then ( [ f10 ] ) has a unique root } ] , and is a continuous strictly increasing function of . if , then there exists and with such that ( [ f10 ] ) has three roots in } ] ; if or , there are two roots , one of them double .the smallest root is strictly increasing and continuous on } ] , and is a double root .[ pn = cn ] suppose that , and for some constants and .[ p = cnaon ] if , that is , if , then .[ pn = cn=0 ] if , that is , if , then .[ pn = cnsub ] if , then , where is the unique nonnegative root of ( [ f10 ] ) .[ pn = cnsuper ] if and given by lemma [ lf ] , then , where is the smallest nonnegative root of ( [ f10 ] ) .there is thus a jump in the final size at .remark [ rgthc ] shows how to find .[ rgthcc ] and are decreasing functions of .[ is strictly decreasing , while is constant for large . ]hence their largest value is , by the calculation in remark [ rgthc ] , thus , , .the threshold for can be calculated too . for , for ,where ; numerically , this is we have here considered a given and varied .if we instead , as in theorem [ tpc ] , take a given for a fixed and vary , we have a similar phenomenon .lemma [ lf ] and theorem [ pn = cn ] apply for every combination of and , and by considering the set of such that ( [ f10 ] ) has two or three roots , it follows from remark [ rgthcc ] that if , then , where is the unique root of ( [ f10 ] ) and thus a continuous function of , while if , then there is a range of where ( [ f10 ] ) has three roots , and one value of where the limit value jumps from a `` small '' to a `` large '' value. 
thus there is , again , a kind of phase transition .the following theorem shows that if we for simplicity take , then the precise threshold for in theorem [ pn = cn](iv ) is , with a width of the threshold of the order .[ cngauss ] suppose that and with fixed .let , and be as in lemma [ lf ] ; thus and are the two roots in } ] , ( [ ika ] ) implies ( using the fact just shown that w.h.p . ) that , where is the unique root in } ] into different intervals .then in the proof of theorem [ t1][t1super ] and [ t1complete ] , it remains to show that if is supercritical , then for .let , where slowly but is otherwise arbitrary .[ lbulk ] suppose that and .then , for any , w.h.p . for all ] . by lemma [ l10r ] , w.h.p .for all such , _ case _ 2 : }\bigr ) & \le & \sum_{j=3}^{j-1 } { \mathbb p}\bigl(s_n(t_j)\le2t_j\bigr ) \\[-2pt ] & \le & \sum_{j=3}^{j-1 } \frac6{t_j } < \frac{12}{t_3 } < \frac2{r{t{_\mathsf{c}}}}=o(1).\end{aligned}\ ] ] _ case _ 3 : ] .let and . then & = & o((t'_2p)^{r-1}e^{-t'_2p } ) = o((np)^{r-1}e^{-c_{1 } np})\\[-1pt ] & = & o((np){^{-1}}).\end{aligned}\ ] ] thus , , and w.h.p ., , that is , ._ case _ 5 : ] , and thus w.h.p . , which proves the second claim and completes the proof .note that ( [ tjo ] ) and ( [ smu ] ) show that is , to the first order , shifted horizontally by , while in our standard model is shifted vertically by . since we study the hitting time of the linear barrier , these are essentially equivalent .[ sgauss ] we begin with an estimate of defined in ( [ tpi ] ) . [ ls1 ] suppose that and .then , for large , for ] such that .selecting a subsequence , we may further assume that ] , and thus , by ( [ jul ] ) and lemma [ lmin ] , hence , if , then and thus , for large , so .conversely , if , then ( [ jull ] ) yields , for large , so .consequently , .we also need more precise estimates of .the following gaussian process limit is fundamental . ] , with the skorohod topology ; see , for example , ( for ; the general case is similar by a change of variables ) or , chapter 16 .[ lg ] suppose and with .then in ] for every fixed , can also be expressed as convergence in .proof of lemma [ lg ] this is a result on convergence of empirical distribution functions ( of ) ; cf . , theorem 16.4 ; we get here a brownian motion instead of a brownian bridge as in because we consider for each only a small initial part of the distribution of . for every fixed , by ( [ pia ] ) and ( [ tccond ] ) , , and thus by ( [ svar ] ) and ( [ tc1 ] ) hence ( [ sbin ] ) and the central limit theorem yield for every , which proves for each fixed .this is easily extended to finite - dimensional convergence : suppose that are fixed , and let \} ] and ] . by a simple standard argument, there thus exists a sequence , where we may further assume that , such that w.h.p . for all \cup[1+\delta_n,10r] ] , or for all ; in the latter case , for any , w.h.p . for all by lemma [ lbulk ] , so ; hence and , more precisely , provided , theorem [ t2 ] applies .we thus only have to investigate the interval ] to a continuous function is equivalent to uniform convergence , this means that ( a.s . 
) uniformly for ; in particular , uniformly for ] , \\[-8pt ] & = & ( n - a){{\tilde\pi}}(x{t{_\mathsf{c}}})+{t{_\mathsf{c}}}^{1/2 } \bigl(\xi+o(1)\bigr ) \nonumber\end{aligned}\ ] ] and thus , refining ( [ ax1 ] ) , \\[-8pt ] & = & a+(n - a){{\tilde\pi}}(x{t{_\mathsf{c}}})-x{t{_\mathsf{c}}}+{t{_\mathsf{c}}}^{1/2}\xi+o_{\mathrm p}({t{_\mathsf{c}}}^{1/2}).\nonumber\end{aligned}\ ] ] hence , recalling ( [ acx ] ) and that the minimum there is attained at ] , where now .the infimum in ( [ hp ] ) is attained for some , where by lemma [ ls1 ] for large , and an argument as in ( [ ax1 ] ) shows that .we may assume that is chosen such that .then , by ( [ emm0 ] ) , where , by ( [ hp ] ) and the comments just made ( for large ) , further , writing ( [ hp ] ) as , with , we have at the minimum point the derivative .hence , uniformly for and , for any , using ( [ tpii ] ) , and thus by the mean - value theorem , for some between and , since the minimum in ( [ hp ] ) may be taken over such only , for suitable , this yields consequently , ( [ emm1 ] ) and ( [ em2 ] ) yield hence , where and , and the different parts of the theorem follow . [ ld ] suppose that and .then , for large at least , the minimum point in ( [ acx ] ) is unique , and , ; more precisely , let then let then for , and in particular ( for large ) for ; see ( [ tccond ] ) .further , and by ( [ tpii ] ) and ( [ tc ] ) , for large , hence , there is a unique ] , as we defined after ( [ acx ] ) .let ] . as in the proof of theorem[ tac ] , we may , by the skorohod coupling theorem ( , theorem 4.30 ) assume that the limit in ( [ lg ] ) holds a.s ., uniformly in . for , , and ( [ lg ] ) then implies that , uniformly for , \\[-8pt ] & = & ( n - a)\pi(t)+{t{_\mathsf{c}}}^{1/2 } w\bigl({\varphi}({\alpha})^r / r\bigr)+o_{\mathrm p}({t{_\mathsf{c}}}^{1/2}).\nonumber\end{aligned}\ ] ] let . then , by ( [ tib ] ) and lemma [ ltpi ] , for , since w.h.p . , we may here substitute , and obtain \\[-8pt ] & = & a+(n - a){{\tilde\pi}}(t)-t+{t{_\mathsf{c}}}^{1/2 } \bigl(\xi+o_{\mathrm p}(1)\bigr).\nonumber\vadjust{\goodbreak}\end{aligned}\ ] ] define the function by thus ( [ tx ] ) is .then we have shown in ( [ j16 ] ) , the function is continuous on with .consider the two cases separately .[ tgsub1 ] : when we have , by ( [ gt ] ) and ( [ tpi2 ] ) , ( for large ) , since .further , on ] .it follows from ( [ tpi2 ] ) and ( [ ika2 ] ) that . since also , ( [ g ] ) implies that for all between and , and thus the mean value theorem yields which together with ( [ aahm ] ) yields , recalling , the result in [ tgsub1 ] follows .[ tgsub2 ] : let and be as in the proof of lemma [ ld ] , ( [ ldg ] ) and ( [ ldh ] ) .we know that and .further , for , we have by ( [ tccond ] ) , ( [ tpi ] ) , ( [ tpi2 ] ) and ( [ tpii ] ) , hence , by ( [ ldh ] ) , , and \\[-8pt ] & = & \frac{r-1}{{t{_\mathsf{c}}}}\bigl(1+o(1)\bigr)+o\biggl(\frac{1}{n}\biggr ) = \frac{r-1}{{t{_\mathsf{c}}}}\bigl(1+o(1)\bigr ) . \nonumber\end{aligned}\ ] ] consequently , a taylor expansion yields , for , we have and thus .further , ( [ tpi2 ] ) and ( [ ika2 ] ) again yield . hence , ( [ gtay ] ) yields ( [ tx2 ] ) . since theorem [ tac ] yields and w.h.p . , ( [ aahm ] ) yields ; thus , similarly , ( [ gtay ] ) yields , using , hence , w.h.p ., every between and satisfies , and then by ( [ ldgg ] ) , finally , the mean value theorem yields , similarly to case [ tgsub1 ] , and the result in [ tgsub2 ] follows , since and . 
proof of theorem [ tdg2 ] we use the version described in section [ ssdynm ] where edges are added at random times . let be the time the active set becomes big , that is , the time the edge is added . for any given , then if and only if at time , the active set is big , which is the same as saying that there is a big active set in . fix and choose . then theorem [ tpcxx ] [ with yields . in other words , , or be the number of edges at time .then and , in analogy with lemma [ lmartin ] , is a martingale on .thus , doob s inequality yields , as in the proof of lemma [ l2 ] , for any } ] and let .the assumptions on and imply . for ] , by ( [ tpi2 ] ) and ( [ tccond ] ) , and thus by ( [ acx ] ) \\[-8pt ] & = & a+\inf_{t\le3{t{_\mathsf{c}}}}\frac { n{{\tilde\pi}}(t)-t}{1-{{\tilde\pi}}(t ) } = a-{a{_\mathsf{c}^*}}.\nonumber\end{aligned}\ ] ] in particular , by our assumption , .consequently , by ( [ gs ] ) and ( [ gh ] ) , for any fixed small and , w.h.p .\\[-8pt ] & = & \bigl(1+o({\varepsilon})\bigr)\biggl(h+\frac{r-1}{2{t{_\mathsf{c}}}}(t - t_*)^2\biggr).\nonumber\end{aligned}\ ] ] let and .for and , lemmas [ l10r ] and [ lmin ] imply that w.h.p . , for some constant . the numbers of generations required to cover the intervals ] are thus , so , where is the number of generations needed to increase the size from at least to at least .to find , we may redefine by starting with and iterate as in ( [ tj ] ) until we reach .( note that since is increasing , if we start with a larger , then every will be larger .hence , to start with exactly can only affect by at most 1 . ) by ( [ ga ] ) and lemma [ lgen1 ] , we may on the interval ] . by ( [ pia ] ) and ( [ fd ] ) , for ( with ) , if is large enough so , we may thus choose and such that for all such ( and large ) for , ( [ fd ] ) implies so choosing large enough , we have for all ] for all with .however , if , then , and it follows that , since both and are monotone , w.h.p . for all ] , and then , by induction as in lemma [ lgen1 ] , for all with . consequently , w.h.p . to find , rewrite ( [ fd ] ) as where . iterating we see that , for , and thus and consequently , in order to simplify this , note that , using ( [ tc ] ) , and thus further , we may assume that , since otherwise the process is subcritical and w.h.p . by theorem [ t1 ] .hence , and thus , since also , we may assume that , so , and then ( [ k1 ] ) yields finally , ( [ jb ] ) , ( [ acu ] ) and ( [ kll ] ) yield note that the right - hand side depends on only in the error term .hence , we have the same result for , and the result follows by ( [ t8 ] ) and the comments at the beginning of the proof .we finally consider the evolution after vertices have become active .we let , as in section [ sthreshold ] , where slowly ; we assume that [ which is possible since by ( [ tccond ] ) ] . by remark [ rbulk ] , w.h.p ., so it suffices to consider the evolution when less than vertices remain .let be the -field describing the evolution up to time .[ lnassjo ] for any and with , the conditional distribution of given is , where if further , then , uniformly in all such and , conditioned on , is a given number , and of the summands in ( [ st ] ) , are zero . for any of these terms ,the probability that it changes from at time to 1 at time is , by ( [ pi ] ) , hence , the conditional distribution of is . to see the approximation ( [ pitu1 ] ) , note first that for , since we assume , we have so . 
hence , using again and recalling the notation from ( [ bc2a ] ) , \\[-8pt ] & \sim&\frac{n^{r-1}}{(r-1)!}p^r(1-p)^n = \frac{p{{b{_\mathsf{c}}}'}}n . \nonumber\end{aligned}\ ] ] furthermore [ cf .( [ bc2a ] ) ] , still for , \\[-8pt ] & \sim & \frac{n^{r-1}}{(r-1)!}p^{r-1}(1-p)^n = \frac{{{b{_\mathsf{c}}}'}}n .\nonumber\end{aligned}\ ] ] consequently , and [ lg3 ] suppose that , and . if and , then ; in particular , w.h.p .we have , using ( [ es ] ) and ( [ pit ] ) , since implies , and similarly , using ( [ svar ] ) , thus , by chebyshev s inequality , since , [ pg3 ] suppose that , and .then , when , in particular , if further for some , then .furthermore , when , w.h.p . . by remark [ rbulk ] ,after generations , the active size is w.h.p .if , we can choose , so w.h.p . and .more generally , if , we have by ( [ pitt ] ) , hence , w.h.p . , which means that no further activations occur after . consequently , in this case too , w.h.p . . in particular, this proves that w.h.p .when , since w.h.p . if by theorem [ t2 ] . further , when , ( [ bclim ] ) implies that for large , so , and the result holds in this case. now assume that . for convenience ,we modify the counting of generations and start at , regarding the active but unused vertices at as `` generation 0 . ''( we may assume that is an integer . )thus define , recursively , since w.h.p . , it follows by induction that , , and thus w.h.p . consequently , it suffices to estimate . by lemma [ lnassjo ] , conditioned on [ i.e. , on and the evolution up to , which in particular specifies , for large , and thus , by induction , since , further , lemma [ lg3 ] yields w.h.pconsequently , ( [ lisa ] ) implies , w.h.p . for all ( simultaneously ) , recall that by ( [ tccond ] ) ,so we may assume .if is chosen such that , then ( [ jan ] ) implies that w.h.p . and thus .hence , for any , w.h.p . which is another way of saying , lemma 3 , for a lower bound , fix with , and define the deterministic numbers by let .we claim that w.h.p . by our assumption , we have , so geometrically fast . by lemma [ lg3 ] and , w.h.p . so ( [ manne ] ) holds w.h.p . for .say that is _ good _ if and _ fat _ if .let . at time we have active but unused vertices .further , by lemma [ lnassjo ] we have , conditioned on ( which specifies both and ) , by lemma [ lnassjo ] , for large , so if is good but not fat , and chebyshev s inequality yields , since is decreasing for , say that is _ bad _ if is not good and that _ fails _ if is fat or bad .then , by stopping at the first that fails we see that since w.h.p . by lemma [ lg3 ] and the final sum is because the terms increase geometrically , so the sum is dominated by its largest ( and last ) term .we have shown that w.h.p ., if , then and thus . hence , by ( [ manne ] ) and ( [ siv ] ) , w.h.p . combining the upper bound ( [ g4 ] ) and the lower bound ( [ g5 ] ) , we find by ( [ bc ] ) , and hence , finally ( [ g6 ] ) yields the result now follows from ( [ cec ] ) .[ spf+][slast ] we prove in this section theorems [ pn = cn ] , [ sqrtn ] and [ sqrtnn ] related to the boundary cases . we consider first the case .proof of lemma [ lf ] by the implicit function theorem , at least locally , the root is smooth except at points where we begin by studying such _critical points_. let ; cf . 
( [ tpi ] ) .differentiations yield we have [ see ( [ f1b1 ] ) ] and thus .hence , ( [ critical ] ) holds if and only if which imply and thus let , , so ( [ ch ] ) says .then , by ( [ cgg ] ) , since and for , has a global minimum at , and the minimum value is furthermore , as , and as too , because then and .consequently , if , then ( [ ch ] ) has no solution , and thus there is no critical point .if , there is exactly one satisfying ( [ ch ] ) [ viz ., , and if , there are two .since ( [ ch ] ) implies , these roots are in . to complete the proof , it is perhaps simplest to rewrite ( [ f10 ] ) as , with since for , is a smooth function on } ] , and is its inverse . if , then only at a single point , and it follows again that is a strictly increasing function and is its inverse . if , then at two values and with and .it can be seen , for example , using ( [ cgg ] ) , that , and thus is _ decreasing _ on the interval ] if is large enough . ] [ rgthc ] if , then thus is the smallest root of , or equivalently where is the smallest root of ; further , while is the other root of . if , we have and thus and , using ( [ gthx ] ) and ( [ ccr ] ) , for , the two roots and of are smooth functions of , and thus where the last inequality follows from ( [ gthx ] ) , and similarly . hence , and are decreasing functions of , as claimed in remark [ rgthcc ] . [ll10r ] suppose that , and .then .we may assume .[ note that for . ] then by ( [ pia ] ) , and thus the expected number of activated vertices is . proof of theorem [ pn = cn ] first , in [ p = cnaon ] and [ pn = cn=0 ] , .let . taking in lemma [ ll10r ] , we find w.h.p . and thus whence . consequently , w.h.p . , proving [ p = cnaon ] and [ pn = cn=0 ] . next , by ( [ as ] ) , lemma [ l1 ] and ( [ es ] ) , uniformly for all , and thus , using also ( [ dtv ] ) , substituting , we find by ( [ tpi ] ) , since , uniformly in all , and , recalling ( [ f1a ] ) , still uniformly in , let .since for , and thus by compactness is bounded from below on ] , and thus .furthermore , both in [ pn = cnsub ] and in [ pn = cnsuper ] with , we have and thus if is small enough , , so ( [ f1c ] ) implies that w.h.p . and thus .the proof of theorem [ cngauss ] is very similar to the one of theorem [ tac ] .we first give a more precise estimate of the process , which is the analog of lemma [ lg ] in the case .however , in this case , we get a brownian bridge because here we consider a large part of the distribution of .[ step1 ] suppose , and with and .then in ] , is the empirical distribution function of , and thus by , theorem 16.4 , in } ] , which proves the result since .proof of theorem [ cngauss ] it suffices to consider such that . by ( [ es ] ) , ( [ dtv ] ) and ( [ tpi ] ) , \\[-8pt ] & = & ( n - a)\psi(cx)+o(1).\nonumber\end{aligned}\ ] ] by the skorohod coupling theorem ( , theorem 4.30 ), we may assume that the processes for different are coupled such that the limit ( [ eqstep1 ] ) in lemma [ step1 ] holds a.s . , and not just in distribution .since convergence in ] . hence, we have , using ( [ q2 ] ) and ( [ f1a ] ) , uniformly for } ] . by lemma [ lf ] , for or , with for and for ] and for ] .it follows by a standard argument that there exists a sequence such that w.h.p .\cup[{x_1}-{\varepsilon}_n,{x_1}+{\varepsilon}_n].\ ] ] moreover , w.h.p . ] .( we may also assume that is so small that and . 
) for ] , and thus ( [ q4 ] ) yields } \bigl(a(xn ) - xn\bigr)&=&(a-{{\theta}{_\mathsf{c}}}n)\bigl(1-\psi(c{x_0})+o(1)\bigr ) \nonumber\\[-4pt]\\[-12pt ] & & { } + \sqrt{(1-{{\theta}{_\mathsf{c}}})n } w_0(\psi(c{x_0 } ) ) + o_{\mathrm p}(n^{1/2 } ) .\nonumber\end{aligned}\ ] ] the cases [ cngauss- ] and [ cngauss+ ] are easily derived .we thus focus on [ cngauss0 ] .we then have , from ( [ q5 ] ) , } \bigl(a(xn ) - xn\bigr ) \\ & & \qquad= y\bigl(1 - \psi(c{x_0})\bigr ) + \sqrt{1-{{\theta}{_\mathsf{c } } } } w_0(\psi(c{x_0 } ) ) + o_{\mathrm p}(1)\end{aligned}\ ] ] and thus , since , where , } \bigl(a(xn ) - xn\bigr ) < 0\bigr ) \\ & & \qquad={\mathbb p}\bigl(y\bigl(1-\psi(c{x_0})\bigr)+ \sqrt{1-{{\theta}{_\mathsf{c } } } } w_0(\psi ( c{x_0}))<0\bigr ) + o_{\mathrm p}(1 ) \\ & & \qquad=1-\phi(y/{\sigma } ) + o_{\mathrm p}(1).\end{aligned}\ ] ] the result follows . to prove theorem [ sqrtn ] ( ) , we first show using the previous results that if we can activate vertices , then the activation spreads w.h.p . to the entire graphit remains to show that starting with a finite number of active vertices , the process activates vertices with a probability bounded away from and .this will be done using a branching process argument .[ lsqrtn ] suppose that for some . if , then w.h.p . for all with .this is easy to prove directly , but we prefer to view it as a corollary of our estimates for smaller . thus , let .we may assume and then , so , at least for large , and we may assume that . we may consider bootstrap percolation on and simultaneously , with the same initial set of size ; we use the description in section [ ssetup ] , starting with families of i.i.d .random indicators and where we may assume . then , using to denote variables for , and .we apply lemma [ lbulk ] to .the critical time for is [ see ( [ tc ] ) ] further , so , by ( [ bc ] ) , , and we may choose with .hence , lemma [ lbulk ] shows that w.h.p . for ] . since also , we have for all , and thus . for ( i )suppose , and let be some constant .the probability that a vertex is activated at a given time is by ( [ yik ] ) for any fixed , the random variables , form together with a random vector with the multinomial distribution with , , and . by ( [ samu ] ) , for , andit follows that for have a joint poisson limit , using the notation of remark [ rms ] we thus obtain and thus for and .since is arbitrary , we have shown for every finite .furthermore , for any fixed , and a standard argument shows that there exists a sequence such that , and thus . on the other hand , lemma [ lsqrtn ] with shows that .consequently , .it is clear that for every . to see that also , note that, see ( [ sjw ] ) , as .hence , there is some such that .since stochastically dominates for , it follows that if the process reaches without stopping , the continuation dominates ( up to a change of time ) a galton watson branching process with offspring distribution , which is supercritical and thus has a positive probability of living forever .hence , .proof of theorem [ sqrtnn ] it suffices to consider .thus assume , and consider the vertices activated in the first generation , that is , at time .there are such vertices .[ note that , see ( [ pi ] ) , . ] consequently , .let , so .it follows from chebyshev s inequality ( or chernoff s ) that w.h.p . . hence ,w.h.p . for all $ ] , . together with the trivial for and lemma [ lsqrtn ] , this shows that w.h.p . 
for all , and thus .the authors gratefully acknowledge the hospitality and the stimulating environment of institut mittag - leffler where the majority of this work was carried out during the program `` discrete probability , '' 2009 .the authors thank the referee for helpful suggestions .
bootstrap percolation on the random graph is a process of spread of `` activation '' on a given realization of the graph with a given number of initially active nodes . at each step those vertices which have not been active but have at least active neighbors become active as well . we study the size of the final active set . the parameters of the model are , besides ( fixed ) and ( tending to ) , the size of the initially active set and the probability of the edges in the graph . we show that the model exhibits a sharp phase transition : depending on the parameters of the model , the final size of activation with a high probability is either or it is . we provide a complete description of the phase diagram on the space of the parameters of the model . in particular , we find the phase transition and compute the asymptotics ( in probability ) for ; we also prove a central limit theorem for in some ranges . furthermore , we provide the asymptotics for the number of steps until the process stops .
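the activation process studied above is easy to simulate directly. the following is a minimal, self-contained c++ sketch of r-neighbor bootstrap percolation on one realization of the random graph: starting from a initially active vertices, any inactive vertex that has accumulated at least r active neighbors becomes active, and this is repeated until no unused active vertex remains. the function name, the queue-based bookkeeping and the particular values of n, p, r and a are illustrative assumptions of this sketch, not taken from the paper.

#include <iostream>
#include <queue>
#include <random>
#include <vector>

// simulate r-neighbor bootstrap percolation on one realization of G(n,p)
// with the first a vertices initially active; returns the final active count
int finalActiveSize(int n, double p, int r, int a, std::mt19937& rng) {
  std::bernoulli_distribution edge(p);
  std::vector<std::vector<int>> adj(n);
  for (int i = 0; i < n; ++i)
    for (int j = i + 1; j < n; ++j)
      if (edge(rng)) { adj[i].push_back(j); adj[j].push_back(i); }

  std::vector<int> marks(n, 0);       // number of active neighbors seen so far
  std::vector<bool> active(n, false);
  std::queue<int> unused;             // active vertices whose neighbors are not yet marked
  for (int i = 0; i < a; ++i) { active[i] = true; unused.push(i); }

  int activeCount = a;
  while (!unused.empty()) {
    const int v = unused.front();
    unused.pop();
    for (int u : adj[v]) {
      if (active[u]) continue;
      if (++marks[u] >= r) {          // u has reached r active neighbors
        active[u] = true;
        ++activeCount;
        unused.push(u);
      }
    }
  }
  return activeCount;
}

int main() {
  std::mt19937 rng(12345);
  const int n = 2000, r = 2, a = 40;
  const double p = 0.02;              // illustrative parameter choices only
  std::cout << "final active set size: " << finalActiveSize(n, p, r, a, rng)
            << " of " << n << " vertices\n";
  return 0;
}

re-running the sketch with different values of a or p gives a quick empirical feel for the sharp transition, described above, between a small final active set and almost complete activation.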
the lambert function is defined as the inverse function of the solution being given by or shortly since the mapping is not injective , no unique inverse of the function exists . as can be seen in fig .[ f : lambertw ] , the lambert function has two real branches with a branching point located at .the bottom branch , , is defined in the interval ] . the earliest mention of problem of eq .is attributed to euler . however , euler himself credited lambert for his previous work in this subject .the function started to be named after lambert only recently , in the last 10 years or so .the letter was chosen by the first implementation of the function in the maple computer software .recently , the function amassed quite a following in the mathematical community .its most faithful proponents are suggesting to elevate it among the present set of elementary functions , such as , , , etc .the main argument for doing so is the fact that it is the root of the simplest exponential polynomial function . while the lambert w function is simply called w in the mathematics software tool _ maple _ ,in the _ mathematica _ computer algebra framework this function is implemented under the name ` productlog ` ( in the recent versions an alias ` lambertw ` is also supported ) .there are numerous , well documented applications of in mathematics , physics , and computer science . herewe will give two examples that arise from the physics related to the pierre auger observatory .moyal function is defined as its inverse can be written in terms of the two branches of the lambert w function , and can be seen in fig .[ f : moyal - gh ] ( left ) . within the event reconstruction of the data taken by the pierre auger observatory , the moyal function is used for phenomenological recovery of the saturated signals from the photomultipliers .in astrophysics the gaisser - hillas function is used to model the longitudinal particle density in a cosmic - ray air showers .we can show that the inverse of the three - parametric gaisser - hillas function , is intimately related to the lambert w function . using rescale substitutions ,the gaisser - hillas function is modified into a function of one parameter only , the family of one - parametric gaisser - hillas functions is shown in fig .[ f : moyal - gh ] ( right ) . the problem of finding an inverse , for , can be rewritten into according to the definition , the two ( real ) solutions for are obtained from the two branches of the lambert w function , note that the branch or simply chooses the right or left side relative to the maximum , respectively .before moving to the actual implementation let us review some of the possible nimerical and analytical approaches . for and we can take the natural logarithm of and rearrange it , it is clear , that a possible analytical expression for exhibits a degree of self similarity .the function has multiple branches in the complex domain . due to the and conditions , the eq .represents the positive part of the principal branch , but as it turns out , in this form it is suitable for evaluation when , i.e. when . unrolling the self - similarity as a recursive relation , one obtains the following curious expression for , or in a shorthand of a continued logarithm , the above expression is clearly a form of successive approximation , the final result given by the limit , when it exists . 
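as a concrete illustration, the recursion above (w mapping to ln(x/w), obtained by taking logarithms of w e^w = x) can simply be truncated after a fixed number of steps; the sketch below does this for the principal branch in the regime x >= e, where the text notes the expansion is suitable. the starting value ln(x), the iteration depth and the sample points are arbitrary choices of this sketch, not part of the note.

#include <cmath>
#include <iostream>

// truncated continued logarithm for the principal branch W0,
// i.e. repeated application of w <- ln(x / w); intended for x >= e
double lambertW0ContinuedLog(const double x, const int depth = 40) {
  double w = std::log(x);             // initial guess (assumption of this sketch)
  for (int i = 0; i < depth; ++i)
    w = std::log(x / w);
  return w;
}

int main() {
  const double samples[] = {10.0, 100.0, 1000.0};
  for (const double x : samples) {
    const double w = lambertW0ContinuedLog(x);
    std::cout << "x = " << x << "  W0 ~ " << w
              << "  residual w*exp(w) - x = " << w * std::exp(w) - x << "\n";
  }
  return 0;
}

this is of course only a slowly converging fixed-point scheme; the implementation discussed below replaces it with dedicated initial approximations followed by a single halley or fritsch step.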
for and we can multiply both sides of eq .with , take logarithm , and rewrite it to get a similar expansion for the branch , again , this leads to a similar recursive expression , or as a continued logarithm , for this continued logarithm we will use the symbol }(x) ] on the left and logarithmic interval ] to at least 5 decimal places and to at least 3 decimal places in the whole definition range .the }(x) ] and }(x) ] interval where the halley method requires another step of the iteration .for that reason we have decided to use only ( one step of ) the fritsch iteration in the c++ implementation of the lambert function . in fig .[ f : approx - w-1 ] ( left ) the same procedure is shown for the branch .the final approximation is accurate to at least 5 decimal places in the whole definition range ] is from eq ., is from eq . , and }(x)y={\rm w}(x)\fx = ye^y\fye^y\f{\rm w}_0\f{\rm w}_{-1}\f{\rm w}_0(x)\f[-1/e,\infty]\f{\rm w}_{-1}(x)\f[-1/e,0]\f$. accuracy is the nominal double type resolution ( 16 decimal places ) .this program is free software : you can redistribute it and/or modify it under the terms of the gnu general public license as published by the free software foundation , either version 3 of the license , or ( at your option ) any later version .this program is distributed in the hope that it will be useful , but without any warranty ; without even the implied warranty of merchantability or fitness for a particular purpose .see the gnu general public license for more details .template < > inline double branchpointpolynomial<7>(const double p ) { return -1 + p*(1 + p*(-1./3 + p*(11./72 + p*(-43./540 + p*(769./17280 + p*(-221./8505 + p*(680863./43545600 ) ) ) ) ) ) ) ; } template < > inline double branchpointpolynomial<8>(const double p ) { return -1 + p*(1 + p*(-1./3 + p*(11./72 + p*(-43./540 + p*(769./17280 + p*(-221./8505 + p*(680863./43545600 + p*(-1963./204120 ) ) ) ) ) ) ) ) ; } template < > inline double branchpointpolynomial<9>(const double p ) { return -1 + p*(1 + p*(-1./3 + p*(11./72 + p*(-43./540 + p*(769./17280 + p*(-221./8505 + p*(680863./43545600 + p*(-1963./204120 + p*(226287557./37623398400 . 
) ) ) ) ) ) ) ) ) ; } template < > inline double asymptoticexpansion<3>(const double a , const double b ) { const double ia = 1 / a ; return a - b + b / a * ( 1 + ia * ( 0.5*(-2 + b ) + ia * 1/6.*(6 + b*(-9 + b*2 ) ) ) ) ; } template < > inline double asymptoticexpansion<4>(const double a , const double b ) { const double ia = 1 / a ; return a - b + b / a * ( 1 + ia * ( 0.5*(-2 + b ) + ia * ( 1/6.*(6 + b*(-9 + b*2 ) ) + ia * 1/12.*(-12 + b*(36 + b*(-22 + b*3 ) ) ) ) ) ) ; } template < > inline double asymptoticexpansion<5>(const double a , const double b ) { const double ia = 1 / a ; return a - b + b / a * ( 1 + ia * ( 0.5*(-2 + b ) + ia * ( 1/6.*(6 + b*(-9 + b*2 ) ) + ia * ( 1/12.*(-12 + b*(36 + b*(-22 + b*3 ) ) ) + ia * 1/60.*(60 + b*(-300 + b*(350 + b*(-125 + b*12 ) ) ) ) ) ) ) ) ; } //asymptotic expansion //corless et al .1996 , de bruijn ( 1981 ) template < int order > static double asymptoticexpansion(const double x ) { const double logsx = log(esign * x ) ; const double logslogsx = log(esign * logsx ) ; return lambertwdetail::asymptoticexpansion < order>(logsx , logslogsx ) ; } template < > template < > inline double branch<0>::rationalapproximation<1>(const double x ) { // branch 0 , valid for [ -0.31,0.3 ] return x * ( 1 + x * ( 5.931375839364438 + x * ( 11.392205505329132 + x * ( 7.338883399111118 + x*0.6534490169919599 ) ) ) ) / ( 1 + x * ( 6.931373689597704 + x * ( 16.82349461388016 + x * ( 16.43072324143226 + x*5.115235195211697 ) ) ) ) ; } template < > template < > inline double branch<0>::rationalapproximation<2>(const double x ) { // branch 0 , valid for [ -0.31,0.5 ] return x * ( 1 + x * ( 4.790423028527326 + x * ( 6.695945075293267 + x * 2.4243096805908033 ) ) ) / ( 1 + x * ( 5.790432723810737 + x * ( 10.986445930034288 + x * ( 7.391303898769326 + x * 1.1414723648617864 ) ) ) ) ; } template < > template < > inline double branch<0>::rationalapproximation<3>(const double x ) { // branch 0 , valid for [ 0.3,7 ] return x * ( 1 + x * ( 2.4450530707265568 + x * ( 1.3436642259582265 + x * ( 0.14844005539759195 + x * 0.0008047501729129999 ) ) ) ) / ( 1 + x * ( 3.4447089864860025 + x * ( 3.2924898573719523 + x * ( 0.9164600188031222 + x * 0.05306864044833221 ) ) ) ) ; } template < > template < > inline double branch<-1>::rationalapproximation<4>(const double x ) { //branch -1 , valid for [ -0.3,-0.05 ] return ( -7.814176723907436 + x * ( 253.88810188892484 + x * 657.9493176902304 ) ) / ( 1 + x * ( -60.43958713690808 + x * ( 99.98567083107612 + x * ( 682.6073999909428 + x * ( 962.1784396969866 + x * 1477.9341280760887 ) ) ) ) ) ; } template < > inline double branch<0>::approximation(const double x ) { if ( x < -0.32358170806015724 ) { if ( x < -kinve ) return numeric_limits < double>::quiet_nan ( ) ; else if ( x < -kinve+1e-5 ) return branchpointexpansion<5>(x ) ; else return branchpointexpansion<9>(x ) ; } else { if ( x < 0.14546954290661823 ) return rationalapproximation<1>(x ) ; else if ( x < 8.706658967856612 ) return rationalapproximation<3>(x ) ; else return asymptoticexpansion<5>(x ) ; } } template < > inline double branch<-1>::approximation(const double x ) { if ( x < -0.051012917658221676 ) { if ( x < -kinve+1e-5 ) { if ( x < -kinve ) return numeric_limits < double>::quiet_nan ( ) ; else return branchpointexpansion<5>(x ) ; } else { if ( x < -0.30298541769 ) return branchpointexpansion<9>(x ) ; else return rationalapproximation<4>(x ) ; } } else { if ( x < 0 ) return logrecursion<9>(x ) ; else if ( x = = 0 ) return -numeric_limits < double>::infinity ( ) ; else return 
numeric_limits < double>::quiet_nan ( ) ; } } inline double halleystep(const double x , const double w ) { const double ew = exp(w ) ; const double wew = w * ew ; const double wewx = wew - x ; const double w1 = w + 1 ; return w - wewx / ( ew * w1 - ( w + 2 ) * wewx/(2*w1 ) ) ; } inline double fritschstep(const double x , const double w ) { const double z = log(x / w ) - w ; const double w1 = w + 1 ; const double q = 2 * w1 * ( w1 + ( 2/3.)*z ) ; const double eps = z / w1 * ( q - z ) / ( q - 2*z ) ; return w * ( 1 + eps ) ; } template < double iterationstep(const double x , const double w ) > inline double iterate(const double x , double w , const double eps = 1e-6 ) { for ( int i = 0 ; i < 100 ; + + i ) { const double ww = iterationstep(x , w ) ; if ( fabs(ww - w ) < = eps ) return ww ; w = ww ; } cerr < < " convergence not reached . "< < endl ; return w ; } template < > double lambertw<0>(const double x ) { if ( fabs(x ) > 1e-6 & & x >-lambertwdetail::kinve + 1e-5 ) return lambertwdetail : : iterator <lambertwdetail::fritschstep > : : depth<1 > : : recurse(x , lambertwapproximation<0>(x ) ) ; else return lambertwapproximation<0>(x ) ; } template < > double lambertw<-1>(const double x ) { if ( x > -lambertwdetail::kinve + 1e-5 ) return lambertwdetail : : iterator <lambertwdetail::fritschstep > : : depth<1 > : : recurse(x , lambertwapproximation<-1>(x ) ) ; else return lambertwapproximation<-1>(x ) ; }
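a possible usage sketch for the listing above is given next; it assumes the two specializations are compiled together with a primary template declaration for lambertw (present in the full source but not in this excerpt), and it only checks the defining identity w e^w = x rather than comparing against tabulated values.

#include <cmath>
#include <iostream>

// primary template assumed to be declared in the full source, repeated here
template <int branch> double lambertw(const double x);

int main() {
  const double x = 0.2;
  const double w0 = lambertw<0>(x);     // principal branch, defined on [-1/e, inf)
  const double wm = lambertw<-1>(-x);   // bottom branch, defined on [-1/e, 0)
  std::cout << "W0(0.2)   = " << w0 << ", residual " << w0 * std::exp(w0) - x << "\n";
  std::cout << "W-1(-0.2) = " << wm << ", residual " << wm * std::exp(wm) + x << "\n";
  return 0;
}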
this short note presents the lambert function and its possible application in the framework of physics related to the pierre auger observatory . the actual numerical implementation in c++ consists of halley s and fritsch s iteration with branch - point expansion , asymptotic series and rational fits as initial approximations . * having fun with lambert function *
simultaneously recovering the location of a robot and a map of its environment from sensor readings is a fundamental challenge in robotics .well - known approaches to this problem , such as square root smoothing and mapping ( sam ) , have focused on regression - based methods that exploit the sparse structure of the problem to efficiently compute a solution .the main weakness of the original sam algorithm was that it was a _ batch _method : all of the data must be collected before a solution can be found . for a robot traversing an environment ,the inability to update an estimate of its trajectory online is a significant drawback . in response to this weakness, developed a critical extension to the batch sam algorithm , incremental smoothing and mapping ( isam ) , that overcomes this problem by _ incrementally _ computing a solution .the main drawback of isam , was that the approach required costly periodic batch steps for variable reordering to maintain sparsity and relinearization .this approach was extended in isam 2.0 , which employs an efficient data structure called the _ bayes tree _ to perform incremental variable reordering and just - in - time relinearization , thereby eliminating the bottleneck caused by batch variable reordering and relinearization .the isam 2.0 algorithm and its extensions are widely considered to be state - of - the - art in robot trajectory estimation and mapping .the majority of previous approaches to trajectory estimation and mapping , including the smoothing - based sam family of algorithms , have formulated the problem in discrete time .however , discrete - time representations are restrictive : they are not easily extended to trajectories with irregularly spaced waypoints or asynchronously sampled measurements .a continuous - time formulation of the sam problem where measurements constrain the trajectory at any point in time , would elegantly contend with these difficulties .viewed from this perspective , the robot trajectory is a _ function _ , that maps any time to a robot state. the problem of estimating this function along with landmark locations has been dubbed _ simultaneous trajectory estimation and mapping _ ( steam ) .tong et al . proposed a gaussian process ( gp ) regression approach to solving the steam problem .while their approach was able to accurately model and interpolate asynchronous data to recover a trajectory and landmark estimate , it suffered from significant computational challenges : naive gaussian process approaches to regression have notoriously high space and time complexity .additionally , tong et al.s approach is a _ batch _ method , so updating the solution necessitates saving all of the data and completely resolving the problem . in order to combat the computational burden ,tong et al.s approach was extended in barfoot et al . to take advantage of the sparse structure inherent in the steam problem .the resulting algorithm significantly speeds up solution time and can be viewed as a continuous - time analog of dellaert s original square - root sam algorithm .unfortunately , like sam , barfoot et al.s gp - based algorithm remains a batch algorithm , which is a disadvantage for robots that need to continually update the estimate of their trajectory and environment . 
in this work, we provide the critical extensions necessary to transform the existing gaussian process - based approach to solving the steam problem into an extremely efficient incremental approach .our algorithm elegantly combines the benefits of gaussian processes and isam 2.0 . like the gp regression approaches to steam , our approach can model continuous trajectories , handle asynchronous measurements , and naturally interpolate states to speed up computation and reduce storage requirements , and , like isam 2.0, our approach uses a bayes tree to efficiently calculate a _maximum a posteriori _( map ) estimate of the gp trajectory while performing incremental factorization , variable reordering , and just - in - time relinearization .the result is an online gp - based solution to the steam problem that remains computationally efficient while scaling up to large datasets .we begin by describing how the simultaneous trajectory estimation and mapping ( steam ) problem can be formulated in terms of gaussian process regression .following tong et al . and barfoot et al . , we represent robot trajectories as functions of time sampled from a gaussian process : here , is the continuous - time trajectory of the robot through state - space , represented by a gaussian process with mean and covariance functions .we next define a finite set of measurements : the measurement can be any linear or nonlinear functions of a set of related variables plus some gaussian noise .the related variables for a range measurement are the robot state at the corresponding measurement time and the associated landmark location .we assume the total number of measurements is , and the number of trajectory states at measurement times are . based on the definition of gaussian processes , any finite collection of robot states has a joint gaussian distribution .so the robot states at measurement times are normally distributed with mean and covariance .^\intercal\\ & \bm{\mu } = [ \begin{array}{ccc } \bm{\mu}(t_1)^{\intercal } & \hdots & \bm{\mu}(t_m)^{\intercal}\end{array}]^\intercal , \hspace{12pt}\bm{\mathcal{k}}_{ij } = \bm{\mathcal{k}}(t_i , t_j ) \end{split}\ ] ] note that any point along the continuous - time trajectory can be estimated from the gaussian process model .therefore , the trajectory does not need to be discretized and robot trajectory states do not need to be evenly spaced in time , which is an advantage of the gaussian process approach over discrete - time approaches ( e.g. dellaert s square - root sam ) . the landmarks which represent the map are assumed to conform to a joint gaussian distribution with mean and covariance ( eq . [ eqn_landmark_dist ] ) . the prior distribution of the combined state that consists of robot trajectory states at measurement times and landmarks is , therefore , a joint gaussian distribution ( eq . 
[ eqn_prior_dist ] ) .^\intercal \label{eqn_landmark_dist}\\ & \bm{\theta } \sim \mathcal{n}(\bm{\eta } , \bm{\mathcal{p } } ) , \hspace{12pt}\bm{\eta } = [ \begin{array}{cc } \bm{\mu}^\intercal & \bm{d}^\intercal\end{array}]^\intercal , \hspace{12pt } \bm{\mathcal{p } } = \begin{bmatrix } \bm{\mathcal{k } } & \\ & \bm{w } \end{bmatrix } \label{eqn_prior_dist}\end{aligned}\ ] ] to solve the steam problem , given the prior distribution of the combined state and the likelihood of measurements , we compute the _ maximum a posteriori _ ( map ) estimate of the combined state _ conditioned _ on measurements via bayes rule : where the norms are mahalanobis norms defined as : , and and are the mean and covariance of the measurements collected , respectively : ^\intercal\\ \bm{r } & = \text{diag}(\bm{r}_1 , \bm{r}_2 , \hdots , \bm{r}_n)\end{aligned}\ ] ] because both covariance matrices and are positive definite , the objective in eq .[ eqn_map ] corresponds to a least squares problem .consequently , if some of the measurement functions are nonlinear , this becomes a nonlinear least squares problem , in which case iterative methods including gauss - newton and levenberg - marquardt can be utilized .a linearization of a measurement function at current state estimate can be accomplished by a first - order taylor expansion : combining eq .[ eqn_measurement_linearization ] with eq .[ eqn_map ] , the optimal increment at the current combined state estimate is where is the measurement jacobian matrix : to solve the linear least squares problem in eq .[ eqn_map_linearized ] , we take the derivative with respect to , and set it to zero , which gives us embedded in a set of linear equations with covariance the positive definite matrix is the _ a posteriori _ information matrix , which we label . to solve this set of linear equations for , we do not actually have to calculate the inverse . instead, factorization - based methods can provide a fast , numerically stable solution .for example , can be found by first performing a cholesky factorization , and then solving and by back substitution . at each iterationwe perform a _ batch _state estimation update and repeat the process until convergence . if is dense , the time complexity of a cholesky factorization and back substitution are and respectively , where .however , if has sparse structure , then the solution can be found much faster .for example , for a narrowly banded matrix , the computation time is instead of .fortunately , we can guarantee sparsity for the steam problem ( see section [ subsec_sparse_gp_regression ] below ) .an advantage of the gaussian process representation of the robot trajectory is that any trajectory state can be interpolated from other states by computing the posterior mean : with ^\intercal \quad \text{and}\nonumber\\ \bm{\mathcal{k}}(t ) & = [ \begin{array}{ccc } \bm{\mathcal{k}}(t , t_1 ) & \hdots & \bm{\mathcal{k}}(t , t_m ) \end{array}].\end{aligned}\ ] ] by utilizing interpolation , we can reduce the number of robot trajectory states that we need to estimate in the optimization procedure . for simplicity ,assume , the set of the related variables of the measurement according to the model ( eq . [ eqn_measurement_model ] ) , is .then , after interpolation , eq . 
[ eqn_measurement_linearization ] becomes : by employing eq .[ eqn_interpolated_measurement_linearization ] during optimization , we can make use of measurement without explicitly estimating the trajectory states that it relates to .we exploit this advantage to greatly speed up the solution to the steam problem in practice ( section [ sec_experiment ] ) .the efficiency of the gaussian process gauss - newton algorithm presented in section [ sec_trajest ] is heavily dependent on the choice of kernel .it is well - known that if the information matrix is sparse , then it is possible to very efficiently compute the solution to eq .[ eqn_linear_equations ] .barfoot et al .suggest a kernel matrix with a sparse inverse that is well - suited to the simultaneous trajectory estimation and mapping problem .in particular , barfoot et al . show that is exactly block - tridiagonal when the gp is assumed to be generated by linear , time - varying ( ltv ) stochastic differential equation ( sde ) which we describe here : where is trajectory , is known exogenous input , is process noise , and is time - varying system matrix .the process noise is modeled by a gaussian process , and is the _ dirac delta function_. ( see for details ) .we consider a specific case of this model in the experimental results in section [ subsec_synthetic_experiment ] . assuming the gp is generated by eq .[ eqn_sde ] , the measurements are landmark and odometry measurements , and the variables are ordered in xl ordering xl ordering is an ordering where process variables come before landmarks variables . ] , the sparse information matrix becomes \end{aligned}\ ] ] where is block - tridiagonal and is block - diagonal .s density depends on the frequency of landmark measurements , and how they are taken .see fig .[ fig_original_info_mat ] for an example . when the gp is generated by ltv sde , barfoot et al .prove that in eq .[ eqn_batch_query ] has a specific sparsity pattern , only two column blocks that correspond to trajectory states at and , where , are nonzero . in other words , is an affine function of only two nearby states and : thus , it only takes time to query any using eq .[ eqn_sparse_query ] .moreover , because interpolation of a state is only determined by the two nearby states , measurement interpolation in eq .[ eqn_interpolated_measurement_linearization ] can be significantly simplified ..24 with xl ordering[foot_xl_ordering ] and symamd ordering[foot_symamd_ordering ] .both sparse matrices have the same number of non - zero elements , yet the second matrix can be factored much more efficiently due to the heuristic ordering of the matrix columns .( see table [ tab_chol ] ) . for illustration ,only 200 trajectory states are shown here.,title="fig : " ] .24 with xl ordering[foot_xl_ordering ] and symamd ordering[foot_symamd_ordering ] .both sparse matrices have the same number of non - zero elements , yet the second matrix can be factored much more efficiently due to the heuristic ordering of the matrix columns .( see table [ tab_chol ] ) . for illustration , only 200 trajectory states are shown here.,title="fig : " ]previous work on batch continuous - time trajectory estimation as sparse gaussian process regression assumes that the information matrix is sparse ( eq . [ eqn_sparsei ] ) and applies standard block elimination to factor and solve eq .[ eqn_linear_equations ] . despite the sparsity of , for large numbers of landmarksthis process can be very inefficient . 
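why ordering matters for the cholesky factor can be seen on a toy problem. the sketch below is a hypothetical illustration using the eigen library (not the code used in this paper): the same small symmetric positive-definite "information matrix" is factorized once with the natural ordering and once with an approximate-minimum-degree reordering. the arrow-head sparsity pattern used here is a standard example where the natural ordering produces a completely filled-in factor while a good ordering keeps it sparse; both solvers return the same solution.

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main() {
  // small SPD "arrow-head" matrix: dense first row/column plus a strong diagonal
  const int n = 8;
  std::vector<Eigen::Triplet<double>> entries;
  for (int i = 0; i < n; ++i) {
    entries.emplace_back(i, i, 10.0);
    if (i > 0) {
      entries.emplace_back(0, i, 1.0);
      entries.emplace_back(i, 0, 1.0);
    }
  }
  Eigen::SparseMatrix<double> info(n, n);     // stands in for the information matrix
  info.setFromTriplets(entries.begin(), entries.end());
  const Eigen::VectorXd rhs = Eigen::VectorXd::Ones(n);

  // same sparse Cholesky factorization, two different column orderings
  Eigen::SimplicialLLT<Eigen::SparseMatrix<double>, Eigen::Lower,
                       Eigen::NaturalOrdering<int>> plain;
  Eigen::SimplicialLLT<Eigen::SparseMatrix<double>, Eigen::Lower,
                       Eigen::AMDOrdering<int>> reordered;
  plain.compute(info);
  reordered.compute(info);

  const Eigen::VectorXd x1 = plain.solve(rhs);
  const Eigen::VectorXd x2 = reordered.solve(rhs);
  // the solutions agree; only the sparsity of the Cholesky factor differs
  std::cout << "difference between the two solutions: " << (x1 - x2).norm() << "\n";
  return 0;
}

the same idea, applied with symamd or block symamd to the much larger steam information matrix, is what the factorization-time comparison discussed next quantifies.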
inspired by square root sam , which uses variable reordering for efficient cholesky factorization in a discrete - time context, we show that factorization - time can be dramatically improved by matrix column reordering in the sparse gaussian process context as well .it is reasonable to base our approach on sam because the information matrix and factor graph of the sparse gp has structure similar to the sam formulations of the problem , and the intuitions from previous discrete - time approaches apply here .if the cholesky decompositions are performed naively , fill - in can occur , where entries that are zero in the information matrix become non - zero in the cholesky factor .this occurs because the cholesky factor of a sparse matrix is guaranteed to be sparse for some variable orderings , but not all variable orderings .therefore , we want to find a good variable ordering so that the cholesky factor is sparse .although finding the optimal ordering for a symmetric positive definite matrix is np - complete , good heuristics do exist .one such heuristic is symmetric approximate minimum degree permutation ( symamd)symamd is a variant of column approximate minimum degree ordering ( colamd ) on positive definite matrix . ] . to demonstrate the benefits of variable reordering, we constructed a synthetic example and compared different approaches .the example , which is explained in detail in section [ subsec_synthetic_experiment ] , consists of 1,500 time steps with trajectory states , ^\intercal ] , and with odometry and range measurements .the total number of landmarks is 298 .the structure of the information matrix and cholesky factor , with and without variable reordering , are compared in fig .[ fig_info_mat ] and fig .[ fig_chol ] . although variable reordering does not change the sparsity of the information matrix ( fig .[ fig_info_mat ] ) , it dramatically increases the sparsity of the cholesky factor ( fig .[ fig_chol ] ) . table [ tab_chol ] demonstrates this clear benefit of reordering .the cholesky factor after symamd ordering contains 10.6% non - zeroes of xl ordering [ foot_xl_ordering ] , and takes 2.83% of the time , which are significant improvements in both time and space complexity .we also experimented with block symamd , which exploits domain knowledge to group together variables belonging to a particular trajectory state or landmark location before performing symamd and empirically further improves performance ..24 of . in ( a ), is computed with xl ordering[foot_xl_ordering ] , which exhibits fill - in . when computed with symamd ordering in ( b ) , is more sparse . for illustration ,only 200 states are shown here.,title="fig : " ] .24 of . in ( a ), is computed with xl ordering[foot_xl_ordering ] , which exhibits fill - in . when computed with symamd ordering in ( b ) , is more sparse . for illustration ,only 200 states are shown here.,title="fig : " ] .cost of cholesky factorization with different ordering methods including ordering time [ cols= " < , < , < , < " , ] this dataset consists of an exploration task with 1,500 time steps .each time step contains a trajectory state ^\intercal ] , an odometry measurement , and a range measurement related to a nearby landmark .the total number of landmarks is 298 .the trajectory is randomly sampled from a gaussian process generated from white noise acceleration , i.e. constant velocity , and with zero mean . 
where note that velocity must to be included in trajectory state to represent the motion in ltv sde form .the odometry and range measurements with gaussian noise are specified in eq . [ eqn_odometry_measurements ] and eq . [ eqn_range_measurements ] respectively . where consists of the robot - oriented velocity and heading angle velocity with gaussian noise , and is the distance between the robot and a specific landmark at with gaussian noise .we compare the computation time of the three approaches ( pb , pbvr and btgp ) in fig .[ fig_syn_performance ] .the incremental gaussian process regression ( btgp ) offers significant improvements in computation time compared to the batch approaches ( pbvr and pb ) .we also demonstrate that btgp can further increase speed over a naive application of the bayes tree ( e.g. isam 2.0 ) without sacrificing much accuracy by leveraging interpolation . to illustrate the trade - off between the accuracy and time efficiency due to interpolation , we plot rmse of distance errors and the total computation time by varying the time step difference ( the rate of interpolation ) between estimated states .the second experiment evaluates our approach on real data from a freely available range - only slam dataset collected from an autonomous lawn - mowing robot .the `` plaza '' dataset consists of odometer data and range data to stationary landmarks collected via time - of - flight radio nodes .( additional details on the experimental setup can be found in . )ground truth paths are computed from gps readings and have 2 cm accuracy according to .the environment , including the locations of the landmarks and the ground truth paths , are shown in fig .[ fig_plaza_estimate ] .the robot travelled 1.9 km , occupied 9,658 poses , and received 3,529 range measurements , while following a typical path generated during mowing .the dataset has sparse range measurements , but contains odometry measurements at each time step .the results of incremental btgp are shown in fig .[ fig_plaza_estimate ] and demonstrate that we are able to estimate the robot s trajectory and map with a very high degree of accuracy . as in section [ subsec_synthetic_experiment ] , performance of three approaches periodic batch relinearization ( pb ) , periodic batch relinearization with variable reordering ( pbvr ) and incremental bayes tree ( btgp )are compared in fig .[ fig_plaza_performance ] . in this dataset ,the number of landmarks is 4 , which is extremely small relative to the number of trajectory states , so there is no performance gain from reordering .however , the bayes tree - based approach dramatically outperforms the other two approaches .as the problem size increases , there is negligible increase in computation time , even for close to 10,000 trajectory states .the third experiment evaluates our approach on the victoria park dataset , which consists of range - bearing measurements to landmarks , and speed and steering odometry measurements .the data was collected from a vehicle equipped with a laser sensor driving through the sydney s victoria park .the environment contains a high number of trees as landmarks .the vehicle travelled km in 26 minutes . after repeated measurements , taken when the vehicle is stationary ,are dropped , the dataset consists of 6,969 time steps and 3,640 range - bearing measurements relative to 151 landmarks . 
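the range measurements used in the synthetic and plaza experiments above are plain euclidean distances to landmarks corrupted by gaussian noise. the following is a minimal sketch of that measurement model together with the jacobians a gauss-newton linearization needs; the notation and function names are our own, and the sigma value is arbitrary.

```python
import numpy as np

def range_measurement(pose_xy, landmark_xy, sigma=0.1, rng=None):
    """Noisy distance between the robot position and one landmark."""
    d = np.linalg.norm(landmark_xy - pose_xy)
    noise = 0.0 if rng is None else rng.normal(0.0, sigma)
    return d + noise

def range_jacobians(pose_xy, landmark_xy):
    """Jacobians of the noise-free range w.r.t. robot position and landmark position."""
    delta = landmark_xy - pose_xy
    d = np.linalg.norm(delta)
    return -delta / d, delta / d        # d(range)/d(pose), d(range)/d(landmark)

# quick finite-difference check of the pose Jacobian
p, l = np.array([1.0, 2.0]), np.array([4.0, 6.0])
J_pose, _ = range_jacobians(p, l)
eps = 1e-6
numeric = np.array([(range_measurement(p + eps * e, l) - range_measurement(p, l)) / eps
                    for e in np.eye(2)])
assert np.allclose(J_pose, numeric, atol=1e-5)
```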
the bearing measurement is specified in eq .[ eqn_bearing_measurements ] , as the relative angle from vehicle heading to the landmark direction with gaussian noise where ^\intercal$ ] is location of landmark . results , shown in figure [ fig_victoria_park ] , further demonstrate the advantages of btgp . as seen from the upper right plot ,variable reordering drastically reduces computation time when used within batch optimization ( pbvr ) , and even further in the incremental algorithm ( btgp ) .we have introduced an incremental sparse gaussian process regression algorithm for computing the solution to the continuous - time simultaneous trajectory estimation and mapping ( steam ) problem .the proposed algorithm elegantly combines the benefits of gaussian process - based approaches to steam while simultaneously employing state - of - the - art innovations from incremental discrete - time algorithms for smoothing and mapping .our empirical results show that by parameterizing trajectories with a small number of states and utilizing gaussian process interpolation , our algorithm can realize large gains in speed over isam 2.0 with very little loss in accuracy ( e.g. reducing computation time by while increasing rmse by only 8 cm on the autonomous lawnmower dataset ) .
recent work on simultaneous trajectory estimation and mapping ( steam ) for mobile robots has found success by representing the trajectory as a gaussian process . gaussian processes can represent a continuous - time trajectory , elegantly handle asynchronous and sparse measurements , and allow the robot to query the trajectory to recover its estimated position at any time of interest . a major drawback of this approach is that steam is formulated as a _ batch _ estimation problem . in this paper we provide the critical extensions necessary to transform the existing batch algorithm into an extremely efficient incremental algorithm . in particular , we are able to vastly speed up the solution time through efficient variable reordering and incremental sparse updates , which we believe will greatly increase the practicality of gaussian process methods for robot mapping and localization . finally , we demonstrate the approach and its advantages on both synthetic and real datasets .
towards the end of his life the great astrophysicist subrahmanyan chandrashekar wrote a very interesting , educational , and entertaining book which was a reader s guide to newton s principia ( newton s principia for the common reader , clarendon press , oxford , 1995 ) .chandra characterized the nature of his project in the prologue as an undertaking by a practising scientist to read and comprehend the intellectual achievement that the _ principia _ is .the resulting book is a wonderful translation of newton s arguments into modern language and mathematical notation accompanied by historical and physical commentary .there is a section of chandra s book in which he claims to find a small error in the principia .this is the sort of claim that naturally draws one s attention .however we believe that there is a mistake of interpretation underlying chandra s claim and that the principia is correct as it stands .this short paper describes chandra s misinterpretation of a geometric construction of newton and gives an outline of newton s demonstration by following the standard english version of the _ principia _ line by line and converting it into modern mathematical notation in the spirit of chandra s book .first a brief description of the issue , which concerns newton s prescription for determining the orbits under an inverse cube force ( proposition xli , corollary , iii , page 132 , section 50 in chandrashekar ) which appears about a quarter of the way through the _ principia_. after newton has introduced his laws of motion and derived kepler s laws , he is in the midst of deriving properties of orbits for various types of centripetal forces , employing such ideas as energy conservation and the clear formulation of initial value problems . in corollaryiii to proposition xli concerning the orbits under a centripetal force , newton outlines a geometric construction for determining the orbits under an inverse cube force .this construction relies on the use of an auxiliary conic section , the curves vrs in the figure shown below . according to newton s constructionwhen the auxiliary curve is a hyperbola , the constructed orbit ( the curves vpq ) spirals in towards the center and when the auxiliary curve is an ellipse the constructed orbit is a hyperbola .this introduction of an auxiliary curve has led to some confusion and led chandrashekar to assert , in section 50 of his book , that the correct result is the other way around and that the statement in the _ principia _ must be a misprint .it seems that the introduction of an auxiliary curve led chandrashekar to a slight misinterpretation of the argument which it is the purpose of this note to clarify . in this paperall quotes from the _ principia _ , which was of course originally written in latin , are from the english translation sir isaac newton s mathematical principles of natural philosophy and his system of the world , translated by a. motte , revised by f cajori , university of california press , berkeley , 1934 ) . golubfig1.ps + fig .1 ) the curve vrs is tha auxiliary conic section used by newton to construct the orbit vpq . in fig .( a ) the auxiliary curve is an ellipse leading to an orbit , whose radius is given by ct ( rt is the tangent to vrs ) which grows continuously as the auxiliary point r(x , y ) moves down the ellipse to s. in fig .( b ) the auxiliary curve vrs is a hyperbola leading to an orbit vpq whose radius ( ct ) decreases as the auxiliary point r(x , y ) moves towards s. 
the area vrc proportional to the angle along the orbit , vcp , is indicated .see text below .( adapted from chandrashekar ) in his discussion chandra quotes the following passage from the principia : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .. therefore if the conic section vrs be a , the body will descend to the centre ( _ along vpq _ ) ; but if it be an , it will ascend continually and go farther and farther _ in infinitum_. and on the contrary .( parentheses added ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ chandra then comments ( page 180 ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ on the face of it , one would conclude that the words hyperbola andellipse ( underlined ) have been interchanged by a simple oversight .certainly , an orbit which is a hyperbola in the ( ) plane ascends to infinity while tends to a finite limit , while an orbit which is an ellipse on the ( ) plane descends to the centre in a spiral orbit ... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in this statement and in the discussion on p. 
176it is clear that chandra is interpreting the diagrams given by newton as being orbits in the plane but newton never mentions time in his discussion and we will see that if one interprets the curves vrs as auxiliary curves and vpq as the orbits the statement in the prinicpia is correct as it stands . at the beginning of section 50 .chandra states the word body is not used with its standard meaning .newton states _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .. from the place v there sets out a body with a just velocity ... that body will proceed in a curve vpq ... , ( page 132 ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ( see complete quote below ) .however if we accept vpq as the trajectory , then there does not appear to be anything wrong with this use of the word body . in the next sectionwe repeat chandra s calculation of the orbit and in section 3 we show that following newton s prescription leads to the same orbit .following chandra ( equation 1 , p. 174 ) we write the conservation of energy where the first term is the potential for the force and the second term is the centrifugal potential .( we took at .conservation of angular momentum yields considering first ( [ 1 ] ) and the initial condition yields evaluating the integral ( [ 3a ] ) we find ( taking at ) which is the equation of a hyperbola ( chandrashekar equ . 25 ,p 178 ) . using conservation of angular momentum ( [ 2 ] ) or evaluating the integral ( [ 3b ] )we find that the orbit in the plane is indeed a hyperbola as given by chandra . in the case of is easy to show that and of course chandra is correct when he says the orbit of equations ( [ 3aa ] ) is a hyperbola in the ( ) plane and that of equations ( [ 4aa ] ) is an ellipse in that plane but that does not form any part of newton s argument . in the next section we look directly at newton s suggested construction .in the following we will calculate the orbits following newton s prescription step by step .the references are to the figure the quoted passages are from p.132 principia , 3rd ed , etc .... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \(i ) _ if to the centre c and the principal vertex v , there be described a conic section vrs ; ... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ designating the point by the coordinates with axis along cv and axis directed to the right from the origin c , we can write the equations for the conic section taken as an ellipse ( lhs fig) or , alternately as a hyperbola ( rhs fig) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \(ii ) _ ... 
and from any point thereof , as r , there be drawn the tangent rt meeting the axis cv indefinitely produced in the point t ; ... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ from ( [ 5 ] ) and ( [ 6 ] ) we calculate with the top ( bottom ) sign applying to the ellipse ( [ 5 ] ) ( hyperbola [ 6 ] ) . then the distance ct is given by _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \(iii ) _ ... and then joining cr there be drawn the right _( meaning straight not perpendicular ) _ line cp , equal to the abscissa ct , ... _ the radius relative to the origin c of the point p is: ( iv ) _ .. making an angle vcp proportional to the sector ( area ) vcr ; ... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ thus _ _ where is the area of the triangle formed by cr , the x axis , and a vertical line through r. for the hyperbola ( lower sign , [ 6 ] ) we have where we used ( [ 6 ] ) .thus and which is equation ( [ 4a ] ) .thus we see that the curve generated by p following newton s prescription ( quotes ( iii ) and ( iv ) above ) for the conic section vrs being an hyperbola ( [ 6 ] ) does indeed spiral into the center ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .. 
and if a centripetal force inversely proportional to the cubes of the distances of the places from the centre , tends to the centre c : and from the place v there sets out a body with a just velocity in the direction of a line perpendicular to the right ( straight ) line cv ; that body will proceed in a curve vpq , which the point p will always touch ; , _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ as the point r moves up the curve vrs the point t moves toward the origin . further for vcr taken as an ellipse ( upper sign , [ 5 ] ) we have then or and ( [ 7]) which is equation ( [ 4 ] ) .thus we see that the curve generated by p following newton s prescription for the conic section vrs being an ellipse ( [ 5 ] ) is indeed a hyperbola ; _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ... but if it ( vrs ) be an ellipse , it ( the body ) will ascend continually , and go further off _ in infinitum ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ as the point r moves down the ellipse vrs and the tangent approaches the vertical , the point t moves off to infinity .we have shown that following newton s geometric prescription one generates the correct orbits for the inverse cube force ; taking the auxiliary curve as a hyperbola leading to orbits that spiral in towards the center while an ellipse as auxiliary curve leads to hyperbolic orbits flying out to infinity so that there is no confusion in newton s presentation ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .. therefore if the conic section vrs be a , the body will descend to the centre ( _ along vpq _ ) ; but if it be an , it will ascend continually and go farther and farther _ in infinitum_. and on the contrary . 
( parentheses added ) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ on the face of it , one would conclude that the words hyperbola and ellipse ( underlined ) have been interchanged by a simple oversight .certainly , an orbit which is a hyperbola in the ( ) plane ascends to infinity while tends to a finite limit , while an orbit which is an ellipse on the ( ) plane descends to the centre in a spiral orbit ... _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ certainly none of this is meant as a criticism of chandra , who was clearly a very creative and insightful scientist .( the book in question was published in the year of his death at age 85 ) .we feel that chandra would not have wanted such an apparent error of interpretation in his wonderful work to stand . 
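the two regimes that newton distinguishes, descent to the centre versus unbounded ascent, can also be checked numerically without any reference to the geometric construction. the short script below integrates planar motion under an attractive inverse-cube force for a tangential launch just below and just above the critical speed at which the centrifugal and attractive terms balance; it is an independent sanity check in our own notation, not part of newton's or chandra's argument, and the specific speeds and stopping radius are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def inverse_cube_rhs(t, state, k):
    """Planar motion under an attractive central force of magnitude k / r**3 (unit mass)."""
    x, y, vx, vy = state
    r2 = x * x + y * y
    return [vx, vy, -k * x / r2 ** 2, -k * y / r2 ** 2]

def hit_centre(t, state, k):
    """Terminal event: stop once the body gets very close to the force centre."""
    return np.hypot(state[0], state[1]) - 0.05
hit_centre.terminal = True

def radius_history(v0, r0=1.0, k=1.0, t_max=10.0):
    """Launch tangentially from (r0, 0) with speed v0; return times and radii."""
    sol = solve_ivp(inverse_cube_rhs, (0.0, t_max), [r0, 0.0, 0.0, v0],
                    args=(k,), events=hit_centre, rtol=1e-8, atol=1e-10)
    return sol.t, np.hypot(sol.y[0], sol.y[1])

v_crit = 1.0                                    # sqrt(k) / r0 with k = r0 = 1
_, r_sub = radius_history(0.8 * v_crit)         # below the critical speed
_, r_super = radius_history(1.2 * v_crit)       # above the critical speed
print("sub-critical launch  : final radius", round(r_sub[-1], 3))   # falls toward the centre
print("super-critical launch: final radius", round(r_super[-1], 3)) # keeps growing
```

the sub-critical run terminates at the small-radius event (the spiral into the centre), while the super-critical run ascends monotonically, matching the two branches of the construction discussed above.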
for usthe interesting thing was the fun of following chandra s lead in reading a small piece of one of newton s original arguments in the principia and reconstructing the reasoning for ourselves .although we verify that newton s construction is correct , the really interesting question is how he came up with the idea for such a construction in the first place .it is also interesting to note how succinctly newton was able to present the argument , requiring about half a page including the figure .newton himself explains his reasoning : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ all these things follow from the foregoing proposition _( xli , page 139 , which , as explained by chandra , is an exposition of the general energy integral method for solving the motion of a particle under an arbitrary central force ) , _ by the quadrature of a certain curve , the invention of which as being easy enough , for brevity s sake i omit . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this passage was quoted by chandra ( page 174 ) as an introduction to his solution of the orbits using the energy integral method in which he identifies the curves vrs with the orbit in the plane , something , which seems not to have been intended by newton .the _ principia _ has a reputation as one of the least frequently - read great books .for the usual reasons of time pressure felt so strongly in science education curricula , few instructors try to teach the subject of mechanics by guiding students through newton s derivations in the principia .however with the aid of chandra s lovely book and similar books , it is now far more practical for an instructor to consider using a few selected excerpts from the _ principia _ to give students a taste of the real newton .we feel that some students would be excited by the opportunity to confront a piece of the _ principia _ with some guidance and we hope that more instructors will consider this possibility .

there is a section in chandrashekar s book newton s principia for the common reader ( clarendon press , oxford , 1995 ) in which he claims to find a small error in the principia . however we believe that there is a mistake of interpretation underlying chandra s claim and that the principia is correct as it stands . this short paper describes chandra s misinterpretation of a geometric construction of newton and gives an outline of newton s demonstration by following the standard english version of the _ principia _ line by line and converting it into modern mathematical notation in the spirit of chandra s book . subj - class : history of physics
in cardelli introduced a family of process calculi , called _ brane calculi _ , endowed with dynamically nested membranes , focussing on the interactions that happen _ on _ membranes rather than inside them .brane calculi offer a suitable and formal setting for investigating the behaviour of the specified systems , in order to establish the biological properties of interest .nevertheless , since the behaviour of a system is usually given in terms of its transition system , whose size can be huge , especially when modelling complex biological systems , its exploration can be computationally hard .one possible solution consists in resorting to static techniques to extract information on the dynamic behaviour and to check the related dynamic properties , without actually running the corresponding program .the price is a loss in precision , because these techniques can only provide approximations of the behaviour .however , we can exploit static results to perform a sort of preliminary and not too much expensive screening of _ in silico _ experiments . in the tradition of applying static techniques to process calculi used in modelling biological phenomena, we present here a contextual and less approximate extension of the control flow analysis for brane calculi introduced in .control flow analysis ( cfa ) is a static technique , based on flow logic , that provides a variety of automatic and decidable methods and tools for analysing properties of computing systems .one of the advantages of the cfa is that the obtained information on the behaviour are quite general . as a consequence ,a single analysis can suffice for verifying a variety of properties : different inspections of the cfa results permit to check different properties , with no need of re - analysing it several times .only the values of interest tracked for testing change accordingly and the definitions of the static counterparts of the dynamic properties must be provided .control flow analysis provides indeed a _safe over - approximation _ of the _ exact _ behaviour of a system , in terms of the possible reachable configurations .that is , at least all the valid behaviours are captured .more precisely , all those events that the analysis does not consider as possible will _ never _ occur . on the other hand ,the set of events deemed as possible may , or may not , occur in the actual dynamic evolution of the system . to this endwe have improved the precision of the cfa in , by adding information on the context ( along the lines of ) and introducing causality information on the membranes .also , this extra - information allows us to refine the static checking of properties related to the spatial structure of membranes .furthermore , we focus on causality , since we believe it plays a key role in the understanding of the behaviour of biological systems , in our case specified in a process algebra like the brane one . 
in order to investigate the possibilities of our cfa to capture some kinds of causal dependencies arising in the mbd version of brane calculi, we follow and its classification , by applying the analysis to the same key examples .we observe that the analysis is able to capture some of these dependencies .this is a small improvement in the direction of giving some causal structure to the usually flat cfa results .the gain in precision is paid in terms of complexity : the presented analysis is rather expensive from a computational point of view .the paper gets in the research stream dedicated to the application of static techniques and , in particular , control flow analysis to bio - inspired process calculi e.g. , .similar to ours are the works devoted to the analysis of bioambients .in particular , , where the authors introduce a contextual cfa and where a pathway analysis is exploited for investigating causal properties .bioambients are analysed using instead abstract interpretation in .the analysis presented in records information on the number of occurrences of objects and therefore is able to capture quantitative and causal aspects , necessary to reason on the temporal and spatial structure of processes . in , in a different context , the behaviour of processes is safely approximated and the properties of a fragment of computation tree logic is preserved .this makes it possible to address temporal properties and therefore some kinds of causality .finally , presents a static analysis that computes an abstract transition systems for bioambients processes , able to validate temporal properties .our choice of the brane calculi depends on the fact they have resulted to be particularly useful for modelling and reasoning about a large class of biological systems , such as the one of the eukaryotic cells that , differently from the prokaryotes , possess a set of internal membranes . among the first formalisms used to investigate biological membranes there are the p systems , introduced by pun , which formalise distributed parallel computations biologically - inspired : a biological system is seen as a complex hierarchical structure of nested membranes inspired by the structure of living cells . finally , besides brane , there are other calculi of interest for our approach , that have been specifically defined for modelling biological structures such as compartments and membranes , e.g. , an extension of -calculus , beta binders and the calculus of looping sequences .the rest of the paper is organised as follows . in section [ brane ] ,we present the mbd version of brane calculi .we introduce the control flow analysis in section [ cfa ] . in section [ spatial ] , we exploit our analysis to check some properties related to the hierarchical structure of brane processes . in section [ causal ] , we discuss on which kind of causal information our cfa can capture . in section [ viral ] , the static treatment of brane pep action is added and the whole analysis is applied to a model of infective cycle of the semliki forest virus . 
section [ concl ] presents some concluding remarks .proofs of theorems and lemmata presented throughout the paper are collected in appendix [ app - proof ] .the brane calculi are a family of calculi defined to describe the interaction amongst membraned component .specifically , the membrane interactions are explicitly described by means of a set of membrane - based interaction capabilities .a system consists of nested membranes , as described by the following syntax , where is taken from a countable set of names . the basic structure of a system consists of ( sub-)system composition , represented by the monoidal operator ( associative , commutative and with as neutral element ) .replication is used to represent the composition of an unbounded number of systems or membrane processes . is a _ membrane _ with content and interaction capabilities represented by the process .note that , following , we annotate membranes with a unique label so as to distinguish the different syntactic occurrences of a membrane .note that these labels have no semantic meaning , but they are useful for our cfa .we refer to as the identity of the membrane , where is the finite set of membrane identities .we assume that each considered system is contained in an ideal outermost membrane , identified by a distinguished element . membranes exhibit interaction capabilities , like the mbd set of actions that model membrane fusion and splitting .the former is modelled by the _ mating _ operation , the latter can be rendered both by _ budding _ , that consists in splitting off exactly one internal membrane , and _ dripping _ , that consists in splitting off one empty membrane . for the sake of simplicity , we focus here on the fragment of the calculus without communication primitives and molecular complexes , and with only the mbd actions .the treatment of the alternative set of pep actions is analogous and it is postponed to section [ causal ] , where it is briefly introduced .membrane processes consist of the empty process , the parallel composition of two processes , represented by the monoidal operator with as neutral element , the replication of a process and of the process that executes an interaction _ action _ and then behaves as another process .actions for mating ( ) and budding ( ) have the corresponding co - actions ( , resp . ) to synchronise with . here , which identifies a pair of complementary action and co - action that can interact , is taken from a countable set of names .the actions and are equipped with a process associated to the membrane that will be created when performing budding and dripping actions .the semantics of the calculi is given in terms of a transition system defined up to a structural congruence and reduction rules . the standard _ structural congruence _ on systems and membranes is the least congruence satisfying the clauses in table [ structcong ] . reduction rules complete the definition of the interleaving semantics .they consist of the basic reaction rules , valid for all brane calculi ( upper part of table [ opsem ] ) and by the reaction axioms for the mbd version ( lower part of table [ opsem ] ) .we use the symbol for the reflexive and transitive closure of the transition relation . 
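since the syntax just described is compact, a small abstract-syntax sketch may help readers who want to experiment with the mbd fragment. the encoding below uses our own class and field names, flattens system-level parallel composition into the tuple of children of a membrane, and omits system-level replication; it is only a data representation of labelled nested membranes and the mate/bud/drip (co-)actions, not an implementation of the calculus semantics or of the analysis.

```python
from dataclasses import dataclass
from typing import Tuple

# ----- membrane processes (what sits *on* a membrane) -----
@dataclass(frozen=True)
class Action:
    kind: str                     # 'mate', 'co_mate', 'bud', 'co_bud', 'drip'
    name: str = ""                # synchronisation name n (unused for drip)
    arg: "Process" = None         # the process rho carried by co_bud and drip
    cont: "Process" = None        # continuation sigma

@dataclass(frozen=True)
class Process:
    actions: Tuple[Action, ...] = ()        # parallel composition; () is the empty process
    replicated: Tuple["Process", ...] = ()  # replicated membrane processes

# ----- systems (nested, labelled membranes) -----
@dataclass(frozen=True)
class Membrane:
    label: str                    # the membrane identity mu
    brane: Process                # the process on the membrane
    content: Tuple["Membrane", ...] = ()    # the enclosed system

# example: two sibling membranes carrying mate_n and its co-action
m1 = Membrane("m1", Process(actions=(Action("mate", "n", cont=Process()),)))
m2 = Membrane("m2", Process(actions=(Action("co_mate", "n", cont=Process()),)))
top = Membrane("top", Process(), content=(m1, m2))
print(top)
```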
\hline \begin{array}{ll } \\[.3 ex ] ( mate ) & \textcolor{blue}{mate_n}.\sigma | \sigma_0 \langle p \rangle^{\mu_{p } } \circ \textcolor{blue}{mate_n^{\bot}}.\tau | \tau_0 \langle q \rangle^{\mu_q } \rightarrow \sigma | \sigma_0 |\tau | \tau_0 \langle p\circ q \rangle^{\mu_{pq } } \\ & \mbox { where } \mu_{pq } = { \bf mi_{mate}}(mate_n,\mu_p , mate^{\bot}_n,\mu_q,\mu_{gp},\mu_p,\mu ) , \\& \mbox { identifies the closest membrane surrounding and in the context }\\ ( bud ) & \textcolor{blue}{bud_n^{\bot}(\rho)}.\tau | \tau_0 \langle \textcolor{blue}{bud_n}.\sigma | \sigma_0 \langle p\rangle^{\mu_p } \circ q \rangle^{\mu_q } \rightarrow \rho \langle \sigma | \sigma_0 \langle p \rangle^{\mu_p } \rangle^{\mu_{r } } \circ \tau | \tau_0\langle q \rangle^{\mu_q } \\ & \mbox { where } \mu_{r } = { \bf mi_{bud}}(bud_n,\mu_p , bud^{\bot}_n,\mu_q,\mu_{gp},\mu_p,\mu ) , \\ & \mbox{ identifies the closest membrane surrounding in the context } \\ ( drip ) & \textcolor{blue}{drip(\rho)}.\sigma | \sigma_0 \langle p \rangle^{\mu_p } \rightarrow \rho \langle \rangle^{\mu_{r } } \circ \sigma | \sigma_0 \langle p\rangle^{\mu_p } \\ & \mbox { where } \mu_{r } = { \bf mi_{drip}}(drip(\rho),\mu_p,\mu_{gp},\mu_p,\mu ) , \\ & \mbox{ identifies the closest membrane surrounding in the context } \\[1ex ] \end{array } \\\hline \end{array}\ ] ] they are quite self - explanatory and we make only a few observations about the labels treatment .given a system , the set of its membrane identities is finite . indeed, the structural congruence rule imposes that , i.e. no new identity label is introduced by recursive calls .a distinguished membrane identity is needed each time a new membrane is generated as a consequence of a performed action , e.g. the new membrane obtained by the fusion of two membranes after a mate synchronisation . to determine such labels we exploit the functions , , and that return fresh and distinct membrane identities , depending on the actions and on their syntactic contexts .recall that the number of needed membrane identities is finite , as finite are the possible combinations of actions and contexts .therefore , we choose these functions in such a way that , given an action and the identities of the membranes on which the action ( and the corresponding co - action , if any ) reside , the function includes the membrane identity needed to identify the membrane obtained by firing that action .we present an extension of the control flow analysis ( cfa ) , introduced in for analysing system specified in brane calculi .the analysis over - approximates all the possible behaviour of a top - level system . in particular, the analysis keeps track of the possible contents of each membrane , thus taking care of the possible modifications of the containment hierarchy due to the dynamics .the new analysis , following , incorporates context in the style of 2cfa , thus increasing the precision of the approximations w.r.t .furthermore , the analysis exploits some causality information to further reduce the degree of approximation . 
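before turning to the static analysis itself, the (mate) axiom above can be given a small executable reading. the sketch below is deliberately simplified and entirely our own: membranes are dictionaries, prefixes are (kind, name, continuation) triples, and the fresh identity comes from a toy stand-in for the function mentioned above; it fires one mate reduction between two siblings, merging their branes and contents, and should not be read as the calculus implementation.

```python
def mk_membrane(label, prefixes, children=()):
    """A membrane: its identity, the action prefixes on it, and the enclosed membranes."""
    return {"label": label, "prefixes": list(prefixes), "children": list(children)}

def mi_mate(label_p, label_q):
    """Toy stand-in for the fresh-identity function used by the (mate) rule."""
    return f"mate({label_p},{label_q})"

def try_mate(parent):
    """Fire one (mate) reduction between two sibling children of `parent`, if possible."""
    kids = parent["children"]
    for i, p in enumerate(kids):
        for j, q in enumerate(kids):
            if i == j:
                continue
            for a in p["prefixes"]:
                for b in q["prefixes"]:
                    if a[0] == "mate" and b[0] == "co_mate" and a[1] == b[1]:
                        # residual brane: continuations plus the untouched prefixes
                        merged = ([x for x in p["prefixes"] if x is not a] + list(a[2]) +
                                  [x for x in q["prefixes"] if x is not b] + list(b[2]))
                        fused = mk_membrane(mi_mate(p["label"], q["label"]), merged,
                                            p["children"] + q["children"])
                        parent["children"] = [k for k in kids
                                              if k is not p and k is not q] + [fused]
                        return True
    return False

p = mk_membrane("m1", [("mate", "n", [])], [mk_membrane("inner", [])])
q = mk_membrane("m2", [("co_mate", "n", [])])
top = mk_membrane("top", [], [p, q])
try_mate(top)
print([c["label"] for c in top["children"]])   # -> ['mate(m1,m2)']
```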
a localised approximation of the contents of a membrane or _ estimate _ is defined as follows : here , ( that is ) means that the membrane identified by may surround the membrane identified by , whenever is surrounded by and is surrounded by .the outermost membranes represent what is called the _ context _ and that amounts to when the analysed membrane is at top - level .moreover , means that the action may reside on and affect the membrane identified by , in the context .furthermore , the analysis collects two types of some causality information : * an approximation of the possible causal circumstances in which a membrane can arise : here means that the membrane can be _ causally derived _ by the firing of the action in and the coaction in , in the context .similarly , for an action like , without a co - action . *an approximation of the possible membrane incompatibilities : here , means that the membrane in the context can not interact with the membrane in the context , because the second membrane is obtained from the first and the first one is dissolved .note that and are two strict order relations , thus only transitivity property holds . to validate the correctness of a proposed estimate , we state a set of clauses operating upon judgements like .this judgement expresses that when the subprocess of is enclosed within a membrane identified by , in the context , then correctly captures the behaviour of , i.e. the estimate is valid also for all the states passed through a computation of .the analysis is specified in two phases .first , it checks that describes the initial process .this is done in the upper part of table [ analysis ] , where the clauses amount to a syntax - driven structural traversal of process specification .the clauses rely on the auxiliary function that collects all the actions in a membrane process and that is reported at the beginning of table [ analysis ] .note that the actions collected by , e.g. , in are equal to the ones in , witnessing the fact that here the analysis introduces some imprecision and approximation .he clause for membrane system checks that whenever a membrane is introduced inside a membrane , in the context the relative hierarchy position must be reflected in , i.e. .furthermore , the actions in that affect the membrane and that are collected in , are recorded in . finally ,when inspecting the content , the fact that the enclosing membrane is in the context is recorded , as reflected by the judgement . the rule for does not restrict the analysis result , while the rules for parallel composition , and replication ensure that the analysis also holds for the immediate sub - systems , by ensuring their traversal . in particular , note that the analysis of is equal to the one of .this is another source of imprecision .+ + secondly , the analysis checks that also takes into account the dynamics of the process under consideration ; in particular , the dynamics of the containment hierarchy of membranes .this is expressed by the closure conditions in the lower part of table [ analysis ] that mimic the semantics , by modelling , without exceeding the precision boundaries of the analysis , the semantic preconditions and the consequences of the possible actions .more precisely , each precondition checks whether a pair of complementary actions could possibly enable the firing of a transition according to .the conclusion imposes the additional requirements on that are necessary to give a valid prediction of the analysed action .consider e.g. 
, the clause for ( the other clauses are similar ) .if ( i ) there exists an occurrence of a mate action : ; ( ii ) there exists an occurrence of the corresponding co - mate action : ; ( iii ) the corresponding membranes are siblings : , ( iv ) the redexes are not incompatible , i.e. the corresponding membranes can interact : then the conclusion of the clause expresses the effects of performing the transition . in this case, we have that must reflect that ( i ) there may exist a membrane inside , in the context , at the same nesting level of the membranes and ; and ( ii ) the contents of and of , their children and their grandchildren , may also be included in .note that the contribution changes depending on whether we consider ( , resp . ) , their children or their grandchildren . with the inclusion we mean that for each in the context , all the elements in are included in .similarly , with we mean that for each in the context , and in turn for each in the context , all the elements in belong to .we use a similar notation for the relation .( iii ) the membrane is the result of the transition , performed by the two membranes and , in the context , as witnessed by the corresponding entry in the component ; ( iv ) the new membrane is with the and , because , derived by the transition , follows both and .note the similar incompatibility between the membrane in the context before the transition and the derived one in the context .the above requirements correspond to the application of the semantic rule that would result in the fusion of the two membranes .note that , since the new membrane inherits the prefix actions that affected the membranes and , it inherits also and ( we write in red this kind of _ imprecise _ inclusions ) .this is due to over - approximation , even though it is harmless : the two prefix actions can not be further used to predict a communication because they both occur in .still , the presence of both and could lead to predict another interaction that is impossible at run time .thanks to , we can safely exclude it , thus gaining precision .this gain is obtained in general : collects indeed pairs of capabilities that could be syntactically compatible with an interaction , but that can not really interact , because they dynamically occur in membranes that are not simultaneously present .the gain in precision is paid in terms of complexity : the presented analysis is rather expensive from a computational point of view , due to the introduction of contexts and to the possibly high number of different membrane names .both these features may lead to an explosion of the possible reachable configurations . to illustrate how our cfa work we use two simple examples .the emphasis is on the process algebraic structures and not on their biological expressiveness .we first report an application of it to a simple process , illustrated in ( and in turn taken from ) .we consider and the following possible computations , where and are not specified as they are not relevant here . 
the main entries of the analysis are reported in table [ ex ] , where identifies the ideal outermost context in which the system top - level membranes are .we write in red the entries due to approximations , but not reflecting the dynamics .furthermore , we pair the inclusions of actions and of the corresponding co - actions , in order to emphasise which are the pairs of prefixes that lead to the prediction of a possible communication .it is easy to check that is a valid estimate by following the two stage procedure explained above . to understand in which waythe component refines the analysis , note that since the analysis entries include and , without the check on the component , we can predict a transition between the two membranes and .this transition is not possible instead , because is causally derived by .note that although the cfa offers in general an over - approximation of the possible dynamic behaviour , in this example the result is rather precise . the transition is predicted as possible , since its precondition requirements are satisfied .indeed , we have that , and and are sibling and membranes .also the transition on is initially possible and this result is actually predicted by the analysis , since and , with , i.e. is the father of .instead , we can observe that the transition on can not be performed in the initial system . indeed , resides on the membrane in the context , while the coaction resides on that is not the father of .the transition on can be performed instead in the membrane in the context , that is the membrane introduced by the previous transition .we now apply our cfa to another process , taken from .we consider and the following possible computations . the main entries of the analysis are reported in table [ ex2 ] , where we do not include the entries due to approximations , but not reflecting the dynamics .as before , we pair the inclusions of actions and of the corresponding co - actions , in order to emphasise which are the pairs of prefixes that lead to the prediction of a possible communication .this motivates some redundancies in the entries .also in this example , the cfa result is rather precise .[ [ semantic - correctness ] ] * semantic correctness * + + + + + + + + + + + + + + + + + + + + + + our analysis is semantically correct with respect to the given semantics , i.e. a valid estimate enjoys the following subject reduction property with respect to the semantics . * ( subject reduction)*[sbj - red ] + if and then also .this result depends on the fact that analysis is invariant under the structural congruence , as stated below . *( invariance of structural congruence)*[congr ] if and we have that then also .moreover , it is possible to prove that there always exists a least estimate ( see for a similar statement and proof ) .control flow analysis provides indeed a _safe over - approximation _ of the _ exact _ behaviour of a system , that is , at least all the valid behaviours are captured .more precisely , all those events that the analysis does not consider as possible will _ never _ occur . on the other hand ,the set of events deemed as possible may , or may not , occur in the actual dynamic evolution of the system .the 2cfa gains precision w.r.t the 0cfa presented in and the incompatibility relation increases this gain . 
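to give a feel for how closure conditions like the one for mate are actually computed, the following is a drastically simplified, 0cfa-style worklist sketch of our own: it ignores the 2-level contexts and the incompatibility component, represents membrane identities as frozensets of base names so that fused membranes get a canonical identity and the iteration terminates, and keeps only a containment map and an action map. it is meant to illustrate the fixpoint flavour of the clauses, not to reproduce the analysis of table [ analysis ].

```python
def mate_closure(contains, actions):
    """
    Toy closure for the (mate) clause only.
      contains : dict  membrane-id -> set of membrane-ids possibly directly inside it
      actions  : dict  membrane-id -> set of prefixes, e.g. ('mate', 'n')
    Membrane ids are frozensets of base names; facts only grow, so the loop terminates.
    """
    changed = True
    while changed:
        changed = False
        for parent in list(contains):
            kids = list(contains[parent])
            for m1 in kids:
                for m2 in kids:
                    if m1 == m2:
                        continue
                    for (kind, name) in set(actions.get(m1, ())):
                        if kind != "mate" or ("co_mate", name) not in actions.get(m2, ()):
                            continue
                        fused = m1 | m2                     # canonical fused identity
                        new_contains = contains.get(m1, set()) | contains.get(m2, set())
                        new_actions = actions.get(m1, set()) | actions.get(m2, set())
                        before = (fused in contains[parent],
                                  new_contains <= contains.setdefault(fused, set()),
                                  new_actions <= actions.setdefault(fused, set()))
                        contains[parent].add(fused)
                        contains[fused] |= new_contains
                        actions[fused] |= new_actions
                        if before != (True, True, True):
                            changed = True
    return contains, actions

# two sibling membranes that can mate on n; the estimate predicts the fused membrane
m1, m2, top = frozenset({"m1"}), frozenset({"m2"}), frozenset({"top"})
contains = {top: {m1, m2}, m1: set(), m2: set()}
actions = {m1: {("mate", "n")}, m2: {("co_mate", "n")}}
contains, actions = mate_closure(contains, actions)
print(sorted("".join(sorted(m)) for m in contains[top]))   # includes the fused membrane
```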
in the next section , we will discuss on the contribution of the component .we can exploit our analysis to check _ spatial structure _ properties , of the membranes included in the system under consideration .in particular , because of over - approximation , we can ask negative questions like whether : ( i ) a certain interaction capability _ never affects _ the membrane labelled , i.e. it never occurs in the membrane process of the membrane labelled ; ( ii ) the membrane labelled _ never ends up _ in the membrane labelled .suppose we have all the possible labels of the possible membranes arising at run time .then we can precisely define the above informally introduced properties .we first give the definition of the dynamic property , then the corresponding static property and , finally , we show that the static property implies the dynamic one . for each static property, we check for a particular content in the component . givena process including a membrane labelled , we say that the capability _ never affects _ the membrane labelled if there not exists a derivative such that , in which the capability can affect the membrane labelled . givena process including a membrane labelled , we say that the capability _ never appears on _ the membrane labelled if and only if there exists an estimate such that : for each possible context . given a process including a membrane labelled , then if never appears on the membrane labelled , then the capability never affects the membrane labelled .given a process including a membrane labelled and a membrane labelled , we say that the membrane _ never ends up inside _ the membrane labelled if there not exists a derivative such that , in which occurs inside the membrane .given a process including a membrane labelled , we say that _ never appears inside _ the membrane labelled if and only if there exists an estimate such that : for each possible context . given a process including a membrane labelled and a membrane labelled , then if never appears inside the membrane labelled , then the membrane never ends up inside the membrane labelled . back to our first running example ,we can prove , for instance , that the capability never affects the membrane labelled .this can be checked by looking in the cfa entries , for the content of , that indeed does not include .intuitively , this explains the fact that the synchronisation is not syntactically possible in the context , whose sub - membrane is affected by . in our second running example , we can prove instead , for instance , that the membrane never ends up inside the membrane labelled , where .indeed , by inspecting the cfa results , we have that .intuitively , this corresponds to the fact that the synchronisation is not syntactically possible in the context , because and are not siblings , while it is in the context .similarly , we can mix ingredients and introduce new properties , e.g. one can ask whether two membranes labelled and , _ never end up ( occur ) together _ in the same membrane . on the static side , this amounts to checking whether there exists an estimate such that : for all possible context , or . 
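the static checks just defined reduce to simple look-ups in the analysis result. assuming an estimate stored as plain python sets of triples, a representation of our own choosing with the two-level contexts flattened to a single context for brevity, the checks could look as follows; the membrane names in the example are fabricated purely for illustration.

```python
def never_affects(estimate_actions, capability, membrane):
    """Static check: the capability never appears on `membrane` in any context.
    estimate_actions is a set of (context, membrane, capability) triples."""
    return all(cap != capability or mem != membrane
               for (_ctx, mem, cap) in estimate_actions)

def never_ends_up_inside(estimate_rho, inner, outer):
    """Static check: membrane `inner` never appears inside membrane `outer`.
    estimate_rho is a set of (context, parent, child) triples."""
    return all(not (parent == outer and child == inner)
               for (_ctx, parent, child) in estimate_rho)

def never_together(estimate_rho, m1, m2):
    """Static check: m1 and m2 never occur as siblings inside the same membrane."""
    parents_of = lambda m: {(ctx, parent)
                            for (ctx, parent, child) in estimate_rho if child == m}
    return parents_of(m1).isdisjoint(parents_of(m2))

# a tiny fabricated estimate, only for illustration
rho = {("top", "a", "b"), ("a", "b", "c")}
acts = {("top", "a", ("co_mate", "n"))}
print(never_affects(acts, ("mate", "n"), "a"))      # True
print(never_ends_up_inside(rho, "c", "a"))          # True: c only occurs inside b here
print(never_together(rho, "b", "c"))                # True: they never share a parent
```

as the definitions above state, a negative answer from these checks is inconclusive, while a positive answer is safe: it carries over to every run of the analysed system.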
note that a single analysis can suffice for verifying all the above properties : only the values of interest tracked for testing change .understanding the causal relationships between the actions performed by a process is a relevant issue for all process algebras used in systems biology .although our cfa approximates the possible reachable configurations , we are able to extract some information on the causal relations among these configurations . to investigate these possibilities of our cfa, we follow where different kinds of causal dependencies are described and classified , by applying our analysis to the same key examples .the first kinds are called _ structural causality _ and _ synchronisation causality _ and are typical of all process algebras .structural causality arises from the prefix structure of terms , as in where the action on depends on the one on , since the second action is not reachable until the first has fired .synchronisation causality arises when an action depends on a previous synchronisation as in : where , the mate action is possible only when both and have been performed , and the following and depend on the previous mate synchronisation .our cfa is not able to capture these kinds of dependencies , because of the function definition , according to which . in other words ,the cfa disperses the order between prefixes . according to , when an action is performed on a membrane it impacts only on its continuation and not on the whole process on the membrane , e.g. , in : the drip operationcan be considered causally independent form the mate operation , because it can be executed regardless of the fact that the mate interaction has been performed .our analysis reflects this , because we have and also that .when considering mbd actions and , in particular , the mate action , we have to do with another kind of causality called _ environmental _ in , due to the fact that the interaction possibilities of the child membranes are increased by the mate synchronisation .examples of this kind of causality can be observed in our running examples . in the first , for instance , the depends on the , as reflected by the cfa entries : , where . in the second , we can observe that the synchronisation on can not be performed before a synchronisation on , as captured by the following cfa entries : , and , where belongs to .finally , in , a casual dependency generated by bud ( and drip ) is discussed on the following example : the bud action generates a new membrane and the corresponding actions are caused by the new membrane , as captured by the cfa entries : and .these considerations encourage us to further investigate and to formalise the static contribution of the cfa in establishing causal relationships .we illustrate our approach by applying it to the abstract description of the infection cycle of the semliki forest virus , shown in figure [ virus ] , as specified in .the semliki forest virus is one of the so - called `` enveloped viruses '' .we focus just on the first stage of the cycle and we report the analysis as given in .the virus , specified in table [ virus - evol ] , consists of a capsid containing the viral rna ( the nucleocapsid ) .the nucleocapsid is surrounded by a membrane , similar to the cellular one , but enriched with a special protein .the virus is brought into the cell by phagocytosis , thus wrapped by an additional membrane layer .an endosome compartment is merged with the wrapped - up virus . 
at this point, the virus uses its special membrane protein to trigger the exocytosis process that leads the naked nucleocapsid into the cytosol , ready to damage it . by summarising ,if the gets close to a , then it evolves into an infected cell .the complete evolution of the viral infection is reported in table [ virus - evol ] , while the main analysis entries are in table [ virus - cfa ] .the specification includes the pep version of brane calculus , whose syntax and reduction semantics is reported in table [ opsem2 ] .this further set of pep actions ( ) are inspired by endocytosis and exocytosis processes .the first indicates the process of incorporating external material into a cell , by engulfing it with the cell membrane , while the second one indicates the reverse process .endocytosis is rendered by two more basic operations : _ phagocytosis _ ( denoted by ) , that consists in engulfing just one external membrane , and _ pinocytosis _( denoted by ) , consists in engulfing zero external membranes ; _ exocytosis _ is instead denoted by .the cfa for the calculus can be straightforwardly extended to deal with the phago / exo / pino ( pep ) actions , as shown in table [ analysis2 ] . \hline \hline \end{array}\ ] ] roughly , the analysis results allow us to predict the effects of the infection .indeed , the inclusion reflects the fact that , at the end of the shown computation , is inside together with that is equivalent to , apart from the label that decorates the enclosed membrane .furthermore , we can check our properties in this systems .as far as the spatial structure properties , we can prove here , e.g. , that ( i ) the capability never affects the membrane labelled ( as ) ; and that ( ii ) the membrane never ends up inside the membrane labelled ( as ) .furthermore , we can observe that the cfa captures the dependency of the synchronisation on on the synchronisation on , since we have that , and is such that we have that .we have presented a refinement of the cfa for the brane calculi , based on contextual and causal information .the cfa provides us with a verification framework for properties of biological systems modelled in brane , such as properties on the spatial structure of processes , in terms of membrane hierarchy .we plan to formalise new properties like the ones introduced here .we have found that the cfa is able to capture some kinds of causal dependencies arising in the mbd version of brane calculi . as future work, we would like to investigate thoroughly and formally the static contribution of the cfa in establishing causal relationships between the brane interactions .* acknowledgments . *we wish to thank francesca levi for our discussion on a draft of our paper and our anonymous referees for their useful comments .1 b. alberts , d. bray , j. lewis , m. raff , k. roberts , and j.d . watson .`` molecular biology of the cell '' .third edition , garland .r. barbuti , g. caravagna , a. maggiolo - schettini , p. milazzo , and g. pardini . .in _ proc . of sfm08 _ , lncs 5016 ( 2008 ) , 387423 . c. bodei . _ a controlflow analysis for beta - binders with and without static compartments_. theoretical computer science 410(33 - 34 ) : 3110 - 3127 , 2009 , elsevier . c. bodei , a. bracciali , and d. chiarugi ._ control flow analysis for brane calculi_. in proc . of mecbic08, entcs 227 , pp .59 - 75 , 2009 .n. busi ._ towards a causal semantics for brane calculi_. in _ what is it about government that americans dislike _ , 19451965 , university press , 2007 .l. cardelli . 
.in _ proc . of computational methods in systems biology ( cmsb04 ) _ , lncs 3082 , 257280 , 2005 .r. gori and f. levi .a new occurrence counting analysis for bioambients ., lncs 3780:381 - 400 , 2005 .r. gori and f. levi .an analysis for proving temporal properties of biological systems ., lncs 4279:234252 , 2006 .r. gori and f. levi .comput.(8 ) : 869 - 921 , 2010 .v. danos and c. laneve . .in _ proc . of cmsb03 _ , lncs 2602 ( 2003 ) , 3446 , springer . c. laneve and f. tarissan ., in entcs 171 ( 2 ) , pp . 139 - 154 , 2007 .h. r. nielson and f. nielson .flow logic : a multi - paradigmatic approach to static analysis . in _ the essence of computation _ ,lncs 2566 , pp .springer , 2002 .f. nielson , h. riis nielson , c. priami , and d. schuch da rosa . .entcs 180(3 ) , 6579 , 2007 , elsevier .g. paun . ., 11(1 ) ( 2000 ) .h. pilegaard , f. nielson , h. riis nielson . .in _ proc . of emerging aspects of abstract interpretation06 _ , 2006h. pilegaard , f. nielson , h. riis nielson . .in _ the journal of logic and algebraic programming _ , 2008 .c. priami and p. quaglia . . in _ procof cmsb04 _ , lncs 3082 ( 2005 ) .a. regev , e.m .panina , w. silverman , l. cardelli , and e.y .bioambients : an abstraction for biological compartments ._ theoretical computer science _325(1 ) : 141 - 167 .2004 , elsevier .this appendix restates the lemmata and theorems presented earlier in the paper and gives the proofs of their correctness . to establish the semantic correctness ,the following auxiliary results are needed . by structural induction on .we show just one case . + * case * .we have that is equivalent to .now , and and imply and .therefore , by induction hypothesis , we have that .the proof amounts to a straightforward inspection of each of the clauses defining the structural congruence clauses relative to membranes .we only show two cases , the others are similar . + * case * .we have that . + * case * .we have that .now , since , we have that and therefore , from which the required .the proof amounts to a straightforward inspection of each of the clauses defining the structural congruence clauses .we only show two cases , the others are similar . + * case * .we have that is equivalent to , that is equivalent to and therefore to . + * case * .we have that is equivalent to . by proposition [ afuncta ] , , and by induction hypothesis , we have that . as a consequence , we can conclude that .the proof is by induction on .the proofs for the rules and are straightforward , using the induction hypothesis and the clauses in table [ analysis ] .the proof for the uses instead the induction hypothesis and lemma 4.1 .the proofs for the basic actions in the lower part of table [ opsem ] are straightforward , using the clauses in table [ analysis ] . + * case * ( par ) .let be and be , with .we have to prove that .now is equivalent to .by induction hypothesis , we have that , and from we obtain the required . + * case * ( brane ) .let be and be .we have to prove that .now is equivalent to have that . by induction hypothesis, we have that .we can therefore conclude that . + * case * ( struct ) .let , with such that . by lemma [ acongr ], we have that , by induction hypothesis and , again by lemma [ acongr ] , . + * case * ( mate ) .let be and be . then , amounts to and and , in turn , to , , , and and . note that , does not belong to . because of the closure conditions , from the above , we have , amongst the several implied conditions , that such that . 
from for , we have that and and , by proposition [ afatto ] , we have that and , and hence the required . + * case * ( bud ) . let be and be the process .now , is equivalent to , , and , moreover , and , from which we have that , , and .because of the closure conditions , from above , we have that such that , , , and ( and ( cond 2 ) ) .we have that is equivalent to have that ( 1 ) and that ( 2 ) . for ( 1 ) , we have to prove that , and , that is equivalent to , and . from the hypotheses , we have that . since and ( cond 2 ) we have . from , because of ( cond 2 ) and proposition [ afatto ] , we have that . for ( 2 ) , we have to prove that , and . all these conditions are satisfied ( see above ) . therefore, we obtain the required . + * case * ( drip ) .let be and be .we have that is equivalent to , , and . because of the closure conditions , from the above , such that , .we have that is equivalent to both and .the first condition is verified , because and . the second amounts to and and it is satisfied as well .we therefore obtain the required .* theorem 5.2 * _ given a process including a membrane labelled , then if never appears on the membrane labelled , then the capability never affects the membrane labelled . first of all , we observe that if affects in , then we have a contradiction , since it implies that for some context .we now show that there exist no , such that such that does not affect in , while it does in .the only case in which this can happen is when a or a is performed with parameter including .indeed , the firing of such an action lets arise a new membrane affected by the corresponding parameter .we focus on the second one .suppose we have in a sub - process and that occurs in .this amounts to have that can affect in . by theorem [ sbj - red ], we have that is an estimate also for .nevertheless this implies that , thus leading to a contradiction .
we improve the precision of a previous control flow analysis for brane calculi by adding information on the context and by introducing causality information on the membranes. this allows us to prove some biological properties of the behaviour of systems specified in brane calculi.
consider the regression model for and where and is a mean 0 random variable . in this paper , we study the situation where is subject to a convexity constraint .that is , every and .given the observations , we would like to estimate subject to the convexity constraint ; this is called the convex regression problem .note that convex regression is easily extended to concave regression since a concave function is the negative of a convex function .convex regression problems occur in a variety of settings .economic theory dictates that demand , production and consumer preference functions are often concave . in financial engineering ,stock option prices usually have convexity restrictions .stochastic optimization problems , studied in operations research and reinforcement learning , have response surfaces or value - to - go functions that exhibit concavity in many settings , like resource allocation or stochastic control . similarly , efficient frontier methods like data envelopment analysis include convexity constraints . in statistics , shape restrictions likelog - concavity are useful in density estimation .finally , in optimization , convex approximations to posynomial constraints are valuable for geometric programming .although convex regression has been well - explored in the univariate setting , the literature remains underdeveloped in the multivariate setting .existing methods do not scale well to more than a few thousand observations or more than a handful of dimensions . in this paper, we introduce the first computationally efficient , theoretically sound multivariate convex regression method , called convex adaptive partitioning ( cap ) .it relies on an alternate definition of convexity , for every , where is a subgradient of at .equation ( [ eq : convexity ] ) states that a convex function lies above all of its supporting hyperplanes , or subgradients tangent to .moreover , with enough supporting hyperplanes , can be approximately reconstructed by taking the maximum over those hyperplanes .the cap estimator is formed by adaptively partitioning a set of observations . within each subset of the partition, we fit a linear model to approximate the subgradient of within that subset . given a partition with subsets and linear models , , a continuous , convex ( concave ) functionis then generated by taking the maximum ( minimum ) over the hyperplanes by the partition is refined by a twofold strategy .first , one of the subsets is split along a cardinal direction ( say , or ) to grow .then , the hyperplanes themselves are used to refit the subsets . a piecewise linear function like induces a partition ; a subset is defined as the region where a particular hyperplane is dominant .the refitting step places the hyperplanes in closer alignment with the observations that generated them .this procedure is repeated until all subsets have a minimal number of observations .the cap estimator is then created by selecting the value of that balances fit with complexity using a generalized cross validation method .cap has strong theoretical properties , both in terms of computational complexity and asymptotic properties .we show that cap is consistent with respect to the metric and has a computational complexity of flops .the most widely implemented convex regression method , the least squares estimator , has only recently been shown to be consistent and has a computational complexity of flops . 
despite a difference of almost runtime, the cap estimator usually has better predictive error as well .because of its dramatic reduction in runtime , cap opens a new class of problems for study , namely moderate to large problems with convexity or concavity constraints .the rest of this paper is organized as follows . in section [ sec : litreview ], we review the literature on convex regression . in section [ sec : cap ] , we present the cap algorithm . in section [ sec : theory ] , we give computational complexity results and conditions for consistency . in section [ sec : implementation ] , we derive a generalized cross - validation method and give a fast approximation for the full cap algorithm . in section [ sec : numbers ] , we empirically test cap on convex regression problems , including value function estimation for pricing american basket options . in section [ sec : conclusions ] , we discuss our results and give directions for future work .the literature for nonparametric convex regression is dispersed over a variety of fields , including statistics , operations research , economics , numerical analysis and electrical engineering .there seems to be little communication between the fields , leading to the independent discovery of similar techniques . in the univariate setting , there are many computationally efficient algorithms for convex regression .these methods rely on the ordering implicit to the real line . setting for , is a sufficient constraint for pointwise convexity .when is differentiable , equation ( [ eq : ordering ] ) is equivalent to an increasing derivative function .various methods have been used to solve the univariate convex regression problem .the least squares estimator ( lse ) is the oldest and simplest method .it produces a piecewise linear estimator by solving a quadratic program with linear constraints .although the lse is completely free of tunable parameters , the estimator is not smooth and can overfit , particularly in the multivariate setting .consistency , rate of convergence , and asymptotic distribution of the lse were shown by , and , respectively .algorithmic methods for solving the quadratic program were given in and .spline methods have also been popular . and used convex - restricted splines with positive parameters in frequentist and bayesian settings , respectively . and used unrestricted splines with restricted parameters , likewise , in frequentist and bayesian settings .in other methods , used convexity constrained kernel regression ; used a random bernstein polynomial prior with constrained parameters ; and transformed the ordering problem into a combinatorial optimization problem which they solved with dynamic programming . due to the constraint on the derivative of ,univariate convex regression is quite similar to univariate isotonic regression .the latter has been studied extensively with many approaches ; for examples , see and . 
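as a concrete companion to the univariate discussion above, here is a minimal sketch of the least squares estimator under a convexity constraint, written with the cvxpy modelling library (our choice of tool, not that of the original papers). it encodes the ordering condition by forcing the secant slopes of the fitted values at the sorted design points to be nondecreasing.

```python
# univariate convex least squares: a minimal sketch, assuming cvxpy is available
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.uniform(-2.0, 2.0, n))
y = x**2 + 0.2 * rng.standard_normal(n)          # noisy convex data

theta = cp.Variable(n)                           # fitted values at the x_i
dx = np.diff(x)
slopes = cp.multiply(theta[1:] - theta[:-1], 1.0 / dx)
constraints = [slopes[1:] >= slopes[:-1]]        # secant slopes nondecreasing
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
problem.solve()

print("fitted values at the first five points:", np.round(theta.value[:5], 3))
```

the resulting estimator is piecewise linear between the sorted design points, which is exactly the shape of the classical univariate lse discussed above.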
unlike the univariate setting , convex functions in multiple dimensions can not be represented by a simple set of first order conditions and projection onto the set of convex functions becomes computationally intensive .as in the univariate case , the earliest and most popular regression method is the lse , which directly projects a least squares estimator onto the cone of convex functions .it was introduced by and .the estimator is found by solving the quadratic program , , and are the estimated values of and the subgradient of at , respectively .the estimator is piecewise linear , characterization and consistency of the least squares problem have only recently been studied .the lse quickly becomes impractical due to its size : equation ( [ eq : lse ] ) has constraints .this results in a computational complexity of , which becomes impractical after one to two thousand observations .while the lse is widely studied across all fields , the remaining literature on multivariate convex regression is sparser and more dispersed than the univariate literature .one approach is to place a positive semi - definite restriction on the hessian of the estimator . in the economics literature , used kernel smoothing with a restricted hessian and found a solution with sequential quadratic programming . in electrical engineering , , and in a variationalsetting , and , used semi - definite programming to search the space of functions with positive semi - definite local hessians . although consistent in some cases , hessian methods are computationally intensive and can be poorly conditioned in boundary regions . in another approach , proposed a method based on reformulating the maximum likelihood problem as one minimizing entropic distance , which can be solved as a linear program .however , like the original maximum likelihood problem , the transformed problem still has constraints and does not scale to more than a few thousand observations .recently , multivariate convex regression methods have been proposed with a more traditional statistics approach . proposed a two step smoothing and fitting process .first , the data were smoothed and functional estimates were generated over an -net over the domain .then the convex hull of the smoothed estimate was used as a convex estimator . again , although this method is consistent , it is sensitive to the choice of smoothing parameter and does not scale to more than a few dimensions . proposed a bayesian model that placed a prior over the set of all piecewise linear models .they were able to show adaptive rates of convergence , but the inference algorithm did not scale to more than a few thousand observations . in a more computational approach , use an iterative fitting scheme . in this method ,the data were divided into random subsets and a linear model was fit within each subset ; a convex function was generated by taking the maximum over these hyperplanes .the hyperplanes induce a new partition , which is then used to refit the function .this sequence was repeated until convergence . despite relatively strong empirical performance , this method is sensitive to the initial partition and the choice of . 
moreover ,it is not consistent and there are cases when the algorithm does not even converge .as seen in much of the literature , a natural way to model a convex function is through the maximum of a set of hyperplanes .one example of this method is the least squares estimator , which fits every observation with its own hyperplane .this is computationally expensive and can result in overfitting , as shown in figure [ fig : capvlse ] .instead , we wish to model through only hyperplanes .we do this by partitioning the covariate space and approximating the gradients within each region by hyperplanes generated by the least squares estimator .the covariate space partition and are chosen through adaptive partitioning .given a partition of , an estimate of the gradient for each subset can be created by taking the least squares linear estimate based on all of the observations within that region , a convex function can be created by taking the maximum over , models adaptive partitioning models with linear leaves have been proposed before ; see and for examples . in most of these cases ,the partition is created by adaptively refining an existing partition by dyadic splitting of one subset .the split is chosen in a way that minimizes local error within the subset .there are two problems with these partitioning methods that arise when a piecewise linear summation function , is changed into a piecewise linear maximization function , like .first , a split that minimizes local error does not necessarily minimize global error for .this is fairly easy to remedy by considering splits based on minimizing global error .the second problem is more difficult : the gradients often act in areas over which they were not estimated .a piecewise linear maximization function , , generates a new partition , , by the partition is not necessarily the same as .we can use this new partition to refit the hyperplanes and produce a significantly better estimate .refitting hyperplanes in this manner can be viewed as a gauss - newton method for the non - linear least squares problem , similar methods for refitting hyperplanes have been proposed in and .however , repeated refitting may not converge to a stationary partition and is sensitive to the initial partition .convex adaptive partitioning ( cap ) uses adaptive partitioning with linear leaves to fit a convex function that is defined as the maximum over the set of leaves .the adaptive partitioning itself differs from previous methods in order to fit piecewise linear maximization functions .partitions are refined in two steps .first , candidate splits are generated through dyadic splits of existing partitions .these are evaluated and the one that minimizes global error is greedily selected .second , the new partition is then refit .although simple , these rules , and refitting in particular , produce large gains over naive adaptive partitioning methods ; empirical results are discussed in section [ sec : numbers ] .most other adaptive partitioning methods use backfitting or pruning to select the tree or partition size . due to the construction of the cap estimator, we can not locally prune and so instead we rely on model selection criteria .we derive a generalized cross - validation method for this setting that is used to select .this is discussed in section [ sec : gcv ] . 
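the following numpy sketch, under our own naming and conventions, illustrates the two ingredients just described: fitting one least squares hyperplane per subset of a partition, taking the maximum of the hyperplanes as the convex estimate, and refitting on the partition that the maximum itself induces.

```python
# max-of-hyperplanes estimate and induced-partition refit: an illustrative sketch
import numpy as np

def fit_hyperplanes(X, y, labels, K):
    """Least squares fit (intercept + slope) inside each subset 0..K-1."""
    n, d = X.shape
    H = np.zeros((K, d + 1))                      # row k = (a_k, b_k)
    A = np.hstack([np.ones((n, 1)), X])
    for k in range(K):
        idx = labels == k
        sol, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        H[k] = sol
    return H

def predict(X, H):
    """Convex estimate: maximum over the fitted hyperplanes."""
    vals = H[:, 0][None, :] + X @ H[:, 1:].T      # n x K matrix of a_k + b_k . x
    return vals.max(axis=1)

def induced_partition(X, H):
    """Assign each point to the hyperplane that attains the maximum there."""
    vals = H[:, 0][None, :] + X @ H[:, 1:].T
    return vals.argmax(axis=1)

def refit(X, y, labels, K, sweeps=3):
    """Alternate fitting and re-partitioning, as in the refit step."""
    H = fit_hyperplanes(X, y, labels, K)
    for _ in range(sweeps):
        labels = induced_partition(X, H)
        if np.bincount(labels, minlength=K).min() <= X.shape[1] + 1:
            break                                 # a subset became too small
        H = fit_hyperplanes(X, y, labels, K)
    return H, labels

# toy usage: two-piece convex data in d = 2
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.abs(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(400)
labels0 = (X[:, 0] > 0).astype(int)               # a crude starting partition
H, labels = refit(X, y, labels0, K=2)
print("in-sample mse:", np.mean((predict(X, H) - y) ** 2))
```

the loop in `refit` is the gauss-newton-style step discussed above; stopping when a subset becomes too small is a simple safeguard of ours, not the paper's exact rule.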
since cap shares many feature with existing adaptive partitioning algorithms, we are able to use many adaptive partitioning results to study cap practically and theoretically .this is in stark contrast to the least squares estimator . despite being introduced by ,it has not been implemented in many practical settings and has only very recently been shown to be consistent .we now introduce some notation required for convex adaptive partitioning .when presented with data , a partition can be defined over the covariate space ( denoted by , with ) or over the observation space ( denoted by , with ) .the observation partition is defined from the covariate partition , cap proposes and searches over a set of models , .a model is defined by : 1 ) the covariate partition , 2 ) the corresponding observation partition , , and 3 ) the hyperplanes fit to those partitions .the cap algorithm progressively refines the partition until each subset can not be split without one subset having fewer than a minimal number of observations , , where here is a log scaling factor , which acts to change the base of the log operator .this minimal number is chosen so that 1 ) there are enough observations to accurately fit a hyperplane , and 2 ) there is a lower bound on the growth rate for the number of observations in each subset and an upper bound on the number of subsets .this is used to show consistency .we briefly outline the cap algorithm below .details are given in the following subsections .[ [ convex - adaptive - partitioning - cap ] ] convex adaptive partitioning ( cap ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 1 .* initialize . *set ; place all observations into a single observation subset , ; ; this defines model .* split.*[item : split ] refine partition by splitting a subset .a. _ generate candidate splits ._ generate candidate model by 1 ) fixing a subset , 2 ) fixing a dimension , 3 ) dyadically dividing the data in subset and dimensions according to knot .this is done for knots , all dimensions and subsets .b. _ select split ._ choose the model from the candidates that minimizes global mean squared error on the training set and satisfies .set .3 . * refit .* use the partition induced by the hyperplanes to generate model .set if for every subset in , .4 . * stopping conditions . 
* if for every subset in , , stop fitting and proceed to step [ item : modelsize ] .otherwise , go to step [ item : split ] .[ item : modelsize ] * select model size .* each model creates an estimator , use generalized cross - validation on the estimators to select final model from .to split , we create a collection of candidate models by dyadically splitting a single subset .since the best way to do this is not apparent , we create models for every subset and search along every cardinal direction by splitting the data along that direction .we create model by 1 ) fixing subset , and 2 ) fixing dimension .let be the minimum value and be the maximum value of the covariates in this subset and dimension , be a set of evenly spaced knots that represent the proportion between and .use the weighted average to split and in dimension .set define new subset and covariate partitions , and where and for .fit hyperplanes in each of the subsets .the triplet of observation partition , covariate partition , , and set of hyperplanes defines the model .this is done for , and .after all models are generated , set .we note that any models where are discarded .if all models are discarded in one subset / dimension pair , we produce a model by splitting on the subset median in that dimension .we select the model that gives the smallest _ global _ error .let be the hyperplanes associated with and let be its estimator .we set the model to be the one that minimizes global mean squared error , to be the minimal estimator .we refit by using the partition induced by the hyperplanes .let be the hyperplanes associated with .refit the partitions by for .the covariate partition , is defined in a similar manner .fit hyperplanes in each of those subsets .let be the model generated by the partition .set if for all .cap has two tunable parameters , and . specifies the number of knots used when generating candidate models for a split .its value is tied to the smoothness of and after a certain value , usually 5 to 10 for most functions , higher values of offer little fitting gain .the parameter is used to specify a minimum subset size , . here transforms the base of the logarithm from into .we have found that ( implying base ) is a good choice for most problems .increases in either of these parameters increase the computational time .sensitivity to these parameters , both in terms of predictive error and computational time , is empirically examined in section [ sec : sensitivity ] .in this section , we give the computational complexity for the cap algorithm and conditions for consistency . since cap is similar to existing adaptive partitioning methods , we can leverage existing results to show consistency .computational complexity describes the number of bit operations a computer must do to perform a routine , such as cap .it is useful to determine small sample runtimes and how well routines will scale to larger problems .the computational complexity of the least squares estimator is unworkably high at flops to solve a problem with observations in dimensions .the worst case computational complexity of cap is much lower , at flops when implemented as in section [ sec : cap ] .the most demanding part of the cap algorithm is the linear regression ; each one has complexity . for iteration of the algorithm , linear regressions are fit .this is done for , where is bounded by . putting this together we obtain the above complexity . 
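the linear regressions counted in the complexity argument above are exactly the per-candidate fits in the following sketch of the split-selection step. it generates dyadic candidate splits of each subset along each cardinal direction at a few knots, scores every candidate by the global squared error of the resulting max-of-hyperplanes estimate, and discards candidates that violate a minimum subset size. the knot set and the minimum size are passed in as parameters, and all names are illustrative rather than taken from the reference implementation.

```python
# one split-selection pass, scored by global error: an illustrative sketch
import numpy as np

def fit_plane(X, y):
    A = np.hstack([np.ones((len(y), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                   # (intercept, slope vector)

def max_of_planes(X, planes):
    vals = np.column_stack([p[0] + X @ p[1:] for p in planes])
    return vals.max(axis=1)

def best_split(X, y, labels, n_min, knots=(0.25, 0.5, 0.75)):
    """Return (subset, dimension, threshold) of the admissible candidate split
    that most reduces global mean squared error, or None if none is admissible."""
    K = int(labels.max()) + 1
    best, best_mse = None, np.inf
    for k in range(K):
        idx = np.where(labels == k)[0]
        for j in range(X.shape[1]):
            lo, hi = X[idx, j].min(), X[idx, j].max()
            for w in knots:
                t = (1 - w) * lo + w * hi
                left = idx[X[idx, j] <= t]
                right = idx[X[idx, j] > t]
                if min(len(left), len(right)) < n_min:
                    continue                      # candidate violates n_min
                planes = [fit_plane(X[labels == m], y[labels == m])
                          for m in range(K) if m != k]
                planes += [fit_plane(X[left], y[left]),
                           fit_plane(X[right], y[right])]
                mse = np.mean((max_of_planes(X, planes) - y) ** 2)
                if mse < best_mse:
                    best, best_mse = (k, j, t), mse
    return best
```

once the best candidate is chosen, the selected subset is divided at the returned threshold, the hyperplanes are refit, and the induced-partition refit step is applied, as in steps 2 and 3 of the outline above.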
to demonstrate how much these factors matter in practice , we empirically compare cap , fast cap and lse on a small problem , where , and . the runtimes and mean absolute errors of each method are shown in figure [ fig : complexity ] .we now show consistency for the cap algorithm .consistency is shown in a similar manner to consistency for other adaptive partitioning models , like cart , treed linear models and other variants .we take a two - step approach , first showing consistency for the mean function and first derivatives of a more traditional treed linear model based on cap under the metric and then we use that to show consistency for the cap estimator itself . letting be the model for the cap estimate after observations ,define the discontinuous piecewise linear estimate based on , is the partition size , are the covariate partitions and are the hyperplanes associated with .likewise , let be the cap estimator based on , subset has an associated diameter , , where define the empirical covariate mean for subset as for define \\d_{nk}^{-1 } \left ( { \mathbf{x}}_i - \bar{{\mathbf{x}}}_k\right ) \end{array}\right ] , & g_k & = \sum_{i \in c_k } \gamma_i \gamma_i^t.\end{aligned}\]]note that whenever is nonsingular .let be i.i.d .random variables .we make the following assumptions : 1 . is compact and is lipschitz continuous and continuously differentiable on with lipschitz parameter .there is an such that $ ] is bounded on .3 . let be the smallest eigenvalue of and .then remains bounded away from 0 in probability as .the diameter of the partition in probability as .assumptions * a1 . * and * a2 . * place regularity conditions on and the noise distribution , respectively .assumption * a3 . * is a regularity condition on the covariate distribution to ensure the uniqueness of the linear estimates .assumption * a4 .* is a condition that can be included in the algorithm and checked along with the subset cardinality , .if is given , it can be computed directly , otherwise it can be approximated using . in some cases , such as when is strongly convex , * a4 .* will be satisfied without enforcement due to problem structure . to show consistency of under the metric, we first show consistency of and its derivatives under the metric in theorem [ thm : fk ] .this is very close to theorem 1 of for treed linear models , although we need to modify it to allow partitions with an arbitrarily large number of faces .[ thm : fk ] suppose that assumptions * a1 .* through * a4 .then , in probability as .the cap algorithm is similar to the support algorithm of , except the refitting step of cap allows partition subsets to be polyhedra with up to faces .theorem [ thm : fk ] is analogous to theorem 1 of ; to prove our theorem , we modify parts of the proof in that rely on a fixed number of polyhedral faces . as such , we first need to modify lemma 12.27 of .[ lem:12.27 ] suppose that * a2 .* holds and that there exists a where .then , for every compact set in and every and , to prove this lemma , we only need to lift the restriction on the number of faces of the polyhedron from being bounded by a fixed to .first , we note that implies that following the proof in , we note that for a fixed constant depending on assumption 6 . since , the conclusion holds . 
with lemma [ lem:12.27 ] ,the proof of theorem [ thm : fk ] follows directly from the arguments of .using the results from theorem [ thm : fk ] , extension to consistency for under the metric is fairly simple ; this is given in theorem [ thm : consistency ] .[ thm : consistency ] suppose that assumptions * a1 .* through * a4 .. then , in probability as .fix ; let be the diameter of . choose such that fix a net over such that at least one point of the net sits in for each be the number of points in the net and let be a point .then , terminal model produced by the cap algorithm often overfits the data and is computationally more intensive than necessary . in this section ,we derive a generalized cross - validation method to select the best model from all of those produced by cap , .we then propose an approximate algorithm , fast cap , that requires substantially less computation than the original algorithm .cross - validation is a method to assess the predictive performance of statistical models and is routinely used to choose tunable parameters . in this case , we would like to choose the cardinality of the partition , . as a fast approximation to leave - one - out cross - validation , we use generalized cross - validation . in a linear regressionsetting , is the diagonal element of the hat matrix , , is the estimator conditioned on all of the data minus element , and is the degrees of freedom . a given model is generated by a collection of linear models .a similar type approximation to leave - one - out cross - validation can be used to select the model size .the model is defined by , the partition , and the hyperplanes , which were generated by the partition .let be the collection of hyperplanes generated when observation is removed ; notice that if , only changes .let be the estimator for model with observation removed . using the derivation in equation ( [ eq : lineargcv ] ) , , in a slight abuse of notation, is the diagonal entry of the hat matrix for subset corresponding to element , and select , we find the that minimizes the right hand side of equation ( [ eq : gcv ] ) .although more computationally intensive than traditional generalized cross - validation , the computational complexity for cap generalized cross - validation is similar to that of the cap split selection step .the cap algorithm offers two main computational bottlenecks .first , it searches over all cardinal directions , and only cardinal directions , to produce candidate models .second , it keeps generating models until no subsets can be split without one having less than the minimum number of observations . in most cases ,the optimal number of components is much lower than the terminal number of components . to alleviate the first problem, we suggest using random projections as a basis for search . 
using ideas similar to compressivesensing , each projection for .then we search along the direction rather than .when we expect the true function to live in a lower dimensional space , as is the case with superfluous covariates , we can set .we solve the second problem by modifying the stopping rule .instead of fulling growing the tree until each subset has less than observations , we use generalized cross - validation .we grow the tree until the generalized cross - validation value has increased in two consecutive iterations or each subset has less than observations .as the generalized cross - validation error is usually concave in , this heuristic often offers a good fit at a fraction of the computational expense of the full cap algorithm .the fast cap algorithm has the potential to substantially reduce the factor by halting the model generation long before reaches .since every feasible partition is searched for splitting , the computational complexity grows as gets larger .the fast cap algorithm is summarized as follows .[ [ fast - convex - adaptive - partitioning - fast - cap ] ] fast convex adaptive partitioning ( fast cap ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 1 .* initialize . * as in cap .* split . * 1 ._ generate candidate splits ._ generate candidate model by 1 ) fixing a subset , 2 ) generating a random direction with , and 3 ) dyadically dividing the data as follows : * set , and * set + then new hyperplanes are fit to each of the new subsets .this is done for knots , dimensions and subsets .2 . _ select split . _ as in cap .3 . * refit . * as in cap .4 . * stopping conditions . *let be the generalized cross - validation error for model .stop if and . then select final model as in cap .in this section , we empirically analyze the performance of cap .there are no benchmark problems for multivariate convex regression , so we analyze the predictive performance , runtime , sensitivity to tunable parameters and rates of convergence on a set of synthetic problems .we then apply cap to value function approximation for pricing american basket options .we apply cap to two synthetic regression problems to demonstrate predictive performance and analyze sensitivity to tunable parameters .the first problem has a non - additive structure , high levels of covariate interaction and moderate noise , while the second has a simple univariate structure embedded in a higher dimensional space and low noise .low noise or noise free problems often occur when a highly complicated convex function needs to be approximated by a simpler one .[ [ problem-1 ] ] problem 1 + + + + + + + + + here .set where .the covariates are drawn from a 5 dimensional standard gaussian distribution , .[ [ problem-2 ] ] problem 2 + + + + + + + + + here .set where was randomly drawn from a dirichlet(1,,1 ) distribution , we set .the covariates are drawn from a 10 dimensional standard gaussian distribution , .we compared the performance of cap and fast cap to other regression methods on problems 1 and 2 .the only other convex regression method included was least squares regression ( lse ) ; it was implemented with the cvx convex optimization solver .the general methods included gaussian processes , a widely implemented bayesian nonparametric method , and two adaptive methods : tree regression with constant values in the leaves and multivariate adaptive regression splines ( mars ) .tree regression was run through the matlab function classregtree .mars was run through the matlab 
package areslab .gaussian processes were run with the matlab package gpml .parameters for cap and fast cap were set as follows .the log scale parameter set as and the number of knots was set as for both . in fast cap ,the number of random search directions was set to be .all methods were given a maximum runtime of 90 minutes , after which the results were discarded .methods were run on 10 random training sets and tested on the same testing set .average runtimes and predictive performance are given in table [ tab : synthetic ] .l | [email protected] | [email protected] | [email protected] | [email protected] | [email protected] | [email protected] | [email protected] + + method & & & & & & & + cap & * 1 * & * 5884 * & * 0 * & * 6827 * & * 0 * & * 2740 * & 0 & 1644 & * 0 * & * 0927 * & * 0 * & * 0629 * & * 0 * & * 0450 * + fast cap & 1 & 8661 & 0 & 7471 & 0 & 3197 & * 0 * & * 1526 * & 0 & 1356 & 0 & 0724 & 0 & 0566 + lse & 15 & 8340 & 9 & 5970 & 18 & 0701 & 9,862 & 4602 & & & + tree & 12 & 2794 & 9 & 8356 & 6 & 7606 & 5 & 3478 & 4 & 1230 & 2 & 9173 & 2 & 3152 + gp & 8 & 5056 & 13 & 5495 & 6 & 8472 & 3 & 7610 & 2 & 2928 & 1 & 2058 & + mars & 8 & 3517 & 8 & 0031 & 6 & 8813 & 6 & 2618 & 5 & 9809 & 5 & 8558 & 5 & 8234 + + method & & & & & & & + cap & 0 & 0159 & 0 & 0138 & 0 & 0110 & * 0 * & * 0018 * & 0 & 0012 & * 0 * & * 0007 * & * 0 * & * 0003 * + fast cap & 0 & 0159 & 0 & 0138 & 0 & 0090 & * 0 * & * 0018 * & * 0 * & * 0011 * & * 0 * & * 0007 * & * 0 * & * 0003 * + lse & 0 & 6286 & 0 & 2935 & 31 & 2426 & & & & + tree & 0 & 1372 & 0 & 1129 & 0 & 0928 & 0 & 0797 & 0 & 0670 & 0 & 0552 & 0 & 0495 + gp & * 0 * & * 0109 * & * 0 * & * 0063 * & * 0 * & * 0039 * & 0 & 0027 & 0 & 0047 & 0 & 0076 & + mars & 0 & 0205 & 0 & 0140 & 0 & 0120 & 0 & 0110 & 0 & 0105 & 0 & 0102 & 0 & 0100 + + + + method & & & & & & & + cap & 0 & 15 sec & 0 & 24 sec & 0 & 78 sec & 1 & 34 sec & 2 & 18 sec & 4 & 33 sec & 9 & 31 sec + fast cap & 0 & 04 sec & 0 & 07 sec & 0 & 15 sec & 0 & 30 sec & 0 & 57 sec & 1 & 14 sec & 2 & 06 sec + lse & 1 & 56 sec & 10 & 17 sec & 226 & 20 sec & 43 & 37 min & & & + tree & 0 & 06 sec & 0 & 02 sec & 0 & 04 sec & 0 & 09 sec & 0 & 19 sec & 0 & 49 sec & 1 & 15 sec + gp & 0 & 22 sec & 0 & 35 sec & 1 & 35 sec & 5 & 07 sec & 22 & 03 sec & 248 & 72 sec & + mars & 0 & 22 sec & 0 & 34 sec & 0 & 76 sec & 1 & 81 sec & 3 & 95 sec & 16 & 65 sec & 56 & 19 sec + + method & & & & & & & + cap & 0 & 05 sec & 0 & 25 sec & 2 & 15 sec & 6 & 35 sec & 10 & 06 sec & 21 & 06 sec & 46 & 50 sec + fast cap & 0 & 02 sec & 0 & 03 sec & 0 & 08 sec & 0 & 13 sec & 0 & 25 sec & 0 & 89 sec & 2 & 03 sec + lse & 1 & 86 sec & 15 & 13 sec & 339 & 16 sec & & & & + tree & 0 & 02 sec & 0 & 03 sec & 0 & 07 sec & 0 & 14 sec & 0 & 27 sec & 0 & 71 sec & 1 & 53 sec + gp & 0 & 15 sec & 0 & 34 sec & 1 & 46 sec & 4 & 93 sec & 23 & 13 sec & 264 & 77 sec & + mars & 0 & 72 sec & 0 & 48 sec & 1 & 38 sec & 3 & 43 sec & 8 & 01 sec & 33 & 29 sec & 98 & 75 sec + unsurprisingly , the non - convex regression methods did poorly compared to cap and fast cap , particularly in the higher noise setting .gaussian processes offered the best performance of that group , but their computational complexity scales like ; this computational times of more than 90 minutes for .more surprisingly , however , the lse did extremely poorly .this can be attributed to overfitting , particularly in the boundary regions ; this phenomenon can be seen in figure [ fig : capvlse ] as well . 
while the natural response to overfitting is to apply a regularization penalty to the hyperplane parameters , implementation in this setting is not straightforward .we have tried implementing penalties on the hyperplane coefficients , but tuning the parameters quickly became computationally infeasible due to runtime issues with the lse .although cap and fast cap had similar predictive performance , their runtimes often differed by an order of magnitude with the largest differences on the biggest problem sizes . based on this performance , we would suggest using fast cap on larger problems rather than the full cap algorithm .treed linear models are a popular method for regression and classification .they can be easily modified to produce a convex regression estimator by taking the maximum over all of the linear models .cap differs from existing treed linear models in how the partition is refined .first , subset splits are selected based on global reduction of error .second , the partition is refit after a split is made . to investigate the contributions of each step , we compare to treed linear models generated by : 1 ) local error reduction as an objective for split selection and no refitting , 2 ) global error reduction as an objective function for split selection and no refitting , and 3 ) local error reduction as an objective for split selection along with refitting . all estimators based on treed linear models are generated by taking the maximum over the set of linear models in the leaves .we wanted to determine which properties led to a low variance estimator with low predictive error . by low variance ,we mean that changes in the training set do not lead to large changes in predictive error . to do this , we compared the performance of these methods on problems 1 and 2 over 10 different training sets and a single testing set .all treed linear model parameters were the same as those for cap .we viewed a model with local subset split selection and no refitting as a baseline .we compared both the average squared predictive error and the variance of that error between training sets .percentages of average error and variance reduction are displayed in table [ tab : treedlinearmodel ] .average predictive error is displayed in figure [ fig : treedlinearmodel ] .l | [email protected] | [email protected] | [email protected] | [email protected] | [email protected] | [email protected] | [email protected] + + method & & & & & & & + refitting & 48 & 65% & 58 & 95% & 32 & 62% & 61 & 76% & 73 & 04% & 74 & 77% & 70 & 01 % + global selection & 24 & 67% & 34 & 85% & 21 & 32% & 23 & 46% & 29 & 40% & 30 & 48% & 19 & 23% + cap & 68 & 25% & 69 & 81% & 74 & 74% & 76 & 97% & 80 & 18% & 81 & 40% & 81 & 04% + + method & & & & & & & + refitting & 0 & 0% & 0 & 0% & -17 & 73% & 71 & 48% & 78 & 36% & 79 & 67% & 77 & 05% + global selection & 0 & 0% & 0 & 0% & -4 & 36% & 17 & 69% & 15 & 22% & 25 & 04% & 9 & 74% + cap & 0 & 0% & 0 & 0% & -17 & 10% & 71 & 70% & 75 & 60% & 81 & 66% & 86 & 21% + + + + method & & & & & & & + refitting & 19 & 16% & 65 & 00% & -243 & 33% & -4 & 03% & -163 & 40% & 64 & 86% & -18 & 88% + global selection & 38 & 41% & 68 & 78% & -17 & 84% & 61 & 34% & 24 & 51% & 91 & 44% & 75 & 97% + cap & 96 & 89% & 92 & 72% & 68 & 74% & 97 & 05% & 74 & 85% & 95 & 29% & 63 & 17% + + method & & & & & & & + refitting & 0 & 0% & 0 & 0% & -61 & 34% & 44 & 75% & 94 & 16% & 73 & 93% & 75 & 42% + global selection & 0 & 0% & 0 & 0% & -19 & 84% & -223 & 58% & -209 & 92% & -8 & 29% & -7 & 17% + cap & 0 & 0% & 0 & 0% 
& -76 & 78% & 52 & 44% & 89 & 30% & 30 & 18% & 15 & 16% + table [ tab : treedlinearmodel ] shows that global split selection and refitting are both beneficial , but in different ways .refitting dramatically reduces predictive error , but can variance to the estimator in noisy settings .global split selection modestly reduces predictive error but can reduce variance in noisy settings , like problem 1 .the combination of the two produces cap , which has both low variance and high predictive accuracy . in this subsection, we empirically examine the effects of the two tunable parameters , the log factor , , and the number of knots , .the log factor controls the minimal number of elements in each subset by setting , and hence it controls the number of subsets , , at least for large enough .increasing allows the potential accuracy of the estimator to increase , but at the cost of greater computational time due to the increase in possible values for and the larger number of possibly admissible sets generated in the splitting step of cap .we compared values for ranging from to on problems 1 and 2 with sample sizes of and .results are displayed in figure [ fig : dcompare ] .note that error may not be strictly decreasing with because different subsets are proposed under each value . additionally , fast cap is a randomized algorithm so variance in error rate and runtime is to be expected .empirically , once , there was little substantive error reduction in the models , but the runtime increased as for the full cap algorithm .since controls the maximum partition size , , and a linear regression is fit times , the expected increase in the runtime should only be .we believe that the extra empirical growth comes from an increased number of feasible candidate splits . in the fast cap algorithm , which terminates after generalized cross - validation gains cease to be made , we see runtimes leveling off with higher values of . based on these results , we believe that setting offers a good balance between fit and computational expense .the number of knots , , determines how many possible subsets will be examined during the splitting step . like , an increase in a better fit at the expense of increased computation .we compared values for ranging from to on problems 1 and 2 with sample sizes of and .results are displayed in figure [ fig : lcompare ] .the changes in fit and runtime are less dramatic with than they are with .after , the predictive error rates almost completely stabilized .runtime increased as as expected . due to the minimal increase in computation, we feel that is a good choice for most settings .although theoretical rates of convergence are not yet available for cap , we are able to empirically examine them . rates of convergence for multivariate convex regression have only been studied in two articles of which we are aware .first , studied rates of convergence for an estimator that is created by first smoothing the data , then evaluating the smoothed data over an -net , and finally convexifying the net of smoothed data by taking the convex hull .they showed that the convexify step preserved the rates of the smoothing step . for most smoothing algorithms ,these are minimax nonparametric rates , with respect to the empirical norm . 
in the second article, showed adaptive rates for a bayesian model that places a prior over the set of all piecewise linear functions .specifically , they showed that if the true mean function actually maps a -dimensional linear subspace of to , that is their model achieves rates of with respect to the empirical norm .empirically , we see these types of adaptive rates with cap . .slopes for linear models fit to vs. in figure [ fig : rates ] .expected slopes are given when : 1 ) rates are with respect to full dimensionality , , and 2 ) rates are with respect to dimensionality of linear subspace , .empirical slopes are fit to mean squared error generated by cap and fast cap .note that all empirical slopes are closest to those for linear subspace rates rather than those for full dimensionality rates . [ cols="<,>,<,>,<",options="header " , ] results are displayed in table [ tab : options ] .we found that cap and fast cap gave state of the art performance without the difficulties associated with linear functions , such as choosing basis functions and regularization parameters .we observed a decline in the performance of least squares as the number of assets grew due to overfitting .ridge regularization greatly improved the least squares performance as the number of assets grew .tree regression did poorly in all settings , likely due to overfitting in the presence of the non - symmetric error distribution generated by the geometric brownian motion .these results suggest that cap is robust even in less than ideal conditions , such as when data have heteroscedastic , non - symmetric error distributions .again , we noticed that while the performances of cap and fast cap were comparable , the runtimes were about an order of magnitude different . on the larger problems , runtimes for fast cap were similar to those for unregularized least squares .this is likely because the number of covariates in the least squares regression grew like , while all linear regressions in cap only had covariates .in this article , we presented convex adaptive partitioning ( cap ) , a computationally efficient , theoretically sound and empirically robust method for regression subject to a convexity constraint .cap is the first convex regression method to scale to large problems , both in terms of dimensions and number of observations . as such, we believe that it can allow the study of problems that were once thought to be computationally intractable .these include econometrics problems , like estimating consumer preference or production functions in multiple dimensions , approximating complex constraint functions for convex optimization , or creating convex value - to - go functions or response surfaces that can be easily searched in stochastic optimization .our preliminary results are encouraging , but some important questions remain unanswered . 1 .what are the convergence rates for cap ?are they adaptive , as they empirically seem to be ? 2 .the current splitting proposal is effective but cumbersome .are there less computationally intensive ways to refine the current partition ? 3 .the modified stopping in fast cap provides substantially reduced runtimes with little performance degradation compared to cap .can this rule or a similarly efficient one be theoretically justified ?allon , g. , beenstock , m. , hackman , s. , passy , u. shapiro , a. 2007 , ` nonparametric estimation of concave production technologies by entropic methods ' , _ journal of applied econometrics _ * 22*(4 ) , 795816 .chang , i .-s . 
, chien , l .- c . ,hsiung , c. a. , wen , c .- c .wu , y .- j .2007 , ` shape restricted regression with random bernstein polynomials ' , _ lecture notes - monograph series : complex datasets and inverse problems : tomography , networks and beyond _ * 54 * , 187202 .dobra , a. gehrke , j. 2002 , secret : a scalable linear regression tree algorithm , _ in _ ` proceedings of the eighth acm sigkdd international conference on knowledge discovery and data mining ' , acm , pp .481487 .fraser , d. a. s. massam , h. 1989 , ` a mixed primal - dual bases algorithm for regression under inequality constraints .application to concave regression ' , _ scandinavian journal of statistics _ * 16*(1 ) , 6574 .henderson , d. j. parmeter , c. f. 2009 , imposing economic constraints in nonparametric regression : survey , implementation and extension , _ in _q. li j. s. racine , eds , ` nonparametric econometric methods ( advances in econometrics ) ' , vol . 25 , emerald publishing group limited , pp .433469 .kim , j. , lee , j. , vandenberghe , l. yang , c. 2004 , techniques for improving the accuracy of geometric - programming based analog circuit design optimization , _ in _ ` proceedings of the ieee international conference on computer aided design ' , pp .863870 .lim , e. 2010 , response surface computation via simulation in the presence of convexity constraints , _ in _b. johansson , s. jain , j. montoya - torres , j. hugan e. ycesan , eds , ` proceedings of the 2010 winter simulation conference ' , pp .12461254 .meyer , m. c. , hackstadt , a. j. hoeting , j. a. 2011 , ` bayesian estimation and inference for generalised partial linear models using shape - restricted splines ' , _ journal of nonparametric statistics _p. to appear .roy , s. , chen , w. , chen , c. c .- p .hu , y. h. 2007 , ` numerically convex forms and their application in gate sizing ' , _ ieee transactions on computer - aided design of integrated circuits and systems _ * 26*(9 ) , 16371647 .tsitsiklis , j. n. van roy , b. 1999 , ` optimal stopping of markov processes : hilbert space theory , approximation algorithms , and an application to pricing high - dimensional financial derivatives ' , _ ieee transactions on automatic control _ * 44*(10 ) , 18401851 .
we propose a new , nonparametric method for multivariate regression subject to convexity or concavity constraints on the response function . convexity constraints are common in economics , statistics , operations research , financial engineering and optimization , but there is currently no multivariate method that is computationally feasible for more than a few hundred observations . we introduce convex adaptive partitioning ( cap ) , which creates a globally convex regression model from locally linear estimates fit on adaptively selected covariate partitions . cap is computationally efficient , in stark contrast to current methods . the most popular method , the least squares estimator , has a computational complexity of . we show that cap has a computational complexity of and also give consistency results . cap is applied to value function approximation for pricing american basket options with a large number of underlying assets . regression , shape constraint , convex regression , treed linear model , adaptive partitioning
the wavelet analysis has a great many different applications in signal and image processing ( see , ) , in physics and astronomy ( see , , ) .it is also used for developing efficient numerical algorithms for solving differential equations , ; however , mother wavelets are usually not associated with the solutions of differential equations under consideration .there is also an analytic approach to the problems of wave propagation proposed by kaiser , where the technique of wavelet analysis is developed for the decomposition of solutions of the wave equation in terms of localized solutions , which are called physical wavelets .they are constructed by means of a special technique of analytic continuation of fundamental solutions in complex space - time and can be split into two parts : an advanced fundamental solution and a retarded one .the physical wavelet as a localized solution of the wave equation has also been given in .applications of physical wavelets were discussed in - . in the present paper , we treat a new wavelet , which is at the same time a localized solution of the homogeneous wave equation in two or more dimensions .this solution has been previously found and discussed in , and generalized in .it was named the gaussian wave packet .we study its properties from two points of view .first , the solution can be taken as a mother wavelet for continuous wavelet analysis if time is a parameter and can be used in signal processing without being connected with any differential equation .secondly , this solution should be regarded as a physical wavelet , i.e. , it is an analytic continuation to the complex space - time of the sum of advanced and retarded parts of the field of a point source moving at a speed of wave propagation along a straight line and emitting a pulse that is localized in time .it is natural to decompose nonstationary wave fields in terms of these solutions , using the techniques of wavelet analysis .the aim of the paper is a detailed investigation of wavelet properties of the gaussian wave packet for a fixed time and its properties as a solution of the wave equation in view of its further application to problems of wave propagation .for example , the decomposition of the solution of the initial value problem for the wave equation in terms of wavelets has been proposed by us in . in section [ sec - wavan ] ,we give a brief review of the main facts of continuous wavelet analysis in one and two dimensions . in section [ sec - twow ] , we show that the gaussian wave packet for a fixed time can be regarded as a wavelet , give some estimates of it , and present its fourier transform . we show that both the wavelet and its fourier transform have an exponential decay at infinity .the wavelet has not only zero mean but all zero moments as well . in section [ sec - simplas ], we discuss the asymptotic behavior of the gaussian wave packet as some of the free parameters become large .we compare the packet with the nonstationary gaussian beam , i.e. , with the solution of the wave equation localized near the axis .we give the gaussian asymptotic of it reducing it to the morlet well - known wavelet . 
in section [ sec - uncert ], we discuss the results of numerical calculations of the centers and widths of the packet in both the space and spatial frequency domains .we specify how fast these characteristics tend to asymptotic ones , with respect to the morlet wavelet .we investigate the heisenberg uncertainty relation for this wavelet , depending on the parameters and check how far from the saturation it is .we also obtain results for the nonasymptotic case where the wavelet corresponds to the solution that describes the propagation of the wave packet of one oscillation .this case may find applications in optics .we specify when the wavelet is directional and calculate its scale and angular resolving powers .section [ sec - manydim ] gives a generalization of the above results to the case of an arbitrary number of spatial dimensions . in section [ sec - pulssour ], we establish an analogy between the new wavelet and physical wavelets of kaiser .we show that it may split into incoming and outgoing parts , each solving a nonhomogeneous wave equation . as a source ,we take new one - dimensional time - dependent wavelets , moving in the complex space - time at a speed of light .wavelet analysis is a method for analyzing local spectral properties of functions ( for example , see - ) .wavelet analysis also allows one to represent any function of finite energy as the superposition of a family of functions called wavelets derived from one function called a mother wavelet by shifting and scaling its argument in the one - dimensional case and also by rotating it in the case of several spatial dimensions . by analogy ,the fourier transform represents a signal as the superposition of oscillating exponents derived from one exponent by changing its frequency .let us give a brief review of some basic facts concerning the wavelet analysis of functions dependent on one variable ( for more detail , see - , ) .let a function have a zero mean , and let it decrease as tends to infinity so fast that .it must oscillate to be nonzero and to have the zero mean .we call such a function a mother wavelet because we derive a two - parametric family of functions from it , using two operations that shift the argument by and scale it by : thus any function from this family again has the shape of , but shifted and dilated . by means of these operations, we can place at any point of the axis and change its size to any size by the parameter .then we define the wavelet transform of any signal by the formula where the bar over denotes complex conjugation .one of the best - known mother wavelets is the morlet wavelet , which is .\ ] ] it is the difference of a gaussian function , filled with oscillations , and a term that provides the zero mean of and that is negligible if it is clear from formulas ( [ w - family-1d ] ) - ( [ morlet-1d ] ) that , defined by ( [ transform-1d ] ) , provides information about the frequency content of the signal in the vicinity of the size of the point , and plays the role of a spatial frequency .so we may regard the wavelet transform as a window transform , the size of a window changing for different frequencies . 
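to make the definitions above concrete , the following short numerical sketch ( added here for illustration ; it is not code from the paper ) evaluates a discretized version of the one - dimensional wavelet transform with the morlet mother wavelet . the 1/a normalization and the value of the frequency parameter kappa are choices of this sketch , since the exact constants are not reproduced in the extracted text .

```python
import numpy as np

def morlet(x, kappa=5.0):
    """morlet mother wavelet: a gaussian filled with oscillations minus a
    small correction term that enforces the zero mean (negligible for kappa >~ 5)."""
    return np.exp(-0.5 * x**2) * (np.exp(1j * kappa * x) - np.exp(-0.5 * kappa**2))

def cwt(f, x, scales, shifts, kappa=5.0):
    """discretized wavelet transform
    W(a, b) = (1/a) * integral f(x) conj(psi((x - b)/a)) dx,
    approximated by the trapezoidal rule on the sampling grid x."""
    W = np.empty((len(scales), len(shifts)), dtype=complex)
    for i, a in enumerate(scales):
        for j, b in enumerate(shifts):
            W[i, j] = np.trapz(f * np.conj(morlet((x - b) / a, kappa)), x) / a
    return W

# usage: analyze a chirp-like test signal
x = np.linspace(-20, 20, 2000)
f = np.cos(2 * np.pi * 0.3 * x**2 / 20.0) * np.exp(-(x / 12.0)**2)
scales = np.geomspace(0.2, 5.0, 40)
shifts = np.linspace(-15, 15, 120)
W = cwt(f, x, scales, shifts)
print(W.shape)  # (40, 120): |W| is large where the local frequency matches kappa / a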
changing the size of the windowmakes the wavelet transform more precise as compared to the window fourier ( or gabor ) transform .we can also reconstruct the signal from its wavelet transform , or , in other words , represent the signal as a superposition of elementary signals .moreover , the mother wavelet used for the reconstruction of may differ from the one used for the analysis .the reconstruction formula looks like this where is another mother wavelet , and the constant reads where the symbol denotes the fourier transform .if we use the same wavelet for the transform and reconstruction , we should put and get in this formula to calculate the coefficient wavelet analysis can also be defined for the case of more than one dimension ( see , , ) .a mother wavelet in the case of two dimensions is a function that has zero mean .the morlet wavelet in two dimensions reads .\ ] ] we define a family of wavelets from the mother wavelet , introducing rotations as well as dilations and the vector translations as follows : the wavelet transform is defined as then the reconstruction formula takes the form where and the fourier transform is consider a family of functions of two spatial variables containing arbitrary real parameters and positive parameters , : where is the bessel modified function ( macdonald s function ) . the branch of the square root in formula ( [ def - s ] ) with positive real part is taken .the choice of the branch of the square root in the denominator of ( [ packet3 ] ) is not important , for the sake of definiteness we assume that it has positive real part .we intend to show that each of the functions from the family ( [ packet3 ] ) is suited for the role of a mother wavelet with good properties .the same is valid for their derivatives of any order with respect to spatial coordinates and time .function ( [ packet3 ] ) has appeared in in connection with the linear wave equation if we regard the parameter as time , formula ( [ packet3 ] ) gives an exact solution of ( [ wave ] ) , which is well localized if ( see below ) . if formula ( [ packet3 ] ) yields if in addition it leads to the exact solution of ( [ wave ] ) which was first reported in and discussed in detail in . in this sectionwe view as a two - dimensional mother wavelet with being a parameter , ignoring that it is a solution of ( [ wave ] ) . according to , , this is possible ,provided that the following conditions are satisfied : i.e. , and it has zero mean , i.e. , to prove ( [ ll ] ) we note that formulas ( [ def - s ] ) , ( [ theta ] ) imply that and thus we get .therefore , ( [ packet3 ] ) has neither singularities nor branch points for real and .it is a smooth function of and its derivatives of any order with respect to are also smooth functions .we also obtain } .\label{s2}\ ] ] if and are large to an extent that the third term in ( [ s2 ] ) is positive , then hence for large and the bessel modified function can be replaced by its asymptotics , resulting in .\label{as - pack}\ ] ] noting that because we conclude that has an exponential falloff and to check that the condition ( [ mean ] ) is satisfied , we calculate the fourier transform of the calculations yield ( see appendix 1 ) ^{-1 } \ , \nonumber\ ] ] , \label{four3}\ ] ] where .the formula ( [ four3 ] ) shows that owing the term in the exponent containing the denominator .therefore , the function has zero mean ( [ mean ] ) . 
conditions ( [ ll ] ) , ( [ mean ] ) enable to be a mother wavelet .moreover , the following relation holds : } { \partial^l k_\mathrm{x } \ ; \partial^m k_\mathrm{y } } \right|_{{k}=0 } = 0 \label{admiss - cond - moments}\ ] ] for any integer nonnegative , , , and .this condition , the smoothness and the exponential falloff of mean that any derivative of may be viewed as a mother wavelet and that all the moments of the wavelet and its derivatives vanish , i.e. , this property indicates that such wavelets could be useful in singular fields - .the wavelet has simple asymptotics for large values of , which is discussed in the next section .below we present calculations of the wavelet ( [ packet3 ] ) and its fourier transform ( [ four3 ] ) for moderate values of when no asymptotics can be applied .it should be mentioned that the wavelet ( [ packet3 ] ) represents a wave of one oscillation when ( see figures [ pic - gaus1 ] , [ pic - gaus2 ] ) .this case is applicable in optics in the case of propagation of short pulses .in this section , we study asymptotic properties of the gaussian wave packet when the parameter is large .first we note that we can replace the mcdonald function in ( [ packet3 ] ) by its exponential asymptotics ( see ) and then we get ( [ as - pack ] ) for any if . to prove this , we show that , then , which provides that and the exponential asymptotics is suitable .we intend to show first that the modulus of the exponent in ( [ as - pack ] ) has a maximum at the point .let then the relation yields ^ 2 = \mathrm{re } ( s^2 ) + \sqrt { [ \mathrm{re } ( s^2)]^2 + [ \mathrm{im } ( s^2)]^2}. \label{re - s2}\ ] ] hence ^ 2 \ge \mathrm{re}(s^2) ] and the cross section of the beam attains its minimum when .formula ( [ morlet2 ] ) gives the gaussian beam ( [ beam ] ) multiplied by the cutoff function ] which is as follows : taking into account we obtain ^{-1 } \nonumber\ ] ] .\label{gen - four-2}\ ] ] in the multidimensional case , the fourier transform of a gaussian beam must be re - calculated .we put from ( [ packetn3 ] ) for simplicity . instead of the formula ( [ iy - ix ] )will contain the product where is the number of coordinates that are transverse to the direction of propagation .we obtain a formula for by replacing by and by in ( [ ix ] ) .the analog of ( [ four - beam - interm ] ) will contain the sum instead of the term which does not yield any corrections and also the factor instead of such a factor for .therefore , ( [ gen - four-2 ] ) is modified as follows : ^{-1 } \nonumber\ ] ] , \label{gen - four - many}\ ] ] where perel m v and sidorenko m s 2003 wavelet analysis in solving the cauchy problem for the wave equation in three - dimensional space in : _ mathematical and numerical aspects of wave propagation : waves 2003 _ ed g c cohen , e heikkola et al ( springer - verlag ) pp 79498 .
an exact solution of the homogeneous wave equation , which was found previously , is treated from the point of view of continuous wavelet analysis ( cwa ) . if time is a fixed parameter , the solution represents a new multidimensional mother wavelet for the cwa . both the wavelet and its fourier transform are given by explicit formulas and are exponentially localized . the wavelet is directional . the widths of the wavelet and the uncertainty relation are investigated numerically . if a certain parameter is large , the wavelet behaves asymptotically as the morlet wavelet . the solution is a new physical wavelet in the sense of kaiser 's definition ; it may be interpreted as a sum of two parts : an advanced and a retarded part , both being fields of a pulsed point source moving at a speed of wave propagation along a straight line in complex space - time . department of mathematical physics , physics faculty , st.petersburg university , ulyanovskaya 1 - 1 , petrodvorets , st.petersburg , 198904 , russia . ` mailto : perel.phys.spbu.ru , m.sidorenko.spb.edu ` _ keywords _ : physical wavelet , acoustic wavelet , localized wave , pulse , wave equation , mother wavelet , continuous wavelet analysis
in regression and classification , an omnipresent challenge is the correct prediction in the presence of a huge amount of variables based on a small number of observations , and for any regularized method , one typically expects the performance to increase with increasing observations - to - variables ration . while this is true in the regions and , some estimators exhibit a peaking behavior for , leading to particularly low performance .as documented in the literature , this affects all methods that use the ( moore - penrose ) inverse of the sample covariance matrix ( see section [ sec : cov ] for more details ) .this leads e.g. to the peculiar effect that for linear discriminant analysis , the performance improves in the case if a set of uninformative variables is added to the model . in this note ,i show that this peaking phenomenon can also occur in scenarios where the moore - penrose inverse is not directly used for computing the model , but in cases where least - squares estimates are used for model selection .one particularly popular method is the lasso and its current implementation in the software . as illustrated in section [ sec : lasso ] , its parameterization of the penalty term in terms of a ration of the -norm of the lasso solution and the least - squares solution leads to problems when using cross - validation for model selection .i present a solution in terms of a normalized penalty term .for a -dimensional linear regression model the task is to estimate based on observations . as usual ,the centered and scaled observations are pooled into and . in this note ,i study the performance of the lasso for a fixed dimensionality and for a varying number of observations .common sense tells us that the test error is approximately a decreasing function of the observations - to - variables ratio .however , in several empirical studies , i observe particularly poor results for the lasso in the transition case , leading to a prominent peak in the test error curve at . in the remainder of this section ,i illustrate this unexpected behavior on a synthetic data set .i would like to stress that the peaking behavior is not due to particular choices in the simulation setup , but only depends on the ratio .i generate observations , where is drawn from a multivariate normal distribution with no collinearity . out of the true regression coefficients , a random subset of size non - zero and drawn from a univariate distribution on $ ] .the error term is normally distributed with variance such that the signal - to - noise - ratio is equal to . for the simulation , i sub - sample training sets of sizes .the sub - sampling is repeated times . on the training set of size , the optimal amount of penalizationis chosen via -fold cross - validation .the lasso solution is then computed on the whole training set of size , and the performance is evaluated by computing the mean squared error on an additional test set of size .as defined in equation ( [ eq : s]).,width=529,height=264 ] i use the ` cv.lars ` function of the ` r ` package ` lars ` version to perform the experiments .the mean test error over the runs are displayed in the left panel of figure [ fig : peak_lasso ] .as expected , the test error decreases with the number of observations . 
for , there is a striking peak in the test error ( marked by the letter x ) , and the performance is much worse compared to the seemingly more complex scenario of .we also observe the peaking behavior in the case where in the cross - validation split ( marked by the letter o ) .the right panel of figure [ fig : peak_lasso ] displays the cross - validated penalty term of the lasso as a function of .note that in the ` cv.lars ` function , the amount of penalization is not parameterized by but by the more convenient quantity \,.\end{aligned}\ ] ] values of close to correspond to a high value of , and hence to a large amount of penalization .the right panel of figure [ fig : peak_lasso ] shows that the peaking behavior also occurs for the amount of penalization , measured by .interestingly , the peak does not occur for , but in the case where the number of observations equals the number of variables in the cross - validation loops .this peculiar behavior is explained in the two following sections , and i also present a normalization procedure that solves this problem .it has been reported in the literature that the pseudo - inverse of the covariance matrix is a particularly bad estimate for the true precision matrix in the case .the rationale behind this effect is as follows .the moore - penrose - inverse of the empirical covariance matrix is in particular , in the small sample case , the smallest eigenvalues of the moore - penrose inverse are set to .this corresponds to cutting off directions with high frequency .while this introduces an additional bias , it tends to avoid the huge amount of variance that is due to the inversion of small but non - zero eigenvalues . in the transition case , all eigenvalues are ( with some of them very small ) andthe mse is most prominent in this situation .the striking peaking behavior for is illustrated in e.g. . as a consequence , any statistical method that uses the pseudo - inverse ofthe covariance suffers from the peaking phenomenon .-norm of the least squares estimate as a function of the number of observations.,width=264,height=264 ] consequently , the peaking behavior also occurs in ordinary least squares regression , as it uses the pseudo - inverse , this is illustrated in figure [ fig : peak_norm ] . on the training data of size , i compute the -norm of least squares estimate .the figure displays the mean norm over all runs . for ,the norm is particularly high .note furthermore that except for , the curve is rather smooth , and small changes in the number of observations only lead to small changes in the -norm of the estimate .this observation is the key to understanding the peaking behavior of the lasso . while for the estimation of the lasso coefficients itself , the pseudo - inverse of the covariance matrix does not occur , it is used for model selection , via the regularization parameter defined in equation ( [ eq : s ] ) .i elaborate on this in the next section .let me denote by the number of observations in the cross - validation splits , and by the optimal parameter chosen via cross - validation . as ,one expects the mse - optimal coefficients computed on a set of size and the mse - optimal coefficients based on a set of size to be similar , i.e. now , if , then , in each of the cross - validation splits , the number of observations equals the number of dimensions . 
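the blow - up of the pseudo - inverse least squares estimate near the transition point , which drives the argument in this section , is easy to reproduce numerically . the sketch below is an assumption of this edit ( the note itself uses the ` r ` package ` lars ` ) and only tracks the mean l1 - norm of the moore - penrose fit as the number of observations crosses the number of variables .

```python
import numpy as np

rng = np.random.default_rng(0)
p = 50                       # number of variables (fixed, as in the note's setup)
beta = np.zeros(p)
beta[rng.choice(p, size=10, replace=False)] = rng.uniform(-1, 1, size=10)

def mean_l1_norm_of_ols(n, runs=50, noise=0.5):
    """average l1-norm of the moore-penrose least-squares estimate
    beta_hat = pinv(X) @ y over several random designs of size n x p."""
    norms = []
    for _ in range(runs):
        X = rng.standard_normal((n, p))
        y = X @ beta + noise * rng.standard_normal(n)
        beta_hat = np.linalg.pinv(X) @ y
        norms.append(np.abs(beta_hat).sum())
    return np.mean(norms)

for n in [20, 35, 45, 50, 55, 70, 100, 200]:
    print(f"n = {n:4d}   mean ||beta_ols||_1 = {mean_l1_norm_of_ols(n):8.2f}")
# the printed norm is by far the largest at n = p = 50, the transition point;
# this norm is exactly the quantity that enters the lars penalty parametrization
```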
as the least squares estimate is prone to the peaking behavior ( recall figure [ fig : peak_norm ] ) , we observe this implies that even though the -norms of the regression coefficients almost the same , their corresponding values of differ drastically . to put it the other way around :the optimal found on the cross - validation splits ( where ) is way too small , and it dramatically overestimates the amount of penalization .this explains the high test error in the case that is indicated by the letter o in figure [ fig : peak_lasso ] . for , the same argument applies .the optimal on the cross - validation splits ( where ) underestimates the amount of complexity in the case , which leads to the peak indicated by the letter x in figure [ fig : peak_lasso ] . to illustrate that the peaking problem is indeed due to the parametrization ( [ eq : s ] ), i normalize the scaling parameter in the following way .let me denote by the average over all different -norms of the least squares estimates obtained on the cross - validation splits .furthermore , is the -norm of the least squares estimates on the complete training data of size .the normalized regularization parameter is note that the function ` lars ` returns the least squares solution , hence there are no additional computational costs . to illustrate the effectiveness of the normalization ,i re - run the simulation experiments with cross - validation based on the normalized penalty parameter ( [ eq : snormal ] ) .this function - called ` mylars ` is implemented in the ` r`-package ` parcor ` version 0.1 .the results together with the results for the un - normalized parameter [ eq : s ] are displayed in figure [ fig : peak_lasso2 ] .( black solid line ) and ( blue jagged line ) as defined in equation ( [ eq : s ] ) and ( [ eq : snormal ] ) respectively.,width=529,height=264 ]the peaking phenomenon is well - documented in the literature , and it effects every estimator that uses the pseudo - inverse of the sample covariance matrix . as i illustrate in this note , this defect in the transition point can also occur in more subtle ways .for the lasso , the particular parameterization of the penalty term uses least - squares estimates , and it leads to difficulties in model selection .one can expect similar problems if one e.g. measures the fit of a model in terms of the total variance that it explains , and if the total variance is estimated using least squares . in this case , a normalization as proposed above is advisable .i observed the peaking phenomenon during the preparation of a paper with juliane schfer and anne - laure boulesteix on regularized estimation of gaussian graphical models .together with lukas meier , the three of us discussed the source of the peaking phenomenon in great detail .my colleagues ryota tomioka , gilles blanchard and benjamin blankertz provided additional material to the discussion and pointed to relevant literature .
i briefly report on some unexpected results that i obtained when optimizing the model parameters of the lasso . in simulations with varying observations - to - variables ratio , i typically observe a strong peak in the test error curve at the transition point . this peaking phenomenon is well - documented in scenarios that involve the inversion of the sample covariance matrix , and as i illustrate in this note , it is also the source of the peak for the lasso . the key problem is the parametrization of the lasso penalty as used e.g. in the current ` r ` package ` lars ` , and i present a solution in terms of a normalized lasso parameter .
the theory of open quantum dynamics has attracted significant interest recently due to the fast development of new experimental skills to study , and even design , the interaction between a quantum system and its environment . in many applications ,the dynamics of an open system interacting with its reservoir can be described by a quantum markov process .specifically , let us consider a finite - dimensional quantum system described in a hilbert space the state of the system is represented by a _ density operator _ on with and .density operators form a convex set with one - dimensional projectors corresponding to extreme points ( _ pure states _ ) .we denote by the set of linear operators on , with denoting the real subspace of hermitian operators . throughout the paperwe will use to denote the adjoint , for the complex conjugate , and =xy - yx, ] divided in blocks accordingly to some orthogonal hilbert space decomposition then is invariant for the dynamics if and only if : +l_s\rho_s l_s^\dag -\tfrac{1}{2}\{l_s^\dag l_s,\rho_s\}+l_p\rho_r l_p^\dag\nonumber\\ & -\tfrac{1}{2}\{l_q^\dag l_q,\rho_s\ } \label{conds}\\ 0=&-i(h_p\rho_r-\rho_s h_p)+l_s\rho_s l_q^\dag-\tfrac{1}{2}\rho_s(l_s^\dag l_p+l_q^\dag l_r ) \nonumber\\ & + l_p\rho_r l_r^\dag -\tfrac{1}{2 } ( l_s^\dag l_p + l_q^\dag l_r)\rho_r\label{condp}\\ 0=&-i[h_r,\rho_r]+l_r\rho_r l_r^\dag -\tfrac{1}{2}\{l_r^\dag l_r,\rho_r\}+l_q\rho_s l_q^\dag\nonumber\\ & -\tfrac{1}{2}\{l_p^\dag l_p,\rho_r\ } \label{condr}\end{aligned}\ ] ] by direct computation of the generator one finds its , and -blocks to be the l.h.s .of - , respectively .a given state is stationary if and only if and hence if and only if its blocks are all zero . we know from that for to be invariant, its support must be an invariant subspace , and using the constructive procedure used in the proof of theorem [ t - v - feedback ] , we can construct an block - upper - triangular that stabilizes the subspace .there are many possible choices to do that , e.g. \;\ ] ] with blocks with .therefore , we can focus on the dynamics restricted to the invariant support , and restrict our attention here to full - rank states with . to develop a constructive procedure to build a stabilizing pair with a simple structurewe make a series of assumptions and design choices : _ assumption 1 . _ _ the _ spectrum _ of is non - degenerate ( generic case ) . _ a state with non - degerate spectrum can be chosen _ arbitrarily close to any state_. 
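before proceeding with the construction , it is useful to note that invariance conditions of this kind can be checked numerically by evaluating the full generator on a candidate state . the sketch below is an illustration added here ( the hamiltonian and noise operator are placeholders , not the matrices constructed in the paper ) : it computes the lindblad - gks generator -i[h,rho] + l rho l^dag - (1/2){l^dag l, rho} and tests whether it vanishes on a given density matrix .

```python
import numpy as np

def lindblad_generator(rho, H, L):
    """GKS-Lindblad generator with a single noise operator L:
    L(rho) = -i[H, rho] + L rho L^dag - (1/2){L^dag L, rho}."""
    comm = H @ rho - rho @ H
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return -1j * comm + diss

def is_stationary(rho, H, L, tol=1e-10):
    return np.linalg.norm(lindblad_generator(rho, H, L)) < tol

# toy example (placeholder operators, not the paper's construction):
# amplitude damping on a qubit, H = sigma_z, L = |0><1|; the pure state |0><0|
# is stationary, while the maximally mixed state is not.
H = np.diag([1.0, -1.0]).astype(complex)
L = np.array([[0, 1], [0, 0]], dtype=complex)
rho_ground = np.diag([1.0, 0.0]).astype(complex)
rho_mixed = 0.5 * np.eye(2, dtype=complex)
print(is_stationary(rho_ground, H, L))  # True
print(is_stationary(rho_mixed, H, L))   # False
```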
without loss of generality , we can choose a basis such that is diagonal with .this assumption is instrumental for the construction of but can actually be relaxed , as we remark after the theorem [ main ] .consider the decomposition with such that the corresponding block decomposition for is , ] with note that in the reasoning of the previous section it is not necessary to impose we will make use of this fact in extending the procedure to the -dimensional case .given the previous observations , we can render the given invariant for the dynamics if we can enact dissipation driven by a lindblad operator and a hamiltonian of the form and a such that its off - diagonal elements satisfy .in the -level case let .we can iterate our procedure by induction on the dimension of we have just found the upper - left blocks of and such that is stable and attractive for the dynamics driven by the reduced matrices .assume that we have some upper - left blocks of and such that is invariant for the reduced dynamics .this is exactly assumption 2 above .let if we want to stabilize for the dynamics restricted to , we can then proceed building using design choice 1 above .design choice 2 lets us compute the off - diagonal terms of the hamiltonian , while we can pick to assume any value , since it does not enter the procedure . by iterating until obtain a tridiagonal matrix with , and , i.e. , and becomes a quintdiagonal hermitian matrix , i.e. , [ main ] if are chosen as in ( [ eq : l][eq : h ] ) then is a stationary state of the lme +\d(l,\rho). ] shows that the hamiltonian term exactly cancels the non - zero elements of , i.e. , .thus is a steady state of the system . to show how to make the unique , and hence attractive , stationary state ,assume that is another stationary state in the support of .let then any state in is also stationary .since is unbounded while is compact , there must be a stationary state at the boundary of , i.e. , with rank strictly less than .then the support of , , must be invariant , and must exhibit the following block - decompositions with respect to the orthogonal decomposition with if then it is straightforward to show that where the orthogonal projection onto , is strictly decreasing , and thus is not invariant . since is also stationary , and is a full - rank state , this is not possible and we must therefore have and thus .this implies that and are block - diagonal .both and must contain at least one eigenvector of say and orthogonality of the subspaces and along with the block - decomposition of and imply that any pair of vectors , must satisfy : these are _ necessary condition _ the existence of another stationary state , and hence for making not attractive . from the results in the appendix ( theorem [ thm : noe ] ) , a tridiagonal in the form ( [ eq : l ] )has distinct eigenvalues corresponding to eigenvectors with real entries and the first all different from zero . without loss of generality, we can assume that the first element of each ( unnormalized ) eigenvector equals for all if the eigenvectors are mutually non - orthogonal , i.e. 
, for all then the third equality in is automatically violated , and hence must be the unique stationary state .this condition will almost always be satisfied in practice , and it is easy to check that it always true when ( see appendix ) .however , even if has orthogonal eigenvectors we can use our freedom of choice in the diagonal elements of to render the unique stationary state .assume for some pair of eigenvectors of .let be the hamiltonian corresponding to and .choose and let be the hamiltonian corresponding to , .recalling that the first component of is for all , shows that and therefore the second equality in is violated , and is the unique stationary state ._ remark : _ theorem [ main][thm : noe ] further shows that assumption 1 on the spectrum of can be relaxed , since the fact that the spectrum is non - degenerate plays no role in the proof . the construction is effective for _ any _ full rank state on the desired support .consider a system whose evolution is governed by the fme ( [ eq : fme ] ) with , and admitting an eigenspace of dimension 2 for some eigenvalue .assume we can switch off the measurement and the feedback hamiltonian , with this choice , admits a two - dimensional _ noiseless ( or decoherence - free ) subspace _ , which can be effectively used to encode a quantum bit protected from noise .we now face the problem of _ initializing _ the quantum state _ inside the dfs _ : we thus wish to construct , and such that a given state _ of the encoded qubit _ is gas on the full hilbert space . setting and choosing an appropriate basis for , the encoded state to be stabilized takes the form the dynamical generators can be partitioned accordingly , where is the identity matrix .we compensate the feedback - correction to the hamiltonian by choosing . for use the constructive algorithm described above to render attractive by choosing , and constructing a for so that no invariant state has support in ( see theorem 12 in ) and by imposing . for need to choose an observable such that .we are thus left with freedom on the choice of which can be now used to stabilize the desired in the controlled invariant subspace denote the elements of the upper - left blocks as set . if , setting shows that , i.e. , up to the multiplicative constant the two block matrices are such that is of the form , theorem [ main ] applies and the desired state is gas . notice that if and has non - degenerate spectrum , it is easy to find another state , arbitrarily close to such that , in the basis in which is diagonal , we have thus we can attain practical stabilization of any state of the encoded qubit .efficient quantum state preparation is crucial to most of the physical implementations of quantum information technologies .here we have shown how _quantum noise _ can be designed to stabilize arbitrary quantum states .the main interest in such a result is motivated by direct feedback design for applications in quantum optical and opto - mechanical systems and quantum information processing applications . 
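the uniqueness argument above , and the appendix results discussed below , rest on two spectral properties of tridiagonal matrices with non - vanishing off - diagonal entries : the eigenvalues are simple and every eigenvector has a non - zero first component . the following check is an illustration added here ( not part of the paper ) ; it verifies both properties numerically for randomly drawn symmetric tridiagonal ( jacobi ) matrices , the form to which the general case is reduced by the diagonal similarity transformation described in the appendix .

```python
import numpy as np

rng = np.random.default_rng(1)

def random_jacobi(n):
    """symmetric tridiagonal (jacobi) matrix with arbitrary real diagonal and
    strictly non-zero off-diagonal entries."""
    diag = rng.standard_normal(n)
    off = rng.uniform(0.3, 1.0, size=n - 1)   # bounded away from zero
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

for _ in range(200):
    T = random_jacobi(5)
    evals, evecs = np.linalg.eigh(T)
    # eigenvalues are simple ...
    assert np.min(np.diff(np.sort(evals))) > 1e-8
    # ... and the first component of every eigenvector is non-zero
    assert np.all(np.abs(evecs[0, :]) > 1e-10)
print("all draws satisfied both spectral properties")
```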
as an examplewe have demonstrated how to devise the ( open- and closed- loop ) control hamiltonians in order to asymptotically stabilize a state of a qubit encoded in a noiseless subspace of a larger system .further study is under way to address similar problems for multiple qubits , in the presence of structural constraints on the measurements , control and feedback operators , and to optimize the speed of convergence to the target state .we thank arieh iserles , lorenza viola and maria jose cantero for interesting and fruitful discussions .we collect here results about orthogonal polynomials and tridiagonal matrices that are instrumental to the proof of theorem [ main ] .details and ( missing ) proofs can be found in .an orthogonal polynomial sequence over \subset\rr$ ] is an infinite sequence of real polynomials such that for any under some inner product with a weighing function .[ favard ] [ thm : favard ] a monic polynomial sequence is an orthogonal polynomial sequence if it satisfies a three - term recurrence relation with , sequences of real numbers and .it suffices to note that the monic polynomial sequence with and for satisfies the recurrence relation and therefore is an orthogonal polynomial sequence by favard s theorem provided .thus has distinct roots for all , and the matrix has distinct eigenvalues .there always exists a diagonal matrix such that is a symmetric tridiagonal matrix : the diagonal elements must satisfy for all , which shows that is uniquely determined up to global factor .we fix .the off - diagonal elements of are .the matrix is real - symmetrix and hence with by the previous theorem the eigenvalues are real and distinct provided for all . is a real - orthogonal matrix whose columns are the normalized eigenvectors of , with as defined in ( [ elements ] ) and . since has the same eigenvalues as with eigenvectors while it is not strictly needed for the result in this work , it is worth noticing that for almost all choices of diagonal entries in the eigenvectors will be mutually non - orthogonal for fixed off - diagonal elements and , the eigenvalues and corresponding eigenvectors of can be expressed explicitly in terms of the diagonal entries . since is an order polynomial in with distinct roots , by the implicit function theorem , all roots for can be expressed locally as continuously - differentiable functions in some open neighborhood of , and similarly for the eigenvectors . to determine whether has orthogonal eigenvectors consider the functions which correspond to the inner products of the unnormalized eigenvectors of with . by orthogonality of the eigenvectors of we have for all . but and are polynomials in and and differ only by the coefficiently .thus , we will have for most unless all are equal , this argument can be made stronger and more rigorous if we assume for some , i.e. , that not all partial derivatives at the bad point vanish , in which case we can show that vanishes only on a measure - zero subset of
based on recent work on the asymptotic behavior of controlled quantum markovian dynamics , we show that any generic quantum state can be stabilized by constructively devising a simple lindblad - gks generator that can achieve global asymptotic stability at the desired state . the applicability of this result is demonstrated by designing a direct feedback strategy that achieves global stabilization of a qubit state encoded in a noise - protected subspace .
quantum key distribution ( qkd ) allows two parties ( alice and bob ) to generate a secret key despite the computational and technological power of an eavesdropper ( eve ) , who interferes with the signals .together with the vernam cipher , qkd can be used to provide information - theoretic secure communications .practical qkd systems can differ in many important aspects from their original theoretical proposal , since these proposals typically demand technologies that are beyond our present experimental capability .especially , the signals emitted by the source , instead of being single photons , are usually weak coherent pulses ( wcp ) with typical average photon numbers of 0.1 or higher .the quantum channel introduces errors and considerable attenuation ( about db / km ) that affect the signals even when eve is not present . besides, for telecom wavelengths , standard ingaas single - photon detectors can have a detection efficiency below and are noisy due to dark counts .all these modifications jeopardize the security of the protocols , and lead to limitations of rate and distance that can be covered by these techniques .a main security threat of practical qkd schemes based on wcp arises from the fact that some signals contain more than one photon prepared in the same polarization state .now eve is no longer limited by the no - cloning theorem since in these events the signal itself provides her with perfect copies of the signal photon .she can perform , for instance , the so - called _ photon number splitting _ ( pns ) attack on the multi - photon pulses .this attack gives eve full information about the part of the key generated with the multi - photon signals , without causing any disturbance in the signal polarization .as a result , it turns out that the standard bb84 protocol with wcp can deliver a key generation rate of order , where denotes the transmission efficiency of the quantum channel . to achieve higher secure key rates over longer distances , different qkd schemes , that are robust against the pns attack ,have been proposed in recent years .one of these schemes is the so - called decoy state qkd where alice varies , independently and at random , the mean photon number of each signal state that she sends to bob by employing different intensity settings .eve does not know a priori the mean photon number of each signal state sent by alice .this means that her eavesdropping strategy can only depend on the photon number of these signals , but not on the particular intensity setting used to generate them . from the measurement results corresponding to different intensity settings , the legitimate users can estimate the classical joint probability distribution describing their outcomes for each photon number state .this provides them with a better estimation of the behaviour of the quantum channel , and it translates into an enhancement of the achievable secret key rate and distance .this technique has been successfully implemented in several recent experiments , and it can give a key generation rate of order . while the security analysis of decoy state qkd included in refs . is relevant from a practical point of view , it also leaves open the possibility that the development of better proof techniques , or better classical post - processing protocols , might further improve the performance of these schemes in realistic scenarios . 
for instance , it is known that two - way classical post - processing protocols can tolerate a higher error rate than one - way communication techniques , or that by modifying the public announcements of the standard bb84 protocol it is possible to generate a secret key even from multi - photon signals .also , the use of local randomization and degenerate codes can as well improve the error rate thresholds of the protocols .in this paper we consider the uncalibrated device scenario and we assume the typical initial post - processing step where double click events are not discarded by bob , but they are randomly assigned to single click events . in this scenario , we derive simple upper bounds on the secret key rate and distance that can be covered by decoy state qkd based exclusively on the classical correlations established by the legitimate users during the quantum communication phase of the protocol .our analysis relies on two preconditions for secure two - way and one - way qkd .in particular , alice and bob need to prove that there exists no separable state ( in the case of two - way qkd ) , or that there exists no quantum state having a symmetric extension ( one - way qkd ) , that is compatible with the available measurements results .both criteria have been already applied to evaluate single - photon implementations of qkd . herewe employ them for the first time to investigate practical realizations of qkd based on the distribution of wcp .we show that both preconditions for secure two - way and one - way qkd can be formulated as a convex optimization problem known as a semidefinite program ( sdp ) .such instances of convex optimization problems appear frequently in quantum information theory and can be solved with arbitrary accuracy in polynomial time , for example , by the interior - point methods . as a result , we obtain ultimate upper bounds on the performance of decoy state qkd when this typical initial post - processing of the double clicks is performed .these upper bounds hold for any possible classical communication technique that the legitimate users can employ in this scenario afterwards like , for example , the sarg04 protocol , adding noise protocols , degenerate codes protocols and two - way classical post - processing protocols . the analysis presented in this manuscript can as well be straightforwardly adapted to evaluate other implementations of the bb84 protocol with practical signals as , for instance , those experimental demonstrations based on wcp without decoy states or on entangled signals coming from a parametric down conversion source .the paper is organized as follows . in sec .[ sec_a ] we describe in detail a wcp implementation of the bb84 protocol based on decoy states .next , in sec . [ sec_b ]we apply two criteria for secure two - way and one - way qkd to this scenario . herewe derive upper bounds on the secret key rate and distance that can be achieved with decoy state qkd as a function of the observed quantum bit error rate ( qber ) and the losses in the quantum channel . moreover , we show how to cast both upper bounds as sdps .these results are then illustrated in sec .[ sec_c ] for the case of a typical behaviour of the quantum channel , _i.e. _ , in the absence of eavesdropping. finally , sec .[ conc ] concludes the paper with a summary .in decoy state qkd with wcp alice prepares phase - randomized coherent states with poissonian photon number distribution . 
the mean photon number ( intensity ) of this distributionis chosen at random for each signal from a set of possible values . in the case of the bb84 protocol , and assuming alice chooses a decoy intensity setting , such states can be described as where the signals denote fock states with photons in one of the four possible polarization states of the bb84 scheme , which are labeled with the index . on the receiving side , we consider that bob employs an active basis choice measurement setup .this device splits the incoming light by means of a polarizing beam - splitter and then sends it to threshold detectors that can not resolve the number of photons by which they are triggered .the polarizing beam - splitter can be oriented along any of the two possible polarization basis used in the bb84 protocol .this detection setup is characterized by one _ positive operator value measure _ ( povm ) that we shall denote as . in an entanglement - based view, the signal preparation process described above can be modeled as follows : alice produces first bipartite states of the form where system is the composition of systems , , and , and the orthogonal states and record , respectively , the polarization state and decoy intensity setting selected by alice .the parameters and represent the a priori probabilities of these signals .for instance , in the standard bb84 scheme the four possible polarization states are chosen with equal a priori probabilities and for all .the signal that appears in eq .( [ mcl ] ) denotes a purification of the state and can be written as where system acts as a shield , in the sense of ref . and records the photon number information of the signals prepared by the source .this system is typically inaccessible to all the parties .one could also select as any other purification of the state .however , as we will show in sec .[ sec_b ] , the one given by eq .( [ pur_fnl ] ) is particularly suited for the calculations that we present in that section . afterwards , alice measures systems and in the orthogonal basis and , corresponding to the measurement operators . this action generates the signal states with a priori probabilities .the reduced density matrix , with , is fixed by the actual preparation scheme and can not be modified by eve . in order to include this information in the measurement process, one can add to the observables , measured by alice and bob , other observables such that form a complete tomographic set of alice s hilbert space . in order to simplify our notation , from now onwe shall consider that the observed data and the povm contain also the observables .that is , every time we refer to we assume that these operators include as well the observables .our starting point is the observed joint probability distribution obtained by alice and bob after their measurements .this probability distribution defines an equivalence class of quantum states that are compatible with it , let us begin by considering two - way classical post - processing of the data .it was shown in ref . 
that a necessary precondition to distill a secret key in this scenario is that the equivalence class does not contain any separable state .that is , we need to find quantum - mechanical correlations in , otherwise the secret key rate , that we shall denote as , vanishes .as it is , this precondition answers only partially the important question of how much secret key alice and bob can obtain from their correlated data .it just tells if the secret key rate is zero or it may be positive .however , this criterion can be used as a benchmark to evaluate any upper bound on .if contains a separable state then the upper bound must vanish .one upper bound which satisfies this condition is that given by the regularized relative entropy of entanglement .unfortunately , to calculate this quantity for a given quantum state is , in general , a quite difficult task , and analytical expressions are only available for some particular cases .besides , this upper bound depends exclusively on the quantum states shared by alice and bob and , therefore , it does not include the effect of imperfect devices like , for instance , the low detection efficiency or the noise in the form of dark counts introduced by current detectors .another possible approach is that based on the best separable approximation ( bsa ) of a quantum state .this is the decomposition of into a separable state and an entangled state , while maximizing the weight of the separable part .that is , any quantum state can always be written as \rho_{ent},\ ] ] where the real parameter is maximal .given an equivalence class of quantum states , one can define the maximum weight of separability within the class , , as note that the correlations can originate from a separable state if and only if .let denote the equivalence class of quantum states given by where represents again the entangled part in the bsa of the state .then , it was proven in ref . that the secret key rate always satisfies where represents the shannon mutual information calculated on the joint probability distribution .as it is , this upper bound can be applied to any qkd scheme , although the calculation of the parameters and might be a challenge .next , we consider the particular case of decoy state qkd .the signal states that alice sends to bob are mixtures of fock states with different poissonian photon number distributions of mean .this means , in particular , that eve can always perform a _ quantum non - demolition _ ( qnd ) measurement of the total number of photons contained in each of these signals without introducing any errors .the justification for this is that the total photon number information via the qnd measurement comes free " , since the execution of this measurement does not change the signals .that is , the realization of this measurement can not make eve s eavesdropping capabilities weaker . if eve performs such a qnd measurement , then the signals are transformed as where the probabilities are given by the signals have the form and the normalized states only depend on the signals and the photon number . from the tensor product structure of we learn that the signals can only contain quantum correlations between systems and . therefore , without loss of generality , we can always restrict ourselves to only search for quantum correlations between these two systems. 
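since the emitted signals are phase - randomized weak coherent pulses , the probabilities entering the fock - state decomposition above are the poissonian weights exp(-mu) mu^n / n! . the snippet below tabulates these probabilities and the multi - photon fraction for two example intensity settings ( the numerical values of the intensities are assumptions of this illustration , not values taken from the paper ) .

```python
import numpy as np
from math import exp, factorial

def poisson_photon_number(mu, n_max=5):
    """p(n | mu) = exp(-mu) mu^n / n! for a phase-randomized coherent state."""
    return np.array([exp(-mu) * mu**n / factorial(n) for n in range(n_max + 1)])

for mu in (0.48, 0.1):              # example signal / decoy intensities (assumed)
    p = poisson_photon_number(mu)
    multi = 1.0 - p[0] - p[1]       # probability of emitting more than one photon
    print(f"mu = {mu:4.2f}  p0 = {p[0]:.3f}  p1 = {p[1]:.3f}  "
          f"multi-photon fraction = {multi:.4f}")
```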
additionally , in decoy state qkd alice and bob have always access to the conditional joint probability distribution describing their outcomes given that alice emitted an -photon state .this means that the search for quantum correlations in can be done independently for each -photon signal .that is , the legitimate users can define an equivalence class of signal states for each possible fock state sent by alice .a further simplification arises when one considers the typical initial post - processing step where double click events are not discarded by bob , but they are randomly assigned to single click events . in the case of the bb84 protocol , this action allows alice and bob to always explain their observed data as coming from a single - photon signal where bob performs a single - photon measurement .this measurement is characterized by a set of povm operators which are projectors onto the eigenvectors of the two pauli operators and , together with a projection onto the vacuum state which models the losses in the quantum channel , with and where . in particular , let denote the conditional joint probability distribution obtained by alice and bob after their measurements , with , given that alice emitted an -photon state .that is , includes the random assignment of double clicks to single click events . as before , we consider that the observables contain as well other observables that form a tomographic complete set of alice s hilbert space .we define the equivalence class of quantum states that are compatible with as then , the secret key rate can be upper bounded as where denotes the maximum weight of separability within the equivalence class , and represents the shannon mutual information calculated on , with being the entangled part in the bsa of a state and whose weight of separability is maximum . the main difficulty when evaluating the upper bound given by eq .( [ final_two ] ) still relies on obtaining the parameters and .next , we show how to solve this problem by means of a semidefinite program ( sdp ) . for that, we need to prove first the following observation ._ observation _ : within the equivalence classes of quantum signals given by eq .( [ eq_class_n_squash ] ) alice and bob can only detect the presence of negative partial transposed ( npt ) entangled states ._ proof_. the signals can always be decomposed as for some probability ] . for the given dimensionalities ,it was proven in ref . that whenever is non - negative it represents a separable state , _i.e. _ , .this means that alice and bob can only detect entangled states that satisfy .since , the previous condition is only possible when . let us now write the search of and as a sdp .this is a convex optimisation problem of the following form : where the vector represents the objective variable , the vector is fixed by the particular optimisation problem , and the matrices and are hermitian matrices . 
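before writing out the specific program , it is worth noting that standard - form sdps of this kind can be coded almost verbatim in a modeling tool such as ` cvxpy ` . the toy sketch below is an assumption of this edit and not the paper 's actual program : for a two - qubit example ( where positivity of the partial transpose is equivalent to separability ) it maximizes the weight of a ppt part of a state compatible with a few linear data constraints , mirroring the structure of the optimisation formulated next . the hermitian variables , the ` >> 0 ` semidefinite constraints and the solver call assume a recent ` cvxpy ` version .

```python
import numpy as np
import cvxpy as cp

dA, dB = 2, 2
d = dA * dB

def partial_transpose_B(X):
    """partial transpose over subsystem B: transpose each dB x dB block."""
    blocks = [[X[i*dB:(i+1)*dB, j*dB:(j+1)*dB].T for j in range(dA)]
              for i in range(dA)]
    return cp.bmat(blocks)

# toy "observed data": expectation values of a few product observables on a
# noisy singlet state (placeholder numbers, not the decoy-state statistics)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_true = 0.85 * np.outer(psi, psi.conj()) + 0.15 * np.eye(d) / d
observables = [np.kron(sx, sx), np.kron(sz, sz), np.kron(sx, sz)]
data = [np.real(np.trace(O @ rho_true)) for O in observables]

sigma = cp.Variable((d, d), hermitian=True)      # state in the equivalence class
sigma_sep = cp.Variable((d, d), hermitian=True)  # unnormalized ppt (separable) part
lam = cp.real(cp.trace(sigma_sep))               # weight of the separable part

constraints = [sigma >> 0,
               cp.real(cp.trace(sigma)) == 1,
               sigma_sep >> 0,
               partial_transpose_B(sigma_sep) >> 0,
               sigma - sigma_sep >> 0]
constraints += [cp.real(cp.trace(O @ sigma)) == v for O, v in zip(observables, data)]

prob = cp.Problem(cp.Maximize(lam), constraints)
prob.solve(solver=cp.SCS)                        # solver choice is just an example
print("maximum weight of the ppt part:", lam.value)
```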
the goal is to minimize the linear function subjected to the linear matrix inequality ( lmi ) constraint .the sdp that we need to solve has the form : \\\nonumber \text{subject to } & & \sigma_{a_1b}^n({\bf x})\geq{}0 , \\\nonumber & & \text{tr}[\sigma_{a_1b}^n({\bf x})]=1 , \\\nonumber & & \text{tr}[a_{k } \otimes t_j\ \sigma_{a_1b}^n({\bf x})]=p_{kj}^n , \ \forall k , j , \\\nonumber & & \sigma_{sep}^n({\bf x})\geq{}0 , \\\nonumber & & \sigma_{sep}^{n\ \gamma}({\bf x})\geq{}0 , \\\nonumber & & \sigma_{a_1b}^n({\bf x})-\sigma_{sep}^n({\bf x})\geq{}0 , \end{aligned}\ ] ] where the objective variable is used to parametrise the density operators and . for that, we employ the method introduced in refs .the state which appears in eq .( [ primalsdp_two_way ] ) is not normalized , _i.e. _ , it also includes the parameter .the first three constraints in eq .( [ primalsdp_two_way ] ) guarantee that is a valid normalized density operator that belongs to the equivalence class , the following two constraints impose to be a separable state , while the last one implies that the entangled part of is a valid but not normalized density operator .its normalization factor is given by .if denotes a solution to the sdp given by eq .( [ primalsdp_two_way ] ) then ,\ ] ] and the state is given by the classical post - processing of the observed data can be restricted to one - way communication .depending on the allowed direction of communication , two different cases can be considered : _ direct reconciliation _( dr ) refers to communication from alice to bob , _reverse reconciliation _ ( rr ) permits only communication from bob to alice . in this section, we will only consider the case of dr .expressions for the opposite scenario , _i.e. _ , rr , can be obtained in a similar way . in ref . it was shown that a necessary precondition for secure qkd by means of dr ( rr ) is that the equivalence class given by eq .( [ eq_class ] ) does not contain any state having a symmetric extension to two copies of system ( system ) .a state is said to have a symmetric extension to two copies of system if and only if there exists a tripartite state , with , and where , which fulfills the following two properties : where the swap operator satisfies . a graphical illustration of a state which has a symmetric extension to two copies of system is given in fig . [ fig_sym ] . which has a symmetric extension to two copies of system .[ fig_sym ] ] this definition can be easily extended to cover also the case of symmetric extensions of to two copies of system , and also of extensions of to more than two copies of system or of system .the best extendible approximation ( bea ) of a given state is the decomposition of into a state with a symmetric extension , that we denote as , and a state without symmetric extension , while maximizing the weight of the extendible part , _i.e. _ , \rho_{ne},\ ] ] where the real parameter is maximal . note that this parameter is well defined since the set of extendible states is compact .equation ( [ blia ] ) follows the same spirit like the bsa given by eq .( [ eq_bsa ] ) .now , one can define analogous parameters and equivalence classes as in sec .[ sec_twoway ] .in particular , the maximum weight of extendibility within an equivalence class is defined as .that is , the correlations can originate from an extendible state if and only if .finally , one defines as the equivalence class of quantum states given by , where denotes the nonextendible part in the bea of the state .then , it was proven in ref . 
that the one - way secret key rate satisfies where represents the shannon mutual information now calculated on the joint probability distribution with . the analysis contained in sec .[ sec_twoway ] to derive eq .( [ final_two ] ) from eq .( [ second_bound ] ) also applies to this scenario and we omit it here for simplicity .we obtain where denotes the maximum weight of extendibility within the equivalence class given by eq .( [ eq_class_n_squash ] ) , and represents the shannon mutual information calculated on , with being the nonextendible part in the bea of a state and whose weight of extendibility is maximum . the parameter and the nonextendible state can directly be obtained by solving the following sdp : \\\nonumber \text{subject to } & & \sigma_{a_1b}^n({\bf x})\geq{}0 , \\\nonumber & & \text{tr}[\sigma_{a_1b}^n({\bf x})]=1 , \\\nonumber & & \text{tr}[a_{k } \otimes t_j\ \sigma_{a_1b}^n({\bf x})]=p_{kj}^n , \ \forall k , j , \\\nonumber & & \rho_{a_1bb'}^n({\bf x})\geq{}0 , \\\nonumber & & p\rho_{a_1bb'}^n({\bf x})p=\rho_{a_1bb'}^n({\bf x } ) , \\\nonumber & & { \rm tr}_{b'}[\rho_{a_1bb'}^n({\bf x})]=\sigma_{ext}^n({\bf x } ) , \\ \nonumber & & \sigma_{a_1b}^n({\bf x})-\sigma_{ext}^n({\bf x})\geq{}0 , \end{aligned}\ ] ] where the state is not normalized , _i.e. _ , it also includes the parameter .the first three constraints coincide with those of eq .( [ primalsdp_two_way ] ) .they just guarantee that .the following three constraints impose to have a symmetric extension to two copies of system , while the last one implies that the nonextendible part of is a valid but not normalized density operator .its normalization factor is .this sdp does not include the constraint because non - negativity of the extension , together with the condition , already implies non - negativity of . if represents a solution to the sdp given by eq .( [ primalsdp_one_way ] ) then we have that ,\ ] ] and the state is given by this section we evaluate the upper bounds on the secret key rate both for two - way and one - way decoy state qkd given by eq .( [ final_two ] ) and eq .( [ final_one ] ) .moreover , we compare our results with known lower bounds for the same scenarios .the numerical simulations are performed with the freely available sdp solver sdpt3 - 3.02 , together with the parser yalmip . to generate the observed data, we consider the channel model used in ref .this model reproduces a normal behaviour of the quantum channel , _i.e. 
_ , in the absence of eavesdropping .note , however , that our analysis can as well be straightforwardly applied to other quantum channels , as it only depends on the probability distribution that characterizes the results of alice s and bob s measurements .this probability distribution is given in tab .[ tab1 ] , where the conditional yields have the form ' '' '' & & & & & + ' '' '' & & & & & + & & & & & + & & & & & + & & & & & + ,\ ] ] with being the background detection event rate of the system , and where represents the overall transmittance , including the transmission efficiency of the quantum channel and the detection efficiency .the parameter denotes the quantum bit error rate of an -photon signal .it is given by +\frac{1}{2}y_0}{y_n},\ ] ] where represents the probability that a photon hits the wrong detector due to the misalignment in the quantum channel and in the detection apparatus .the parameter can be related with a transmission distance measured in km for the given qkd scheme as , where represents the loss coefficient of the optical fiber measured in db / km .the total db loss of the channel is given by .as discussed in sec .[ sec_b ] , the reduced density matrix of alice , that we shall denote as , is fixed and can not be modified by eve .this state has the form , where is given by eq .( [ eq_nsq ] ) . in the standard bb84 protocolthe probabilities satisfy .we obtain , therefore , that can be expressed as to include this information in the measurement process , we consider that alice and bob have also access to the results of a set of observables that form a tomographic complete set of alice s hilbert space .in particular , we use a hermitian operator basis .these hermitian operators satisfy and have a hilbert - schmidt scalar product . the probabilities , with given by eq .( [ reduced ] ) .the resulting upper bounds on the two - way and one - way secret key rate are illustrated , respectively , in fig .[ fig_two_way ] and fig .[ fig_one_way ] .they state that no secret key can be distilled from the correlations established by the legitimate users above the curves , _ i.e. _ , the secret key rate in that region is zero .these figures include as well _ lower _ bounds for the secret key rate obtained in refs .note , however , the security proofs included in refs . implicitly assume that alice and bob can make public announcements using two - way communication , and only the error correction and privacy amplification steps of the protocol are assumed to be realized by means of one - way communication .we consider the uncalibrated device scenario and we study two different situations in each case : ( 1 ) no errors in the quantum channel , _i.e. _ , , , and ( 2 ) and .this last scenario corresponds to the experimental parameters reported by gobby - yuan - shields ( gys ) in ref .figure [ fig_two_way ] and fig . [ fig_one_way ] do not include the sifting factor of for the bb84 protocol , since this effect can be avoided by an asymmetric basis choice for alice and bob .moreover , we consider that in the asymptotic limit of a large number of transmitted signals most of them represent signal states of mean photon number . 
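for reference , the sketch below implements the channel model just described in the form it usually takes in the decoy - state literature ( the explicit expressions and the parameter values are assumptions of this illustration and should be checked against the cited reference ) : the n - photon yield combines background and transmission clicks , and the corresponding error rate mixes the misalignment error with the random ( 50% ) error of background clicks .

```python
import numpy as np

# example parameters in the range of the GYS experiment (assumed values for
# this sketch; the text's exact numbers are not reproduced here)
alpha   = 0.21      # fiber loss coefficient [dB/km]
eta_bob = 0.045     # detection efficiency on Bob's side
Y0      = 1.7e-6    # background detection rate
e_det   = 0.033     # misalignment error probability
e0      = 0.5       # error rate of a background event (random click)

def yields_and_qber(distance_km, n_max=4):
    """conditional yields Y_n and error rates e_n of the usual decoy-state
    channel model for n-photon signals at a given fiber length."""
    eta = eta_bob * 10 ** (-alpha * distance_km / 10)   # overall transmittance
    n = np.arange(n_max + 1)
    eta_n = 1 - (1 - eta) ** n          # prob. that at least one photon arrives
    Y = Y0 + eta_n - Y0 * eta_n         # yield: background or transmission click
    e = (e0 * Y0 + e_det * eta_n) / Y   # QBER of an n-photon signal
    return Y, e

for dist in (0, 50, 100, 150):
    Y, e = yields_and_qber(dist)
    print(f"{dist:3d} km   Y1 = {Y[1]:.2e}   e1 = {e[1]:.3f}   Y2 = {Y[2]:.2e}")
```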
that is , the proportion of decoy states used to test the behaviour of the quantum channel within the total number of signals sent by alice is neglected .this means that in eq .( [ rn ] ) satisfies and given by eq .( [ final_two ] ) in logarithmic scale in comparison to known lower bounds for the same scenario given in ref .the figure includes two cases .( 1 ) no errors in the quantum channel , _i.e. _ , and . in this case, the upper bound ( ub ) is represented by a thin solid line , while the lower bound ( lb ) is represented by a thin dashed line .( 2 ) and , which correspond to the gys experiment reported in ref . . in this case, the upper bound ( ub ) is represented by a thick solid line , while the lower bound ( lb ) after 3 b steps is represented by a thick dashed line .we assume asymmetric basis choice to suppress the sifting effect .[ fig_two_way ] ] given by eq .( [ final_one ] ) in logarithmic scale in comparison to known lower bounds for the same scenario given in refs .the figure includes two cases .( 1 ) no errors in the quantum channel , _i.e. _ , and . in this case, the upper bound ( ub ) rr is represented by a thin solid line , while the lower bound ( lb ) is represented by a thin dashed line .( 2 ) and , which correspond to the gys experiment reported in ref . . in this case , the upper bound ( ub ) rr is represented by a thick solid line , while the lower bound ( lb ) is represented by a thick dashed line .the two lines on the left hand side of the graphic represent upper bounds for the case of dr ( case ( 1 ) short dashed line , case ( 2 ) dash - dotted line ) .the inset figure shows an enlarged view of the upper bounds for a total db loss ranging from 0 to 5 db .we assume asymmetric basis choice to suppress the sifting effect .[ fig_one_way ] ] in the case of no errors in the quantum channel ( case ( 1 ) above ) the lower bounds for two - way and one - way qkd derived in refs . coincide .furthermore , for low values of the total db loss , the upper bounds shown in the figures present a small bump which is specially visible in this last case .the origin of this bump is the potential contribution of the multi - photon pulses to the key rate .let us now consider the cutoff points for decoy state qkd in the case of errors in the quantum channel ( case ( 2 ) above ) .these are the values of the total db loss for which the secret key rate drops down to zero in fig .[ fig_two_way ] and fig .[ fig_one_way ] .we find that they are given , respectively , by : db ( lower bound two - way after 3 b steps ) , db ( upper bound two - way ) , db ( lower bound one - way ) , and db ( upper bound one - way with rr ) .these quantities can be related with the following transmission distances : km , km , km and km .here we have used db / km and the efficiency of bob s detectors is .it is interesting to compare the two - way cutoff point of km with a similar distance upper bound of km provided in ref . for the same values of the experimental parameters .note , however , that the upper bound derived in ref . relies on the assumption that a secure key can only be extracted from single photon states .that is , it implicitly assumes the standard bb84 protocol .if this assumption is removed and one also includes in the analysis the potential contribution of the multi - photon signals to the key rate ( due , for instance , to the sarg04 protocol ) , then the cutoff point provided in ref . 
transforms from km to km , which is above the km presented here .figure [ fig_one_way ] shows a significant difference between the behaviour of the upper bounds for one - way classical post - processing with rr and dr .most importantly , the upper bounds on for the case of dr can be below the lower bounds on the secret key rate derived in refs . .note , however , that the scenario considered here is slightly different from the one assumed in the security proofs of refs .in particular , the analysis contained in sec .[ sec_one_way ] for the case of dr does not allow _ any _ communication from bob to alice once the conditional probabilities are determined .this means , for instance , that bob can not even declare in which particular events his detection apparatus produced a click " .however , as mentioned previously , refs . implicitly assume that only the error correction and privacy amplification steps of the protocol are performed with one - way communication .if the analysis performed in sec .[ sec_one_way ] is modified such that bob is now allowed to inform alice which signal states he actually detected , then it turns out that the resulting upper bounds in this modified scenario coincide with those derived for the case of rr . to include this initial communication step from bob to alice in the analysis , one can use the following procedure . let the projector be defined as then , one can add to eq .( [ primalsdp_one_way ] ) one extra constraint and substitute the condition by equation ( [ hoy_playa ] ) refers to the normalized state that is postselected by alice and bob once bob declares which signals he detected .equation ( [ hoy_playa2 ] ) indicates that the bea has to be applied to this postselected state . finally ,each term in the summation given by eq .( [ final_one ] ) has to be multiplied by the yield , _i.e. _ , the probability that bob obtains a click " conditioned on the fact that alice sent an -photon state .our numerical results indicate that the upper bounds given by eq .( [ final_two ] ) and eq .( [ final_one ] ) are close to the known lower bounds available in the scientific literature for the same scenarios .however , one might expect that these upper bounds can be further tightened in different ways .for instance , by substituting in eq .( [ final_two ] ) and eq .( [ final_one ] ) the shannon mutual information with any other tighter upper bound on the secret key rate that can be extracted from a classical tripartite probability distribution measured on a purification of the state ( in the case of two - way qkd ) or of the state ( one - way qkd ) . moreover , as they are , eq .( [ final_two ] ) and eq .( [ final_one ] ) implicitly assume that the legitimate users know precisely the number of photons contained in each signal emitted .however , in decoy state qkd alice and bob have only access to the conditional joint probability distribution describing their outcomes given that alice emitted an -photon state , but they do not have single shot photon number resolution of each signal state sent . 
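to illustrate the structure of the semidefinite programs discussed in this and the preceding sections , here is a minimal numerical sketch , written with the cvxpy modelling package rather than the sdpt3 / yalmip toolchain used for the figures , of the core subproblem : finding the maximal weight with which a given bipartite state can be split into an extendible part and a remainder , where the extendible part must admit a symmetric extension to two copies of one subsystem . for brevity the sketch takes the state as given and omits the tomographic constraints that pin it down in the actual sdps , so it illustrates the structure rather than reproducing eq . ( [ primalsdp_one_way ] ) ; it assumes a cvxpy version with complex - variable support and an sdp - capable solver such as scs .

```python
import numpy as np
import cvxpy as cp

def swap_last_two(dA, dB):
    """Permutation matrix on C^dA (x) C^dB (x) C^dB that swaps the last two factors."""
    D = dA * dB * dB
    P = np.zeros((D, D))
    for a in range(dA):
        for b in range(dB):
            for c in range(dB):
                P[a * dB * dB + c * dB + b, a * dB * dB + b * dB + c] = 1.0
    return P

def partial_trace_last(M, d_last):
    """Trace out the last tensor factor (dimension d_last) of a square cvxpy expression."""
    return sum(M[b::d_last, b::d_last] for b in range(d_last))

def max_extendible_weight(sigma, dA, dB):
    """Maximal lambda such that sigma = sigma_ext + (sigma - sigma_ext) with
    tr[sigma_ext] = lambda and sigma_ext admitting a symmetric extension
    to two copies of the second subsystem."""
    D = dA * dB
    rho = cp.Variable((D * dB, D * dB), hermitian=True)   # extension on A (x) B (x) B'
    P = swap_last_two(dA, dB)
    sigma_ext = partial_trace_last(rho, dB)               # unnormalised extendible part
    constraints = [
        rho >> 0,                      # the extension is a valid operator
        P @ rho @ P == rho,            # invariance under swapping B and B'
        sigma - sigma_ext >> 0,        # the remainder is positive semidefinite
    ]
    prob = cp.Problem(cp.Maximize(cp.real(cp.trace(sigma_ext))), constraints)
    prob.solve(solver=cp.SCS)
    lam = prob.value
    ext = sigma_ext.value / lam if lam is not None and lam > 1e-9 else None
    return lam, ext

if __name__ == "__main__":
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
    bell = np.outer(phi, phi)
    for p in (0.3, 0.7, 1.0):
        sigma = (p * bell + (1.0 - p) * np.eye(4) / 4.0).astype(complex)
        lam, _ = max_extendible_weight(sigma, 2, 2)
        print(f"p = {p:.1f}  ->  extendible weight ~ {lam:.3f}")
```

in the one - way bound the mutual information term is then evaluated on the nonextendible remainder , normalised by one minus the weight returned by such a program ; whether the extension is taken on alice s or on bob s side corresponds to the distinction between direct and reverse reconciliation discussed above .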
as a side remark , we would like to emphasize that to calculate the upper bounds given by eq .( [ final_two ] ) and eq .( [ final_one ] ) it is typically sufficient to consider only a finite number of terms in the summations .this result arises from the limit imposed by the unambiguous state discrimination ( usd ) attack .this attack does not introduce any errors in alice s and bob s signal states .moreover , it corresponds to an entanglement - breaking channel and , therefore , it can not lead to a secure key both for the case of two - way and one - way qkd .the maximum probability of unambiguously discriminating an -photon state sent by alice is given by for typical observations this quantity can be related with a transmission efficiency of the quantum channel , _i.e. _ , an that provides an expected click rate at bob s side equal to .this last condition can be written as whenever the overall transmission probability of each photon satisfies , then any pulse containing or more photons is insecure against the usd attack .after a short calculation , we obtain that the total number of -photon signals that need to be considered in the summations of eq .( [ final_two ] ) and eq .( [ final_one ] ) can be upper bounded as }\big\rfloor & \textrm{ even}\\ \big\lfloor\frac{1}{2\log_2[\sqrt{2}(1-\eta)]}\big\rfloor & \textrm{ odd}. \end{array } \right.\ ] ]in this paper we have derived upper bounds on the secret key rate and distance that can be covered by two - way and one - way decoy state quantum key distribution ( qkd ) .our analysis considers the uncalibrated device scenario and we have assumed the typical initial post - processing step where double click events are randomly assigned to single click events .we have used two preconditions for secure two - way and one - way qkd . in particular , the legitimate users need to prove that there exists no separable state ( in the case of two - way qkd ) , or that there exists no quantum state having a symmetric extension ( one - way qkd ) , that is compatible with the available measurements results .both criteria have been previously employed in the scientific literature to evaluate single - photon implementations of qkd .here we have applied them to investigate a realistic source of weak coherent pulses , and we have shown that they can be formulated as a convex optimization problem known as a semidefinite program ( sdp ) .such instances of convex optimization problems can be solved efficiently , for example by means of the interior - point methods . as a result, we have obtained fundamental limitations on the performance of decoy state qkd when this initial post - processing of the double clicks is performed .these upper bounds can not be overcome by any classical communication technique ( including , for example , sarg04 protocol , adding noise protocols , degenerate codes and two - way classical post - processing protocols ) that the legitimate users may employ to process their correlated data afterwards .moreover , our results seem to be already close to well known lower bounds for the same scenarios , thus showing that there are clear limits to the further improvement of classical post - processing techniques in decoy state qkd . 
the analysis presented in this paper could as well be straightforwardly adapted to evaluate other implementations of the bb84 protocol with practical signals like , for example , those experimental demonstrations based on wcp without decoy states or on entangled signals coming from a parametric down conversion source .m.c . especially thanks hlo and n. ltkenhaus for hospitality and support during his stay at the university of toronto and at the institute for quantum computing ( university of waterloo ) where this manuscript was finished .this work was supported by the european projects secoqc and qap , nserc , quantum works , csec , cfi , cipi , cifar , the crc program , mitacs , oit , oce , by xunta de galicia ( spain , grant no .incite08pxib322257pr ) , and by university of vigo ( program axudas mobilidade dos investigadores " ) .99 n. gisin , g. ribordy , w. tittel and h. zbinden , rev .phys . * 74 * , 145 ( 2002 ) ; m. duek , n. ltkenhaus and m. hendrych , progress in optics * 49 * , edt .e. wolf ( elsevier ) , 381 ( 2006 ) .v. scarani , h. bechmann - pasquinucci , n. j. cerf , m. duek , n. ltkenhaus and m. peev , preprint quant - ph/0802.4155 , accepted for publication in rev .g. s. vernam , j. am .* xlv * , 109 ( 1926 ) . b. huttner , n. imoto , n. gisin , and t. mor , phys .a * 51 * , 1863 ( 1995 ) ; g. brassard , n. ltkenhaus , t. mor and b. c. sanders , phys . rev . lett .* 85 * , 1330 ( 2000 ) . w. k. wootters and w. h. zurek , nature * 299 * , 802 ( 1982 ) . c. h. bennett and g. brassard , proc .conference on computers , systems and signal processing , bangalore , india , ieee press , new york , 175 ( 1984 ) .h. inamori , n. ltkenhaus and d. mayers , eur .j. d * 41 * , 599 ( 2007 ) .d. gottesman , h .- k .lo , n. ltkenhaus and j. preskill , quant .comput . * 4 * , 325 ( 2004 ) .hwang , phys .lett . * 91 * , 057901 ( 2003 ) .lo , x. ma and k. chen , phys .* 94 * , 230504 ( 2005 ) .wang , phys .lett . * 94 * , 230503 ( 2005 ) ; x. ma , b. qi , y. zhao and h .- k .lo , phys .a * 72 * , 012326 ( 2005 ) ; x .- b .wang , phys .a * 72 * , 012322 ( 2005 ) ; x .- b .wang , phys .a * 72 * , 049908 ( e ) ( 2005 ) .v. scarani , a. acn , g. ribordy and n. gisin , phys .lett . * 92 * , 057901 ( 2004 ) .m. koashi , phys .lett . * 93 * , 120501(2004 ) ; k. tamaki , n. ltkenhaus , m. koashi and j. batuwantudawe , preprint quant - ph/0607082 ; k. inoue , e. waks and y. yamamoto , phys .a * 68 * , 022317 ( 2003 ) .y. zhao , b. qi , x. ma , h .- k .lo and l. qian , phys .96 * , 070502 ( 2006 ) ; y. zhao , b. qi , x. ma , h .- k . lo and l. qian , proc . of ieee international symposium on information theory ( isit06 ) , 2094 ( 2006 ) ; c .- z .peng , j. zhang , d. yang , w .- b .gao , h .- x .ma , h. yin , h .- p .zeng , t. yang , x .- b .wang and j .- w .pan , phys .. lett . * 98 * , 010505 ( 2007 ) ; d. rosenberg , j. w. harrington , p. r. rice , p. a. hiskett , c. g. peterson , r. j. hughes , a. e. lita , s. w. nam and j. e. nordholt , phys . rev . lett .* 98 * , 010503 ( 2007 ) ; t. schmitt - manderbach , h. weier , m. frst , r. ursin , f. tiefenbacher , t. scheidl , j. perdigues , z. sodnik , c. kurtsiefer , j. g. rarity , a. zeilinger and h. weinfurter , phys .* 98 * , 010504 ( 2007 ) ; z. l. yuan , a. w. sharpe and a. j. shields , appl .. lett . * 90 * , 011118 ( 2007 ) ; z .- q . yin , z .- f .han , w. chen , f .- x .l . wu and g .- c .guo , chin .lett * 25 * , 3547 ( 2008 ) ; j. hasegawa , m. hayashi , t. hiroshima , a. tanaka and a. tomita , preprint quant - ph/0705.3081 ; j. f. dynes , z. 
l. yuan , a. w. sharpe and a. j. shields , optics express * 15 * , 8465 ( 2007 ) .d. gottesman and h .- k .lo , ieee trans .theory * 49 * , 457 ( 2003 ) .x. ma , c .- h .f. fung , f. dupuis , k. chen , k. tamaki and h .- k .lo , phys .a * 74 * , 032330 ( 2006 ) .b. kraus , n. gisin and r. renner , phys .lett . * 95 * , 080501 ( 2005 ) ; r. renner , n. gisin and b. kraus , phys . rev .a * 72 * , 012332 ( 2005 ) ; j. m. renes and graeme smith , phys . rev . lett . * 98 * , 020502 ( 2007 ) .p. w. shor and j. a. smolin , preprint quant - ph/9604006v2 ; d. p. divincenzo , p. w. shor and j. a. smolin , phys .a * 57 * , 830 ( 1998 ) ; g. smith and j. a. smolin , phys .* 98 * , 030501 ( 2007 ) ; h .- k .lo , quantum inf .* 1 * , 81 ( 2001 ) ; g. smith , j. m. renes and j. a. smolin , phys . rev . lett .* 100 * , 170502 ( 2008 ) ; o. kern and j. m. renes , quantum inf .* 8 * , 756 ( 2008 ) .n. ltkenhaus , phys .a * 59 * , 3301 ( 1999 ) ; n. ltkenhaus , appl .b : lasers opt . *69 * , 395 ( 1999 ) ; n. ltkenhaus , phys . rev .a * 61 * , 052304 ( 2000 ) .m. curty , m. lewenstein and n. ltkenhaus , phys .lett . * 92 * , 217903 ( 2004 ) .m. curty , o. ghne , m. lewenstein and n. ltkenhaus , phys .a * 71 * , 022306 ( 2005 ) .t. moroder , m. curty and n. ltkenhaus , phys .a * 74 * , 052301 ( 2006 ) . t. moroder , m. curty and n. ltkenhaus , phys .a * 73 * , 012311 ( 2006 ) .m. curty and t. moroder , phys .a * 75 * , 052336 ( 2007 ) .l. vandenberghe and s. boyd , siam review * 38 * , 49 ( 1996 ) ; s. boyd and l. vandenberghe , _ convex optimization _ ( cambridge university press , cambridge , england , 2004 ) .k. horodecki , m. horodecki , p. horodecki and j. oppenheim , preprint arxiv : quant - ph/0506189 . herewe define quantum correlations as those correlations that can not be explained by means of an intercept - resend attack .k. horodecki , m. horodecki , p. horodecki and j. oppenheim , phys .lett . * 94 * , 160502 ( 2005 ) ; v. vedral and m. b. plenio , phys .1619 ( 1998 ) . k. audenaert , j. eisert , e. jan , m. b. plenio , s. virmani and b. de moor , phys .87 * , 217902 ( 2001 ) .m. lewenstein and a. sanpera , phys .lett . * 80 * , 2261 ( 1997 ) ; s. karnas and m. lewenstein , j. phys .a * 34 * , 6919 ( 2001 ) .m. duek , m. jahma and n. ltkenhaus , phys . rev . a * 62 * , 022306 ( 2000 ) . t. tsurumaru and k. tamaki , physa * 78 * , 032302 ( 2008 ) ; n. beaudry , t. moroder and n. ltkenhaus , phys .* 101 * , 093601 ( 2008 ) .a. peres , phys .lett . * 77 * , 1413 ( 1996 ) .b. kraus , j. i. cirac , s. karnas and m. lewenstein , phys .a * 61 * , 062302 ( 2000 ) .note that every equality constraint of the form , for any functions and , can always be represented by two inequality constraints and \geq{}0 $ ] .moreover , two ( or even more ) lmi constraints , can always be combined into a single new lmi constraint as before restricting alice and bob to only communicate one - way , one typically allows them to realize an initial two - way communication step to estimate the joint probability distribution describing their measurements outcomes .this is the approach that we follow in this paper . in decoy state qkdthis is also important in order to distinguish between signal and decoy pulses and to estimate the conditional probabilities .f. grosshans , g. van assche , j. wenger , r. brouri , n. cerf and p. grangier , nature ( london ) * 421 * , 238 ( 2003 ) .a. c. doherty , p. a. parrilo and f. m. spedalieri , phys .lett . * 88 * , 187904 ( 2002 ) ; a. c. doherty , p. a. parrilo and f. m. 
spedalieri , phys .a * 69 * , 022308 ( 2004 ) ; a. c. doherty , p. a. parrilo , and f. m. spedalieri , phys .a * 71 * , 032333 ( 2005 ) . from now on, the term extension will always stand for a symmetric extension to two copies of system or .we will not make any further distinction between the different types of extension and we simply call the state extendible .the extension to two copies of system corresponds to dr , and extensions to two copies of system corresponds to rr .k. c. toh , r. h. tutuncu and m. j. todd , optim .methods software * 11 * , 545 ( 1999 ) .j. lfberg , in _ proceedings of the cacsd conference _ , taipei , taiwan , p. 284x. ma , ph.d .thesis , university of toronto , 2008 . c. gobby , z. l. yuan and a. j. shields , appl .. lett . * 84 * , 3762 ( 2004 ) .lo , h. f. c. chau and m. ardehali , j. cryptology * 18 * , 133 ( 2005 ) .m. horodecki , p. w. shor , and m. b. ruskai , rev .phys . * 15 * , 629 ( 2003 ) ; m. b. ruskai , rev . math . phys .* 15 * , 643 ( 2003 ) .
the use of decoy states in quantum key distribution ( qkd ) has provided a method for substantially increasing the secret key rate and distance that can be covered by qkd protocols with practical signals . the security analysis of these schemes , however , leaves open the possibility that the development of better proof techniques , or better classical post - processing methods , might further improve their performance in realistic scenarios . in this paper , we derive upper bounds on the secure key rate for decoy state qkd . these bounds are based basically only on the classical correlations established by the legitimate users during the quantum communication phase of the protocol . the only assumption about the possible post - processing methods is that double click events are randomly assigned to single click events . further we consider only secure key rates based on the uncalibrated device scenario which assigns imperfections such as detection inefficiency to the eavesdropper . our analysis relies on two preconditions for secure two - way and one - way qkd : the legitimate users need to prove that there exists no separable state ( in the case of two - way qkd ) , or that there exists no quantum state having a symmetric extension ( one - way qkd ) , that is compatible with the available measurements results . both criteria have been previously applied to evaluate single - photon implementations of qkd . here we use them to investigate a realistic source of weak coherent pulses . the resulting upper bounds can be formulated as a convex optimization problem known as a semidefinite program which can be efficiently solved . for the standard four - state qkd protocol , they are quite close to known lower bounds , thus showing that there are clear limits to the further improvement of classical post - processing techniques in decoy state qkd .
quantum key distribution ( qkd ) admits two remote parties , known as alice and bob , to share unconditional secret key , even the eavesdropper ( eve ) has ultimate power admitted by the quantum mechanics .although the unconditional security have been proved for both the ideal system and the practical system in past years , some assumptions are set to limit eve s attack strategy or to ignore some imperfections existed in the practical qkd system . generally speaking, the practical qkd system is imperfect .any deviation between the standard security analysis and the practical qkd system will leave a loophole for eve to obtain more information . in the worst case ,eve can exploit all these imperfections together to maximize her information about the secret key .thus it is important to do research on the practical qkd system carefully and close these loopholes to guarantee the unconditional security of key .in fact , some potential attacks using the imperfection of a practical qkd system have been discovered , for example , timing side channel attack , faked states attack , blinding attack , trojan - horse attacks , time - shifted attack , phase - remapping attack .therefore , when the qkd system is used in the practical situation , the legitimate parties should consider the potential attack according to any imperfection existed in the practical system and find defense strategies against them .in all the practical qkd system based on long distance fiber , the major difficulty is to maintain the stability and compensate the birefringence of fiber . in order to resolve this problem , muller _ et al . _proposed an interesting two - way _ plug - and - play _ scheme , which can compensate the birefringence automatically . in this system , bob sends a strong reference pulse to alice .then alice encodes her information to the reference pulse , attenuates it to single photon level , and sends it back to bob .since the pulse travels back and forth in the quantum channel , the birefringence is compensated automatically . however , since alice admits the pulse go in and go out of her zone , it will leave a backdoor for eve to implement variable trojan - horse attack . in this paper, we propose a passive faraday mirror ( pfm ) attack in two way _ plug - and - play _ qkd system based on the imperfection of faraday mirror ( fm ) which plays a very important role in compensating the birefringence of fiber .our results show that , for the bb84 protocol , when the fm deviates from the ideal situation , the dimension of hilbert space spanned by the states sent by alice is three instead of two . thus it will give eve more information to spy the secret key .when the legitimate parties are unaware of our attack , unconditional security of the generated key must be compromised .thus , in practical situation , it is very important for alice and bob to consider our attack when they judge whether the _ plug - and - play _ qkd system is secure or not . in the following , we first , in sec.[sec : fm ] and sec.[sec : attack ] , introduce the imperfection of fm and analyze a pfm attack based on this imperfection . in sec.[sec : sim ] , we find the minimal qber between alice and bob induced by eve , when she uses an optimal and suboptimal povm measurement strategy to implement the intercept - and - resend attack . 
in sec.[sec : con ] , we give a brief conclusion of this paper .in this section , we first introduce the two way _ plug - and - play _ system briefly , and show why the fm can be used to compensate the birefringence of fiber .then we show how the imperfection of fm can be used by eve to spy the secret key .a simple diagram of _ plug - and - play _ system without eve is shown in fig.[fig:1](a ) .bob sends a strong reference pulse to alice , which is horizontally polarized .the pulse will be divided equally into two parts by a beam splitter ( bs ) , noted as _ a _ and _ b_. a polarization controller ( not shown in fig.[fig:1 ] ) is used to change the polarization of _ b _ to guarantee it can pass the polarization beam splitter ( pbs ) totally . generally speaking , due to the birefringence of fiber , the polarization of _ a _ and_ b _ are random , when they arrive at alice s zone sequentially .however , a fm can be used to compensate the birefringence of channel automatically .a _ and _ b _ return bob s zone , their polarization are orthogonal to that of their initial state .then they will travel the other path and interfere in bs .therefore , fm plays an important role in compensating the birefringence of fiber .now , we show why the fm can do this .the fm is a combination of a faraday rotator and an ordinary mirror .in ideal situation , and the jones matrix of fm can be written as : thus the polarization of the outgoing state is always orthogonal to that of the incoming state , regardless of the input polarization state .it is easy to prove that for any birefringence medium , the following equation always holds , that is : where and are the jones matrices of birefringence medium when the photon travels forward and backward the quantum channel , which are given by : where , are the propagation phases of ordinary and extraordinary rays and is the rotation angle between the reference basis and the eigenmode basis of the birefringence medium .eq . shows clearly that , in the ideal situation , the _ plug - and - play _ system can compensate the birefringence of medium automatically . here, we remark that although the _ plug - and - play _ system will suffer from the untrusted source " problem in which the source incoming alice s zone is controlled by eve totally , the security has been rigorously proved in a few recent works .thus , this problem is not considered in this paper and the additional setups for alice to monitor the untrusted source " are also not shown in fig.[fig:1 ] . in the discussion above, we have shown that , in the ideal situation , fm can be used to compensate the birefringence of fiber automatically . however , in practical case , the angle of the faraday rotator in fm is not exact .is not valid and the jones matrix of a practical fm should be rewritten as : where is the angle of faraday rotator in a practical fm . generally speaking, is small .for example , in the center wavelengths 1550 nm and 1310 nm , the maximal rotation angle tolerance is ( at ) for the popular fm produced by _ newport _ and _ general photonics _ .thus , in this paper , we only consider the case that .when fm is imperfect , the birefringence of fiber can not be compensated totally and additional qber will be induced . however , the additional qber is just the minor bug of the practical fm , since is very small .the major one is that the imperfection of fm will leave a loophole for eve to spy the secret key . 
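the automatic compensation can be traced to a simple linear - algebra identity : for any 2x2 jones matrix t one has t^t j t = det(t) j , where j is the antisymmetric matrix [[0,-1],[1,0]] . if , in a suitable convention , the backward pass through a reciprocal fibre is described by the transpose of the forward jones matrix and the ideal double - pass fm is proportional to j , then the whole round trip collapses to det(t) j , i.e. it is independent of the birefringence t , which is the compensation property used above . the snippet below checks this identity numerically and shows how a rotator error spoils the t - independence ; modelling the imperfect double - pass fm simply as a rotation by 90 degrees plus twice the rotator error is an illustrative assumption , not necessarily the exact jones matrix of the equation above .

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # antisymmetric matrix; ideal double-pass FM is ~ J

def random_birefringence():
    """Random 2x2 unitary Jones matrix standing for the one-way fibre birefringence."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

def double_pass_fm(delta):
    """Assumed model of the double-pass FM: a rotation by 90 degrees + 2*delta."""
    a = np.pi / 2 + 2.0 * delta
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def compensation_error(delta, trials=200):
    """How strongly the round trip T^T F T depends on the fibre T.
    Zero means perfect compensation (round trip proportional to F for every T)."""
    F = double_pass_fm(delta)
    errs = []
    for _ in range(trials):
        T = random_birefringence()
        rt = T.T @ F @ T            # transpose: assumed backward matrix of a reciprocal fibre
        errs.append(np.linalg.norm(rt / np.linalg.det(T) - F))
    return max(errs)

# identity check: T^T J T = det(T) J for an arbitrary 2x2 matrix T
T = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
assert np.allclose(T.T @ J @ T, np.linalg.det(T) * J)

for delta_deg in (0.0, 1.0, 3.0):
    d = np.deg2rad(delta_deg)
    print(f"rotator error {delta_deg:>4.1f} deg -> worst-case compensation error "
          f"{compensation_error(d):.3f}")
```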
in the following ,we show how eve can use this imperfection , which is called pfm attack in this paper .the pfm attack is shown in part(b ) of fig.[fig:1 ] . in the diagram , we only draw the major part of eve s attack that how eve probes alice s information . in order to do this , eve sends two time modes _ c _ and _ d _ to alice .note that the two modes should be coherent , which can be obtained by splitting a pulse with a bs like the generation of mode _ a _ and _ b _ sent by bob .the polarization of the two modes should also be the same , which is controlled by eve s polarization controller(pc ) .we assume the polarization state of photon that sent to alice by eve is given by : note that the polarization of incoming state is controlled by eve totally , thus and are any complex number that satisfy .simply , we only consider the the special case that eve sets and in this paper .it is easy to prove that the jones vectors of output polarization state for the two time modes can be written as : where and are the jones vectors of mode _ c _ and mode _d_. here we assume that only _c _ is modulated by alice . are the indices of the four states modulated by alice . is the phase difference between states , in the standard bb84 scheme , , but if eve combines our attack with the phase - remapping attack , ] in the following discussion . here , we remark again that the following results are obtained based on the suboptimal strategy described above . if eve can find the global solution of eq ., she can improve the following results .furthermore , note that is a singular point in pfm attack , since , in this point , the dimension of hilbert space spanned by the four states of eq . is reduced to two .it means that eve can not implement our attack in this point .thus , this point is excluded in the following simulation .fig.[fig:2 ] shows clearly the probability that eve obtains outcome successfully , , changes with for given .the larger is , the easier eve can load her attack . 
here, we remark that can be explained as the maximal transmittance of channel between alice and bob that eve can load this attack successfully under the suboptimal strategy for given and .for example , when and , .it means that if eve wants to implement this attack successfully , the transmittance of channel between alice and bob should be smaller than , which corresponds to a 124 km long fiber ( the typical loss of fiber is about 0.21 db / km ) .furthermore , can also be explained as that , for a given transmittance of channel , eve can not exploit the imperfection of fm that is smaller than a given value .for example ,when and , is smaller than .thus for a 142.5 km long fiber ( the transmittance is about ) , eve can not exploit the imperfection of fm that .the qber between alice and bob induced by eve s attack is shown in fig.[fig:3 ] .it shows that even in the case that , the qber induced by eve s attack is much lower than 25% which is qber induced by the general intercept - and - resend attack .it is also lower than 20% , which is the maximal tolerable qber in the bb84 protocol under the two - way post - processing when eve does not exploit the imperfection of fm .furthermore , if eve combines the phase - remapping attack with our attack , she can reduce the qber to a very small level .for example , if eve sets the phase difference , the qber induced by her attack is just 3.57% , which is lower than 11% that is the maximal tolerable qber for the bb84 protocol under the collective attack and one - way post - processing .then no secret key can be generated when the qber estimated by alice and bob is larger than this value .therefore , it is necessary for the legitimate parties to consider our attack in a practical _ plug - and - play _ qkd system .it is interesting that , when is given , the qber induced by eve is almost constant and independent of the degree of the imperfection of fm , which is shown in fig.[fig:3 ] clearly .the main reason is that is very small .in fact , will changes with slightly , but the difference is so small that it can be ignored . in order to show the conclusion clearly , we consider eq . with .it is easy to check that , under the suboptimal strategy of eq . , , here , and are the minimal un - zero eigenvalue of and .a simple evolution show that , when , the three eigenvalues of are , .thus .the same result can be obtained for .therefore , the error rate of bob induced by eve can be written as , which shows clearly that is constant in order of .in fact , the strict numerical simulation shows that , for given , is almost constant in order of for each . note that , although is almost independent of , will affect obviously ( see fig.[fig:2 ] ) .fig.[fig:3 ] shows that when eve combines the phase remapping attack with the imperfection of fm , the qber induced by her attack can be reduced to a very small level .for example , when she sets , the qeer is just 4.72% .however , when the qber is reduced , the probability that eve implements her attack successfully will also be reduced , see fig.[fig:2 ] .therefore , when eve loads her attack , she should make a trade - off between the qber and the efficiency to maximize her information . 
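the translation between a channel transmittance and a fibre length quoted here is just the usual loss formula eta = 10^(-alpha*l/10) ; a small helper makes the correspondence explicit . it uses the 0.21 db / km figure given in the text , and the specific transmittance below is only an illustration consistent with the quoted 124 km .

```python
import numpy as np

ALPHA_DB_PER_KM = 0.21   # typical fibre loss quoted in the text

def transmittance_to_distance(eta):
    """Fibre length (km) whose loss 10^(-alpha*L/10) equals the transmittance eta."""
    return -10.0 * np.log10(eta) / ALPHA_DB_PER_KM

def distance_to_transmittance(length_km):
    """Channel transmittance of a fibre of the given length."""
    return 10.0 ** (-ALPHA_DB_PER_KM * length_km / 10.0)

# a transmittance of about 2.5e-3 corresponds to roughly the 124 km quoted above
print(f"{transmittance_to_distance(2.5e-3):.0f} km")
# and a 142.5 km fibre corresponds to a transmittance of about
print(f"{distance_to_transmittance(142.5):.2e}")
```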
in the following ,we compare our attack with the phase remapping attack .the probability that eve obtains outcome successfully is shown in fig.[fig:4 ] .it shows that the probability that eve implements the phase remapping attack successfully is much larger than that of our attack .although eve can increase the probability that she implements her attack successfully by increasing the phase difference , it will induce more qber which is shown in fig.[fig:5 ] .fig.[fig:5 ] shows that , when the phase difference is increased , the qber induced by eve s attack will increase quickly .this conclusion holds for both the phase remapping attack and our attack .however , the qber under our attack is much lower than that of the phase remapping attack .for example , when , for the phase remapping attack , but in our attack , .in the two - way _ plug - and - play _ qkd system , a perfect fm can be used to compensate the birefringence of fiber automatically and perfectly .however , the practical fm is imperfect .although the deviation from the ideal case is small and the qber induced by this imperfection is slight , it will leave a loophole for eve to spy the secret key between alice and bob .in fact , when the practical fm deviates from the ideal case , the dimension of hilbert space spanned by the states sent by alice is three instead of two .thus the standard security analysis is invalid here , some careful strategy should be adopted by the legitimate parties to monitor this imperfection . in this paper, we propose a pfm attack in two way _ plug - and - play _ qkd system based on the imperfection of fm .the results show that , under this attack , the qber between alice and bob induced by eve is much lower than 25% , which is the qber for the general intercept - and - resend attack when fm is perfect .furthermore , when eve combines the imperfection of fm with phase remapping attack , the qber induced by her attack can be lower than 11% , which is the maximal tolerable qber for the bb84 protocol under the collective attack and one - way post - processing .therefore , in the practical case , the legitimate parties should pay more attention to the imperfection of fm , otherwise , the secrecy of generated key will be compromised .however , we remark that , although eve can combine pfm attack with phase - remapping attack to reduce qber between alice and bob induced by her attack , the probability that she can load this attack successfully is dependent on the loss of channel obviously . 
in other words , eve can only implement pfm attack in long distance qkd system , which can be estimated in fig.[fig:2 ] and fig.[fig:4 ] .this work is supported by national natural science foundation of china grants no.61072071 .shi - hai sun is supported by hunan provincial innovation foundation for postgraduate no.cx2010b007 and fund of innovation , graduate school of nudt no.b100203 .there are three problems to find the global optimal solution of eq.(14 ) : ( 1 ) there are two penalty functions , it is needed to make a trade - off between them , thus it is a problem to deal with it in the numerical simulation .( 2 ) for the five povm operators belonging to 3-d hilbert space , the number of variable is 39 , it is too much for the simulation .( 3)in our problem , the geometrical format of the restrictions is irregular , thus it is hard to find a suitable algorithm to solve this problem .however , it does not matter , since we just want to show the loophole caused by the imperfection of fm , if we can find a set of povm operators to show this loophole , it is enough .here we associate with the channel transmittance , but do not consider the transmittance of bob s setups which includes both the transmittance of bob s optical setups and the efficiency of bob s single photon detector ( spd ) , there are two reasons for this consideration : ( 1 ) the channel can be controlled by eve , but the bob s setups are placed in bob s secure zone .thus eve can replace the practical channel with a lossless channel , but she can not change the efficiency of bob s detectors . ( 2 ) in order not to be discovered , eve should keep bob s count rate unchanged . when eve is absent , bob s count rate is , where is the channel transmittance between alice and bob , is the transmittance of bob s setups . in this paper, we assume eve resends a single photon pulse to bob according to her detection result .thus , which means .in fact , we remark that , although eve can not change the transmittance of bob s setups , she can compensate it with a birght light whose intensity is . then .thus , when , , which means eve also can associate with the overall efficiency of qkd system by compensating the transmittance of bob s setups .however , when a birght light is used by eve , it will introduce other problems , for example , the double click .therefore , in this paper , we only associate with the channel transmittance .
the faraday mirror ( fm ) plays a very important role in maintaining the stability of a two - way _ plug - and - play _ quantum key distribution ( qkd ) system . however , a practical fm is imperfect , which not only introduces an additional quantum bit error rate ( qber ) but also leaves a loophole for eve to spy on the secret key . in this paper , we propose a passive faraday mirror attack on a two - way qkd system based on the imperfection of the fm . our analysis shows that , if the fm is imperfect , the dimension of the hilbert space spanned by the four states sent by alice is three instead of two . thus eve can distinguish these states with a set of povm operators acting on a three - dimensional space , which reduces the qber induced by her attack . furthermore , a relationship between the degree of imperfection of the fm and the transmittance of the practical qkd system is obtained . the results show that the probability that eve can load her attack successfully depends strongly on the degree of imperfection of the fm , whereas the qber induced by eve 's attack varies only slightly with it .
there are currently two main methods for automatic part - of - speech tagging .the prevailing one uses essentially statistical language models automatically derived from usually hand - annotated corpora .these corpus - based models can be represented e.g. as collocational matrices ( garside et al .1987 ; church 1988 ) , hidden markov models ( cf . cutting et al .1992 ) , local rules ( e.g. hindle 1989 ) and neural networks ( e.g. schmid 1994 ) .taggers using these statistical language models are generally reported to assign the correct and unique tag to 95 - 97% of words in running text , using tag sets ranging from some dozens to about 130 tags .the less popular approach is based on hand - coded linguistic rules .pioneering work was done in the 1960 s ( e.g. greene and rubin 1971 ) .recently , new interest in the linguistic approach has been shown e.g. in the work of ( karlsson 1990 ; voutilainen et al .1992 ; oflazer and kuruz 1994 ; chanod and tapanainen 1995 ; karlsson et al .1995 ; voutilainen 1995 ) .the first serious linguistic competitor to data - driven statistical taggers is the english constraint grammar parser , engcg ( cf .voutilainen et al .1992 ; karlsson et al .the tagger consists of the following sequentially applied modules : 1 .tokenisation 2 .morphological analysis 1 .lexical component 2 .rule - based guesser for unknown words 3 .resolution of morphological ambiguities the tagger uses a two - level morphological analyser with a large lexicon and a morphological description that introduces about 180 different ambiguity - forming morphological analyses , as a result of which each word gets 1.7 - 2.2 different analyses on an average .morphological analyses are assigned to unknown words with an accurate rule - based ` guesser ' .the morphological disambiguator uses constraint rules that discard illegitimate morphological analyses on the basis of local or global context conditions .the rules can be grouped as ordered subgrammars : e.g. heuristic subgrammar 2 can be applied for resolving ambiguities left pending by the more ` careful ' subgrammar 1 .older versions of engcg ( using about 1,150 constraints ) are reported ( voutilainen et al .1992 ; voutilainen and heikkil 1994 ; tapanainen and voutilainen 1994 ; voutilainen 1995 ) to assign a correct analysis to about 99.7% of all words while each word in the output retains 1.04 - 1.09 alternative analyses on an average , i.e. some of the ambiguities remain unresolved .these results have been seriously questioned .one doubt concerns the notion `` correct analysis '' . 
for example church ( 1992 ) argues that linguists who manually perform the tagging task using the double - blind method disagree about the correct analysis in at least 3% of all words even after they have negotiated about the initial disagreements .if this were the case , reporting accuracies above this 97% ` upper bound ' would make no sense .however , voutilainen and jrvinen ( 1995 ) empirically show that an interjudge agreement virtually of 100% is possible , at least with the engcg tag set if not with the original brown corpus tag set .this consistent applicability of the engcg tag set is explained by characterising it as grammatically rather than semantically motivated .another main reservation about the engcg figures is the suspicion that , perhaps partly due to the somewhat underspecific nature of the engcg tag set , it must be so easy to disambiguate that also a statistical tagger using the engcg tags would reach at least as good results .this argument will be examined in this paper . it will be empirically shown ( i ) that the engcg tag set is about as difficult for a probabilistic tagger as more generally used tag sets and ( ii ) that the engcg disambiguator has a clearly smaller error rate than the probabilistic tagger when a similar ( small ) amount of ambiguity is permitted in the output .a state - of - the - art statistical tagger is trained on a corpus of over 350,000 words hand - annotated with engcg tags , then both taggers ( a new version known as engcg-2 with 3,600 constraints as five subgrammars , and a statistical tagger ) are applied to the same held - out benchmark corpus of 55,000 words , and their performances are compared .the results disconfirm the suspected ` easiness ' of the engcg tag set : the statistical tagger s performance figures are no better than is the case with better known tag sets .two caveats are in order .what we are not addressing in this paper is the work load required for making a rule - based or a data - driven tagger .the rules in engcg certainly took a considerable effort to write , and though at the present state of knowledge rules could be written and tested with less effort , it may well be the case that a tagger with an accuracy of 95 - 97% can be produced with less effort by using data - driven techniques .another caveat is that engcg alone does not resolve all ambiguities , so it can not be compared to a typical statistical tagger if full disambiguation is required .however , voutilainen ( 1995 ) has shown that engcg combined with a syntactic parser produces morphologically unambiguous output with an accuracy of 99.3% , a figure clearly better than that of the statistical tagger in the experiments below ( however , the test data was not the same ) . 
before examining the statistical tagger ,two practical points are addressed : the annotation of the corpora used , and the modification of the engcg tag set for use in a statistical tagger .the stochastic tagger was trained on a sample of 357,000 words from the brown university corpus of present - day english that was annotated using the engcg tags .the corpus was first analysed with the engcg lexical analyser , and then it was fully disambiguated and , when necessary , corrected by a human expert .this annotation took place a few years ago .since then , it has been used in the development of new engcg constraints ( the present version , engcg-2 , contains about 3,600 constraints ) : new constraints were applied to the training corpus , and whenever a reading marked as correct was discarded , either the analysis in the corpus , or the constraint itself , was corrected . in this way , the tagging quality of the corpus was continuously improved .our comparisons use a held - out benchmark corpus of about 55,000 words of journalistic , scientific and manual texts , i.e. , no training effects are expected for either system .the benchmark corpus was annotated by first applying the preprocessor and morphological analyser , but not the morphological disambiguator , to the text .this morphologically ambiguous text was then independently and fully disambiguated by two experts whose task was also to detect any errors potentially produced by the previously applied components .they worked independently , consulting written documentation of the tag set when necessary .then these manually disambiguated versions were automatically compared with each other . at this stage , about 99.3% of all analyses were identical .when the differences were collectively examined , virtually all were agreed to be due to clerical mistakes . only in the analysis of 21 words , different ( meaning - level ) interpretations persisted , andeven here both judges agreed the ambiguity to be genuine .one of these two corpus versions was modified to represent the consensus , and this ` consensus corpus ' was used as a benchmark in the evaluations . as explained in voutilainen and jrvinen ( 1995 ) ,this high agreement rate is due to two main factors .firstly , distinctions based on some kind of vague semantics are avoided , which is not always case with better known tag sets .secondly , the adopted analysis of most of the constructions where humans tend to be uncertain is documented as a collection of tag application principles in the form of a grammarian s manual ( for further details , cf .voutilainen and jrvinen 1995 ) .the corpus - annotation procedure allows us to perform a text - book statistical hypothesis test .let the null hypothesis be that any two human evaluators will necessarily disagree in at least 3% of the cases . under this assumption ,the probability of an observed disagreement of less than 2.88% is less than 5% .this can be seen as follows : for the relative frequency of disagreement , , we have that is approximately , where is the actual disagreement probability and is the number of trials , i.e. , the corpus size .this means that where is the standard normal distribution function .this in turn means that here is 55,000 and . 
under the null hypothesis , is at least 3% and thus : we can thus discard the null hypothesis at significance level 5% if the observed disagreement is less than 2.88% .it was in fact 0.7% before error correction , and virtually zero ( ) after negotiation .this means that we can actually discard the hypotheses that the human evaluators in average disagree in at least 0.8% of the cases before error correction , and in at least 0.1% of the cases after negotiations , at significance level 5% .the engcg morphological analyser s output formally differs from most tagged corpora ; consider the following 5-ways ambiguous analysis of `` walk '' : .... walk walk < sv >< svo > v subjunctive vfin walk < sv >< svo > v imp vfin walk < sv >< svo > v inf walk < sv > < svo > v pres -sg3vfin walk n nom sg .... statistical taggers usually employ single tags to indicate analyses ( e.g. `` nn '' for `` n nom sg '' ) . therefore a simple conversion program was made for producing the following kind of output , where each reading is represented as a single tag : .... walk v - subjunctive v - imp v - inf v - pres - base n - nom - sg .... the conversion program reduces the multipart engcg tags into a set of 80 word tags and 17 punctuation tags ( see appendix ) that retain the central linguistic characteristics of the original engcg tag set .a reduced version of the benchmark corpus was prepared with this conversion program for the statistical tagger s use .also engcg s output was converted into this format to enable direct comparison with the statistical tagger .the statistical tagger used in the experiments is a classical trigram - based hmm decoder of the kind described in e.g. , and numerous other articles .following conventional notation , e.g. and , the tagger recursively calculates the , , and variables for each word string position and each possible state : here where is the event of the word being emitted from state and is the event of the word being the particular word that was actually observed in the word string. note that for ; \cdot a_{jk_{t+1}}\\ \beta_t(i ) & = & \sum_{j=1}^n \beta_{t+1}(j ) \cdot p_{ij } \cdot a_{jk_{t+1}}\\ \delta_{t+1}(j ) & = & \left[\max_i \delta_t(i ) \cdot p_{ij}\right ] \cdot a_{jk_{t+1}}\end{aligned}\ ] ] where are the transition probabilities , encoding the tag n - gram probabilities , and are the lexical probabilities . here is the random variable of assigning a tag to the word and is the last tag of the tag sequence encoded as state .note that need not imply .more precisely , the tagger employs the converse lexical probabilities this results in slight variants , , and of the original quantities : and thus and the rationale behind this is to facilitate estimating the model parameters from sparse data . 
in more detail , it is easy to estimate for a previously unseen word by backing off to statistics derived from words that end with the same sequence of letters ( or based on other surface cues ) , whereas directly estimating is more difficult .this is particularly useful for languages with a rich inflectional and derivational morphology , but also for english : for example , the suffix `` -tion '' is a strong indicator that the word in question is a noun ; the suffix `` -able '' that it is an adjective .more technically , the lexicon is organised as a reverse - suffix tree , and smoothing the probability estimates is accomplished by blending the distribution at the current node of the tree with that of higher - level nodes , corresponding to ( shorter ) suffixes of the current word ( suffix ) .the scheme also incorporates probability distributions for the set of capitalized words , the set of all - caps words and the set of infrequent words , all of which are used to improve the estimates for unknown words .employing a small amount of back - off smoothing also for the known words is useful to reduce lexical tag omissions .empirically , looking two branching points up the tree for known words , and all the way up to the root for unknown words , proved optimal .the method for blending the distributions applies equally well to smoothing the transition probabilities , i.e. , the tag n - gram probabilities , and both the scheme and its application to these two tasks are described in detail in , where it was also shown to compare favourably to ( deleted ) interpolation , see , even when the back - off weights of the latter were optimal .the variables enable finding the most probable state sequence under the hmm , from which the most likely assignment of tags to words can be directly established .this is the normal modus operandi of an hmm decoder . using the variables, we can calculate the probability of being in state at string position , and thus having emitted from this state , conditional on the entire word string . by summing over all states that would assign the same tag to this word ,the individual probability of each tag being assigned to any particular input word , conditional on the entire word string , can be calculated : this allows retaining multiple tags for each word by simply discarding only low - probability tags ; those whose probabilities are below some threshold value .of course , the most probable tag is never discarded , even if its probability happens to be less than the threshold value . by varying the threshold, we can perform a recall - precision , or error - rate - ambiguity , tradeoff .a similar strategy is adopted in .the statistical tagger was trained on 357,000 words from the brown corpus , reannotated using the engcg annotation scheme ( see above ) . in a first set of experiments , a 35,000 word subset of this corpuswas set aside and used to evaluate the tagger s performance when trained on successively larger portions of the remaining 322,000 words .the learning curve , showing the error rate after full disambiguation as a function of the amount of training data used , see figure [ figx ] , has levelled off at 322,000 words , indicating that little is to be gained from further training .we also note that the absolute value of the error rate is 3.51% a typical state - of - the - art figure . 
here , previously unseen words contribute 1.08% to the total error rate , while the contribution from lexical tag omissions is 0.08% .95% confidence intervals for the error rates would range from 0.30% for 30,000 words to 0.20% at 322,000 words .the tagger was then trained on the entire set of 357,000 words and confronted with the separate 55,000-word benchmark corpus , and run both in full and partial disambiguation mode .table [ table ] shows the error rate as a function of remaining ambiguity ( tags / word ) both for the statistical tagger , and for the engcg-2 tagger .the error rate for full disambiguation using the variables is 4.72% and using the variables is 4.68% , both with confidence degree 95% .note that the optimal tag sequence obtained using the variables need not equal the optimal tag sequence obtained using the variables .in fact , the former sequence may be assigned zero probability by the hmm , namely if one of its state transitions has zero probability .
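to make the error - rate versus ambiguity trade - off concrete , the toy sketch below computes per - word tag posteriors with the forward - backward recursions and then retains every tag whose posterior clears a threshold , never dropping the most probable tag . it uses a first - order ( bigram ) model for brevity , whereas the tagger described above encodes tag trigrams in its states and smooths lexical probabilities with a reverse - suffix tree ; the transition and emission numbers are arbitrary .

```python
import numpy as np

def tag_posteriors(pi, A, B, obs):
    """Forward-backward posteriors gamma[t, i] = P(state i at t | whole word string),
    for an HMM with initial probs pi, transitions A[i, j] and emissions B[i, k]."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def retain_tags(gamma_t, threshold):
    """Keep every tag whose posterior clears the threshold, but never drop the best tag."""
    keep = set(np.flatnonzero(gamma_t >= threshold))
    keep.add(int(np.argmax(gamma_t)))
    return sorted(keep)

# toy model: 3 tags, 4 word types
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
B = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.4, 0.4, 0.1],
              [0.2, 0.2, 0.1, 0.5]])
obs = [0, 2, 1, 3]
gamma = tag_posteriors(pi, A, B, obs)
for t, g in enumerate(gamma):
    print(f"word {t}: posteriors {np.round(g, 3)} -> keep tags {retain_tags(g, 0.15)}")
```

lowering the threshold increases the remaining ambiguity ( tags per word ) and decreases the error rate , which is exactly the recall - precision trade - off reported in the table .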
concerning different approaches to automatic pos tagging : engcg-2 , a constraint - based morphological tagger , is compared in a double - blind test with a state - of - the - art statistical tagger on a common disambiguation task using a common tag set . the experiments show that for the same amount of remaining ambiguity , the error rate of the statistical tagger is one order of magnitude greater than that of the rule - based one . the two related issues of priming effects compromising the results and disagreement between human annotators are also addressed .
tracking is a crucial problem in computer vision and has been studied for decades . however , previous methods have not fully deciphered solutions to challenges in tracking such as illumination , occlusion , and scale . the recent application of deep learning methods have greatly improved performance in other computer vision problems such as object detection and action recognition . in this paper , we investigate the visual tracking problem from a deep learning approach .moreover , we show that utilizing the deep features extracted from a pre - trained network produces a more effective and precise means for tracking . current state of the art trackers are able to address a few specific classical challenges each , such as scale or illumination. however , none are able universally handle the variety of issues that may occur in a given video .handcrafted features such as color histogram , histogram of oriented gradients(hog ) , and scale - invariant feature transform ( sift)the backbone of most previous trackers are also prone to these problems . in this workwe aim to investigate the use of recently developed deep features in the context of tracking .features extracted from a pre - trained deep network have shown to be reliable for many computer vision applications , such as object detection and action recognition . however , the usage of these features for visual tracking has not yet been explored . herewe propose a tracking pipeline which takes advantage of both appearance and motion features extracted from a pre - trained deep network .we show that the new features are capable of handling multiple of the common tracking challenges , such as illumination and occlusion and we show that it achieves better results compared to competitive approaches .our tracking algorithm starts by collecting positive and negative training samples from the video sequence .positive samples contain the target whereas negative samples contain less than of the target .the tracker was given the bounding box of the target from the first video frame s annotated ground truth , which included the center of the annotated bounding box as well as the width and height of the box .for the next frames , a simple tracker is used to track the location of the target in order to collect more positive training samples . using these n locations ,we collected labeled positive and negative training samples through data augmentation . by permuting the bounding boxes through rotating and shifting the images several pixels each side , we collected 25 positive samples and 50 negative samples per frame . for the dt ,n was set to four so that we could collect a total of 400 training samples .choosing the value of depends on the challenges faced in the first frames of a video sequence . for cases where the target did not experience a large deformation or illumination change in the first frames , lowering increased the efficiency of the tracker and produced better results .however , lowering results in a trade off yields a lower total number of training samples , decreasing accuracy in videos where the first frames do not provide a clear image of the target . 
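a simplified sketch of the sample - collection step is given below . since the exact overlap criterion for negative windows and the rotation component of the augmentation are not spelled out above , the sketch labels jittered boxes by intersection - over - union against the ground truth with stand - in thresholds and uses translations only .

```python
import numpy as np

rng = np.random.default_rng(0)

def iou(a, b):
    """Intersection-over-union of two boxes given as (cx, cy, w, h)."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def jitter(box, shift_frac, n):
    """Shift a (cx, cy, w, h) box by up to shift_frac of its size, n times."""
    cx, cy, w, h = box
    out = []
    for _ in range(n):
        dx = rng.uniform(-shift_frac, shift_frac) * w
        dy = rng.uniform(-shift_frac, shift_frac) * h
        out.append((cx + dx, cy + dy, w, h))
    return out

def collect_samples(gt_box, n_pos=25, n_neg=50, pos_iou=0.7, neg_iou=0.3):
    """Label jittered boxes as positive/negative by overlap with the ground truth;
    the 0.7 / 0.3 thresholds are stand-in values, not taken from the paper."""
    pos, neg = [], []
    while len(pos) < n_pos or len(neg) < n_neg:
        (cand,) = jitter(gt_box, shift_frac=1.0, n=1)
        o = iou(cand, gt_box)
        if o >= pos_iou and len(pos) < n_pos:
            pos.append(cand)
        elif o <= neg_iou and len(neg) < n_neg:
            neg.append(cand)
    return pos, neg

pos, neg = collect_samples((100.0, 80.0, 40.0, 60.0))
print(len(pos), "positives,", len(neg), "negatives")
```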
to collect testing samples for each frame, we took the location of the target in the previous frame and randomly chose test samples around that location using a gaussian distribution since targets would not change location rapidly from frame to frame .the distribution addressed this by weighting the test samples near the last location more than test samples further away from the last location .the initial tracking algorithm used a convolutional neural network ( cnn ) inspired by krizhevsky s network to extract the features .the network has five convolutional layers and two fully connected layers with a max - pooling layer in between each convolution and a softmax regression loss .we obtained a model of the network that was pre - trained on the imagenet data set .our algorithm passed each of the labeled training images through the network and extracted the feature vector from the second fully connected layer .we passed these feature vectors to an svm classifier for training . during testing, the same network is used to extract the features for every candidate window .once the features are computed , the model parameters learned in training are passed to the svm for classification to get the confidence score for each sample .the second cnn tested added an additional fully connected layer to the structure in ii.b .this third fully connected layer had two outputs to match our two classes : background and foreground.in this model , the classifier and features were learned simultaneously to increase efficiency , a quality necessary in an online tracker .furthermore , the low number of training samples for each video sequence prompted us to add fine - tuning for the three fully connected layers with our labeled training samples to the pre - trained model from ii.b .this approach was much simpler than re - training a cnn for each video sequence , given the low number of our training samples , and it made the tracking pipeline to be more efficient , a characteristic necessary in our goals to create an online tracking algorithm .motion information has shown to be a crucial component of tracking , especially when targets are occluded or have sudden changes in appearance . with a single stream network ,our tracker is capable of extracting meaningful features , yet , due to common tracking problems such as occlusion , the tracker would indicate that the foreground was moving to another spot much further away in the frame very suddenly .our initial attempt to address this issue used a linear motion velocity model in the network described in ii.b that did not prove to be effective .however , inspired by the work of simonyan and zisserman , we use a double stream network that adds a motion stream network to our appearance stream network to incorporate temporal information .our model is shown in figure .[ fig : figuremodel ] . 
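before turning to the motion stream, here is a minimal sketch of the appearance-stream feature extraction and svm classification described above; the paper uses a caffe model pre-trained on imagenet, whereas the sketch substitutes torchvision's alexnet and scikit-learn's linear svm as stand-ins, and the input size, normalization constants and the exact cut at the second fully connected layer are our assumptions.

import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import alexnet
from sklearn.svm import LinearSVC

# pre-trained alexnet as a stand-in for the caffe model used in the paper
net = alexnet(weights="IMAGENET1K_V1").eval()
# keep the classifier up to the second fully connected layer
fc_extractor = torch.nn.Sequential(*list(net.classifier.children())[:6])

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(patches):
    # patches: list of h-by-w-by-3 uint8 crops (training samples or candidate windows)
    batch = torch.stack([preprocess(p) for p in patches])
    with torch.no_grad():
        conv = net.avgpool(net.features(batch)).flatten(1)
        feats = fc_extractor(conv)      # 4096-dimensional fully connected activations
    return feats.numpy()

def train_svm(pos_patches, neg_patches):
    X = extract_features(pos_patches + neg_patches)
    y = np.r_[np.ones(len(pos_patches)), np.zeros(len(neg_patches))]
    return LinearSVC(C=1.0).fit(X, y)

def confidence_scores(clf, candidate_patches):
    # signed distance to the svm hyperplane serves as the confidence score
    return clf.decision_function(extract_features(candidate_patches))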
for the motion stream network , we calculated optical flow between every adjacent two frames of the sequence and produced rgb optical flow images using liu s optical flow implementation .we passed these rgb optical flow images into a network identical to the network described in ii.c and trained a separate svm classifier for the motion stream network .we ran both sets of test images through both networks and combined the scores output from the svm using late fusion to take both the motion and appearance of the target in the video into account when predicting the location of the object in the next frame .the dt uses a two - stream network where the two streams are updated at different frequencies .the importance of updating the models can be seen in the issue of tracking objects whose appearance and direction of motion change .after qualitative observations that the direction and speed of the target changed more rapidly than its appearance , the motion stream network was updated more frequently than the appearance stream network .the motion stream network was updated every four frames whereas the appearance stream network was updated either every 50 frames or if the confidence values of the test images dropped below a defined threshold .this threshold was set to .a value below indicates that a sudden occlusion or illumination change has occurred .this method of updating the model allows the dt to handle challenges including over - fitting , change in motion , target shape , occlusion , and illumination . handing scale change is an important component of our tracking pipeline .if the target changes in size while the bounding boxes stay the same size , the resulting test images will contain a large amount of background information or only a portion of the target , leading to incorrectly labeled positive training samples . from prior experimentation, we have found that the features our network learned were scale invariant to some extent and our network accurately handled the location of the target during the beginning of the scale change .thus , updating the model based on a slight scale change was not necessary , and we could instead pass test samples that contain different scales to address scale change .using the original methodology of collecting training and testing samples and then using the two - stream network described in iii.a , the tracker chooses the location with the highest confidence score , assigning that as the location of the target . 
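a minimal sketch of the late-fusion scoring and the two-frequency model update described above follows; the equal fusion weights and the zero confidence threshold are placeholders (the exact values are not stated in the text), and the update bookkeeping is our own simplification.

import numpy as np

def fuse_scores(appearance_scores, motion_scores, w_app=0.5, w_mot=0.5):
    # late fusion of the per-candidate svm scores of the two streams
    return (w_app * np.asarray(appearance_scores)
            + w_mot * np.asarray(motion_scores))

def select_target(candidates, appearance_scores, motion_scores):
    # the candidate with the highest fused confidence is taken as the target
    fused = fuse_scores(appearance_scores, motion_scores)
    best = int(np.argmax(fused))
    return candidates[best], float(fused[best])

def update_flags(frame_idx, best_score, last_app_update, last_mot_update,
                 mot_every=4, app_every=50, conf_threshold=0.0):
    # the motion stream is refreshed frequently, the appearance stream only
    # periodically or when confidence collapses (occlusion, illumination change)
    update_motion = (frame_idx - last_mot_update) >= mot_every
    update_appearance = ((frame_idx - last_app_update) >= app_every
                         or best_score < conf_threshold)
    return update_motion, update_appearance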
at that chosen location ,the image is scaled to 20 different sizes , which are passed as test images to the appearance network .the scale with the highest confidence score is chosen before the tracker updates the size of the bounding box .the main advantage of this approach is that we do not need to sample candidate from all potential locations in different scales , thus reducing computational complexity .the deep tracker was tested on the 315 video sequences from the amsterdam library of ordinary videos for tracking ( alov++ ) and the 29 video sequences from the visual tracker benchmark .these data sets were chosen based on their videos diversity in circumstance various combinations of classical computer vision problems such as occlusion , illumination , shape change , and low contrast were present in these videos .both data sets included the videos in single frames , the ground - truth annotations , and the results of the state of the art trackers that had been run on the data set .the alov++ data set had ground - truth annotations for every 5 frames and the benchmark data set had ground - truth annotations for every frame .both ground - truth annotations acknowledged scale change .we evaluated our tracker against two state of the art tracking methods across two benchmark data sets .the scores are shown in the below tables . in the visual tracker benchmark and the alov++ data set ,our dt outperforms the state of the art tracker by at least 4.28% and 2.77% respectively .[ cols="^,^,^,^",options="header " , ] furthermore , we compared the dt to the top ten competitive methods using success and precision plots .these plots use receiver operating characteristic curves ( roc ) to illustrate the inverse relationship between sensitivity and specificity .these curves were generated by plotting the comparison of true positives out of the number of total true positives and the fraction of false positives out of the total number of false positives .the success plot shows the overlap in the dt s bounding boxes , the ground - truth , and the ratio of successful tracking , whereas the precision plot shows the error in the center location of the bounding box .the overall success and precision plots are shown below . + + + success and precision plots were also calculated for specific classical computer vision problems .the dt outperformed state of the art tracking methods in several categories including occlusion , fast motion , background clutter , out of plane rotation , out of view , illumination variation , deformation , motion blur , and low resolution .our deep tracking algorithm takes advantage of deep features extracted from a pre - trained network , incorporating the information conveyed from both motion and appearance in a dual stream pipeline .further expansions on this work will focus on training a cnn for the motion stream network using specifically rgb optical flow images .y. bengio , p. lamblin , d. popovici , and h. larochelle . `` greedy layer - wise training of deep networks , '' in _ advances in neural information processing systems _ , 2007 .p. lamblin and y. bengio .`` important gains from supervised fine - tuning of deep architectures on large labeled sets , '' in _ advances in neural information processing systems _ , 2010 .y. yang , g. shu , and m. shah .`` semi - supervised learning of feature hierarchies for object detection in a video , in _ ieee conference on computer vision and pattern recognition _ , 2013 .k. simonyan and a. 
zisserman. "two-stream convolutional networks for action recognition in videos," in _advances in neural information processing systems_, 2014. a. krizhevsky, i. sutskever, and g. e. hinton. "imagenet classification with deep convolutional neural networks," in _advances in neural information processing systems_, 2012. y. jia, e. shelhamer, j. donahue, s. karayev, j. long, r. girshick, s. guadarrama, and t. darrell. "caffe: convolutional architecture for fast feature embedding," in _proceedings of the acm international conference on multimedia_, 2014. c. liu. "beyond pixels: exploring new representations and applications for motion analysis." doctoral thesis, _massachusetts institute of technology_, may 2009. a. w. m. smeulders, d. m. chu, r. cucchiara, s. calderara, a. dehghan, and m. shah. "visual tracking: an experimental survey," in _ieee transactions on pattern analysis and machine intelligence_, 2013. y. wu, j. lim, and m. h. yang. "online object tracking: a benchmark," in _ieee conference on computer vision and pattern recognition_, 2013. d. erhan, y. bengio, a. courville, and p. vincent. "visualizing higher-layer features of a deep network," in _dept. iro, université de montréal, tech. rep. 4323_, 2009. s. hare, a. saffari, s. golodetz, v. vineet, m. cheng, s. l. hicks, and p. h. s. torr. "struck: structured output tracking with kernels," in _ieee transactions on pattern analysis and machine intelligence_, 2014. w. zhong, h. lu, and m. yang. "robust object tracking via sparsity-based collaborative model," in _ieee conference on computer vision and pattern recognition_, 2012.
in this paper, we study a discriminatively trained deep convolutional network for the task of visual tracking. our tracker utilizes both motion and appearance features extracted from a pre-trained dual-stream deep convolutional network. we show that the features extracted from our dual-stream network can provide rich information about the target, leading to competitive performance against other state-of-the-art tracking methods.
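to make the evaluation protocol concrete, we close this part with a small sketch of the overlap-based success measure and the center-location-error precision measure discussed above, written according to our understanding of the standard benchmark conventions; the threshold grids are assumptions.

import numpy as np

def success_and_precision(pred_boxes, gt_boxes):
    # boxes: per-frame (x, y, w, h) arrays for tracker output and ground truth
    p = np.asarray(pred_boxes, dtype=float)
    g = np.asarray(gt_boxes, dtype=float)
    # per-frame overlap (intersection over union)
    x1 = np.maximum(p[:, 0], g[:, 0])
    y1 = np.maximum(p[:, 1], g[:, 1])
    x2 = np.minimum(p[:, 0] + p[:, 2], g[:, 0] + g[:, 2])
    y2 = np.minimum(p[:, 1] + p[:, 3], g[:, 1] + g[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = p[:, 2] * p[:, 3] + g[:, 2] * g[:, 3] - inter
    iou = inter / np.maximum(union, 1e-12)
    # per-frame center location error in pixels
    cle = np.hypot(p[:, 0] + p[:, 2] / 2 - g[:, 0] - g[:, 2] / 2,
                   p[:, 1] + p[:, 3] / 2 - g[:, 1] - g[:, 3] / 2)
    iou_thresholds = np.linspace(0.0, 1.0, 21)
    cle_thresholds = np.arange(0, 51)
    success = np.array([(iou > t).mean() for t in iou_thresholds])
    precision = np.array([(cle <= t).mean() for t in cle_thresholds])
    return iou_thresholds, success, cle_thresholds, precision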
imagine you measure in the laboratory a given quantity for six different _ systems _ : system 1 , system 2 , , and system 6 ( they could be cell types , people , proteins or dna vectors , even the same system at different times if the quantity is expected to evolve in some reproducible manner ) . you want to be sure that you are making no mistakes , so you repeat the whole set of six measures three times , say , in different days ( you try hard so that the only thing that changes from one time to the next is the day ) .we will call each one of these repeated experiments an _ assay _ , in this case , assay 1 , assay 2 and assay 3 . at the end of the process , you are in possession of values of the quantity ; six for each assay , three for each system .now imagine you obtain the values in tab . [tab : uncorrected_raw ] ( the strange names for the six systems in the first column will be explained later ) .the first thing we can say about the results is that they do not look good at all .the standard deviation from the average is comparable to the average itself for most of the systems , and only on a couple of them you are ` lucky ' enough so that the former is about half the value of the latter .you check the corresponding chart in fig .[ fig : errorbars_uncorrected ] , and you see the same despairing situation .the error bars are humongous ! .[tab: uncorrected_raw]activity of the metluc protein ( quantity ) under the control of six different promoter sequences ( the six systems ) measured in three assays .the last two columns correspond to the average of the three assays for each system , and the associated standard deviation ( or error ) .the units as well as the rest of the experiment s details are described in sec .[ subsec : experiment ] . [ cols="<,>,>,>,>,^,>",options="header " , ] * it demands an arbitrary choice ( that of the normalizing system ) which seems ad hoc and prevents automatization in some degree . related to this ,the fact that the corrected result for the normalizing system has zero standard deviation does not seem easy to interpret , nor completely legitimate . *if we recall the general formula for the propagation of errors , where is a function of random variables with standard deviations ( errors ) , we can use it to compute the error in the normalized quantity , where is the measured result for the system ( in a given assay ) and is the quantity measured for the system chosen to normalize the results : we see that the error in the normalized quantity relative to the value of itself is the sum of the relative errors of and .now , if we happen to choose a particular normalizing system with high relative error , this could spoil the whole assay when we divide all the results by , even if the rest of measures were accurate . *the described normalizing procedure seems fit to eliminate multiplicative systematic errors , but not additive ones .our method suffers from none of these problems : * no choice of a ` special ' normalizing system is needed .( there is a choice of a reference assay , but it is made in a justified way , as we have explained . ) * in a manner of speaking , it distributes the normalization among all the values in a given assay , thus minimizing the probability that one specially bad apple spoils the whole basket . 
*it eliminates both multiplicative and additive systematic errors .if we check exhaustive textbooks in biostatistics , such as , or more wide ranging ones , such as , we do not find any account of a correcting method that is similar to what we propose here .some of the texts come close sometimes , but they never hit the target .one way in which they often come close is when they discuss _ repeated measures_. see for example , , or , , and for detailed discussions of the concept in biosciences . `repeated measures ' consists of an experimental setup very similar to the one used here and described in sec .[ subsec : setup ] , i.e. , measuring the same quantity on systems and repeating the experiment times , but it contains a fundamental difference : it tackles measurements _ that are expected to change from repetition to repetition _[ e.g. , a time series , or table ii of discussed in sec .[ subsec : setup ] ] .it is a key of our setup that we expect the results of several repetitions to be _the same_. this is why it makes sense for us to correct them , which would be unnatural in the repeated - measures setup . also , for repeated measures , it is not a requirement that we are not interested in the absolute value but only in the inter - system variation . in our case , this is essential . in , another similar situation to the one we have considered hereis dealt with , namely _ blocking _, however , they do not discuss what to do if there is an obvious linear correlation between the blocks ( as in their figure 13.6a ) .their example in figure 13.12 also seems ripe to apply our method , but they take no correcting action on it .one of the reasons that we imagine could be behind the fact that no precedents of our straightforward method are found in the literature ( as far as we have been able to scan it ) has to do with the usual interpretation of the range of application of the least - squares fit protocol .typically , fitting some values in the -axis against those on the -axis is used to assess a possible linear relationship between _ two different quantities _ ( apples and oranges , say ) .so much so that is typically called the _ independent variable _ , while is the _ dependent _ one . in our approach, it is a key conceptual step to realize that it actually makes sense to investigate the linear correlation of some quantity _ with itself _ ( measured in two different assays ) , and consequently interpret any difference between the two as experimental error ( in the manner we explained before ) .another reason that is possibly behind the absence of precedents is the fact that , despite being quite intuitive to us , systematic errors of the _ multiplicative _ kind are very rarely discussed in the literature .systematic errors are normally considered to be additive . after a thorough search we have only found anecdotal mentions in a paper that discusses the influence of natural fires on the air pollution of the moscow area , in a proceedings paper about anticorrosion coating , in a recent work concerned with calibration of spectrographs for detecting earth - mass planets around sun - like stars , and in a similar paper focused in the detection and study of quasars . 
in all these worksthe authors consider the possibility of a multiplicative systematic error in their models or measurements , but they take no action to correct it .something very similar happens in , where the existence of multiplicative systematic errors is acknowledged in the context of analytical chemistry , as well as the necessity to eliminate them . in , the possibility of both additive and multiplicative systematic errorsis discussed , as well as their respective relation with non - zero -intercepts and non - unit slopes . finally , in , the authors not only discuss multiplicative systematic errors ( which they also call _ gain shifts _ or _ gain errors _ ) , but they provide several examples where this multiplicative systematic error can appear .although more space is dedicated in these last three works to the discussion of multiplicative systematic errors , the authors do not provide any method for eliminating them either .in addition , it is worth mentioning that , in and in , the authors consider the error to be defined with respect to ` true ' ( or at least more accurate ) results ; in the first case to calibrate experimental protocols , in the second one to calibrate measuring devices . as we explained when discussing the choice of the reference assay ,our perspective on this issue is different , and so it is the approach .for example , if you want to correct your results against some ` better ' data , you are presumably interested not only in the variations of the measured quantity , but also in its absolute value .we have only found one work , concerned with gas electron diffraction data , in which the authors _ both _ consider the existence of multiplicative systematic errors _ and _ take actions to correct them . however , the proposed correction is particular to the concrete problem studied , and the experimental setup is different to the one described in sec .[ subsec : setup ] : the authors refer to systematic errors in experimental data with respect to the ` true ' values , not to systematic errors between different measures of the same quantity as we do here .we have introduced a method for correcting the data in experiments in which a single quantity is measured for a number of systems in multiple repetitions or assays .if we are not interested in the absolute value of but only in the inter - system variations , and the results in different assays are highly correlated with one another , we can use the proposed method to eliminate both additive and systematic differences ( errors ) between each one of the assays and a suitably chosen reference one . as we have shown using a real example of a cell biology experiment, this correction can considerably reduce the standard deviation in the systems averages across assays , and consequently improve the statistical significance of the data .the method is of very general applicability , not only to experimental results but possibly also to numerical simulations , as long as the structure of the setup and the requirements on the data are those just mentioned and carefully discussed in sec .[ subsec : setup ] .this , together with its simplicity of application ( the only mathematical infrastructure needed to apply it is basically least - squares linear fits ) , makes the method of very wide interest in any quantitative scientific field that deals with data subject to uncertainty . 
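as a concrete illustration, the following is a minimal sketch of the correction procedure as we understand it from the description above: one assay is taken as the reference, every other assay is fitted linearly against it by least squares, and the fitted additive and multiplicative offsets are then removed before averaging across assays; the choice of the first assay as reference and the made-up numbers are assumptions used only for the example.

import numpy as np

def correct_assays(data, ref=0):
    # data: (n_systems, n_assays) array of measured values of the quantity.
    # each assay is fitted linearly against the reference assay and the
    # additive/multiplicative systematic offsets are removed; the method
    # presupposes a high linear correlation between assays.
    data = np.asarray(data, dtype=float)
    corrected = data.copy()
    x_ref = data[:, ref]
    for a in range(data.shape[1]):
        if a == ref:
            continue
        # least-squares line: assay_a ~ slope * reference + intercept
        slope, intercept = np.polyfit(x_ref, data[:, a], deg=1)
        corrected[:, a] = (data[:, a] - intercept) / slope
    return corrected

# made-up example with strong linear correlation between assays
raw = np.array([[1.0,  2.3, 0.6],
                [2.0,  4.2, 1.5],
                [4.0,  8.5, 3.4],
                [8.0, 16.4, 7.1]])
fixed = correct_assays(raw)
print("std across assays before:", raw.std(axis=1, ddof=1))
print("std across assays after :", fixed.std(axis=1, ddof=1))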
some possible lines of future work include the application of the method to a wider variety of problems, a deeper statistical analysis of its properties and the assumptions behind it, or the extension to systematic differences of higher-than-linear order that we briefly mentioned in sec. [sec:discussion]. we would like to thank professors jesús peña, silvano pino, juan puig, ricardo rosales and javier sancho for recommending to us the reference statistics and biostatistics textbooks that we have used in the writing of the manuscript. this work has been supported by the grants fis2009-13364-c02-01 (ministerio de ciencia e innovación, spain), uz2012-cie-06 (universidad de zaragoza, spain), grupo consolidado "biocomputación y física de sistemas complejos" (dga, spain), also by grants bfu2009-11800 (ministerio de ciencia e innovación, spain), and uz2010-bio-03 and uz2011-bio-02 (universidad de zaragoza, spain) to j.a.c. a. g. glenday, d. f. phillips, m. webber, c.-h. li, g. furesz, g. chang, l.-j. chen, f. x. kärtner, d. d. sasselov, a. h. szentgyorgyi, and r. l. walsworth. in ian s. mclean, suzanne k. ramsay, and hideki takami, editors, _ground-based and airborne instrumentation for astronomy iv_, volume 8446, 2012.
you measure the value of a quantity for a number of systems (cells, molecules, people, chunks of metal, dna vectors, etc.). you repeat the whole set of measures on different occasions or _assays_, which you try to design as equal to one another as possible. despite the effort, you find that the results are too different from one assay to another. as a consequence, some of the systems' averages present standard deviations that are too large to render the results statistically significant. in this work, we present a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation in your results and increase their statistical significance as long as two conditions are met: the inter-system variations of the quantity matter to you but its absolute value does not, and the different assays display a similar tendency in the values of the quantity; in other words, the results corresponding to different assays present high linear correlation. we demonstrate the improvement that this method brings about on a real cell biology experiment, but the method can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that has to deal with data subject to uncertainty. * keywords: * multiplicative systematic error, reducing standard deviation, multiple assays, inter-system variation, linear correlation, statistical significance
the structure and dynamics of an accretion disk around a neutron star or a black hole is often strongly influenced by its irradiation from the compact object or the disk itself . in galactic sources , for example ,the optical emission ( see , e.g. , de jong , van paradijs , & augusteijn 1996 ) , the stability ( see , e.g. , van paradijs 1996 ; king , kolb , & burderi 1996 ) , and the warping of the accretion disk ( pringle 1996 ; wijers & pringle 1999 ) are probably determined by x - ray irradiation from the central source . in extragalactic sources , the observed broad iron emission lines ( see nandra 1997 for a review ) , the weak or absent lyman edges ( sincell & krolik 1997 ; antonucci 1999 ) , as well as the disk warping ( pringle 1996 ) may also be caused by x - ray irradiation of the accretion disk . calculating the structure even of a non - illuminated accretion disk is not trivial , because of the complicated physical processes involved , such as the non - lte character of the transport of radiation ( see , e.g. , hubeny & hubeny 1998 ) , and assumptions regarding the vertical profile of the viscous heating ( see , e.g. , laor & netzer 1989 ) .the problem becomes even more difficult when illumination is taken into account , both because additional physical process , such as photoionization ( see , e.g. , raymond 1993 , ko & kallman 1994 ) , become dominant , and because of the multidimensional character of the problem .the transport of radiation through an accretion disk illuminated by a distant source is in general two - dimensional and not axisymmetric .the spectrum of the reflected radiation , when only compton scattering is taken into account , has often been solved in terms of the distribution of photon escape probabilities from the reflecting medium ( e.g. , lightman & rybicki 1980 ) or using green s functions ( e.g. , poutanen , nagendra , & svenson 1996 ) .when absorption and emission processes are taken into account , the problem is often solved assuming plane - parallel symmetry ( e.g. , sincell & krolik 1997 ) , or employing variants of the -iteration ( e.g. , basko , sunyaev , & titarchuk 1974 ; matt , fabian , & ross 1993 ) and monte - carlo methods ( e.g. , george & fabian 1991 ) . solving directly the radiative transfer equation in two spatial dimensions is challenging , especially when the energies of the photons change by compton scattering and the problem becomes non - local both in space and in photon energy ( see , e.g. , mihalas , auer , & mihalas 1978 ; auer & paletou 1994 ; dykema , klein , & castor 1996 ; dullemond & turolla 2000 ; busche & hillier 2000 for methods of solving problems in two spatial dimensions ) . in general , a problem that is two - dimensional in coordinate space requires the solution of the radiative transfer equation in five dimensions , two in coordinate space and three in photon momentum space . in the case of illumination of a geometrically thin accretion disk , however , the problem can be simplified significantly . because the photon mean free paths in the radial and azimuthal directions are much smaller than any characteristic length scale in the accretion flow , a geometrically thin accretion disk can be decomposed into a finite number of plane - parallel , obliquely illuminated slabs . 
solving the radiative transfer equation for each slab requires only four dimensions , one in coordinate space and three in photon momentum space .when the angular dependence of the interaction cross section between photons and matter can be expanded into a finite number of legendre polynomials , the problem can be simplified even further ( chandrasekhar 1960 ) . in this case , the radiative transfer equation is equivalent to a finite number of equations over one dimension in coordinate space and two dimensions in photon momentum space , i.e. , to a finite number of radiative transfer problems in one spatial dimension .these equations can be solved in general using existing numerical methods . in this paperi derive the system of equations ( reduced to one spatial dimension ) that describe , in an obliquely illuminated slab , isotropic absorption and emission , as well as compton scattering to first order in and , where is the photon energy and and are the electron temperature and rest mass .i then explore a variant of the feautrier method for solving the resulting transfer equations , in which the scattering kernels do not have forward - backward symmetry ( see milkey , shine , & mihalas 1975 ) .i illustrate the use of the derived equations and method of solution by solving simple problems related to the albedo of a cold disk .more detailed calculations of the coupled radiation and gas properties using the approach described in this paper will be reported elsewhere .i solve the radiative transfer problem in a plane - parallel slab illuminated by some arbitrary , external source of photons .i describe all physical quantities in this system using an orthogonal , cartesian reference frame , with its -axis parallel to the finite dimension of the slab .i also set , where is the speed of light , is boltzman s constant , and is planck s constant .i assume that the electrons in the slab have density and temperature , where is the electron rest mass .i also assume that the illuminating radiation is a parallel pencil beam of net flux . neglecting induced processes , the radiative transfer problem is linear and therefore the effect on any arbitrary illumination pattern can be computed by summing the solutions obtained for each plane - parallel pencil beam .the direction of illumination is described by the vector , where and are the directional angles .i describe the radiation field in terms of the monochromatic specific intensity , where is the distance from the edge of the slab , and are the directional angles of the propagation vector , and is the photon energy .because i study non - polarized radiation , i have suppressed the dependence of the specific intensity on polarization mode .i assume that absorption and emission in the slab are isotropic and denote the absorption coefficient by and the source function by . 
keeping only terms to first order in and ,the radiative transfer equation that describes absorption , emission , and compton scattering can be written as ( pomraning 1973 ) - n_e \sigma_{\rm t}\left(1 - 2\frac{\e}{m_e}\right ) i(z,\hat{l},\e)\nonumber\\ & & \qquad\qquad+n_e\sigma_{\rm t } \int d\omega ' \sum_{n=0}^3 \left(\frac{2n+1}{4\pi}\right ) p_n(\hat{l}\cdot\hat{l}')s_n i(z,\hat{l}',\e)\ ; , \label{genrte } \end{aligned}\ ] ] where is the angle - integrated cross section for thomson scattering , is the solid - angle element around , is the legendre polynomial of order , and \;,\nonumber\\ s_2 & = & \frac{1}{10 } \left[1-\frac{\e}{m_e}\left(1- \e\frac{\partial}{\partial \e}\right)- \frac{t_e}{m_e}\left(6 + 2\e\frac{\partial}{\partial \e } -\e^2\frac{\partial^2}{\partial \e^2}\right)\right ] \;,\nonumber\\ s_3 & = & \frac{3}{70}\left[\frac{\e}{m_e}\left(1- \e\frac{\partial}{\partial \e}\right)+ \frac{t_e}{m_e}\left(4 + 2\e\frac{\partial}{\partial \e } -\e^2\frac{\partial^2}{\partial \e^2}\right)\right]\;. \label{sn } \end{aligned}\ ] ] defining the energy - dependent optical depth as \;,\ ] ] the relative importance of absorption and scattering as and and the redistribution function in the scattering kernel as with , the radiative transfer equation ( [ genrte ] ) becomes equation ( [ 4drte ] ) is a second - order , intergrodifferential equation in a four - dimensional phase space and is equivalent to a system of four equations in a three - dimensional phase space ( see chandrasekhar 1960 , this is true because the expansion in legendre polynomials of the redistribution function ( [ redis ] ) terminates after the first four terms .following chandrasekhar ( 1960 ) , i will write the radiative transfer equation ( [ 4drte ] ) in terms of the specific intensity of the diffuse radiation field , i.e. , of the photons that have interacted with the gas at least once . expanding the specific intensity of the diffuse radiation field as \ ; , \label{expans}\ ] ] where defines the plane of illumination , the transfer equation for the diffuse field becomes equivalent to the system of equations ( cf .chandasekhar 1960 , 48.1 ) fe^{-\tau/\mu_{\rm i}}\ ; , \label{gensys } \end{aligned}\ ] ] where is kronecker s delta , and are the associated legendre s functions of the first kind defined by written in an explicit form , the zeroth - order equation is \rs\int_{-1}^1 \i^0(\mu')d\mu'\nonumber\\ & & \qquad -\left[\frac{3}{2}s_1\mu-\frac{21}{8}s_3 ( 5\mu^3 - 3\mu ) \right]\rs\int_{-1}^1\mu'\i^0(\mu')d\mu'\nonumber\\ & & \qquad -\frac{15}{8}s_2 ( 3\mu^2 - 1)\rs\int_{-1}^1 \mu'^2 \i^0(\mu')d\mu ' -\frac{35}{8}s_3(5\mu^3 - 3\mu)\rs\int_{-1}^1 \mu'^3 \i^0(\mu')d\mu'\nonumber\\ & & \qquad -\frac{1}{4 } \left [ s_0 - 3s_1\mu\mu_{\rm i } + \frac{5}{4}s_2(3\mu^2 - 1)(3\mu_{\rm i}^2 - 1 ) -\frac{7}{4}s_3(5\mu^3 - 3\mu)(5\mu_{\rm i}^3 - 3\mu_{\rm i})\right]\rs fe^{\tau/\mu_{\rm i}}\;. 
\label{pde0 } \end{aligned}\ ] ] the first - order equation is \rs\int_{-1}^1 { \cal l}_{\rm d}^1(\mu')d\mu'\nonumber\\ & & \qquad -\frac{15}{4}s_2 \mu \rs\int_{-1}^1\mu ' { \cal l}_{\rm d}^1(\mu')d\mu ' -\frac{105}{32}s_3(5\mu^2 - 3)\rs\int_{-1}^1 \mu'^2 { \cal l}_{\rm d}^1(\mu')d\mu'\nonumber\\ & & \qquad -\frac{3}{4}\left[s_1(1-\mu_{\rm i})^{1/2}- 5s_2\mu\mu_{\rm i}(1 -\mu_{\rm i}^2)^{1/2}\right.\nonumber\\ & & \qquad\qquad\qquad \left .+ \frac{7}{8}s_3(5\mu^2 - 3)(5\mu_{\rm i}^2 - 3 ) ( 1-\mu_{\rm i}^2)^{1/2}\right]\rs fe^{-\tau/\mu_{\rm i}}\ ; , \label{pde1 } \end{aligned}\ ] ] where .the second - order equation is \rs fe^{-\tau/\mu_{\rm i}}\;. \label{pde2 } \end{aligned}\ ] ] finally , the third - order equation is equations ( [ pde0])([pde3 ] ) are four , second - order , partial differential equations . because they describe the evolution of the diffuse radiation field , they can be solved with the following boundary conditions , where is the total vertical optical depth of the slab , and in practice , the vertical optical depth of an accretion disk can be very large and , therefore , the second of boundary conditions ( [ tau_bound ] ) can be exchanged with another condition that is easier to handle numerically .the first two moments of the specific intensity of the diffuse radiation field , which are necessary for calculating the energy and momentum exchange between photons and matter , can be calculated as and where and 2 , i showed that the specific radiative transfer problem in two spatial dimensions described by equation ( [ genrte ] ) has been reduced to four problems in one spatial dimension each , which are easier to solve .however , solving the latter problems still requires special care , for a number of reasons .first , the interaction of the illuminating radiation with the disk material takes place in the outermost layers of the slab , which are optically thin . as a result ,the method of solution of the transfer equation must be accurate in the limit of low optical depth .second , in a typical disk - illumination problem , the interaction of the illuminating radiation with the disk material is dominated by true - absorption at low photon energies but is scattering - dominated at high photon - energies . for this reason ,simple -iteration procedures are not adequate and either accelerated iterative procedures ( e.g. , the accelerated -iteration or the method of variable eddington factors ) or other non - iterative procedures ( e.g. , the feautrier method ) must be employed . however , even the latter methods are not directly applicable to the problem studied here because the redistribution integrals in the right - hand sides of the four transfer equations ( [ pde0])([pde3 ] ) are not forward - backward symmetric . 
in this sectioni describe a variant of the feautrier method that has the desired properties for solving the four one - dimensional radiative transfer equations ( [ pde0])([pde3 ] ) .i follow in general the procedure outlined by milkey et al .( 1975 ) , pointing out the differences that arise from the particular properties of the problem studied here .i choose as independent variables the thomson scattering optical depth which is independent of photon energy , as well as the photon energy , and the direction of propagation .i then write the four one - dimensional radiative transfer equations in the general form where and are the symmetric and antisymmetric parts of the source functions and scattering integrals in equations ( [ pde0])([pde3 ] ) , which depend implicitly on the radiation field . defining the feautrier variables \ ] ] and \;,\ ] ]the transfer equation takes the form of the system of equations ( milkey et al . 1975 ) note , that because of the lack of forward - backward symmetry in the scattering kernels , equations ( [ ueq])-([veq ] ) can not be combined into a single second - order equation , as in the usual feautrier method ( see , e.g. , mihalas 1978 ) .i then discretize all quantities over grid points in the variable , grid points in photon energy , and grid points in the direction of propagation . for simplicity ,i use , e.g. , to denote the first feautrier variable of order of the diffuse radiation field , evaluated on the grid point in the quantity , on the grid point in photon energy , and on the grid point in the direction of propagation , such that . in order to recover the diffusion of photons in energy space because of compton scattering by thermal electrons , i use a second - order differencing scheme in photon energy .for example , i denote the first and second derivatives of the first feautrier variable with respect to energy by and where i have defined the operators and in differencing with respect to the variable , i note that the quantity is density - like and i , therefore , use a center - differencing scheme for all interior grid points , i.e. , on the other hand , the quantity is flux - like and i therefore use for all interior grid points using the above differencing schemes , the difference equations in all interior grid points become where i have written explicitly the dependence of the source functions on the radiation field as note here that the above source functions have a different structure than those of milkey et al .( 1975 ) , because of the presence of the antisymmetric term that does not depend on the diffuse radiation field . at the illuminated surface of the slab , which i denote by , the boundary condition ( [ tau_bound ] ) translates into and applying equation ( [ veq_int ] ) on the first half of the first grid cell , denoted by , i obtain at a very large optical depth , which i denote by , i set the flux equal to zero , i.e. , , and applying equation ( [ veq_int ] ) on the last half of the last grid cell , denoted by , i obtain note that this boundary condition is different than equation ( [ tau_bound ] ) . finally ,at the first and last energy grid - points , boundary conditions ( [ en_bound ] ) become simply the system of algebraic equations ( [ ueq_int])([bound3 ] ) can be written in the general matrix form where , , , and are diagonal matrices , and are full matrices , and and are vectors. 
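as an illustration of the solution strategy, here is a minimal sketch of forward elimination and back substitution for a block-tridiagonal system of the kind obtained above, written generically as a_k x_{k-1} + b_k x_k + c_k x_{k+1} = r_k; the notation is ours, and the sketch deliberately ignores the energy and angle structure packed into each block as well as the antisymmetric source terms discussed in the text.

import numpy as np

def solve_block_tridiagonal(A, B, C, r):
    # solve A[k] x[k-1] + B[k] x[k] + C[k] x[k+1] = r[k] for k = 0..K-1,
    # with A[0] and C[K-1] unused, by forward elimination and back
    # substitution (the recursion pattern of feautrier-type schemes).
    # A, B, C: lists of (n, n) blocks; r: list of length-n right-hand sides.
    K = len(B)
    D = [None] * K   # elimination matrices
    e = [None] * K   # elimination vectors
    # forward sweep
    D[0] = np.linalg.solve(B[0], C[0])
    e[0] = np.linalg.solve(B[0], r[0])
    for k in range(1, K):
        M = B[k] - A[k] @ D[k - 1]
        D[k] = np.linalg.solve(M, C[k]) if k < K - 1 else np.zeros_like(B[k])
        e[k] = np.linalg.solve(M, r[k] - A[k] @ e[k - 1])
    # back substitution
    x = [None] * K
    x[K - 1] = e[K - 1]
    for k in range(K - 2, -1, -1):
        x[k] = e[k] - D[k] @ x[k + 1]
    return x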
equations ( [ system ] ) can then be solved recursively from to , using the relations where note here a misprint in the elimination scheme of milkey et al .( 1975 ) as well as a difference with the above scheme that arises because of the presence of the antisymmetric term .it is important to point out here that the method presented in this section is very efficient since it requires the solution of only four equations of simplicity equal to the minimum required for calculating the interaction of an external radiation field with an accretion disk .therefore , the computational cost of this method is exactly equal to four times the cost required for solving the simplest problem of normal illumination of a plane - parallel slab . as an example , the solution of a problem with 20 grid points in optical depth , 10 grid points in angle , and 60 grid points in energy ( i.e. , similar to the one shown in fig. 1 ) requires only 2 cpu minutes on a 500 mhz alpha processor .in this section , i use the method described in 3 for calculating the albedo of an illuminated , cold accretion disk .i consider concentric annuli of the disk , which i approximate by plane - parallel slabs .i take into account compton scattering as well as bound - free absorption from a cold gas ( morrison & mccammon 1983 ) and neglect all other radiation processes .when the radiation field is weak , the ratio of the bound - free absorption coefficient to the scattering cross section is independent of the electron density and temperature . as a result, i can solve the radiative transfer problem in terms of the electron - scattering optical depth without the need to consider the vertical disk structure .when the radiation field is strong , the illumination of the disk will affect both its ionization balance and the absorption coefficients , and hence the overall solution will depend explicitly on the vertical structure of the disk ( see , e.g. , nayakshin & kallman 2001 ) . in general, the radiative transfer equation ( [ 4drte ] ) is linear in the specific intensity .i can , therefore , write where and describe the radiation field due to viscous heating and illumination respectively , and solve for the two radiation fields independently .note here that decomposition ( [ dec ] ) is only formally valid , since the absorption and emission coefficients , as well as the scattering kernel may depend on the electron density and temperature that are determined by the total radiation field .the model parameters for the calculations include the angle of illumination , with respect to the normal and the spectrum of the illuminating radiation , which i set to a power - law of photon index , i.e. , .figure 1 shows the contribution to the vertical flux ( eq . [ [ h2 ] ] ) of the various orders of the decomposition ( [ expans ] ) , for an illumination angle of and a power - law spectrum of photon index . 
even though the term of zeroth - order provides the dominant contribution to the vertical flux ,the contribution of higher - order terms is not negligible .this is shown in figure 2 , where the contribution of the high - order terms to the vertical flux is plotted for different angles of illumination .the total correction , caused by the obliqueness of the illumination , ranges between % .note here that the relative contributions of the different orders of the specific intensity plotted in figure 2 correspond to the case of zero electron temperature and increase for finite electron temperatures because of the asymmetry of the compton terms in the scattering kernel . in the context of accretion onto galactic compact objects ,the albedo of an accretion disk is usually defined in terms of the fraction of the illuminating flux that does not heat the disk gas ( see , e.g. , de jonk et al .1996 ) . in this section ,i first give the general expression for the albedo of an accretion disk that is illuminated obliquely and then evaluate it for the case of geometrically thin , optically thick , cold accretion disks .starting from equation ( [ dec ] ) and integrating the transfer equation for over photon energy and directional angle i obtain where and are the energy - integrated , zeroth and first angular moments of and the rate , at which the illuminating radiation heats the gas is therefore {\rm i}\ ; , \label{q}\ ] ] and the albedo can be written in terms of the volume integral of as \frac{j_{\rm i}}{f } d\tau_{\rm es}\nonumber\\ & = & 1-\frac{2}{\mu_{\rm i}f}\int_0^{\tau_{\rm es , max}}d\tau_{\rm es } \left[\chi + n_e\sigma_{\rm t } \left ( \frac{\langle\e\rangle}{m_{\rm e } } -4\frac{t_{\rm e}}{m_{\rm e}}\right ) \right]\int_\epsilon d\epsilon \int_{-1}^1 d\mu \i^0(\tau_{\rm es},\e,\mu)\ ; , \label{albedo } \end{aligned}\ ] ] where is the energy - integrated flux of the illuminating radiation and is the vertical height of the accretion disk .expression ( [ albedo ] ) allows the study of the vertical profile of the heating of the accretion disk by the irradiating atmosphere as well as the calculation of its albedo , by solving _ only _ the zeroth - order transfer equation for .figure 3 shows the albedo of a cold accretion disk , calculated in the kev energy range , for different power - law spectra and angles of illumination .as expected , for larger photon indices of the illuminating radiation , the fraction of low - energy photons , which are efficiently absorbed , is higher and hence the albedo of the disk is lower . 
at the same time , as the angle of illumination increases , the photons interact with the electrons in a shallower layer of the accretion disk and therefore have a higher chance of escaping after one interaction , increasing the value of the albedo .the overall effect of our treatment of the obliqueness of irradiation is this systematic increase of the disk albedo with illumination angle , which can be up to a factor of larger that in the case of normal illumination .in this paper , i studied the transfer of radiation in an accretion disk that is obliquely illuminated by an external source of radiation .i showed that the resulting transport problem can be decomposed exactly into four one - dimensional problems , which i solved using a variant of the feautrier method .i then applied this method in calculating the albedos of cold accretion disks .the calculated values for the albedos are , even for the softer spectra and larger illumination angles considered here .these values are small and can not account , for example , for the observed optical magnitudes of galactic low - mass x - ray binaries , which require albedos in excess of ( see de jong et al.1996 ) .figure 3 shows that relying on a near - grazing illumination of the accretion disk is not enough to account for the observations .high ionization fractions at the surface layers of the disk , which would reduce the absorption of photons , or even the existence of a highly ionized scattering wind above the accretion disk is probably required for the calculated albedos to reach the high values inferred from observations . in this study ,i assumed for simplicity that all heavy elements in the accretion disk are neutral and , therefore their interaction with the illuminating photons is described by the bound - free opacities of morrison & mccammon ( 1983 ) . in reality , however , the heated skin of the accretion disk will be collisionally- and photo - ionized and its vertical ionization and thermal balance will need to be calculated self - consistently with the radiation field ( e.g. , raymond 1993 ; ko & kallman 1994 ) .note , however , that the calculation of both the ionization balance and the radiative equilibrium depend only on the zeroth moment of the specific intensity ( see eq .[ [ j ] ] and [ [ q ] ] ) and , therefore , require the solution of only the zeroth - order transfer equation . as a result, the properties of the disk gas can be calculated exactly in a simple , one - dimensional configuration and the full angular dependence of the radiation field can then be calculated with prescribed gas properties .the results of a self - consistent caclulation of the radiation and gas properties will be reported elsewhere .i am grateful to g. rybicki for bringing to my attention the possibility of decomposing a multi - dimensional transfer equation into a small number of one - dimensional equations and for carefully reading the manuscript .i also thank feryal zel for many useful discussions , especially on the implementation of the feautrier method in problems with no forward - backward symmetry .this work was supported by a postdoctoral fellowship of the smithsonian institution and also , in part , by nasa .
the illumination of an accretion disk around a black hole or neutron star by the central compact object or the disk itself often determines its spectrum , stability , and dynamics . the transport of radiation within the disk is in general a multi - dimensional , non - axisymmetric problem , which is challenging to solve . here , i present a method of decomposing the radiative transfer equation that describes absorption , emission , and compton scattering in an obliquely illuminated disk into a set of four one - dimensional transfer equations . i show that the exact calculation of the ionization balance and radiation heating of the accretion disk requires the solution of only one of the one - dimensional equations , which can be solved using existing numerical methods . i present a variant of the feautrier method for solving the full set of equations , which accounts for the fact that the scattering kernels in the individual transfer equations are not forward - backward symmetric . i then apply this method in calculating the albedo of a cold , geometrically thin accretion disk .
recent years have witnessed the growing interest in the synchronization of networked systems because it is a ubiquitous phenomena in nature and because of its potential applications on secure communication , distributed generation of the grid , clock synchronization , formation control of multiple robots , and so on .the state synchronization problem might be rooted in the work of wu and chua and recently has been rejuvenated in linear systems with attentions on the accessibility of partial states or switching graph .different from the state synchronization that happens between identical systems , the output synchronization can arise between non - identical systems and thereby is more realistic .output synchronization for nonlinear input - output passive systems has been studied in , where under the passive - based design , the output synchronization can be achieved for many cases including balanced graph , nonlinear coupling function and communication delay . in ,the velocity synchronization problem for second - order integrators has been investigated .the above two studies only take aim at driving the outputs of the agents to each other asymptotically but do not care what the outputs will synchronize on . in ,the linear sor has been addressed for identical multi - agent systems under the dynamic relative state feedback . therethe agents have not only their outputs synchronize but also evolve ultimately on an a trajectory produced by a predefined reference exosystem . in , it is shown that the internal model principle is the sufficient and necessary condition for non - trivial linear output synchronization .there a dynamic controller has been presented for leaderless sor of linear systems with switching graph .robust linear sor have been studied in by only using relative output information .the leader - following sor of linear systems with switching graph has been investigated in . as for sor of nonlinear multi - agent systems , there are a few works reported .gazi has utilized the nonlinear output regulation method to deal with the formation control problem . there , however , the reference signal , which is stricter than the reference system , is assumed to be known by all agents so that the problem reduces to the completely decoupled output regulation problem .liu has studied the leader - following sor under the error feedback for a no - loop graph . moreover ,the robustness is addressed with two extra assumptions : the reference exosystem is linear and the solution of regulator equation are the k - th polynomials .xu and hong have studied the multi - agent systems consisting of two level networks , physical coupling and communication graph . 
therea networked internal model is proposed for the solution of sor .they also assume that the graph contain no - loop .this paper addresses the sor problem for general nonlinear multi - agent systems with switching topology .our framework is similar to that in and in that the information delivered among the network is assumed to be the state of the local exosystem constructed by agent itself .both the dynamic state feedback controller and the dynamic output feedback controller are proposed .we show that the sor can be achieved without extra conditions imposed on the agent dynamics when the switching graph satisfies the bounded interconnectivity times condition ( jointly connected condition ) .the most relevant to our work is the recent work in where , however , the graph is fixed and the regulator equation is strengthened to one for the whole multi - agent system so that the result obtained is not scalable .the remainder of this paper is organized as follows .problem formulation , as well as two kinds of controllers , is presented in section [ sec02 ] .main results are shown in section [ sec03 ] ; the exponential synchronization of coupled exosystems is first shown and then synchronized output regulation is proved for both kinds of controllers .the extension to leader - following case is addressed in section [ sec04 ] .a simulation example is illustrated in section [ sec05 ] , followed by a conclusion in section [ sec06 ] .consider a multi - agent system consisting of agents .each agent has the following dynamics modeled by where is the state , defined on a neighborhood of the origin of , is the input , and is the output .the vector and the columns of matrix are smooth ( i.e. , ) vector fields on . is a smooth mapping defined on .each agent drives its output to track the output of a common exosystem , as formulated by the first equation describes an autonomous system , the so - called exosystem , defined in a neighborhood of the origin of .the second equation means that the output should track a reference signal produced by the exosystem .the vector is a smooth vector field on and is a smooth map defined on . as for the sor ,the requirements on output are two folds : one is that belongs to a fixed family of trajectories determined by the pair of with the corresponding initial condition being allowed to vary on a predefined set ; the other is that for all .the first one is the output regulation problem , which might be solved by a decentralized way and the second one is the synchronization problem , which has to rely on the information exchange to solve .generally , a digraph is used to depict the communication channels of multi - agent system , where node set is the index set of agents and edge set consists of ordered pair of nodes , called edge .an edge if and only if there is communication channel from node to node , where node is called parent node and node is called child node .a directed path of digraph is a sequence of edges with form .a tree is a graph where every node has exactly one parent node except for one node , the so - called root node , which has no parent node but has a directed path to every other node .the graph is a subgraph of if and .the tree is a spanning tree of graph if is a subgraph of with . 
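for readers who want to experiment with these graph notions, the following small sketch builds the adjacency and laplacian matrices of a digraph from its edge list and checks whether a spanning tree exists (i.e., whether some node reaches every other node along directed paths); the 0/1 edge weights and the edge-direction convention (an edge (j, i) meaning a channel from node j to node i) are our assumptions.

import numpy as np

def adjacency_and_laplacian(n, edges):
    # edges: iterable of ordered pairs (j, i), a channel from parent j to child i;
    # a[i, j] = 1 iff (j, i) is an edge, and the laplacian is L = D - A
    # with D the diagonal matrix of row sums of A.
    A = np.zeros((n, n))
    for j, i in edges:
        A[i, j] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    return A, L

def has_spanning_tree(n, edges):
    # a digraph has a spanning tree iff some node has a directed path to
    # every other node; check reachability from each candidate root.
    children = {j: set() for j in range(n)}
    for j, i in edges:
        children[j].add(i)
    for root in range(n):
        seen, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for v in children[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) == n:
            return True
    return False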
a switching graph , defined on a piecewise constant switching signal , is denoted by , where set indexes the total number digraphs and with .the time instants when switches is denoted by an increasing sequence , with .denote by the value of when .denote by and the adjacency matrix and the laplacian matrix of , respectively .we assume that any two consecutive switching instants are separated by a dwell - time , i.e. , so as to guarantee that the switching graph is non - chattering and zeno behavior can not occur .a union graph } ] is defined by }\triangleq ( \mathcal{v } , \bigcup_{t\in[t^1,t^2]}\mathcal{e}_{\sigma(t)}) ] . in order for the sor ,one natural route is firstly to synchronize the exosystems of all agents by exchanging the their state and then to drive the agent output to track the output of the local exosystem . in the first step, each agent builds the following coupled exosystem , based on the communication graph , where denotes the row and column element of adjacency matrix of graph .if , then ; otherwise , . in this case , the tracking error for each agent is defined as two kinds of controllers are considered in this paper , * * distributed dynamic state feedback controller * where is a ( for some integer ) mapping defined on , satisfying . combining and yields the following closed - loop system , which has an equilibrium at , for all . * * distributed dynamic output feedback controller * where is the observer state , defined on a neighborhood of the origin of . for each , is a vector field on ( for some integer ) .the closed - loop system under controller has the form which has an equilibrium , for all , when for all .the purpose of sor includes three aspects , local asymptotically stable , output regulation and output synchronization .define the stacked vector ^t ] and ^t ] has a spanning tree embedded .assumptions a1) ) are standard for nonlinear output regulation problems .assumption a4 ) is referred to as the bounded interconnectity times condition in , which is the weakest condition for the consensus seeking of a switching diagraph and has many invariant versions , such as the jointly connected condition and uniformly quasi - strongly connected condition .noticing that the coupled exosystem is independent of the agent dynamics , the closed - loop systems and can be regarded as to be driven by an lumped exosystem of dimensions .since the lumped exosystem has the dimension in excess of what is required , its dynamics must contain some decay modes , that is , in some vector directions is asymptotically converging to zeros . in order for the second condition 1b ) or 2b ), the undecayed mode must be the flow determined by the vector field , which means that all the exosystems should synchronize .to this end , the following result is recalled ( corollary 7 in ) , and rephrased as follows , [ le01 ] given a multi - agent system with communication graph assumption a4 ) .if the largest lyapunov exponent of system and the consensus convergence rate of are such that then system is locally exponentially synchronizable . denote by the flow of vector field , defined for all , with .then the maximum lyapunov exponent of dynamic system is defined as on the other hand , consider the consensus rate of a switching graph . define the time interval sequence by , .let and be two time instants located in the time slots of and with , respectively. 
given a switching graph satisfying assumption a4 ) with given and , its consensus convergent rate is defined as the supremum of the contract rate of transition matrix , where the contract rate is defined as , and the transition matrix is defined as where is the laplacian matrix of graph . throughout of this paper , denotes the vector with all elements being .[ le02 ] given a coupled exosystem with assumptions a1 ) and a4 ) , then there is a dynamic system such that for all initial conditions , there is a initial condition such that the error exponentially converges to zero , for all . according to assumption a4 ) and definition, it follows that transition matrix is stochastic , indecomposable and aperiodic , and has exactly one trivial eigenvalue associated with eigenvector . therefore , . on the other hand , with assumption a1 ) ,if not , for any given , there is a time , such that for any , for all .this is contradictory to the feature of poisson stable .with and , making use of lemma [ le01 ] yields that there is a neighborhood of origin , for all initial condition , all the exosystems have their states exponentially synchronize on a manifold determined by with . before proceeding the main results ,some matrices are firstly introduced , arising from the linearization of nonlinear dynamics on the equilibrium of origin .{x_i=0 } , \quad b_i = g_i(0 ) , \quad \quadc_i= \left [ \frac{\partial h_i } { \partial x_i}\right]_{x_i=0}.\ ] ] denote by the reminder of the linear approximation of a vector function , that is , let denote the jacobian matrix of vector function , that is , .it follows that and .define then by mean - value theorem , secondly , a useful lemma is presented , which plays a key role in the proof of main results for both state feedback and output feedback cases . [ le03 ] given a multi - agent system with assumptions a1 ) , a2 ) and a4 ) .suppose that for all , there exist mapping , with , and , with , both defined in a neighborhood of origin , satisfying the conditions [ iii09 ] then under the following controller where is such that is hurwitz , conditions 1a ) and 1b ) will be satisfied . according to assumption a2 ) , it is true that there exists a matrix such that is hurwitz for all . noting that , condition 1a ) follows directly .below we show condition 1b ) . by lemma [ le02 ] , there is a positive scalar such that for some positive scalar .consider the vector . with ,its dynamics has the form making use of linear approximation and mean value theorem , one has , and {xi}+ g_i(x_i)\mathrm{m}c_i(w_0,\delta{w_i})\delta{w_i } \\ & + \left(\sum_{j=1}^{m_i } c_{ij}(w_0)\mathrm{m}g_{ij}(\pi_i(w_0),e_{xi})\right)e_{xi } \end{split}\ ] ] where denotes the column vector of matrix function and denotes the element of . with the above two equations , equation can be rewritten as where \end{split}.\ ] ] since is hurwitz , there are a symmetric positive definite matrix and a positive scalara such that by assumption a1 ) and noticing that condition 1a ) holds , thereexist sufficiently small and ( notice that depends on , ) , such that the trajectories of and of the closed - loop system satisfy where denotes the maximum eigenvalue of . then consider the lyapunov function , whose derivative satisfies where is a constant scalar and denoting the minimal eigenvalue of .define , then and which is equivalent to . 
with, one further obtains from which , as , and so does .using , it can be further concluded that condition 1b ) will be satisfied .it should be pointed out that the proof of the above lemma is not based on the center manifold method , which can not be directly applied here in the presence of switching graphs .now we are ready to present the main result for state feedback case .[ th01 ] under assumption a1 ) , a2 ) and a4 ) , the state feedback synchronized regulator problem is solvable for the multi - agent system if and only if for all , there exist mapping , with , and , with , both defined in a neighborhood of origin , satisfying the conditions [ iii09 ] necessity is obvious by considering the special situation with .the sufficiency follows immediately from lemma [ le03 ] .theorem [ th01 ] says that the solvability condition for the local sor is the same as that for the local output regulation of a single agent .[ th02 ] under assumptions a1) ) , the output feedback synchronized regulator problem is solvable for the multi - agent system if and only if there exist mapping , with , and , with , both defined in a neighborhood of origin , satisfying the conditions [ iii09 ] necessity is clear .below we show the sufficiency by using a constructive method . assumption a2 ) and a3 ) mean that there are matrices and such that are hurwitz . by them ,the following matrix is also hurwitz .suppose there are two maps and satisfying , then set the dynamic controller to be [ iii24 ] for all .define the augmented state ^t ] has exactly the form and that controller can be rewritten as (\tilde{x}_i-\tilde{\pi}_i(w_i)) ] has a spanning tree embedded .assumption a4 ) is not necessary for assumption a5 ) . on the other hand ,noticing that edge does not belong to , a spanning tree , if exists , must be rooted at node .below for simplicity , the result of leader - following output regulation only for output feedback case is presented straightforwardly .given a multi - agent system and an exosystem satisfying assumptions a1) ) and .there is a dynamic controller of the form such that for all sufficiently small , , and , the trajectory of the closed - loop system is bounded and satisfies if and only if there exist mapping , with , and , with , both defined in a neighborhood of origin , satisfying the conditions for a leaderless multi - agent system , if one agent does not adjust the exosystem constructed by itself so that equivalently the agent does not receive the information of others ( but sent its information to others ) , and assumption a4 ) is still satisfied , then the leaderless case reduces to the leader - following case . in this consideration, the leader - following case can be regraded as a special leaderless case .for the sack of simpleness , a multi - agent system of three nodes is taken as an illustrated example .these nodes are described respectively by the following equations .+ * agent-1 is with state , and and .* agent-2 is with state and * agent-3 with state and the dynamics that their outputs want to manifest is a sinusoid wave , formulated by where denotes the angle frequency of the sinusoid wave . herenotations and denote the -th element of and , respectively . 
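As a minimal numerical sketch of the exosystem-synchronization step behind this example, the snippet below propagates three coupled copies of a harmonic exosystem under a randomly switching digraph. Everything specific in it is an illustrative assumption rather than the paper's actual data: the sinusoid is assumed to be generated by \dot w = S w with S=\begin{pmatrix}0&\omega\\ -\omega&0\end{pmatrix}, the three candidate digraphs stand in for those of Fig. [fig01], the coupling gain is taken as one, and the agent dynamics and feedback gains K_i, L_i are omitted entirely; only the coupled-exosystem consensus of Lemma [le02] is exercised.
\begin{verbatim}
import numpy as np

# harmonic exosystem dot(w) = S w generating a sinusoid of angular frequency omega (assumed form)
omega = 1.0
S = np.array([[0.0, omega], [-omega, 0.0]])

# three candidate digraphs, as adjacency matrices with a[i][j] = 1 if agent i receives from agent j
graphs = [
    np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]]),   # information flows 3 -> 2 -> 1
    np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]]),   # information flows 1 -> 2 -> 3
    np.array([[0, 0, 1], [1, 0, 0], [0, 0, 0]]),   # information flows 3 -> 1 -> 2
]

dt, T_switch, T_end = 1e-3, 1.0, 30.0
rng = np.random.default_rng(0)

w = rng.uniform(-0.5, 0.5, size=(3, 2))            # local exosystem states w_1, w_2, w_3
A = graphs[rng.integers(len(graphs))]
t_next = T_switch

for step in range(int(T_end / dt)):
    t = step * dt
    if t >= t_next:                                # switch the communication graph at a fixed interval
        A = graphs[rng.integers(len(graphs))]
        t_next += T_switch
    # coupled exosystems: dot(w_i) = S w_i + sum_j a_ij (w_j - w_i), forward-Euler step
    coupling = A @ w - A.sum(axis=1, keepdims=True) * w
    w = w + dt * (w @ S.T + coupling)

print("pairwise disagreement at t = %.0f s:" % T_end,
      np.linalg.norm(w[0] - w[1]), np.linalg.norm(w[1] - w[2]))
\end{verbatim}
Since each of the assumed digraphs already contains a spanning tree, the printed pairwise disagreements should fall several orders of magnitude below the initial spread by t = 30 s, in the spirit of assumption A4).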
it can be verified that for the three agents , the regulator equation has solutions with , respectively , also , it can be seen that the agents satisfy assumption a2 ) and a3 ) .the feedback gains are designed as , \quad l_2^t= [ -8 , -20]\\ & k_3 = [ -11,-8 ] , \quad l_3^t= [ -10,-30 ] \end{aligned}\ ] ] for agent is a state feedback controller , while for agent and are output feedback controllers .the communication graph is switched randomly among three digraphs in fig .[ fig01 ] with a fixed time interval .simulation results are shown in fig . [ fig02 ] with angle frequency , where all initial conditions are randomly produced with each element being in the region of $ ] .it can be seen after transition time , all the agents have their outputs not only synchronize but also demonstrate a sinusoid wave , although the associated communication graph is randomly switching . , , and the graph index ,title="fig : " ] +it has been shown that the sufficient and necessary condition that the agent dynamics should satisfy for the solvability of sor problem is the same as that for nonlinear output regulation problem .both the dynamic state feedback controller and the dynamic output feedback controller have been respectively presented .both of them can achieve the sor if the switching graph satisfies the bounded interconnectivity times condition .extension to error feedback controller is an appealing topic for future work .
This paper considers the synchronized output regulation (SOR) problem for nonlinear multi-agent systems with a switching communication graph. SOR means that all agents regulate their outputs to synchronize on the output of a predefined common exosystem. Each agent constructs a local exosystem with the same dynamics as the common one and exchanges the state of this local exosystem with its neighbours. It is shown that the SOR problem is solvable under the same assumptions as nonlinear output regulation of a single agent, provided the switching graph satisfies the bounded interconnectivity times condition. Both state feedback and output feedback controllers are addressed. A numerical simulation illustrates the efficacy of the analytical results.
Keywords: synchronized output regulation (SOR), nonlinear system, multi-agent, switching graph.
singular boundary value problems ( sbvps ) is an important class of boundary value problems , and arises frequently in the modeling of many actual problems related to physics and engineering areas such as in the study of electro hydrodynamics , theory of thermal explosions , boundary layer theory , the study of astrophysics , three layer beam , electromagnetic waves or gravity driven flows , inelastic flows , the theory of elastic stability and so on . in general , sbvps is difficult to solve analytically .therefore , various numerical techniques have been proposed to treat it by many researchers .however , the solution of sbvps is numerically challenging due to the singularity behavior at the origin . in this work ,we are interested again in the following sbvps arising frequently in applied science and engineering : subject to the boundary value conditions and where and are any finite real constants . if , ( 1 ) becomes a cylindrical problem , and it becomes a spherical problem when .it is assumed that is continuous , exists and is continuous and for any such that equation ( 1 ) has a unique solution .the sbvps ( 1)-(3 ) with different arise in the study of various scientific problems for certain linear or nonlinear functions .the common cases related to the actual problems are summarized as follows .the first case for and emerges from the modeling of steady state oxygen diffusion in a spherical cell with michaelis - menten uptake kinetics . in this case, represents the oxygen tension ; and are positive constants involving the reaction rate and the michaelis constant .hiltmann and lory proposed the existence and uniqueness of the solution for and .analytical bounding functions were given in .the numerical methods to solve the sbvps for this case have attracted a reasonable amount of research works , such as the finite difference method ( fdm ) , the cubic spline method ( csm ) , the sinc - galerkin method ( sgm ) , the adomian decomposition method ( adm ) and its modified methods , the variational iteration method ( vim ) , the series expansion technique ( sem ) and the b - spline method ( bsm ) .the second case arises in the study of the distribution of heat sources in the human head , in which and in , point - wise bounds and uniqueness results were presented for the sbvps with the nonlinear function of the forms given by ( 4 ) and ( 5 ) .quite a little amount of works by using different approaches , including the fdm , the csm and the sgm , have been proposed to obtain the approximate solutions of this case .the third important case of physical significance is when and which arises in studying the theory of thermal explosions and the electric double layer in a salt - free solution .a variety of numerical methods have been applied to handle such sbvps , for example , the fourth order finite difference method ( ffdm ) , the modified adomian decomposition method , the taylor series method ( tsm ) and the bsm .besides , chandrasekhar derived another case for and which is a physical constant .this case is in connection with the equilibrium of thermal gas thermal .the numerical solution of this kind of equation for was considered by using various methods , such as the ffdm , the vim , the sem and the modified adomian decomposition method .all the aforementioned methods can yield a satisfied result .however , each of these methods has its own weaknesses .for example , the vim has an inherent inaccuracy in identifying the lagrange multiplier , and fails to solve the equation when the nonlinear 
function is of the forms ( 5 ) and ( 6 ) . those methods such as the fdm , the sem , the sgm and the spline method require a tedious process and huge volume of computations in dealing with the linearization or discretization of variables .the adm needs to obtain the corresponding volterra integral form of the given equation , via which one can overcome the difficulty of singular behavior at .the modified adm needs to introduce a twofold indefinite integral operator to give better and accurate results ; moreover , the success of method in relies on constructing green s function before establishing the recursive relation for applying the adm to derive the solution components .all those manners are at the expense of computation budgets . besides, none of above methods is applied to handle the equations with all forms of nonlinearities ( 4)-(7 ) . in recent years , a lot of attentions have been devoted to the applications of differential transform method ( dtm ) and its modifications .the dtm proposed by pukhov at the beginning of 1980s . however , his work passed unnoticed . in 1986 , zhou reintroduced the dtm to solve the linear and nonlinear equations in electrical circuit problems .the dtm is a semi - numerical - analytic method that generates a taylor series solution in the different manner . in the past forty years, the dtm has been successfully applied to solve a wide variety of functional equations ; see and the references therein . although being powerful , there still exist some difficulties in solving various of equations by the classical dtm .some researchers have devoted to deal with these obstacles so as to extend the applications of the dtm .for example , in view of the dtm numerical solution can not exhibit the real behaviors of the problem , odibat _et al_. proposed a multi - step dtm to accelerate the convergence of the series solution over a large region and applied successfully to handle the lotka - volterra , chen and lorenz systems . in , authors suggested an alternative scheme to overcome the difficulty of capturing the periodic behavior of the solution by combining the dtm , laplace transform and pad approximants .another difficulty is to compute the differential transforms of the nonlinear components in a simple and effective way . by using the traditional approach of the dtm, the computational difficulties will inevitably arise in determining the transformed function of an infinity series .compared to the traditional method , chang and chang proposed a relatively effective algorithm for calculating the differential transform through a derived recursive relation . 
yet , by using their method , it is inevitable to increase the computational budget , especially in dealing with those differential equations which have two or more nonlinear terms being investigated .recently , the authors disclosed the relation between the adomian polynomials and the differential transform of nonlinearities , and developed an inspiring approach to handle the nonlinear functions in the given functional equation .meanwhile , the problem of tedious calculations in dealing with nonlinear problems by using the adm has also been improved considerably by duan .all of these effective works make it possible to broaden the applicability and popularity of the dtm considerably .the aim of this work is to develop an efficient approach to solve the sbvps ( 1)-(3 ) with those nonlinear terms ( 4)-(7 ) .this scheme is mainly based on the improved differential transform method ( idtm ) , which is the improved version of the classical dtm by using the adomian polynomials to handle the differential transforms of those nonlinear functions ( 4)-(7 ) .no specific technique is required in dealing with the singular behavior at the origin . meanwhile , unlike some existing approaches , the proposed method tackles the problem in a straightforward manner without any discretization , linearization or perturbation .the numerical solution obtained by the proposed method takes the form of a convergent series with those easily computable coefficients through the adomian polynomials of those nonlinear functions as the forms of ( 4)-(7 ) .the rest of the paper is organized as follows . in the next section , the concepts of dtm and adomian polynomials are introduced .algorithm for solving the problem ( 1)-(3 ) and an upper bound for the estimation of approximate error are presented in section 3 .section 4 shows some numerical examples to testify the validity and applicability of the proposed method . in section 5 , we end this paper with a brief conclusion .in the adomian decomposition method ( adm ) , a key notion is the adomian polynomials , which are tailored to the particular nonlinearity to easily and systematically solve nonlinear differential equations .the interested readers are referred to refs . for the details of the adm . for the applications of decomposition method ,the solution of the given equation in a series form is usually expressed by and the infinite series of polynomials for the nonlinear term , where is called the adomian polynomials , and depends on the solution components . the traditional algorithm for evaluating the adomoan polynomials first provided in by the formula a large amount of works have been applied to give the more effective computational method for the adomian polynomials . 
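To make the classical construction concrete (the displayed formula referred to above did not survive extraction), recall that the traditional Adomian polynomials are generated from
\[
A_n=\frac{1}{n!}\left[\frac{d^{\,n}}{d\lambda^{\,n}}\,
      N\!\Big(\sum_{k=0}^{\infty}u_k\lambda^{k}\Big)\right]_{\lambda=0},
\]
and the short sympy sketch below evaluates this definition symbolically. It is intended only as an illustration: the nonlinearity N(u)=e^{u} is chosen for convenience and is not necessarily one of the forms (4)-(7), and the routine is far slower than Duan's recurrence discussed next.
\begin{verbatim}
import sympy as sp

def adomian_polynomials(N, m):
    """A_0, ..., A_{m-1} for the nonlinearity N(u), from the classical definition."""
    lam = sp.symbols('lambda')
    u = sp.symbols('u0:%d' % m)                    # solution components u_0, ..., u_{m-1}
    partial_sum = sum(u[k] * lam**k for k in range(m))
    polys = []
    for n in range(m):
        A_n = sp.diff(N(partial_sum), lam, n).subs(lam, 0) / sp.factorial(n)
        polys.append(sp.expand(A_n))
    return polys

# illustrative nonlinearity: N(u) = exp(u)
for n, A in enumerate(adomian_polynomials(sp.exp, 4)):
    print('A_%d =' % n, A)
\end{verbatim}
Running it prints A_0=e^{u_0}, A_1=u_1e^{u_0}, A_2=(u_2+u_1^2/2)e^{u_0}, and so on, in agreement with the standard tabulated polynomials.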
for fast computer generation ,we favor duan s corollary 3 algorithm among all of these methods , as it merely involves the analytic operations of addition and multiplication without the differentiation operator , which is eminently convenient for symbolic implementation by computer algebraic systems such as maple and mathematics .the method to generate the adomian polynomials in is described as follows : such that it is worth mentioning that duan s algorithm involving ( 11 ) and ( 12 ) has been testified to be one of the fastest subroutines on record , including the fast generation method given by adoamin and rach .the differential transform of the differentiable function at is defined by }_{x=0},\ ] ] and the differential inverse transform of is described as where is the original function and is the transformed function . for the practical applications , the function is expressed by a truncated series and eq .( 14 ) can be written as it is not difficult to deduce the transformed functions of the fundamental operations listed in table 1 ..the fundamental operations of the dtm . [ cols="<,<",options="header " , ] + consider the following sbvp with nonlinear term different from the forms ( 4)-(7 ) which arises in the radial stress on a rotationally symmetric shallow membrane cap : subject to the boundary conditions the adomian polynomials of nonlinear term in this problem are computed as like the previous problems 3 and 4 , a closed - form solution to this equation can not be written down .so we instead investigate the absolute residual error functions and the maximal error remainder parameters to examine the accuracy and the reliability of our numerical results . here , the absolute residual error functions are and the maximal error remainder parameters are in fig .3 , we plot the absolute residual error functions for through 14 by step 2 .the logarithm plot for the maximal error remainder parameters for the same is shown in fig .4 , which demonstrates an approximately exponential rate of convergence of the obtained truncated series solutions and thus the presented method converges rapidly to the exact solution .even though there is no exact solution for this problem , the following 10th order approximation has an accuracy of o( ) and can be used for practical applications this work , a reliable approach based on the idtm is presented to handle the numerical solutions of a class of nonlinear sbvps arising in various physical models .this scheme takes the form of a truncated series with easily computable coefficients via the adomian polynomials of those nonlinearities in the given problem . with the proposed algorithm, there is no need of discretization of the variables , linearization or small perturbation .numerical results show that the proposed method works well for the sbvps ( 1)-(3 ) with a satisfying low error . 
besides , it is obvious that evaluation of more components of the approximate solution will reasonably improve the accuracy of truncated series solution by using the proposed method .comparisons of the results reveal that the present method is very effective and accurate .moreover , we are convinced that the idtm can be extended to solve the other type of functional equations involving nonlinear terms more easily as the adomian polynomials are applicable for any analytic nonlinearity and can be generated quickly with the aid of the algorithm proposed by duan .it is necessary to point out that algebraic equation ( 23 ) is a nonlinear one , and we shall inevitably encounter the bad roots while solving it . the criterion to separate the good root from a swarm of bad ones is convergence because it represents the value of unknown function at the origin and will not change for the different .this work was supported by the scientific research fund of zhejiang provincial education department of china ( no.y201430940 ) and k.c .wong magna fund in ningbo university , and partially supported by the national nature science foundation of china ( no .11226243 ) .lin , oxygen diffusion in a spherical cell with nonlinear oxygen uptake kinetics , j. theor . biol .60(2)(1976)449 - 457 .mcelwain , a re - examination of oxygen diffusion in a spherical cell with michaelis - menten oxygen uptake kinetics , j. theor .71(2)(1978)255 - 263 .p. hiltmann , p. lory , on oxygen diffusion in a spherical cell with michaelis - menten oxygen uptake kinetics , bull .biol . 45(5)(1983)661 - 664. n. anderson , a.m. arthurs , analytical bounding functions in a spherical cell with michaelis - menten oxygen uptake kinetics , bull .47(1)(1985)145 - 153 .pandey , a finite difference method for a class of singular two point boundary value problems arising in physiology , int . j. comput . math .65(1 - 2)(1997)131 - 140 .j. rashidinia , r. mohammadi , r. jalilian , the numerical solution of non - linear singular boundary value problems arising in physiology , appl .185(1)(2007)360 - 367 .ravi kanth , v. bhattacharya , cubic spline for a class of non - linear singular boundary value problems arising in physiology , appl .174(1)(2006)768 - 774 .e. babolian , a. eftekhari , a. saadatmandi , a sinc - galerkin technique for the numerical solution of a class of singular boundary value problems , comp .appl . math .( 2013)1 - 19 .khuri , a. sayfy , a novel approach for the solution of a class of singular boundary value problems arising in physiology , math .comput . modell .52(3 - 4)(2010)626 - 636 .a.m. wazwaz , r. rach , j .- s .duan , adomian decomposition method for solving the volterra integral form of the lane - emden equations with initial and boundary conditions , appl .219(10)(2013)5004 - 5019 .r. singh , j. kumar , an efficient numerical technique for the solution of nonlinear singular boundary value problems , comput .185(4)(2014)1282 - 1289 .ravi kanth , k. aruna , he s variational iteration method for treating nonlinear singular boundary problems , comput . math .appl . 60(3)(2010)821 - 829 .a.m. wazwaz , the variational iteration method for solving nonlinear singular boundary value problems arising in various physical models , commun .nonlinear sci .16(10)(2011)3881 - 3886 .m. turkyilmazoglu , effective computation of exact and analytic approximate solutions to singular nonlinear equations of lane - emden - fowler type , appl .math . modell .37(14 - 15)(2013)7539 - 7548 .h. alar , n. alar , m. 
zer , b - spline solution of non - linear singular boundary value problems arising in physiology , chaos soliton .39(3)(2009)1232 - 1237 .u. flesch , the distribution of heat sources in the human head : a theoretical consideration , j. theor .biol . 54(2)(1975)285 - 287 .gray , the distribution of heat sources in the human head theoretical considerations , j. theor . biol .82(3)(1980)473 - 476 .duggan , a.m. goodman , pointwise bounds for a nonlinear heat conduction model of the human head , bull .biol . 48(2)(1986)229 - 236 .m. kumar , n. singh , modified adomian decomposition method and computer implementation for solving singular boundary value problems arising in various physical problems , comput .34(11)(2010)1750 - 1760 .chang , taylor series method for solving a class of nonlinear singular boundary value problems arising in applied science , appl .comput . 235(25)(2014)110 - 117 .chang , electroosmotic flow in a dissimilarly charged slit microchannel containing salt - free solution , eur .b - fluid . 34(2012)85 - 90 .pukhov , differential transforms of functions and equations , naukova dumka , kiev , 1980 ( in russian ) .pukhov , differential transforms and circuit theory , int .j. circ . theor .10 ( 1982)265 - 276 .pukhov , differential transformations and mathematical modeling of physical processes , naukova dumka , kiev , 1986 ( in russian ) .zhou , differential transformation and its applications for electrical circuits , wuhan china : huazhong university press , 1986 ( in chinese ) .zhou , s. xu , a new algorithm based on differential transform method for solving multi - point boundary value problems , int .j. comput . math .( 2015 ) 1 - 14 .odibat , c. bertelle , m.a .aziz - alaoui , g.h.e .duchamp , a multi - step differential transform method and application to non - chaotic or chaotic systems , comput .59(4)(2010)1462 - 1472 .a. gkdoan , m. merdan , a. yildirim , the modified algorithm for the differential transform method to solution of genesio systems , commum .nonlinear sci .17(1)(2012 ) 45 - 51 .s. momani , v.s .ertrk , solutions of non - linear oscillators by the modified differential transform method , comput .math . appl . 55(4)(2008)833 - 842 .chang , i - l .chang , a new algorithm for calculating one - dimensional differential transform of nonlinear functions , appl .comput . 195(2)(2008)799 - 808 .a. elsaid , fractional differential transform method combined with the adomian polynomials , appl .218(12)(2012)6899 - 6911 .h. fatoorehchi , h. abolghasemi , improving the differential transform method : a novel technique to obtain the differential transforms of nonlinearities by the adomian polynomials , appl . math .37(8)(2013)6008 - 6017 .duan , recurrence triangle for adomian polynomials , appl .comput . 216(4)(2010)1235 - 1241 .duan , an efficient algorithm for the multivariable adomian polynomials , appl .217(6)(2010)2456 - 2467 .duan , convenient analytic recurrence algorithms for the adomian polynomials , appl .217(13)(2011)6337 - 6348 .g. adomian , a review of the decomposition method and some recent results for nonlinear equations , math . comput . modell . 13(7)(1990)17 - 43 .g. adomian , solving frontier problems of physics : the decomposition method , kluwer academic : dordrecht , 1994 .g. adomian , r. rach , inversion of nonlinear stochastic operators , j. math .appl . 91(1)(1983)39 - 46 .r. rach , a new definition of the adomian polynomials , kybernetes 37(7)(2008)910 - 955 .r. rach , a convenient computational form for the adomian polynomials , j. 
math .appl . 102(2)(1984)415 - 419 .a.m. wazwaz , a new algorithm for calculating adomian polynomials for nonlinear operators , appl .111(1)(2000)33 - 51 .k. abbaoui , y. cherruault , v. seng , practical formulae for the calculus of multivariable adomian polynomials , math .modell . 22(1)(1995)89 - 93 .f. abdelwahid , a mathematical model of adomian polynomials , appl .141(2 - 3)(2003)447 - 453 .m. azreg - anou , a developed new algorithm for evaluating adomian polynomials , cmes - comput . model .eng . 42(1)(2009)1 - 18 .the absolute residual error functions for ( left ) and ( right ) of example 3 .+ figure 2 .the logarithmic plots for the maximal error remainder parameters for through by step and ( up , left ) , ( up , right ) , ( down ) of example 3 . + figure 3 . the absolute residual error functions for ( left ) and ( right ) of example 5 .+ figure 4 .the logarithmic plot for the maximal error remainder parameters for through by step of example 5 .
In this work, an effective numerical method is developed to solve a class of singular boundary value problems arising in various physical models by using the improved differential transform method (IDTM). The IDTM applies the Adomian polynomials to handle the differential transforms of the nonlinearities arising in the given differential equation. The relation between the Adomian polynomials of those nonlinear functions and the coefficients of the unknown truncated series solution is given by a simple formula, through which one can easily deduce the approximate solution in the form of a convergent series. An upper bound for the estimation of the approximate error is presented. Several physical problems are discussed as illustrative examples to test the validity and applicability of the proposed method, and comparisons are made between the present method and other existing methods.
Keywords: singular boundary value problem; differential transform method; Adomian polynomials; improved differential transform method; approximate series solutions.
indistinguishability of nonorthogonal states is a basic feature of quantum mechanics that has deep implications in many areas , as quantum computation and communication , quantum entanglement , cloning , and cryptography . since the pioneering work of helstrom on quantum hypothesis testing, the problem of discriminating nonorthogonal quantum states has received a lot of attention , with some experimental verifications as well .the most popular scenarios are : the minimum - error probability discrimination , where each measurement outcome selects one of the possible states and the error probability is minimized ; the optimal unambiguous discrimination , where unambiguity is paid by the possibility of getting inconclusive results from the measurement ; the minimax strategy where the smallest of the probabilities of correct detection is maximized . stimulated by the rapid developments in quantum information theory , the problem of discrimination has been addressed also for bipartite quantum states , along with the comparison of global strategies where unlimited kind of measurements is considered , with the scenario of locc scheme , where only local measurements and classical communication are allowed . the concepts of nonorthogonality and distinguishability can be applied also to quantum operations , namely all physically allowed transformations of quantum states , and some work has been devoted to the problem of discriminating unitary transformations and more general quantum operations .the quantum indistinguishability principle is closely related to another very popular , yet often misunderstood , principle ( formerly known as heisenberg principle ) : it is not possible to extract information from a quantum system without perturbing it somehow .in fact , if the experimenter could gather information about an unknown quantum state without disturbing it at all , even if such information is partial , by performing further non - disturbing measurements on the same system , he could finally determine the state , in contradiction with the indistinguishability principle .actually , there exists a precise tradeoff between the amount of information extracted from a quantum measurement and the amount of disturbance caused on the system , analogous to heisenberg relations holding in the preparation procedure of a quantum state .quantitative derivations of such a tradeoff have been obtained in the scenario of quantum state estimation .the optimal tradeoff has been derived in the following cases : in estimating a single copy of an unknown pure state , many copies of identically prepared pure qubits , a single copy of a pure state generated by independent phase - shifts , an unknown maximally entangled state , an unknown coherent state and gaussian state , and an unknown spin coherent state .experiment realization of minimal disturbance measurements has been also reported . in the present paperwe review the characterization of the tradeoff relation in quantum state discrimination of ref . , and suggest an experimental realization of the minimum - disturbing measurement . in this case , an unknown quantum state is chosen with equal _ a priori _ probability from a set of two non orthogonal pure states , and the error probability of the discrimination is allowed to be suboptimal ( thus intuitively causing less disturbance with respect to the optimal discrimination ) . 
a measuring strategy that achieves the optimal tradeoffis shown to smoothly interpolate between the two limiting cases of maximal information extraction and no measurement at all .the issue of the information - disturbance tradeoff for state discrimination can become of practical relevance for posing general limits in information eavesdropping and for analyzing security of quantum cryptographic communications .after briefly reviewing the optimal information - disturbance tradeoff in quantum state discrimination and the corresponding measurement instrument , we analyze two possible experimental realization of the minimum - disturbing measurement .typically , in quantum state discrimination we are given two ( fixed ) non orthogonal pure states and , with _ a priori _ probabilities and , and we want to construct a measurement discriminating between the two .we can describe a measurement by means of an _ instrument _ , namely , a collection of completely positive maps , labelled by the measurement outcomes . using the kraus decomposition , one can always write . in the casethe sum comprises just one term , namely , , the map is called _ pure _ , since it maps pure states into pure states .the trace =\tr[\pi_i\rho] ] . the averaged reduced state coming from ignoring the measurement outcome is simply obtained using the _ trace - preserving _map .the trace - preservation constraint for implies that the set of positive operators is actually a positive operator - valued measure ( povm ) , satisfying the completeness condition .quantum state discrimination is then performed by a two - outcome instrument whose capability of discriminating between and can be evaluated by the average success probability = \sum_{i=1}^2p_i\tr[\pi_i|\psi_i\>\<\psi_i|].\ ] ] notice that actually depends only on the povm . the probability quantifies the amount of information that the instrument is able to extract from the ensemble . among all instruments achieving average success probability ( the bar over means that we fix the value of ) , we are interested in those minimizing the average disturbance caused on the unknown state , that we evaluate in terms of the average fidelity , namely , differently from , the disturbance strongly depends on the particular form of the instrument .this means that there exist many different instruments achieving the same , but giving different values of .let be the disturbance produced by the _least disturbing _ instrument that discriminates from with average success probability .intuitive arguments suggest that the larger is , the larger must correspondingly be ( i. e. , the larger is the amount of information extracted , the larger is the disturbance caused by the measurement ) . the precise derivation of the optimal tradeoff has been obtained in ref . , along with the corresponding optimal measurement , for equal _ a priori _ probabilities , i. e. . in the following we briefly review the main results .let us start reviewing the case of the measurement maximizing .notice that , given two generally non orthogonal pure states and , it is always possible to choose an orthonormal basis , placed symmetrically around and ( see fig . [fig : quadrante ] ) , on which both states have real components , namely and fidelity . in this case , it is known that the maximum achievable is given by which is obtained by the orthogonal von neumann measurement . 
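For reference, the maximum average success probability quoted above is the Helstrom bound; since the displayed equation was lost in extraction, the standard form for equal priors is reported here, with F=|\langle\psi_1|\psi_2\rangle| the overlap (fidelity) of the two hypotheses:
\[
P_{\max}=\frac{1}{2}\Big(1+\sqrt{1-F^{2}}\Big),
\]
which reduces to certain discrimination, P_{\max}=1, for orthogonal states (F=0) and to a random guess, P_{\max}=1/2, for identical states (F=1).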
the instrument achieving , that minimizes the disturbance is given by where is the unitary operator , and satisfies the equation equation ( [ eq : ui ] ) represents a measure - and - prepare realization : the observable is measured and , depending on the outcome , the quantum state is prepared .the states are symmetrically tilted with respect to the s , see fig .[ fig : quadrante ] . the presence of the tilt can be understood by noticing that minimum error discrimination can never be error - free for non orthogonal states . even using the optimal helstrom s measurement, there is always a non zero error probability , and , the closer the input states are to each other , the smaller the success probability is .hence , it is reasonable that , the closer the input states are , the less `` trustworthy '' the measurement outcome is , and the average disturbance is minimized by cautiously preparing a new state that actually is a coherent superpositions of both hypotheses and .the minimum disturbance for helstrom s optimal measurement is given by notice that reaches its maximum for , namely , when and are `` unbiased '' with respect to each other ( ) . by allowing a suboptmal discrimination , with success probability , one can cause less disturbance . in this case , by parametrizing the average success probability thorugh a control parameter as follows with , one can search , among all possible measurements achieving , the one minimizing the disturbance .it turns out that , for any value of , the minimum disturbance is achieved by the _ pure _instrument , where with .the unitary operator in the above equation generalizes that in eq .( [ u1 ] ) as follows with it follows that every instrument that achieves average success probability must cause _ at least _ an average disturbance .\ ] ] just by varying the control parameter , it is possible to smoothly move between the limiting cases . for , we obtain the identity map , that is , the no - measurement case . for , we obtain helstrom s instrument in eq .( [ eq : ui ] ) , and eq . ( [ eq : tilt - t ] ) reproduces the tilt given in eq .( [ eq : tilt ] ) .however , the crucial difference between helstrom s limit ( ) and the intermediate cases is that , for , the optimal instrument _ can not _ be interpreted by means of a measure - and - prepare scheme , and the unitaries and in eq .( [ eq : kraus - t ] ) represent feedback rotations for outcomes and . 
by eliminating the parameter from eqs .( [ eq : prob1-t ] ) and ( [ eq : dist1-t ] ) , one can obtain the optimal tradeoff between information and disturbance , for any value of .in this section we want to show two experimental schemes for the realization of the minimum - disturbing measurement .the two - level input system is encoded on photons degrees of freedom .since we are interested not only in the success probability but also in the posterior state of the system _ after _ the measurement , we have to focus on indirect measurement schemes , in which the system is previously made interact with a probe , and , after such interaction , a projective measurement is performed on the probe .the _ mathematical _parameter controlling the tradeoff in eq .( [ eq : prob1-t ] ) can then be put in correspondence with a _ physical _ parameter controlling the strength of the interaction between the system and the probe .the case means that the interaction is actually factorized and that the subsequent measurement on the probe does not provide any information about the system and the latter is completely unaffected by the probe s measurement .this is precisely the no - measurement case . on the contrary, identifies a _ completely entangling _ interaction , or , in other words , a situation in which a measurement on the probe gives the largest amount of information about the system , consequently causing the largest disturbance . in the following ,two possible settings are discussed : the first one , which is deterministic and involves the _ dual - rail _ representation of qubits , and the second one which is probabilistic and involves the qubit encoding on the polarization state of a single photon . in ref . it is shown how to achieve a maximally entangling gate of the c - not type , i. e. by combining , in the dual - rail representation of qubits , two hadamard gates with a non - linear interaction caused by a kerr medium coupling the two modes ( system ) and ( probe ) . more explicitly , by varying the interaction time ( or length ) between the system mode and the probe mode inside the kerr medium , it is possible to achieve the following unitary evolution : the two limiting cases correspond to for which , and for which realizes a perfect c - not gate .then , to measure the von neumann observable on the probe is equivalent to apply the instrument in eq .( [ eq : kraus - t ] ) onto the system , with . the feedback unitary rotation ( [ eq : unitary - t ] ) can be subsequently applied conditional to the probe measurement outcome .this scheme is _, that is , no events have to be discarded .however , approaching the limiting value ( or , equivalently , ) is quite hard , since too large nonlinearity is needed .hence , such a setup can be useful only for regimes with .the second proposal is a modification of the setup already used in ref . to experimentally realize a _universal _ minimum - disturbing measurement . with respect to ref . , only the feedback rotations are different .this setup has the great advantage of being completely achievable by linear optics . 
in order to entangle the system with the probe, it needs an _ entangling measurement _ to be performed on the joint system - probe state .such a measurement is in fact a parity check , namely a measurement of the observable however , since the outcome `` '' corresponds to a situation in which the input photon and the probe photon are indistinguishable , we are forced to post - select just one half of the events , discarding those corresponding to the outcome . a part of this major drawback , limiting the actual usefulness of such a measuring instrument in practical application , using this setup it is possible to explore the whole range of the parameter values $ ] , contrarily to what happens using non - linear mediaf. b. acknowledges japan science and technology agency for support though the erato - sorst project on quantum computation and information .m. f. s. acknowledges miur for partial support through prin 2005 .99 c. w. helstrom , _ quantum detection and estimation theory _( academic press , new york , 1976 ) . for a recent review , see j. bergou , u. herzog , and m. hillery , _ quantum state estimation _ , lecture notes in physics vol .649 ( springer , berlin , 2004 ) , p. 417 ; a. chefles , _ ibid . _ , p. 467 .b. huttner , a. muller , j. d. gautier , h. zbinden , and n. gisin , phys .rev . a * 54 * , 3783 ( 1996 ) ; s. m. barnett and e. riis , j. mod . opt . * 44 * , 1061 ( 1997 ) ; r. b. m. clarke , a. chefles , s. m. barnett , and e. riis , phys rev a. * 63 * , 040305(r ) ( 2001 ) ; r. b. m. clarke , v. m. kendon , a. chefles , s. m. barnett , e. riis , and m. sasaki , phys . rev .a * 64 * , 012303 ( 2001 ) ; m. mohseni , a. m. steinberg , and j. a. bergou , phys .lett . * 93 * , 200403 ( 2004 ) .i. d. ivanovic , phys .a * 123 * , 257 ( 1987 ) ; d. dieks , phys .lett . a * 126 * , 303 ( 1988 ) ; a. peres , phys .a * 128 * , 19 ( 1988 ) ; g. jaeger and a. shimony , phys .a * 197 * , 83 ( 1995 ) ; a. chefles , phys .a * 239 * , 339 ( 1998 ) .g. m. dariano , m. f. sacchi , and j. kahn , phys .rev a * 72 * , 032310 ( 2005 ) .j. walgate , a. j. short , l. hardy , and v. vedral , phys . rev .* 85 * , 4972 ( 2000 ) ; s. virmani , m. f. sacchi , m. b. plenio , and d. markham , phys .a * 288 * , 62 ( 2001 ) ; y .- x . chen and d. yang , phys .a * 64 * , 064303 ( 2001 ) ; * 65 * , 022320 ( 2002 ) ; z. ji , h. cao , and m. ying , phys . rev .a * 71 * , 032323 ( 2005 ) .a. m. childs , j. preskill , and j. renes , j. mod . opt . * 47 * , 155 ( 2000 ) ; a. acn , phys . rev . lett . * 87 * , 177901 ( 2001 ) ; g. m. dariano , p. lo presti , and m. g. a. paris , phys . rev .lett . * 87 * , 270404 ( 2001 ) .m. f. sacchi , phys . rev . a * 71 * , 062340 ( 2005 ) .w. heisenberg , zeitsch . phys . *43 * , 172 ( 1927 ) ; m. o. scully , b .-englert , and h. walther , nature * 351 * , 111 ( 1991 ) ; b .-englert , phys .* 77 * , 2154 ( 1996 ) ; c. a. fuchs and k. jacobs , physa * 63 * , 062305 ( 2001 ) ; h. barnum , e - print quant - ph/0205155 ; g. m. dariano , fortschr . phys . * 51 * , 318 ( 2003 ) ; m. ozawa , ann . phys . * 311 * , 350 ( 2004 ) ; l. maccone , phys . rev .a * 73 * , 042307 ( 2006 ) . c. a. fuchs , fortschr46 * , 535 ( 1998 ) .k. banaszek , phys .lett . * 86 * , 1366 ( 2001 ) .g. m. dariano and h. p. yuen , phys .lett . * 76 * , 2832 ( 1996 ) .a. s. holevo , _ probabilistic and statistical aspects of quantum theory _ ( north holland , amsterdam , 1982 ) .s. massar and s. popescu , phys .lett . * 74 * , 1259 ( 1995 ) ; r. derka , v. buzek , and a. k. ekert , phys .lett . 
* 80 * , 1571 ( 1998 ) ; j. i. latorre , p. pascual , and r. tarrach , phys .lett . * 81 * , 1351 ( 1998 ) ; g. vidal , j. i. latorre , p. pascual , and r. tarrach , phys .a * 60 * , 126 ( 1999 ) ; a. acn , j. i. latorre , and p. pascual , phys .a * 61 * , 022113 ( 2000 ) ; g. chiribella , g. m. dariano , p. perinotti , and m. f. sacchi , phys . rev .a * 70 * , 062105 ( 2004 ) ; g. chiribella , g. m. dariano , and m. f. sacchi , phys .a * 72 * , 042338 ( 2005 ) .k. banaszek and i. devetak , phys .a * 64 * , 052307 ( 2001 ) .l. mita jr . , j. fiurek , and r. filip , phys .a * 72 * , 012311 ( 2005 ) .m. f. sacchi , phys .. lett . * 96 * , 220502 ( 2006 ) .u. l. andersen , m. sabuncu , r. filip , and g. leuchs , phys .lett . * 96 * , 020409 ( 2006 ) .m. g. genoni and m. g. a. paris , phys .a * 74 * , 012301 ( 2006 ) .m. f. sacchi , unpublished .f. sciarrino , m. ricci , f. de martini , r. filip , and l. mita jr . ,96 * , 020408 ( 2006 ) .f. buscemi and m. f. sacchi , quant - ph/0610196 ( to appear on phys .e. b. davies and j. t. lewis , commun .* 17 * , 239 ( 1970 ) ; m. ozawa , j. math . phys .* 5 * , 848 ( 1984 ) .k. kraus , _ states , effects , and operations : fundamental notions in quantum theory _notes phys .* 190 * ( springer - verlag , 1983 ) .m a nielsen and i l chuang , _ quantum information and computation _ ( cambridge university press , cambridge , 2000 ) .see in particular section 7.4.2 .m. fleischhauer , a. imamoglu , and j. p. marangos , rev . mod. phys . * 77 * , 633 ( 2005 ) .
We propose two experimental schemes for quantum state discrimination that achieve the optimal tradeoff between the probability of correct identification and the disturbance caused on the quantum state.
galactic globular clusters ( ggcs ) are extremely important astrophysical objects since _ ( i ) _ they are prime laboratories for testing stellar evolution ; _ ( ii ) _ they are `` fossils '' from the epoch of galaxy formation , and thus important cosmological tools ; _ ( iii ) _ they serve as test particles for studying the dynamics of the galaxy .a few years ago our group started a long - term project devoted to study the global stellar population in a sample of `` proto - type '' ggcs following a multi - wavelength approach : ir and optical observations to study cool giants and uv observations to study blue hot sequences ( horizontal branch ( hb ) , blue stragglers stars ( bss ) , etc ) . in this paperi report a short summary of the most recent results .the advantage of observing cool giants in the near ir is well known since many years .the contrast between the red giants and the unresolved background population in the ir bands is greater than in any optical region , so they can be observed with the highest s / n ratio also in the innermost region of the cluster .moreover , when combined with optical observations , ir magnitudes provide useful observables such as the v k color , an excellent indicator of the stellar effective temperature ( t ) , and allows a direct comparison with theoretical model predictions . since the peeonering work by frogel and collaborators ( frogel , cohen & persson 1983 ) in the early 80 s , many groups have performed systematic ir observations in ( mainly ) heavily - obscured ggcs ( see frogel et al .1995 ; kuchinski & frogel , 1995 ; minniti et al . 1995 ; davidge 2000 ; ortolani et al .2001 ) . in ferraro et al .( 2000 , hereafter f00 ) a new set of high quality near - ir color magnitude diagrams ( cmds ) was presented for a sample of 10 ggcs , spanning a wide range in metallicity .we used this homogeneous data base to define a variety of observables allowing the complete characterization of the photometric properties of the red giant branch ( rgb ) , namely : _ ( a ) _ the location of the rgb in the cmd both in ( j k) and ( v k) colors at different absolute k magnitudes ( 3 , 4 , 5 , 5.5 ) and in temperature ; _ ( b ) _ its overall morphology and slope ; _ ( c ) _ the luminosity of the bump and of the tip .all these quantities have been measured with a homogeneous procedures applied to each individual cmd by adopting the distance moduli scale defined in ferraro et al .( 1999a , hereafter f99 ) .the mean ridge lines for the selected clusters , in various planes are shown in figure 1 .a set of relations linking the photometric parameters to the cluster global metallicity ( ] , which is fully consistent with the most recent direct spectroscopic determination ( pancino et al 2002 ) .a complete , quantitative understanding of the physics of mass loss processes and the precise knowledge of the gas and dust content in ggcs is crucial in the study of population ii stellar systems and their impact on the galaxy evolution . despite its importance, mass loss is still a poorly understood process . in order to shed some light on mass loss processes along the rgb we performed ( origlia et al 2002 ) a deep mid - ir survey with isocam of the very central regions of six , massive clusters : 47 tuc , ngc 362 , cen , ngc 6388 , m15 and m54 .mid - ir observations are the ideal tool to study mass loss , since they could detect an outflowing gas fairly far away from the star ( typically , tens / few hundreds stellar radii ) . 
two different filters ( [ 12 ] , [ 9.6 ] ) in the 10 m spectral regionhave been used .the mid - ir colors have been then combined with near - ir colors in order to obtain photometric indices ( ] ) , which are sensible tracers of circumstellar dust excess .figure 3 shows the and )_0 ] are classified as sources with significant dust excess and are marked with filled symbols in the figure .there are a series of interesting results suggested by this figure : _ ( i ) _ all the stars showing evidence of mid - ir circumstellar dust excess are in the upper 1.5 bolometric magnitudes of the rgb , suggesting that _ significant mass loss occurs only at the very end of the rgb evolutionary stage _ ; _ ( ii ) _ only 20 out of the 52 ( ) isocam sources detected in the upper 1.5 bolometric magnitudes of the rgb ( ) show evidence of circumstellar dust excess ; by correcting for stars not detected because of the low spatial resolution of isocam , dusty envelopes are inferred around about 15% of the brightest giants , this suggests that _ the mass loss process is episodic _ ; _ ( iii ) __ there is no evidence of any dependence of mass loss occurrence on the cluster metallicity_.although the cmd of an old stellar population ( as a ggc ) is dominated , in the _ classical _ -plane , by the cool stellar component , relatively populous hot stellar components do exist in ggcs and are strong emitters in the uv ( hot post - asymptotic giant branch stars , blue hb , bss , various by - products of binary system evolution , and so on ) .the advent of the hubble space telescope ( hst ) , whith its unprecedented spatial resolution and imaging / spectroscopic capabilities in the uv , has given a new impulse to the study of hot stars in ggcs .we are involved in a long - term observational programme which uses hst to perform uv observations in a selected sample of ggcs . in this sectioni summarize the most recent results obtained for bss ( a few additional results on the search of peculiar objects can be found in the poster contribution by sabbi et al . and ferraro et al . in this book ) .blue straggler stars ( bss ) , first discovered by sandage ( 1953 ) in m3 , are commonly defined as stars brighter and bluer ( hotter ) than the main sequence ( ms ) turnoff ( to ) , lying along an apparent extension of the ms , and thus mimicking a rejuvenated stellar population .the existence of such a population has been a puzzle for many years , and even now its formation mechanism is not completely understood , yet . at present , the leading explanations involve mass transfer between binary companions or the merger of a binary star system or the collision of stars ( whether or not in a binary system ) .direct measurements ( shara et al . 
1997 ; gilliland et al .1998 ) and indirect evidence have in fact shown that bss are more massive than the normal ms stars , pointing again toward a collision or merger of stars .thus , the bss represent the link between classical stellar evolution and dynamical processes ( see bailyn 1995 ) .the realization that bss are the ideal diagnostic tool for a quantitative evaluation of the dynamical interaction effects inside star clusters has led to a remarkable burst of searches and systematic studies , using uv and optical broad - band photometry .our group has actively participated to this extensive surveys and has published some of the first and most complete catalogs of bss in gcs ( fusi pecci et al 1992 ; ferraro , bellazzini & fusi pecci 1995 ; ferraro et al 2002 ) .these works have significantly contributed to form the nowadays commonly accepted idea that bss are indeed a normal component of stellar populations in clusters , since they are present in all of the properly observed ggcs . however , according to fusi pecci et al .( 1992 ) bss in different environments could have different origin .in particular , bss in loose ggcs might be produced from coalescence of primordial binaries , while in high density ggcs ( depending on survival - destruction rates for primordial binaries ) bss might arise mostly from stellar interactions , particularly those which involve binaries .thus , while the suggested mechanisms for bss formation could be at work in clusters with different environments ( ferraro , bellazzini , & fusi pecci , 1995 ; ferraro et al .1999 ) there is evidence that they could also act simultaneously within the same cluster ( as in the case of m3 , see ferraro et al .1993 ; ferraro et al .moreover , as shown by ferraro et al .( 2002 ) , both the bss formation channels ( primordial binary coalescence and stellar interactions ) seem to be equally efficient in producing bss in different environments , since the two clusters that show the largest known bss specific frequency , i.e. ngc 288 ( bellazzini et al .2002 ) and m 80 ( ferraro et al .1999 ) , represent two exteme cases of central density concentration among the ggcs ( and ) .particularly interesting is the case of m80 which shows an exceptionally high bss content : more than 300 bss have been discovered in m80 ( ferraro et al 1999 ) .this is _ the largest and most concentrated bss population ever found in a ggc_. since m80 is the ggc which has the largest central density among those not yet core - collapsed , this discovery could be the first direct evidence that stellar collisions could indeed be effective in delaying the core collapse .figure 4 shows the ( ) cmds for six clusters observed in the uv with hst ( ferraro et al 2002 ) .more than 50,000 stars are plotted in the six panels of figure 4 .the cmd of each cluster has been shifted to match that of m3 using the brightest portion of the hb as the normalization region .the solid horizontal line ( at ) in the figure shows the threshold magnitude for the selection of bright ( hereafter bbss ) sample . such a dataset allows a direct comparison of the photometric properties of bbss in different clusters . in particular ,we have found evidence ( ferraro et al .2002 ) for a possible connection between the presence of a blue tail in the hb and the bss uv - magnitude distribution : ggcs without hb blue tails have bss - luminosity function ( lf ) extending to brighter uv magnitudes with respect to ggcs with blue tails . 
in figure 5 the magnitude distributions ( equivalent to a lf ) of bbss for the six clusters are compared . in doing thiswe use the parameter defined as the magnitude of each bbss ( after the alignment showed in figure 4 ) with respect to the magnitude threshold ( assumed at - see figure 4 ). then . from the comparison shown in figure 5 ( _ panel(a ) _ )the bbss magnitude distributions for m3 and m92 appear to be quite similar and both are significantly different from those obtained in the other clusters .this is essentially because in both clusters the bbss magnitude distribution seems to have a tail extending to brighter magnitudes ( the bbss magnitude tip reaches ) .a ks test applied to these two distributions yields a probability of that they are extracted from the same distribution . in _panel(b ) _ we see that the bbss magnitude distribution of m13 , m10 and m80 are essentially indistinguishable from each other and significantly different from m3 and m92 .a ks test applied to the three lfs confirms that they are extracted from the same parent distribution . moreover , a ks test applied to the total lfs obtained by combining the data for the two groups : m3 and m92 ( _ group(a ) _ ) , and m13 , m80 and m10 ( _ group(b ) _ ) shows that the the bbss - lfs of_ group(a ) _ and _ group(b ) _ are not compatible ( at level ) .it is interesting to note that the clusters grouped on the basis of bbss - lfs have some similarities in their hb morphology .the three clusters of _group(b ) _ have an extended hb blue tail ; the two clusters of _group(a ) _ have no hb extention .could there be a connection between the bbss photometric properties and the hb morphology ?this possibility needs to be further investigated .it is a pleasure to thank all the collaborators involved in this vast project . in particular ,i want to thank elena pancino for a critical reading of this manuscript and livia origlia for her continuos support .the financial support of the _ agenzia spaziale italiana _ ( asi ) and of the _ ministero della istruzione delluniversit e della ricerca _ ( miur ) is kindly acknowledged .
I report on some recent results obtained in the framework of a complex project aimed at characterizing the photometric properties of stellar populations in Galactic globular clusters.